| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
211078731 | pes2o/s2orc | v3-fos-license | Primordial odontogenic tumor: A systematic review
Background: The primordial odontogenic tumor (POT) is a recently described benign entity with histopathological and immunohistochemical features suggesting its origin during early odontogenesis. Aim: To integrate the available data published on POT into a comprehensive analysis to better define its clinicopathological and molecular features. Material and Methods: An electronic systematic review was performed up to September 2019 in multiple databases. Results: A total of 13 publications were included, representing 16 reported cases and 3 molecular studies. The mean age of the affected patients was 11.6 years (range 2-19), with a slight predominance in males (56.25%). The posterior mandible was the main location (87.5%), with only two cases affecting the posterior maxilla. All cases appeared as a radiolucent lesion in close relationship to an unerupted tooth. Recurrences have not been reported to date. Microscopically, POT comprises fibromyxoid tissue with variable cellularity surrounded by a cuboidal to columnar odontogenic epithelium but without unequivocal dental hard tissue formation. A delicate fibrous capsule surrounds (at least partially) the tumor. The epithelial component shows immunohistochemical positivity for amelogenin, CK19, and CK14, and variable expression of Glut-1, Galectin-3, Caveolin-1, Vimentin, p53, PITX2, Bcl-2, Bax, and Survivin; the mesenchymal tissue is positive for Vimentin, CD90, p53, PITX2, Bcl-2, Bax, and Survivin, and the subepithelial region exhibits strong expression of Syndecan-1 and CD34. The Ki-67 index is low (<5%). The negative or weak expression of dentinogenesis-associated genes could explain the inhibition of dentin and subsequent enamel formation in this neoplasm. Conclusions: POT is an entity with a well-defined clinicopathological, immunohistochemical and molecular profile that must be properly diagnosed, differentiated from other odontogenic lesions, and treated accordingly. Key words: Primordial odontogenic tumor, systematic review.
Introduction
In 2014, the primordial odontogenic tumor (POT) was described for the first time (1), and subsequently this entity was included in the World Health Organization (WHO) Classification of Head and Neck Tumours in the group of benign mixed neoplasms (2). The name was coined due to its possible development from the early stages of odontogenesis (Fig. 1).
Since its original description in 2014, sixteen cases of POT have been reported in the literature, presenting as a well-defined radiolucent lesion in close proximity to the crown of an unerupted tooth, producing bone expansion, radicular resorption and tooth displacement of variable extent (3-9). The aim of this systematic review is to collect and integrate the available data published on POT into a comprehensive analysis to better define its clinicopathological, radiological and molecular features.
Material and Methods
This study followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines (10). -Search strategies An electronic search was performed up to September 2019, restricted to publications from 2014 onward (the year of the first description of POT). The following databases were accessed: PubMed/MEDLINE, Cochrane, and SpringerLink. The search strategy used in all databases consisted of the following keywords: primordial odontogenic tumor.
-Inclusion and exclusion criteria The inclusion criteria were as follows: 1) cases diagnosed as POT with sufficient clinical, microscopic, and immunohistochemical information to confirm a definite diagnosis based on the WHO histological classification of odontogenic tumors (2), including case reports and molecular studies, 2) articles in the English language, and 3) articles included in the PubMed/MEDLINE, Cochrane, and SpringerLink databases. The exclusion criteria were as follows: 1) case reports without sufficient information to confirm a definite diagnosis, 2) book chapters, reviews and meta-analyses, 3) documents in a language other than English, or 4) articles published before 2014. -Study selection The titles and abstracts of all studies found in the database search were independently reviewed by two authors; subsequently, the articles that met the inclusion criteria and those lacking information in the title or abstract were completely evaluated using the EBLIP Critical Appraisal Checklist. The studies selected by each author were cross-checked to ensure that they were properly chosen, according to the inclusion criteria and the checklist (Fig. 2). -Data extraction All relevant information, such as reference, year of publication, patient nationality or country of publication, number of cases, patient sex and age, tumor location, signs and/or symptoms, imaging presentation, treatment, follow-up, and immunohistochemical and molecular features, was extracted (when available) using a specific table.
-Analysis The abovementioned data were analyzed with descriptive statistics. Since POT is a benign tumor with indolent behavior and no recurrences reported to date, it was not necessary to perform statistical tests to evaluate the association of clinicopathological variables with prognosis or survival.
Results
-Literature search The process of searching and screening the articles is summarized in Fig. 2. The initial search retrieved a total of 277 articles; of these, 116 articles were excluded due to duplication. Subsequently, two reviewers independently assessed the titles and abstracts, with substantial concordance indicated by a kappa coefficient of k = 0.76, resulting in the exclusion of 146 publications. In the evaluation of the 16 remaining papers, 2 were excluded because they were reviews and 1 was excluded for not meeting the criteria to confirm the diagnosis of POT.
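The inter-rater agreement quoted above can be computed from the two reviewers' paired include/exclude decisions. The sketch below is a hypothetical illustration: the per-article decisions are not published, so the decision lists are invented solely to show how Cohen's kappa is obtained.

```python
# Cohen's kappa for two reviewers' include/exclude screening decisions.
# The decision lists are hypothetical; the study reports only the
# resulting coefficient (k = 0.76), not the raw per-article calls.

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))
    # Observed agreement: fraction of articles with identical decisions.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independent marginal rating frequencies.
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Invented decisions for 10 of the 161 screened records:
a = ["inc", "exc", "exc", "inc", "exc", "exc", "inc", "exc", "exc", "exc"]
b = ["inc", "exc", "inc", "inc", "exc", "exc", "inc", "exc", "exc", "exc"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # -> 0.78 with these invented calls
```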
-Description of the selected studies A total of 13 publications were included, representing 16 reported cases and 3 molecular studies (Table 1). The studies and cases originated from nine countries. The study of Mosqueda-Taylor et al., 2014 included six cases from four different countries (México, Guatemala, Brazil and Spain), and additional cases were reported in the U.S.A., Japan, Egypt, India, México, Brazil and the Philippines (Table 2). -Descriptive analysis of the clinical and radiographic features The age of the patients ranged from 2 to 19 years, with a mean of 11.6. Males were slightly more affected (56.25%), with a male to female ratio of 9:7. All cases occurred in the posterior region of the jaws, mainly in the mandible (87.5%). All patients were asymptomatic, with most presenting swelling in the affected region; in only two cases the tumor was discovered as an incidental radiographic finding (4,8). Radiographically, all cases were associated with at least one unerupted tooth, with the majority presenting as unilocular and well-defined radiolucencies, and approximately one-third of the cases exhibiting biloculated or multiloculated appearances. Tooth displacement and root resorption were commonly found (in 75% and 87.5% of cases, respectively). Most of these cases involved unerupted teeth, particularly the third molar (62.5%), followed by the second deciduous molar. The size of the lesions ranged from 9 to 90 mm; however, most were ≥ 30 mm, with a mean of 41 mm (Table 2, Table 3). All cases, except one, were treated with simple excision/enucleation with extraction of the involved teeth, and to date there are no reports of recurrences (follow-up range: 3 months to 20 years) (Table 2). One case was treated with a partial mandibulectomy due to a misdiagnosis of odontogenic myxoma in the incisional biopsy (11). Macroscopically, POT is a solid, multilobulated, whitish and glossy mass without cystic spaces, surrounded by a capsule or at least well demarcated from the surrounding structures (1,2,9). In some cases, the dental follicle can be identified in the surgical specimen as a dark reddish structure (4,5,9). The demographic, clinicopathological and radiographic features of the 16 POT cases are summarized in Table 2, and the descriptive statistics are summarized in Table 3.
-Histopathological features POT comprises mesenchymal fibromyxoid tissue with variable cellularity that in most cases resembles the dental papilla, surrounded at the periphery by a cuboidal to columnar epithelium similar to the inner epithelium of the enamel organ. Occasionally, suprabasal stellate reticulum-like areas may be observed. The tumor is at least partially enclosed by a thin fibrous capsule (Fig. 3). A condensation of mesenchymal cells in the subepithelial region is also commonly observed (Fig. 3). Invaginations of the surrounding epithelium can be focally present within the mesenchymal component, resulting in ameloblastic fibroma-like islands on tangential sections.
This tumor is devoid of dental hard tissues; nevertheless, four authors have described the presence of small foci of calcifications within the epithelial tissue, specifically the stellate reticulum-like areas. However, no evidence of odontoblastic differentiation or of the induction of dental hard tissue deposition has been described thus far. These intraepithelial calcifications are small round masses of hard material with a globular or concentric appearance (5,8,9,11). -Molecular characterization To date, 151 cancer- and 42 odontogenesis-associated genes have been analyzed in POT by next-generation sequencing, and no mutations were detected. Nevertheless, the expression of the dentinogenesis-associated genes Bglap, Ibsp and Nfic was negative or very weak. DSPP mRNA is highly expressed in POT (12).
Discussion
Although POT is a recently described and accepted entity (1,2), this systematic review showed that the 16 cases reported to date exhibit a well-defined profile of clinicopathologic, radiographic, immunohistochemical and molecular features that can be summarized as follows: this tumor occurs in the first and second decades of life, affecting the posterior region of the jaws, particularly the mandible. Radiographically, POT appears as a well-defined, unilocular radiolucent lesion always associated with unerupted teeth, such as deciduous or third molars, with a mean size of 4.1 cm, showing tooth displacement and frequent root resorption (Table 1, Table 2).
Considering that most lesions appear as unilocular radiolucencies in close relationship with the crown of an unerupted tooth, the main radiographic differential diagnoses include dentigerous cyst and ameloblastic fibroma; however, multilocular lesions with tooth displacement and root resorption could mimic other odontogenic tumors and cysts, such as ameloblastoma, odontogenic myxoma and odontogenic keratocyst (2,4,11). POT is a benign tumor with an indolent course, as there are no reported recurrences after conservative excision (Table 1). Macroscopically, POT is a solid, multilobulated, whitish and glossy mass with no evidence of cystic changes, well demarcated from the surrounding structures. Considering these aspects, it is convenient to rule out the possibility of ameloblastic fibroma in excision specimens and of odontogenic myxofibroma in incisional biopsies (1,2,9,11). Microscopically, POT essentially shows mesenchymal fibromyxoid tissue resembling in large areas the dental papilla, surrounded by a cuboidal to columnar epithelium that resembles the inner epithelium of the enamel organ (Fig. 3) (2,4,5). These features differ from its main histopathological differential diagnoses: ameloblastic fibroma, hyperplastic dental follicle and odontogenic myxoma (1,4). In the latter case, it is mandatory to identify the highly distinctive features of POT, particularly in small incisional biopsies, in order to avoid the misdiagnosis of odontogenic myxoma and subsequent overtreatment, a situation that has been reported previously (11). In a representative biopsy or a complete surgical specimen of POT, it is possible to exclude an odontogenic myxoma, since that tumor is not surrounded by an odontogenic epithelium, and to differentiate it from an ameloblastic fibroma, which presents cords and islands of odontogenic epithelium within the dental papilla-like myxoid stroma (2). Some authors have stated that the areas of ameloblastic epithelium of POT are similar to those of the lining of unicystic ameloblastoma (4); however, POT presents as a tumor mass and not as a cystic lesion. Additionally, the differences in the mesenchymal component (fibrous tissue in ameloblastoma and fibromyxoid tissue in POT) are helpful in confirming the diagnosis (Fig. 3) (2,4). We excluded a recently published POT case report from this study because most of its clinicopathological and radiographic characteristics did not match the profile of POT, such as its location (anterior), lack of association with an unerupted tooth, small size, subepithelial odontoblastic differentiation, and dentinoid production (13). Consequently, we consider that this case represents a developing odontoma, which must be included as another differential diagnosis of POT, mainly in very small lesions in children. In contrast to odontoma, POT does not show production of dental hard tissues or odontoblastic differentiation (Table 2) (3,5,7,14). Initially, the term POT was coined due to the histologic resemblance of the tumor to the appearance of the epithelial and mesenchymal elements during the early stages of tooth development (Fig. 3), when the epithelium is located peripherally, mimicking the relation between the inner epithelium of the immature enamel organ and the primitive dental papilla, without inductive effects on the ectomesenchyme (Fig. 1), but lacking the ability to follow a normal evolution toward histo- and morphodifferentiation (1,12).
Heterogeneity in epithelial differentiation within the same tumor has been documented previously by histopathological analysis, describing features such as localized areas of ameloblastic differentiation (3,4) and focal stratum intermedium-like structures, as well as some superficial stellate reticulum-like areas, which are not necessarily representative of the findings of POT (5,9). This histomorphological variability may reflect tumor heterogeneity caused by, for example, slight differences in the state of differentiation among the reported cases (5,12). Although POT is devoid of dental hard tissues, four cases showed small foci of calcifications within the epithelial layer (5,8,9,11), which may suggest that alkaline phosphatase plays a role in the formation of these calcifications, as occurs in the calcifying epithelial odontogenic tumor (15,16).
Immunohistochemically, POT has shown a low proliferation index (<5%), which defines this lesion as a slow-growing benign tumor in which cell proliferation does not seem to be the main mechanism implicated in tumoral growth and expansion (17). Nonetheless, one study has shown that the condensed subepithelial region exhibits higher vascularization and positivity for Glut-1 and antiapoptotic markers, suggesting a major participation of metabolic, antiapoptotic and angiogenic events in tumor growth (5,17). The immunoexpression patterns described in POT correspond to those found in the normal early stages of tooth development: the condensed mesenchymal subepithelial area seems to be the most proliferative and shows high expression of Syndecan-1 and CD34, which, along with the epithelial expression patterns of CK14, CK18 and CK19, and focal PITX2, as well as the mesenchymal and epithelial staining for Vimentin, resembles the initial period of tooth formation, suggesting that POT originates around the 10th to 20th week of embryonal development (cap stage to late bell stage of the tooth germ) (Fig. 1) (1,5,12,18). In fact, the epithelial portion of the tumor consistently expresses CK14 and CK19, while other markers, such as Vimentin, Amelogenin, Glut-1, MOC-31, Caveolin-1, Galectin-3, PITX2, p53, Bax, Bcl-2, Survivin and PTEN, are variably expressed in focal areas, suggesting a dynamic tissue containing cells in different stages of maturation and exhibiting a transition from early stages of tooth development to those with ameloblastic maturation, but without induction of odontoblastic maturation or production of mineralized tissues (5,17,18). This immunohistochemical profile supports the denomination of "primordial" for this tumor.
Recently, somatic mutations of cancer-associated genes, such as BRAF V600E (19), SMO (20), KRAS (21), PTCH1 (22) and CTNNB1 (23,24), have been reported in some benign odontogenic tumors and cysts. These findings indicate that neoplastic proliferation of some odontogenic tumors may be triggered by genetic alterations affecting oncogenic signaling pathways (25). As odontogenic tumors are derived from the cells of the tooth-forming apparatus and their remnants (2), the evaluation of odontogenesis-associated genes is also important to elucidate the pathogenesis of POT. The absence of immunoexpression of the mutant protein BRAF V600E confirms that this mutation is not implicated in the pathogenesis of POT, excluding it from the category of the BRAF mutated ameloblastic tumors (17).
In the study by Mikami et al. (2018), no somatic mutations were detected when 151 cancer- and 42 odontogenesis-associated genes were examined, and the mRNA expression levels of odontogenesis-associated genes in POT were determined by next-generation sequencing. Nevertheless, the expression of the dentinogenesis-associated genes Bglap, Ibsp and Nfic was negative or very weak, likely due to epigenetic silencing mechanisms, explaining the inhibition of dentin (and consequently, enamel) formation in POT (12). In tooth germ development, DSPP is immunohistochemically positive in preameloblasts and preodontoblasts (26). Although DSPP is immunoexpressed in the epithelium and mesenchyme of POT, and DSPP mRNA is also highly expressed, neither odontoblast differentiation nor the induction of dentin formation is morphologically observed (12,18). In brief, with the data obtained by immunohistochemical and genetic studies, we can conclude the following: POT is a benign, slow-growing odontogenic neoplasm that shows a low proliferation rate and moderate vascularization (17). The epithelial tissue surrounding the tumor mass is non-static and shows varying degrees of maturation, with a transition from an inner enamel epithelium morphology to areas of ameloblastic maturation, but without evidence of induction leading to mineralized tissue production (5,17). The subepithelial area shows the expression of several proteins, suggesting that it is a highly active tumor region (17,18). The pathogenesis of the tumor does not appear to be linked with any type of known gene mutation; however, there is an inhibition of enamel and dentin formation through the downregulated expression of genes and proteins associated with dentinogenesis (12). In summary, in this systematic review, we showed that the 16 cases reported to date exhibit a well-defined profile of clinicopathologic, radiographic, immunohistochemical and molecular features. Despite the indolent clinical course of POT, it is crucial to identify its highly distinctive presentation, mainly in small incisional biopsies, to avoid misdiagnosis and subsequent overtreatment. | 2020-02-12T14:04:17.692Z | 2020-02-10T00:00:00.000 | {
"year": 2020,
"sha1": "97494d923a87146b19c948310173aa2aad9e3159",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4317/medoral.23432",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ac917ff67b589361583d80ca3de64d09c6df7ff2",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266881057 | pes2o/s2orc | v3-fos-license | Solubility and Thermodynamic Parameters of H2S/CH4 in Ionic Liquids Determined by 1H NMR
Natural gas remains an important global source of energy. Usually, sour gas from the well or refinery stream contains H2S among other contaminants, which should be removed to meet permissible standards of use. Despite the use of different gas–liquid sour gas upgrading technologies, ionic liquids (ILs) have been recognized as promising materials for removing H2S from sour gas. However, data concerning the thermodynamic solution functions of H2S in ILs have scarcely been reported in the literature. In this work, solution 1H NMR spectroscopy was employed for quantifying the H2S soluble in [BMIM][Cl] and for gaining a better understanding of the H2S–IL interaction. Experiments were carried out in a Young-Tap NMR tube containing a saturated solution of H2S/CH4/[BMIM][Cl], with spectra recorded from 298 to 333 K. The thermodynamic solution functions, determined from the Van't Hoff equation, showed that the solubility of H2S in [BMIM][Cl] is an exothermic gas–liquid physisorption process (ΔsolH° = −66.13 kJ mol−1) with a negative entropy change (ΔsolS° = −168.19 J K−1 mol−1). The 1H NMR spectra of the H2S/[BMIM][Cl] solution show features of strong solute–solvent interactions. However, the solution enthalpy is a fifth of the H–S bond energy value. Results from 1H NMR spectroscopy also agree with those from the bench dynamic experiments.
INTRODUCTION
Hydrogen sulfide present in natural and refinery sour gas streams must be removed to avoid critical damage to the industrial platform, due to its high corrosivity, and to fulfill standard requirements of safety and environment. 1−3 Chemical absorption in amine solutions is the most widespread industrial process used for the removal of CO2 and H2S from sour gas. In the gas−liquid absorption process, sour gases are contacted with an amine solution to achieve a selective and intensive mass transfer of the CO2 and/or H2S, contained in the natural gas, to the inflow liquid. As the amine solution saturates with acid gas, it is regenerated by taking off the H2S or CO2 in a second, parallel process. 4,5 The amine solution is then continually reused. The gas absorption step requires low temperature and high pressure, while the desorption step requires high temperature and low pressure. 6−9 Although the amine technology is effective for sour gas sweetening, the chemical nature of the amine used as a sorbent accounts for several disadvantages. First, owing to their low chemical stability, the amines undergo chemical degradation to form corrosive byproducts. Second, their high volatility results in an important loss of amine mass during the regeneration step, which is carried out at high temperature, and contacting sour gas with the amine solution results in the transfer of water into the sweet-gas stream. Moreover, this technology is energy-cost-intensive, since the amines are not regenerated under mild conditions. 2,5,8,9 Since ionic liquids (ILs) were proposed as an eco-friendly alternative solvent for many industrial processes used in the sweetening technology of sour natural gas, 10−13 they have raised a lot of expectations due to their exceptional characteristics of non-volatility, negligible vapor pressure, and high chemical and thermal stability. Furthermore, all their physicochemical properties can be tuned by a judicious combination of cations and anions. 14 To design sour gas sweetening processes based on ILs, knowledge of the apparent standard solution enthalpy, entropy, and Gibbs free energy (ΔsolH°, ΔsolS°, and ΔsolG°) of acid gases in the selected liquids is required. These parameters may be readily estimated from solubility data at different temperatures using the Van't Hoff equation. 15 Usually, experimental approaches for determining the solubility of H2S in different liquids are based on gravimetric, electrochemical, semiconducting, thermal, spectroscopic, and optical sensors. Nevertheless, 1H NMR spectroscopy has been used for qualitative and quantitative analysis of H2S in ILs. For instance, 1H NMR has been recognized as a useful and unambiguous tool for (i) quantifying the sulfide-trapping efficiency of 1,3,5-tris(2-hydroxyethyl)-1,3,5-triazinane, 16 (ii) demonstrating the mechanistic aspect of the capture of H2S gas in an organic superbase, DBU, through ionic solid formation (the formation of the salt was also confirmed by 13C NMR analysis), 11 and (iii) monitoring the formation of TrtSH from the reaction between TrtSSH and different nucleophiles, 17 among others.
While the solubility and diffusivity of H2S have been widely studied using different experimental and theoretical approaches, 18−21 to the best of our knowledge, scarce data for solution thermodynamic parameters in ILs have been reported. In this work, we estimate the solution thermodynamic parameters of H2S from a CH4/H2S gas mixture in [BMIM][Cl], using solution 1H NMR spectroscopy techniques. To achieve that, in situ experiments at different temperatures were designed to determine solubility under a quasi-dynamic condition inside the NMR tube. Nevertheless, as the industrial removal of H2S from sour gas is carried out under dynamic conditions, we also evaluate the dynamic selective H2S uptake in ILs using their characteristic breakthrough curves.
2.1. General Considerations.
All manipulations were carried out under an anaerobic atmosphere of nitrogen using standard Schlenk and cannula techniques. The reagents and solvents were purchased from Aldrich, Merck, Acros Organics, Sigma, and Fluka and used as received. Solvents were refluxed over an appropriate drying agent, distilled, and degassed before use.
For the H2S sorption experiments on ILs, a gas mixture of H2S/CH4 (Praxair, 0.1% and 5.0% v/v) and carrier gases for desorption (Praxair, Ar 99.999% and O2 99.999%) were employed.
2.2. Analytical Details.
The 1H and 13C NMR spectra of organic compounds were recorded on a Bruker AVANCE-500 spectrometer, operating at 500 and 125 MHz, respectively, or on an AVANCE-300 spectrometer, operating at 300 and 75 MHz, respectively, at 298 K, using CDCl3 (Sigma) or D2O (Aldrich) as the solvent. Chemical shifts are reported as δ (ppm) relative to the 1H and 13C residues of the deuterated solvents.
2.3. Synthesis of IL.
The ILs were synthesized by slight modifications of a literature procedure, 22 as follows.
2.4. 1H NMR Experiments at Different Temperatures.
H2S solubility was determined by NMR spectroscopy experiments. A sample of IL was added to a Young-Tap NMR tube containing a sealed glass capillary with D2O. Air was evacuated under vacuum and, subsequently, the IL was exposed to a gas mixture of H2S/CH4 (5% v/v) to saturate the internal atmosphere of the NMR tube. The acid gas mixture was flowed at a rate of 30 mL/min for 1 h at 298 K. After that, the Young-Tap NMR tube was hermetically sealed and 1H NMR spectra were recorded at different temperatures from 298 to 333 K, at intervals of 5 K. Before taking each spectrum at a fixed temperature, the system was allowed to reach equilibrium by keeping it under isothermal conditions for a long resting period. Experiments were carried out in quintuplicate to obtain a statistical average of the integration values. The mole fraction of H2S in the IL was determined by comparing the relative intensities of the H2S proton signal and the most acidic proton signal of the imidazolium ring in the same 1H NMR spectrum (Supporting Information, S1). The chemical shifts are reported as δ (in ppm) relative to the 1H residues of HOD; the temperature-dependent chemical shift of the residual solvent peak (HOD) was corrected as stated by Gottlieb et al. and Hoffman 23−25 according to eq 1.
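As a concrete illustration of the quantification step above, the sketch below converts relative signal integrals into an H2S mole fraction. The integral values are placeholders invented for illustration (the measured integrals are in the Supporting Information, S1), and the assumption that both H2S protons contribute to the broad N•••H-SH signal is the editor's, not stated in the text.

```python
# Mole fraction of H2S in the IL from relative 1H NMR integrals
# (Section 2.4). Integral values below are placeholders, not measured
# data; the real integrals are given in the Supporting Information (S1).

def h2s_mole_fraction(integral_h2s, integral_c2h,
                      protons_h2s=2, protons_c2h=1):
    # Moles are proportional to (integral / number of contributing protons).
    n_h2s = integral_h2s / protons_h2s   # assumes both H2S protons contribute
    n_il = integral_c2h / protons_c2h    # single imidazolium C2-H proton
    return n_h2s / (n_h2s + n_il)

# Placeholder integrals, normalized to the imidazolium C2-H proton:
x = h2s_mole_fraction(integral_h2s=0.24, integral_c2h=1.00)
print(f"x(H2S) = {x:.3f}")
```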
2.5. Dynamic H2S Experiments. Dynamic H2S sorption experiments were carried out to evaluate the capacity of ILs for H2S removal. Approximately 0.5 g of IL was added to the bottom of a suitable trap-like glass reactor. A gas mixture of H2S/CH4 with an acid gas concentration of 0.1% v/v was bubbled into the bottom of the liquid bed at a flow rate of 30.0 mL/min. The outlet concentration of H2S was monitored online using an Interscan LD-17 detector. Experiments were performed until the outlet H2S concentration equaled the starting inlet concentration. However, the outlet acid gas was diluted with air to extend the detector's lifetime. All experiments were kept under an isothermal condition of 298 K.

The breakthrough capacity (mg-H2S/g-IL) was calculated by eq 2: 26

$$\mathrm{BC} = \frac{Q \cdot \mathrm{MW}}{w \cdot V_{M}} \int_{0}^{t_{s}} \left(C_{0} - C(t)\right) \times 10^{-6}\,\mathrm{d}t \qquad (2)$$

Equation 2 is readily solved by integrating the corresponding breakthrough curve, obtained by plotting the outlet H2S concentration vs time, with the following set of variables: BC is the breakthrough capacity expressed in mg/g; Q is the total inlet flow rate (m3/s); w is the weight of the IL introduced into the column (g); MW is the molecular weight of H2S (34.0 mg/mmol); VM is the molar volume of H2S (22.4 mL/mmol); C0 is the inlet H2S gas concentration (ppmv); C(t) is the outlet gas concentration (ppmv); and ts is the saturation-exhaustion time (s).
Further details of the BC calculation are given in the Supporting Information (S2).
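A minimal numerical sketch of how eq 2 can be evaluated is given below; the breakthrough curve is a synthetic placeholder shaped to land near the capacities reported later (0.16−0.38 mg/g), and trapezoidal integration is one reasonable choice. The actual calculation details are in the Supporting Information (S2).

```python
import numpy as np

# Numerical evaluation of eq 2 by trapezoidal integration of a
# breakthrough curve. The S-shaped outlet curve below is a synthetic
# placeholder, not measured data (see Supporting Information, S2).

MW = 34.0          # molecular weight of H2S, mg/mmol
VM = 22.4          # molar volume of H2S, mL/mmol
C0 = 1000.0        # inlet H2S concentration, ppmv (0.1% v/v)
Q = 30.0 / 60.0    # inlet flow rate, mL/s (30.0 mL/min)
w = 0.5            # mass of IL in the reactor, g

t = np.linspace(0.0, 1200.0, 601)                # time, s
Ct = C0 / (1.0 + np.exp(-(t - 200.0) / 60.0))    # synthetic outlet curve, ppmv

# Retained H2S: integrate (C0 - C(t)); 1e-6 converts ppmv to volume fraction.
y = (C0 - Ct) * 1.0e-6
retained = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))  # trapezoid rule

bc = Q * MW / (VM * w) * retained   # breakthrough capacity, mg/g
print(f"BC = {bc:.2f} mg/g")        # ~0.3 mg/g with this synthetic curve
```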
2.6. H2S Desorption Experiment. The H2S-loaded IL was bubbled with a stream of gas (Ar or O2) at a flow of 30.0 mL/min at room temperature (298 K) for 10 min. The outlet gas mixture was continuously monitored online. When no further H2S was detected, the sorption experiments were repeated under the same experimental conditions until no detectable difference in H2S removal between consecutive runs was observed.
RESULTS AND DISCUSSION
The ILs 1−4 were prepared by a procedure previously reported in the literature. 22 Analytically pure products were typically obtained in high yield (>90%). All compounds were characterized by microanalysis and by IR, 1H, and 13C{1H} NMR spectroscopy. In all cases, elemental analyses were consistent with the proposed formulation. The IR spectra of 1−4 possessed characteristic bands of the expected functional groups, whereas the 1H and 13C{1H} NMR spectra of 1−4 were consistent with the expected structures.
The 1H NMR spectrum of the saturated H2S−[BMIM][Cl] solution showed the expected resonances for the protons of the cation, with three sets of aromatic resonances and no significant shift in comparison to the pure IL (Figure 1). Solubilization of H2S in [BMIM][Cl] results in a high-frequency (downfield) shift of about 1.0 ppm for the resonances of the imidazolium methylene protons (−NCH2/CH2N−). Contrarily, the resonance of the most acidic proton (−NCHN−) is shifted to low frequency (high field) by about 0.3 ppm. For all resonances, loss of the splitting pattern is observed (Supporting Information, S3).
In addition, a signal assigned to the protons of H2S dissolved in [BMIM][Cl] appears as a broad singlet at high frequency (δ 15 ppm), likely arising from N•••H-SH intermolecular bond formation. It is known that the chemical shift of N−H protons can occur virtually anywhere in 1H NMR spectra, because the δH value depends on various factors such as the chemical environment of the proton in the molecular structure, the solvent nature, temperature, acidity, and intra- or intermolecular hydrogen bonding. Protons interacting with nitrogen atoms may be exchangeable, resulting in broad peaks. Further evidence for the assignment of the N•••H-SH resonance, and for neglecting possible CH4 interactions with the IL, is obtained from the HMBC (heteronuclear multiple bond coherence) experiment, in which no cross-peaks in the downfield region of the spectrum were observed (Supporting Information, S4).
The chemical shifts for the protons of HnSm species in common organic and inorganic solvents are observed at very low frequency, below δ 1 ppm. 27 However, a significant shift to higher frequency has been reported for H−S species in [N2224][DMG], with the resonance of the proton in the N•••H−S bond occurring at δ 6.85 ppm. 28 Other NMR experiments have been carried out and reported elsewhere to determine the solubility of H2S in different ILs. When the H2S is in contact with an IL containing deuterated solvents, it is not clear whether the H2S is reacting with the mixture of the IL and the deuterated solvent, with the solvent, or with the IL alone. To overcome possible interference of the solvent, in this work the deuterated solvent was isolated in a closed glass capillary inserted into the NMR tube.
The temperature dependence of the N•••H-SH proton shift, estimated as the ΔδNH/ΔT value, indicates the efficacy of intermolecular hydrogen bonding. Non-hydrogen-bonded N•••H protons generally show a small temperature dependence of less than 3.0 ppb/K. 29,30 The value of ΔδNH/ΔT found in this work for N•••H-SH was 11.2 ppb/K, much higher than 3.0 ppb/K, indicating strong intermolecular hydrogen bonding between the acid gas and the IL (Supporting Information, S5).
The solubility of H2S in the IL is also highly dependent on temperature. Figure 2 shows the variation of the signal intensity corresponding to N•••H-SH, due to the H2S dissolved in [BMIM][Cl], as a function of temperature. Notably, this signal decreases with increasing temperature, indicating a lower amount of H2S dissolved in the IL at higher temperature due to lower solubility.
To estimate the magnitude of the IL−acid gas interaction, the level of order that takes place in the liquid/gas mixture, and whether the process occurs spontaneously during H2S dissolution in the IL, the solution standard enthalpy (ΔsolH°), the solution standard entropy (ΔsolS°), and the solution standard Gibbs energy (ΔsolG°) were calculated by using the classical thermodynamic Van't Hoff approach described in eqs 3−5, respectively: 31−37

$$\Delta_{\mathrm{sol}}H^{\circ} = -R\left(\frac{\partial \ln x_{\mathrm{H_2S}}}{\partial\left(1/T - 1/T_{\mathrm{hm}}\right)}\right)_{P} \qquad (3)$$

$$\Delta_{\mathrm{sol}}G^{\circ} = -R\,T_{\mathrm{hm}} \cdot \mathrm{intercept} \qquad (4)$$

$$\Delta_{\mathrm{sol}}S^{\circ} = \frac{\Delta_{\mathrm{sol}}H^{\circ} - \Delta_{\mathrm{sol}}G^{\circ}}{T_{\mathrm{hm}}} \qquad (5)$$

where x_H2S is the mole fraction solubility, R is the universal gas constant, T is the temperature, T_hm is the mean harmonic temperature, and the intercept is that of the ln x_H2S vs (1/T − 1/T_hm) plot (the use of the mean harmonic temperature in eqs 3−5 is advantageous for enthalpy−entropy compensation and separates the chemical from the statistical effects).
A typical Van't Hoff plot of the H2S solubility in [BMIM][Cl] from 298 to 333 K is displayed in Figure 3. As can be seen, the curve is essentially a straight line, with a slight deviation from linearity at higher temperatures and an acceptable determination coefficient of 0.954.
The curve is best described by eq 6, the fitted straight line; from its slope, ΔsolH° was readily estimated according to eq 3.
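To make the fitting procedure concrete, the sketch below evaluates eqs 3−5 from temperature−solubility pairs. The mole fractions are placeholders chosen only to reproduce an exothermic trend with ΔsolH° near −66 kJ/mol; they are not the measured data behind Figure 3 and Table 1, and the sign of ΔsolG° obtained this way depends on the equilibrium-constant convention used.

```python
import numpy as np

# Van't Hoff analysis per eqs 3-5 using the mean harmonic temperature.
# The mole fractions are placeholders chosen to give dH ~ -66 kJ/mol;
# they are not the measured solubilities behind Figure 3 / Table 1.

R = 8.314  # J K^-1 mol^-1
T = np.array([298.0, 303.0, 308.0, 313.0, 318.0, 323.0, 328.0, 333.0])   # K
x = np.array([0.109, 0.070, 0.046, 0.030, 0.020, 0.0138, 0.0095, 0.0066])

T_hm = len(T) / np.sum(1.0 / T)                  # mean harmonic temperature, ~315 K
u = 1.0 / T - 1.0 / T_hm                         # abscissa of the modified plot
slope, intercept = np.polyfit(u, np.log(x), 1)   # straight-line fit (the role of eq 6)

dH = -R * slope                # eq 3
dG = -R * T_hm * intercept     # eq 4
dS = (dH - dG) / T_hm          # eq 5
print(f"T_hm = {T_hm:.1f} K")
print(f"dH = {dH / 1000:.1f} kJ/mol, dG = {dG / 1000:.1f} kJ/mol, "
      f"dS = {dS:.0f} J/(K mol)")
```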
Values of the Gibbs energy, enthalpy, and entropy of the acid gas dissolution process in the IL are collected in Table 1, together with the relative contributions of the thermodynamic parameters to the solution process, ζH and ζTS. As can be seen, H2S dissolution in [BMIM][Cl] is a spontaneous process (ΔsolG° < 0), and the enthalpy and entropy contributions to the gas dissolution are only slightly different. Following the spontaneous H2S sorption in [BMIM][Cl] and its ready thermal desorption, we stepped forward to investigate the H2S breakthrough capacity of four different ILs under bench dynamic conditions. Data from dynamic sorption capacity experiments, expressed as the mass fraction of H2S retained in the liquid phase, are useful for setting parameters for the scale-up process when a dimensional approach is used.
The H2S dynamic sorption profiles as a function of time are shown in Figure 4. For all the ILs, the breakthrough curves present a typical S-shaped line. Initially, the IL selectively retains the H2S from the gas mixture, and the evolved gas is essentially free of acid gas, resulting in a steady zone of the curve near zero. As the gas mixture bubbles through the liquid phase, a homogeneous mass transfer zone is established, resulting in an efficient H2S-retaining liquid bed. A breaking point, the breakthrough, is reached when a fast change in the slope occurs. After that, the curve increases sharply to a saturation level where the outlet H2S concentration equals the inlet concentration (C0,H2S) at the equilibration time, the time at which the entire bed is in equilibrium with the feed.
The difference between breakthrough and equilibration times depends on the mass transfer rate. In Figure 4, it can be seen that [BMIM][Cl] has a higher mass transfer rate; its breakthrough curve is almost a vertical line. Meanwhile, the curves for [BPy][Cl] and [BPy][Br] show intermediate mass transfer rates between those of [BMIM][Cl] and [BMIM][Br]. In this case, the increase in H2S concentration in the effluent after the initial breakthrough is steeper because these ILs perform more efficiently than [BMIM][Cl]. On the other hand, the breakthrough curve for [BMIM][Br] approaches a symmetrical S-shape, which would be the ideal case for the sorption process, with a flat mass transfer front. These features result in the most efficient system for H2S removal.
The saturation time increases across the four ILs, ending with [BMIM][Br], with breakthrough capacities ranging between 0.16 and 0.38 mg/g.
Ideally, for an upscaled industrial process, a system that generates a breakthrough curve with a symmetrical S-shape and a flat mass transfer front is desirable. The breakthrough time for this ideal process, which occurs at the midpoint of the real S-shaped breakthrough curve, is known as the stoichiometric time; its variables allow practical and operational parameters for an industrial process to be determined.
The difference in the IL breakthrough capacities is rather interesting. For the ILs based on BPy, the effect of the anion on the sorption capacity is not apparent, whereas for those based on BMIM, the anion seems to largely determine the sorption capacity. These results are most likely due to the effect of cation−anion association in each case, which would determine the H2S−solvent interaction.
The enthalpy value of the solution process suggests that [BMIM][Br] can be efficiently regenerated at a low energy cost (atmospheric pressure and room temperature) and further reused for the same purpose. The recyclability of the ILs was studied by using consecutive H2S sorption−desorption cycles. While the sorption experiments consist of bubbling the H2S/CH4 gas mixture into the IL until saturation, desorption consists of bubbling air or argon into the liquid at 298 K for some time, usually 10 min. Figure 5 summarizes the efficiency (%) and total removed H2S vs cycle number. From Figure 5, it is clear that [BMIM][Br] could be reused 10 times while losing less than 9% of its H2S removal efficiency, at a rate of less than 1.0% per cycle. From the linear decay tendency of the H2S removal efficiency, the mass of H2S removed over successive sorption−desorption cycles, until the IL reaches 50% of its initial efficiency, amounts to 16.20 mg/g, which is equivalent to a loading (solubility) of 0.11 mol/mol at 298 K and 1.0 bar.
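The quoted molar loading follows from simple stoichiometry, as the short check below shows; the molar mass of [BMIM][Br] (about 219.12 g/mol) is supplied here as an editorial assumption rather than taken from the text, and the result (~0.10 mol/mol) is consistent with the quoted 0.11 mol/mol within rounding.

```python
# Consistency check on the quoted cumulative removal: convert
# 16.20 mg H2S per g of IL into a molar loading. The molar mass of
# [BMIM][Br] (~219.12 g/mol, C8H15BrN2) is an editorial assumption;
# it is not stated in the text.

M_H2S = 34.08    # g/mol
M_IL = 219.12    # g/mol, assumed molar mass of [BMIM][Br]

mol_h2s_per_g = (16.20 / 1000.0) / M_H2S   # mol H2S per gram of IL
mol_il_per_g = 1.0 / M_IL                  # mol IL per gram of IL
loading = mol_h2s_per_g / mol_il_per_g
print(f"loading = {loading:.2f} mol H2S / mol IL")  # ~0.10, near the quoted 0.11
```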
Although the total H2S removal capacity of [BMIM][Br] is in the range of the alkanolamine and mixed alkanolamine solution technologies (0.1−1.0 mol/mol), 38 it is rather lower than that of task-specific ILs for acid gas absorption. For instance, [N2224]2[maleate] has shown an H2S solubility of 1.43 mol/mol, 38 although direct comparison is complicated by the different experimental settings and approaches used. Given the fact that [BMIM][Br] should be considered a physical absorbent, it is unlikely to replace alkanolamines for acid gas removal.
CONCLUSIONS
In this work, 1H NMR spectroscopy was employed to determine the solubility of H2S in [BMIM][Cl] at different temperatures. Experiments were carefully designed to eliminate the possibility of H2S−deuterated solvent interactions. From the results, the thermodynamic solution parameters of H2S in [BMIM][Cl] were estimated, indicating that the dissolution of H2S in [BMIM][Cl] is an exothermic gas−liquid physisorption with an enthalpy value of −66.15 kJ mol−1. The 1H NMR spectra of the H2S dissolved in [BMIM][Cl] showed a large chemical shift of the H−S signal to high frequency, in good concordance with the negative enthalpy value. These results indicate strong solute−solvent interactions, most probably as a result of a strong S−H association with the IL anion. Partial immobilization of the H2S molecules in the ionic solvent also accounts for the lower local entropy. Although a strong interaction between H2S and the IL is evident from the 1H NMR experiments, no H−S bond activation is achieved, since the magnitude of the enthalpy is significantly lower than the H−S bond energy.
Results from dynamic experiments using representative operating conditions of sour gas "sweetening" are in close agreement with the NMR results. Solubility seems to be conditioned by the nature of the anion. Regenerability of the IL is readily achieved by bubbling it with a stream of inert gas at 298 K. Over 50 cycles of gas sorption−desorption, [BMIM][Br] removes up to 16.20 mg/g of H2S. Since ILs have the advantages over amine solution technology of low volatility and high chemical stability, they remain a promising alternative for upgrading sour gas.
Although the dynamic approach is rather useful for determining the amount of H2S retained in the IL bed in a realistic experimental setting, the results demonstrate that 1H NMR experiments are precise, accurate, robust, and useful both for the quantitative determination of the acid gas dissolved in the IL and for gaining a better understanding of the interaction between H2S and the IL.
Further details on the calculation of the mole fraction of H2S in the ionic liquids, the harmonic mean temperature, the breakthrough capacity, and the temperature dependence of the N•••H-SH chemical shift are given in the Supporting Information.
Figure 4. Breakthrough curves of H2S uptake for different ILs at 298 K. | 2024-01-10T16:14:45.141Z | 2024-01-08T00:00:00.000 | {
"year": 2024,
"sha1": "0f86159bd99011cd18751be41022f3870f675747",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.3c07594",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a8bb31e660c29cb146ef5aa7f44e54af876d821a",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5788528 | pes2o/s2orc | v3-fos-license | Acute thoracic aortic dissection presenting as sore throat: report of a case.
Acute dissection of the aorta can be one of the most dramatic cardiovascular emergencies. Its symptoms can occur abruptly and progress rapidly, so prompt recognition and appropriate intervention are crucial. However, not all aortic dissections present with the classic symptoms of abrupt chest, back, or abdominal pain, and the diagnosis may be missed. Aortic dissection presenting as a sore throat is quite unusual. A 53-year-old man presented with sore throat as the early symptom of an acute thoracic aortic dissection. Unfortunately, the diagnosis was delayed, and the patient died. Given the high morbidity and mortality after delayed recognition or misdiagnosis, aortic dissection should be considered in the differential diagnosis of a patient presenting with sore throat and normal findings of the neck and throat, even when there are no classic symptoms.
INTRODUCTION
Acute thoracic aortic dissection, one of the most common and serious diseases of the aorta, carries high morbidity and mortality rates when it is not recognized and treated promptly. The mortality of untreated aortic dissection may be as high as 1 percent within the first hour, and 40 to 50 percent of patients die within 48 hours [1,2]. For those fortunate enough to survive the initial 48 hours, the disease was long thought to carry a 90 percent one-year mortality rate [1,2]. Since the introduction of modern treatment regimens, the fatality rate has declined dramatically. Patients with proximal ascending dissections who rapidly undergo surgery in experienced tertiary centers have a 30-day survival rate of 80 to 85 percent and a 10-year survival of 55 percent [3]. Realization of the dramatic benefits of early intervention depends upon rapid establishment of the diagnosis of aortic dissection.
Aortic dissection may not always present with the classic symptoms of abrupt chest, back, or abdominal pain that suggest an acute cardiovascular event. By understanding the pathophysiology of aortic dissection, the clinician may better understand the relationship between the dissection process and the resulting symptomatology. We present a case of acute thoracic aortic dissection whose main feature was sore throat.
CASE REPORT
A 53-year-old man came to the emergency room of China Medical University, Taichung, Taiwan, in the early morning with a few hours' history of sore throat that began after betel nut chewing the previous night. The discomfort was first noted as a foreign body sensation in the throat while he was resting in bed. The pain worsened rapidly, prompting him to seek medical evaluation.
He had habits of betel nut chewing and cigarette smoking (one pack per day) for more than 30 years. His medical history included gastric ulcer, treated with a proton pump inhibitor and complete Helicobacter pylori eradication therapy, and left renal calculi. He had no predisposing factors for aortic dissection, such as hypertension, Marfan syndrome, bicuspid aortic valve, or a history of cardiac surgery.
He had no history of trauma, cough, rhinorrhea, dyspnea, dysphonia, fevers, chills, or chest or back pain. On arrival, the patient's temperature was 36.8°C, his blood pressure was 134/86 mmHg, his pulse was 78 beats per minute, and his respiratory rate was 18 breaths per minute.
On examination, he was conscious but uncomfortable. There was no tenderness or palpable mass in the neck and no enlargement of the thyroid gland. Breath sounds were clear, and the heart beat was regular with no murmur. The abdomen was soft and flat, with no tenderness or palpable mass. There was no foreign body, wound, or any other abnormal finding in the throat on laryngoscopy. Radiography of the neck soft tissue showed no notable abnormality. Because there was no obvious abnormal finding of the neck or throat, the patient was given an analgesic (ketorolac 30 mg by intramuscular injection) and observed in the emergency room. After a one-hour observation, the patient felt better and was discharged.
Ten hours later, the patient was brought to the emergency room again due to aggravation of the sore throat, severe chest pain, diaphoresis, and syncope at home. On examination, he was drowsy, and his face was pale and sweaty. Blood pressure was 78/34 mmHg, pulse rate was 127 beats per minute, and respiratory rate was 26 breaths per minute. Jugular vein distension was noted. There was no obvious difference in pulse rate or blood pressure between the four extremities. Breath sounds were clear, and heart beats were regular without murmur. The abdomen was soft and non-tender to palpation, with no mass. An endotracheal tube was inserted, and mechanical ventilation support and fluid resuscitation were given immediately.
The electrocardiogram (ECG) showed sinus rhythm with no abnormal changes. Creatine phosphokinase and its myocardial band fraction, troponin I, and other laboratory data were within normal limits. Chest radiography showed widening of the mediastinum, deviation of the trachea to the right, depression of the left main stem bronchus, obliteration of the aortic knob, enlargement of the heart, and left pleural effusion (Figure 1). Based on the symptoms, signs, laboratory data, and chest radiography, aortic dissection was highly suspected.
Under the impression of aortic dissection, we requested computed tomography (CT) of the chest and abdomen (from the level of the fourth cervical vertebra to the pelvis). This was interpreted as showing intimal flaps in the ascending aorta with extension to the proximal arch, hemopericardium, and hemothorax (Figures 2A and 2B). The diagnosis was Stanford type A aortic dissection with cardiac tamponade and hemothorax. An emergency operation was performed, but the patient died during the operation.
DISCUSSION
Dissection of the aorta begins with a tear in the intimal layer. This tear permits blood to enter the aortic wall, creating an intramural hematoma that progresses distally in the aorta. A common site for the initiation of an intimal tear is the proximal portion of the ascending aorta, due to the thrust of blood ejected from the left ventricle. Upon intimal disruption, blood enters the media, permitting dissection. Medial abnormalities are a result of atherosclerosis, cystic medial necrosis, and systemic hypertension. Hypertension is considered the most significant contributing factor in the pathogenesis of aortic dissection [4].
Aortic dissections have been classified by two separate systems. DeBakey classifies aortic dissection by the extent of the dissecting process and its anatomic location [5]. There are three classifications: Type I involves the ascending aorta and the remaining distal portions of the aorta. When the dissection is limited to the ascending aorta, it is classified as Type II; Type II usually has a transverse intimal tear anteriorly, just above the aortic valve, with separation of the intramural layers terminating proximal to the innominate artery. Type III arises distal to the left subclavian artery and extends distally. The Stanford classification is based on the presence or absence of involvement of the ascending aorta: Type A includes the ascending aorta, whereas Type B does not [6]. Lesions of the ascending aorta and arch (Type A) have an unfavorable outcome and usually require surgical intervention, while Type B lesions may be amenable to medical management with antihypertensives [7]. This patient had a DeBakey Type II (Stanford Type A) aortic dissection.
Because dissection of the aorta has high morbidity and mortality, its protean symptoms must be appreciated. The symptoms may be secondary to compression of the surrounding nerves or to involvement of branch vessels or adjacent structures by the expanding aneurysm. The symptoms and signs may include abrupt chest pain, abdominal pain, back pain, acute cerebral infarction, myocardial infarction, spinal cord ischemia, intra-abdominal disorders, peripheral arterial occlusive disease, and so on [1,8-15].
Patients presenting to emergency rooms with sore throat are very common. The common differential diagnoses of sore throat include pharyngitis, tonsillitis, epiglottitis, peritonsillar abscess, retropharyngeal abscess, mild trauma, and foreign bodies in the throat. This patient had a few hours of sore throat after betel nut chewing, and up to 76 percent of patients with a betel nut chewing habit experience burning sensation, sore throat, and xerostomia [16]. Because there was no abnormal finding on physical examination, radiography of the neck soft tissue, or laryngoscopy, the patient was discharged; pharyngitis or mild pharyngeal injury from mis-swallowing a betel nut was suspected at that time. However, these conditions, if present, should produce local abnormal findings. We made a serious mistake by settling on a hasty diagnosis.
Neck discomfort may be a symptom of unstable angina and myocardial infarction; patients with myocardial ischemia suffer from neck discomfort in about 13 to 31 percent of cases [17]. Notwithstanding that the final diagnosis in this case was aortic dissection, myocardial ischemia should have been considered initially. The ECG and cardiac enzymes were not examined at the first visit.
The aortic dissection was initially manifested only by sore throat. Because the patient did not have chest or back pain on arrival at the emergency room, the aortic dissection was not recognized then. After severe chest pain and hypotension developed and the chest radiography findings emerged, aortic dissection was diagnosed. To our knowledge, aortic dissection presenting as sore throat has not been previously reported. In this patient, there was no involvement of the head and neck vessels on CT. The pain was most likely secondary to compression of the surrounding nerves or involvement of adjacent structures by the expanding aneurysm of the ascending aorta close to the throat.
Physicians depend upon the clinical history and examination to determine which patients require further study. Various tools can help physicians diagnose aortic dissection. Plain chest radiographs are quickly available in the emergency room; the sensitivity of chest radiography is 64 percent, and the specificity is 86 percent [18]. Transthoracic echocardiography (TTE) is limited in its ability to examine the descending thoracic aorta, and its sensitivity and specificity for the diagnosis of aortic dissection range from 77 to 80 percent and 93 to 96 percent, respectively [19,20]. Erbel et al. [21] evaluated the usefulness of transesophageal echocardiography (TEE) in the assessment of aortic dissection; the sensitivity and specificity of TEE were 99 percent and 98 percent, respectively.
Aortography remains the most definitive tool for confirming this disease [2]. A number of investigators evaluating the effectiveness of contrast-enhanced CT scanning in diagnosing aortic dissection have demonstrated a sensitivity of 83 to 100 percent and a specificity of 90 to 100 percent [22-24]. CT can therefore be utilized in the diagnosis of aortic dissection. However, the full extent of the dissection is likely to be underestimated, since even scanning at multiple levels is unlikely to demonstrate extension into the carotid artery. Early studies of the usefulness of magnetic resonance imaging in the diagnosis of aortic dissection were encouraging, with a sensitivity of 90 to 100 percent and a specificity of 100 percent [25,26]. If there is any suspicion of aortic dissection, these methods should be considered and may be useful.
While advanced imaging techniques can confirm the diagnosis of thoracic aortic dissection, it is obviously inefficient, uneconomic, and unrealistic to image every patient. Indiscriminate use of diagnostic imaging in poorly chosen patients with a very low pretest probability of dissection has been predicted to yield up to an 85 percent rate of false-positive results, depending on the imaging modality chosen [27]. An extensive investigation for aortic dissection would not be practical in all cases of sore throat.
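This prediction follows directly from Bayes' theorem, as the sketch below illustrates; the 1 percent pretest probability is an assumed figure for a low-suspicion population (not from the text), while the sensitivity/specificity pairs are the chest radiography and TEE values quoted above.

```python
# Post-test probability from Bayes' theorem: at very low pretest
# probability, most positive results are false positives. The 1% pretest
# probability is an assumed illustration, not a figure from the text.

def positive_predictive_value(sens, spec, pretest):
    true_pos = sens * pretest
    false_pos = (1.0 - spec) * (1.0 - pretest)
    return true_pos / (true_pos + false_pos)

pretest = 0.01  # assumed low-suspicion pretest probability
for name, sens, spec in [("chest radiography", 0.64, 0.86),
                         ("TEE", 0.99, 0.98)]:
    ppv = positive_predictive_value(sens, spec, pretest)
    print(f"{name}: PPV = {ppv:.2f}, false-positive fraction = {1 - ppv:.2f}")
```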
In conclusion, aortic dissection presenting as sore throat is very rare, but prompt recognition and expeditious surgical treatment may increase the rate of survival in this catastrophic condition. Given the high morbidity and mortality after delayed recognition or misdiagnosis, aortic dissection should be considered in the differential diagnosis of a patient presenting with sore throat and no abnormal findings of the neck, throat, ECG, or cardiac enzymes, even when there is no history of chest pain. Because treatment is relatively simple and effective if instituted in time, emphasis should be placed on early diagnosis. | 2014-10-01T00:00:00.000Z | 2004-05-01T00:00:00.000 | {
"year": 2004,
"sha1": "07a9606fc030c1c41cacf23dc6e9d4e3e185b9e9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "07a9606fc030c1c41cacf23dc6e9d4e3e185b9e9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257852730 | pes2o/s2orc | v3-fos-license | Automatic Classification of Histopathology Images across Multiple Cancers Based on Heterogeneous Transfer Learning
Background: Current artificial intelligence (AI) in histopathology typically specializes in a single task, resulting in a heavy workload of collecting and labeling a sufficient number of images for each type of cancer. Heterogeneous transfer learning (HTL) is expected to alleviate the data bottlenecks and establish models with performance comparable to supervised learning (SL). Methods: An accurate source domain model was trained using 28,634 colorectal patches. Additionally, 1000 sentinel lymph node patches and 1008 breast patches were used to train two target domain models. The feature distribution difference between sentinel lymph node metastasis or breast cancer and CRC was reduced by heterogeneous domain adaptation, and the maximum mean discrepancy between subdomains was used for knowledge transfer to achieve accurate classification across multiple cancers. Results: HTL on 1000 sentinel lymph node patches (L-HTL-1000) outperforms SL on 1000 sentinel lymph node patches (L-SL-1-1000) (average area under the curve (AUC) and standard deviation of L-HTL-1000 vs. L-SL-1-1000: 0.949 ± 0.004 vs. 0.931 ± 0.008, p value = 0.008). There is no significant difference between L-HTL-1000 and SL on 7104 patches (L-SL-2-7104) (0.949 ± 0.004 vs. 0.948 ± 0.008, p value = 0.742). Similar results are observed for breast cancer. B-HTL-1008 vs. B-SL-1-1008: 0.962 ± 0.017 vs. 0.943 ± 0.018, p value = 0.008; B-HTL-1008 vs. B-SL-2-5232: 0.962 ± 0.017 vs. 0.951 ± 0.023, p value = 0.148. Conclusions: HTL is capable of building accurate AI models for similar cancers using a small amount of data, based on a large dataset for a certain type of cancer. HTL holds great promise for accelerating the development of AI in histopathology.
Introduction
Cancer is a leading cause of death worldwide, with common types including colorectal cancer (CRC), breast cancer, and others. In 2020, the global fatality rates for CRC and breast cancer were 9.4% and 6.9%, respectively [1]. Histopathology is an accurate method for diagnosing cancer [2], but it requires specialized knowledge and clinical experience from pathologists. Unfortunately, there is a shortage of pathologists worldwide, with the number of active pathologists decreasing by 17.53% in the United States from 2007 to 2017 [3]. In low-income countries, such as those in sub-Saharan Africa, there are fewer than one pathologist per 500,000 people [4].
The use of artificial intelligence in histopathology (HAI) has the potential to address the aforementioned limitations, and improve the accuracy and efficiency of diagnosis [5]. For instance, Wang et al. developed an innovative automated AI approach for CRC diagnosis, which achieved a testing accuracy of 98.11% [6]. Kanavati et al. also trained a convolutional neural network based on the EfficientNet-B3 architecture to differentiate between lung carcinoma and non-neoplastic tissues, achieving highly promising results [7]. These achievements have been made possible by leveraging deep learning methods, which require massive amounts of data collection and annotation. For instance, ref. [6] gathered 14,680 whole slide images (WSIs) from 9631 subjects and labeled 170,099 patches. Similarly, ref. [7] utilized a dataset of 3704 WSIs acquired from Kyushu Medical Centre for training and validation purposes. Furthermore, data preparation must be repeated for each cancer, resulting in an extremely heavy workload and becoming a bottleneck for HAI.
Recently, significant progress has been made in reducing the number of annotations [8], including semi-supervised [9][10][11] and unsupervised learning [12][13][14]. However, despite these advancements, large amounts of unlabeled images are still needed [15]. The use of generative adversarial networks for data generation has shown promise in decreasing the amount of annotation and data collection required [16][17][18]. However, the generated data is often limited by the existing data distribution [19], which can lead to the generation of incorrect or misleading data that can negatively impact the training of the model [19]. Additionally, most studies focused on a single type of cancer [5,20], necessitating repetitive data preparation for each new type of cancer.
In fact, some cancer cells from different types of cancer share similar characteristics and features, such as large nuclei and strong adhesion among cells [21], indicating the potential for building AI models across multiple cancer types. Heterogeneous transfer learning (HTL) [22] is a method that transfers these similar features between different distributed datasets and has been widely applied in natural images and some medical images, such as CT images [23] and MR images [24]. However, its effectiveness in histopathology images has not yet been proven.
We discuss here three cancers, including CRC, breast cancer, and sentinel lymph node metastasis, all originating from glandular epithelium and falling under the category of adenocarcinoma. These cancers display similar tissue morphology and structure, such as the shape of cancer nests, the morphology of single cancer cells, and overlapping molecular phenotypes. Furthermore, the interstitium of these carcinomas also shares similarities [21].
An HTL framework is proposed in this study. The framework extracts general features of cancer cells from NCT-CRC-HE-100K, a large CRC dataset [25], and transfers them to the classification task of sentinel lymph node metastasis and breast cancer. The framework only uses a small number of labeled images [26,27] for training across multiple cancers and demonstrates that a robust model can be obtained by incorporating features from CRC. The main contributions of this study can be summarized as follows: (1) We demonstrate that features extracted from CRC can aid in the learning of lymph node metastasis and breast cancer, potentially reducing the amount of data needed for these cancer types; (2) The presented HTL method demonstrates generalizability across different types of cancers and has the potential to accelerate the development of HAI.
Datasets
We utilized three different datasets comprising three types of cancers, namely NCT-CRC-HE-100K [25], Camelyon16 [26], and BreaKHis [27]. NCT-CRC-HE-100K is a large dataset containing 100,000 non-overlapping patches of size 224 × 224, derived from 86 Hematoxylin-eosin (H&E) stained WSIs of CRC. Out of the 100,000 patches, 14,317 are malignant. More detailed information and sample images of the three datasets can be found in Table 1 and Figure 1, respectively.
Data Preprocessing Pipeline
We utilized all 14,317 malignant patches and randomly selected 14,317 benign patches from the NCT-CRC-HE-100K dataset to construct a balanced dataset (Dataset-CRC). In this study, Dataset-CRC serves as the source domain dataset, and all of its samples were used as the training set for the source-domain CRC model.
The Camelyon16 dataset has fixed WSIs for training and testing. In the benign WSIs, all tissue regions are cut into non-overlapping 300 × 300 patches, while in malignant WSIs, only malignant tumor tissue regions are used to extract the patches. To avoid extracting excessive redundant patches and to balance the number of malignant and benign patches, we randomly select 40 patches from each malignant WSI and 28 patches from each benign WSI in the training set. Furthermore, the patches are divided into a training set and a validation set based on an 8:2 ratio of the WSIs. Moreover, we used all 54,105 malignant patches and 54,014 randomly selected benign patches from the test set to evaluate the performance of the model. These patches were used to create the Dataset-SLN.
Non-overlapping patches are extracted from 2013 images of 82 patients in the BreaKHis dataset, resulting in 3738 benign patches and 8340 malignant patches. From the 8340 malignant patches, 3738 patches are randomly selected and combined with all 3738 benign patches to form Dataset-BRE. These patches are then divided into training, validation, and test sets at a ratio of 7:1:2, ensuring that patches from the same patient do not appear in multiple sets. The three preprocessed datasets are shown in Table 2.
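Such a patient-level split can be sketched, for example, with scikit-learn's GroupShuffleSplit; the tooling and all names below are illustrative assumptions rather than the implementation actually used:

```python
from sklearn.model_selection import GroupShuffleSplit

def patient_level_split(patches, labels, patient_ids, test_size=0.2, seed=0):
    """Return train/test indices such that no patient spans both subsets."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_idx, test_idx = next(splitter.split(patches, labels, groups=patient_ids))
    return train_idx, test_idx
```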
HTL Framework
The HTL framework proposed in this study comprises two modules, namely the source domain model and the target domain model, both of which utilize Resnet50 [28]. Each module includes a feature extractor and two fully connected layers (FCs). The feature extractor is composed of several bottleneck residual blocks that output a 2048-dimensional feature vector. The FCs convert the feature vector into categories, starting with 2048 dimensions, reducing it to 256 dimensions, and finally classifying it into two categories, benign or malignant.
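As a minimal PyTorch sketch, such a module could be expressed as follows; the class name and weight-initialization choice are illustrative assumptions, and only the 2048 → 256 → 2 structure is taken from the description above:

```python
import torch
import torch.nn as nn
from torchvision import models

class DomainModel(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # Keep all bottleneck residual blocks; drop the original classifier head.
        self.feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
        self.fc1 = nn.Linear(2048, 256)         # 2048-d feature -> 256-d vector
        self.fc2 = nn.Linear(256, num_classes)  # 256-d vector -> benign/malignant

    def forward(self, x):
        feat = self.feature_extractor(x).flatten(1)  # (B, 2048)
        vec = torch.relu(self.fc1(feat))             # (B, 256), used for the HTL loss
        return vec, self.fc2(vec)                    # feature vector and logits
```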
As illustrated in Figure 2, the source domain model has been trained end-to-end using Dataset-CRC to extract general features of CRC. A target domain model is developed for each of the other cancers. The input images of both models undergo conventional image augmentation techniques such as resizing, random horizontal flipping, random cropping, and normalization [29]. Patches of CRC are fed into the trained source domain model to obtain 256-dimensional features, while patches of breast or sentinel lymph node are input into the target domain model to obtain predicted labels and 256-dimensional feature vectors. The HTL loss, computed using an improved Maximum Mean Discrepancy (MMD) method [22], aligns the features across cancers based on the 256-dimensional vectors from CRC and breast or sentinel lymph node. Moreover, the supervised loss guides the output of the target domain model to be consistent with the labels of breast or sentinel lymph node.
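The conventional augmentations mentioned above might be composed with torchvision as in the following sketch; the crop size and normalization statistics are assumptions, since the exact values are not reported here:

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomCrop(224),
    transforms.ToTensor(),
    # ImageNet statistics, an assumption consistent with ImageNet initialization.
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```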
Cross-Cancer Domain Adaptation Using HTL Operation
The traditional MMD performs global alignment between the source and target domains without considering the distributions of the different categories within each domain. This may not effectively transfer the differences between the benign (normal tissue) and malignant (cancerous tissue) categories [30,31]. Since the features of the benign and malignant categories are distinctly different, global alignment may cause confusion between them, resulting in an incorrect HTL operation. Our proposed HTL operation across cancers instead aligns the distributions of subdomains (i.e., categories) to perform effective feature transfer, reducing the feature distribution differences between CRC and sentinel lymph node metastasis or breast cancer, as depicted in Figure 3. The HTL loss is calculated using the improved MMD [22], which is defined as

$$\mathrm{Loss}_h=\frac{1}{2}\sum_{c\in\{0,1\}}\Bigl\|\sum_{j=1}^{n}w_{j}^{s,c}\,\varphi(S_j)-\sum_{i=1}^{m}w_{i}^{t,c}\,\varphi(T_i)\Bigr\|_{\mathcal{H}}^{2}\qquad(1)$$

where c represents the benign or malignant category, s and t indicate the source and target domains, respectively, n and m are the numbers of samples in a batch of the source and target domains, H denotes a Hilbert space, φ is a mapping function that transforms features from Euclidean space into the Hilbert space, and S_j and T_i are the 256-dimensional feature vectors representing CRC and the target domain (either sentinel lymph node metastasis or breast cancer), respectively. The weights w_j^{s,c} and w_i^{t,c} of category c for S_j and T_i are calculated as

$$w_{i}^{c}=\frac{y_{ic}}{\sum_{j}y_{jc}}\qquad(2)$$

For the source domain, the one-hot vector y_i is derived from the actual label of CRC, which takes a value of 0 (benign) or 1 (malignant). For the target domain, y_i refers to the predicted class probability for sentinel lymph node metastasis or breast cancer, generated by the target domain model.
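A simplified sketch of this class-weighted (subdomain) MMD, substituting a single Gaussian kernel for the improved multi-kernel MMD of [22], might look as follows in PyTorch; the bandwidth handling and all names are illustrative assumptions:

```python
import torch

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian kernel matrix between the rows of a and b.
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def subdomain_mmd(src_feat, src_onehot, tgt_feat, tgt_prob, sigma=1.0):
    """Weighted MMD between the per-class subdomains of source and target.

    src_onehot: one-hot CRC labels (float); tgt_prob: predicted class
    probabilities from the target domain model, as in Equations (1)-(2).
    """
    loss = 0.0
    for c in range(src_onehot.shape[1]):  # c = 0 (benign), 1 (malignant)
        ws = src_onehot[:, c] / src_onehot[:, c].sum().clamp(min=1e-8)
        wt = tgt_prob[:, c] / tgt_prob[:, c].sum().clamp(min=1e-8)
        loss = loss + (
            ws @ gaussian_kernel(src_feat, src_feat, sigma) @ ws
            + wt @ gaussian_kernel(tgt_feat, tgt_feat, sigma) @ wt
            - 2 * ws @ gaussian_kernel(src_feat, tgt_feat, sigma) @ wt
        )
    return loss / src_onehot.shape[1]
```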
The SL loss is used for supervised learning of sentinel lymph node metastasis or breast cancer. It is obtained by calculating the cross-entropy loss between the predicted class probability distribution and the ground-truth labels of sentinel lymph node metastasis or breast cancer, as defined in Equation (3):

$$\mathrm{Loss}_s=-\frac{1}{n}\sum_{i=1}^{n}\bigl[y_i\log p_i+(1-y_i)\log(1-p_i)\bigr]\qquad(3)$$

where n is the number of samples in a batch, and y_i and p_i denote the actual label and the predicted probability, respectively. The total loss function is the weighted sum of the SL loss and the HTL loss:

$$\mathrm{Loss}_t=\mathrm{Loss}_s+\alpha\,g(\mathrm{epoch})\,\mathrm{Loss}_h\qquad(4)$$

where Loss_t, Loss_s, and Loss_h represent the total loss, SL loss, and HTL loss, respectively, α is a constant coefficient, and g(epoch) is a monotonically increasing function of the epoch number, defined by Formula (5):

$$g(\mathrm{epoch})=\frac{2}{1+e^{-10\cdot\mathrm{epoch}/\mathrm{nepoch}}}-1\qquad(5)$$

where e is the Euler number and nepoch represents the total number of epochs.
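Combining Equations (3)-(5), a minimal training-step sketch might read as follows; α and the exact ramp-up constants follow the reconstruction above and should be treated as assumptions:

```python
import math
import torch.nn.functional as F

def ramp_up(epoch, nepoch):
    # g(epoch): increases monotonically from ~0 toward 1 over training.
    return 2.0 / (1.0 + math.exp(-10.0 * epoch / nepoch)) - 1.0

def total_loss(logits, labels, htl_loss, epoch, nepoch, alpha=0.5):
    sl_loss = F.cross_entropy(logits, labels)                    # Equation (3)
    return sl_loss + alpha * ramp_up(epoch, nepoch) * htl_loss   # Equation (4)
```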
Experiment Setting
To demonstrate that HTL can reduce the amount of labeled data required, HTL needs to be compared with massively labeled supervised learning (SL) as well as with SL on insufficient labeled data. HTL models trained with a small amount of labeled data should perform comparably to massively labeled models and significantly outperform models trained with insufficient labeled data. Therefore, we trained three different versions of models for each cancer: one HTL version and two SL versions (SL-1 and SL-2). SL-1 is trained on insufficient labeled data, while SL-2 is trained on sufficient labeled data. The code is implemented in PyTorch (version 1.8) [32] and runs on a Tesla V100 32 GB graphics processing unit (GPU) (NVIDIA, Santa Clara, CA, USA). We compared the performance of Resnet18, Resnet50, and Resnet101 and found that Resnet50, initialized on ImageNet [33], achieved the best performance.
Sentinel Lymph Node Metastasis Models
The models for sentinel lymph node metastasis include L-HTL-1000, L-SL-1-1000, and L-SL-2-7104. L-HTL-1000 and L-SL-1-1000 use the same training and validation set, which consists of approximately 13% of all training and validation patches. L-SL-2-7104 is trained and validated using 7104 and 1776 patches, respectively. The test set for all three models comprises 108,119 patches. Table 3 shows the number of patches used for training, validation, and testing for each model.
Breast Cancer Models
The breast cancer models consist of three models: B-HTL-1008, B-SL-1-1008, and B-SL-2-5232. B-HTL-1008 is trained and validated with 1008 and 114 patches, respectively, which account for approximately 19% of all training and validation patches. B-SL-1-1008 uses exactly the same data as B-HTL-1008 for training and validating. Additionally, B-SL-2-5232 is trained and validated with all patches in the training and validation set. The test set comprises 1496 patches and is used to evaluate the performance of all three models.
The dataset was randomly split, and each model was trained eight times for cross-validation. The hyperparameter selection process for these models was the same, and various hyperparameters were tested, including learning rate (0.05, 0.01, 0.015) and batch size (16, 32, 64), until the model's performance was optimal. The hyperparameter settings for SL-1 and SL-2 were consistent with the HTL version. Additionally, the SL-2 version only increased the number of training samples for the two datasets compared to the SL-1 version, while everything else remained the same. Detailed hyperparameters are listed in Table 4.
Classification of CRC, Breast and Sentinel Lymph Node Metastasis by Source Domain Model
In order to compare the difference between CRC, sentinel lymph node metastasis and breast cancer, we tested the source domain model on them, where the CRC-VAL-HE-7K is provided alongside NCT-CRC-HE-100K for CRC testing purposes [25]. The results are shown in Table 5. The AUC, accuracy, sensitivity and specificity are 0.986, 0.948, 0.951 and 0.944, respectively, which show that the source domain model can accurately identify CRC. In contrast, this model struggled to effectively identify breast cancer and sentinel lymph node metastasis. These results indicate that despite all three cancers being adenocarcinomas, their image features differ. While the source model trained on CRC can achieve high accuracy for CRC, it falls short for breast cancer and sentinel lymph node metastasis. Moreover, the significant difference in AUC for breast cancer and sentinel lymph node metastasis suggests that although lymph node metastasis originates from breast cancer, there may be morphological changes between the metastatic and primary cancer.
The results across multiple cancers are also provided in Sections 3.2 and 3.3, where we compare the performance of the three models (SL-1, HTL, and SL-2) for each cancer. The two SL versions show how models differ when trained on a small dataset versus a large dataset, while the HTL version shows how CRC image features can improve performance on small datasets through domain adaptation. We report the area under the curve (AUC) to demonstrate the comprehensive performance of all models, as well as accuracy, sensitivity, specificity, F1 score, and precision. Eight-fold cross-validation is performed for the three models for statistical comparisons. All presented results are based on patch-level analysis.
Classification of Sentinel Lymph Node Metastasis
The results of eight-fold cross-validation on Dataset-SLN are presented in Figure 4, where the area under the curve (AUC) is shown. The Wilcoxon signed-rank test is performed on the results, and two-sided p values are reported. The HTL version trained on 1000 sentinel lymph node patches (L-HTL-1000) outperformed the SL-1 version trained on the same data (L-SL-1-1000), with an average AUC and standard deviation of 0.949 ± 0.004 vs. 0.931 ± 0.008, respectively (p value = 0.008). Moreover, there was no significant difference between the performance of L-HTL-1000 and L-SL-2-7104 (AUC: 0.949 ± 0.004 vs. 0.948 ± 0.008, p value = 0.742). These results further confirm the excellent performance of HTL on small datasets.
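For illustration, the per-fold statistical comparison can be sketched with SciPy; the AUC arrays below are placeholders, not the measured values:

```python
from scipy.stats import wilcoxon

htl_aucs = [0.95, 0.95, 0.94, 0.95, 0.95, 0.94, 0.95, 0.95]  # hypothetical folds
sl_aucs = [0.93, 0.94, 0.92, 0.93, 0.94, 0.93, 0.93, 0.92]   # hypothetical folds

stat, p_value = wilcoxon(htl_aucs, sl_aucs, alternative="two-sided")
print(f"Wilcoxon statistic = {stat}, two-sided p = {p_value:.3f}")
```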
Classification of Breast Cancer
The AUC results for Dataset-BRE are presented in Figure 6, and the Wilcoxon signed-rank test was conducted on the results of eight-fold cross-validation, with two-sided p values reported. The HTL on 1008 breast patches (B-HTL-1008) demonstrated superiority over supervised learning on the same dataset (B-SL-1-1008), with an average AUC and standard deviation of 0.962 ± 0.017 vs. 0.943 ± 0.018, respectively, and a p value of 0.008. Furthermore, there was no significant difference between the HTL on 1008 patches (B-HTL-1008) and SL on 5232 patches (B-SL-2-5232), with AUCs of 0.962 ± 0.017 and 0.951 ± 0.023, respectively, and a p value of 0.148. These results indicate that HTL performs better than SL when the amount of data is small and can achieve performance comparable to that obtained with large datasets.
Discussion
Histopathology is a critical component of clinical diagnosis, and while HAI holds promise as an effective tool for improving diagnostic accuracy and reducing misdiagnosis resulting from heavy workloads or limited pathologists, the cost of data preparation for model establishment has become a bottleneck in HAI development.
While techniques such as semi-supervised and unsupervised learning can help decrease the cost of data annotation, the collection of massive amounts of unlabeled data remains a necessity. Furthermore, obtaining enough samples of each type of cancer can be challenging or even impossible in clinical practice due to a shortage of disease-specific samples.
The histopathological diagnosis of cancer relies on examining the morphology and tissue structure of cancer cells [21]. We postulate that deep learning can detect similarities in image features across different cancers. Specifically, a feature extractor from a highly accurate source model built on a large cancer dataset may offer general image features for cancers, which could reduce the required amount of data and facilitate model construction for other types of cancers.
Given the hypothesis that HTL could enhance AI model training for similar cancers, we chose to examine CRC, breast cancer, and lymph node metastasis. These cancers all originate from epithelial tissue and fall under adenocarcinoma, demonstrating comparable tissue morphology and structure such as cancer nest shape, individual cancer cell morphology, and overlapping molecular phenotypes.
We first built a model of the source domain based on a large CRC dataset. Although breast cancer and sentinel lymph node metastasis, like CRC, are both adenocarcinomas, the CRC model cannot effectively recognize the former two types of cancer, indicating that the source domain model considers the image features of breast cancer and sentinel lymph node metastasis to be different from those of CRC. Moreover, the CRC model shows a significant difference in AUC for breast cancer and sentinel lymph node metastasis, which suggests that although lymph node metastasis originates from breast cancer, there may be morphological changes between the metastatic and primary cancer.
When using a certain amount of breast cancer and sentinel lymph node metastasis images and combining them with the CRC model in heterogeneous transfer learning, precise classification results for the first two types of cancer can be achieved. However, without using the CRC model, the performance of the models trained on these images would significantly decrease. These experiments may demonstrate that the colon cancer model can provide some common image features of adenocarcinomas, while the images of other adenocarcinomas provide unique image features for each specific type of adenocarcinoma. Heterogeneous transfer learning can integrate both types of features to obtain accurate recognition models for other adenocarcinomas, similar to the results of massive labeled SL.
Our work demonstrates that when an accurately trained HAI model based on a large dataset exists, it is not necessary to collect and label a large amount of data for other similar cancers. HTL can therefore reduce the data and labeling costs for these cancers, especially for cancers whose data are difficult to obtain. In clinical practice, it is often observed that a large amount of data has been collected for one type of cancer but not for similar types of cancer. HTL therefore has broad application prospects.
We have demonstrated, for the first time, that the presented HTL method has the potential to quickly develop HAI models for similar cancers by reducing the amount of required data. However, a main limitation of this study is the limited number of cancer types and validation data. In future studies, we aim to investigate the applicability of the HTL method to other cancers to further validate our findings. If HTL can be widely applied to learning across cancers, it may overcome the data bottleneck and accelerate the deployment of HAI across diseases.
Conclusions
We proposed a novel HTL approach for HAI across various cancers. We conducted experiments on publicly available datasets for sentinel lymph node metastasis and breast cancer and demonstrated that our proposed method can create high-accuracy models using limited datasets by transferring features across different types of cancer. Our findings verify the ability of HTL to reduce the data volume needed in the target domain, indicating its potential for deployment in HAI applications.

Informed Consent Statement: Patient consent was waived because all data come from publicly available datasets.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2023,
"sha1": "6e3640487e76ad77d19572d0299fe64a6c06a2ea",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4418/13/7/1277/pdf?version=1680007023",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3e87a8867d5b00383e5ea9975978f50926a9f444",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Analysis and Voice Recognition in Indonesian Language Using MFCC and SVM Method
Voice recognition technology is one of the biometric technologies. The voice is a unique attribute of a human being that makes an individual easily distinguishable from another. Voice can also provide information such as the gender, emotion, and identity of the speaker. This research records human voices pronouncing the digits 0 through 9, with and without noise. Features of these sound recordings are extracted using the Mel Frequency Cepstral Coefficient (MFCC). The mean, standard deviation, max, min, and combinations of them are used to construct the feature vectors. The feature vectors are then classified using a Support Vector Machine (SVM). There are two classification models: the first is based on the speaker and the other on the digits pronounced. The classification models are validated by performing 10-fold cross-validation. The best average accuracy over the two classification models is 91.83%, achieved using Mean + Standard deviation + Min + Max as features.
INTRODUCTION
The human voice contains a lot of information, such as the gender, emotion, and identity of the speaker (Lindasalwa et al., 2010). The purpose of voice recognition is to identify the speaker or the words pronounced by the individual (Yee & Ahmad, 2008). Many techniques have been proposed to reduce the mismatch between testing and training environments. Most of these methods operate in the spectral domain (Lockwood & Boudy, 1992; Rosenberg, Lee, & Soong, 1994) or the cepstral domain. Gracieth et al. (2014) implemented a support vector machine (SVM) for automated spoken digit recognition. The digits were limited to '0' through '9' in Portuguese. The features were extracted using Mel Frequency Cepstral Coefficients, and the Discrete Cosine Transform (DCT) was used to produce a two-dimensional matrix that became the input of the SVM. The study produced excellent numerical classification except for the digit '9'; digits '1' to '8' had the best accuracy. The mean and variance were chosen as the features. Fokoué and Ma (2013) demonstrated that the combination of MFCC and SVM is a great tool for identifying the sex of the speaker. The RBF and polynomial kernels gave accurate results in cross-validation; MFCC needs more computation time because of the complexity of its calculations. Putra and Resmawan (2011) classified gender based on speech in Bahasa Indonesia, also using MFCC as the extraction method and DTW as the classification method. They collected speeches of 27 men and eight women; each person spoke five words and repeated each of them seven times. For the evaluation, Putra and Resmawan used 7-fold cross-validation. Based on the results, the best accuracy was 93.254% and the worst was 59.664%. This paper discusses the recognition of the spoken digits '0' through '9' in Indonesian. The human voice is converted into digital form, producing data that represent the level of the signal at each point in time. The digital sound is then processed using MFCC to extract voice features. After that, a Support Vector Machine (SVM) is used as the classification method to determine the features and feature combinations that generate the minimal error. The validation process uses 10-fold cross-validation. The remainder of this paper is organized as follows: background research, the principles of voice recognition, and the methodology, followed by the results and conclusions.
After voice input is taken from the speaker using a microphone, the sound is analyzed. System design involves the manipulation of the audio signal: the operations applied to the input signal are pre-emphasis, framing, windowing, mel-cepstrum analysis, and recognition of the spoken words. The voice recognition algorithm includes two distinct phases, as shown in Figure 1: the first is the training phase and the second is the testing phase.
Figure 1 Voice Recognition Algorithms. In the training phase, each speaker provides samples of their voice so that the reference template model can be built; in the testing phase, the input test voice is matched against the stored reference template model and the recognition decision is made.
The Mel Frequency Cepstral Coefficients (MFCC) algorithm is a sampling technique and one of the most popular feature extraction techniques used in voice recognition based on the frequency domain. MFCC uses the mel scale, which is based on the scale of the human ear. MFCCs, being frequency-domain features, are much more accurate than time-domain features. The simplicity and ease of the procedures used to implement MFCC make it the most favored technique for speech recognition.

MFCC accounts for the sensitivity of human frequency perception, which makes it excellent for voice recognition. Figure 2 shows the steps used in MFCC. In the pre-emphasis block, the voice signal is filtered with a high-pass filter; pre-emphasis boosts the voice signal and compensates for the part of the signal suppressed during voice production. The pre-emphasized signal is then segmented into frames, with an optional overlap of 1/3 to 1/2 of the frame size. This step is important for good results because the variation of amplitude is greater in larger signal segments than in smaller ones. Each frame is then multiplied by a Hamming window to keep the continuity of the first and last points in the frame. The signal is then converted into the frequency domain using the Fast Fourier Transform, and the output of the FFT block is multiplied by triangular band-pass filters to obtain the log energies of each filter.
MFCC is defined on the mel scale, a logarithmic transformation F_mel of the normal frequency scale f:

$$F_{mel}=2595\,\log_{10}\Bigl(1+\frac{f}{700}\Bigr)$$

Mel-cepstral features [2] can be illustrated by MFCCs, which are calculated from the Fast Fourier Transform (FFT) power coefficients; the power coefficients are filtered by a triangular band-pass filter bank. When c is in the range of 250-350, the number of triangular filters that fall in the frequency range of 200-1200 Hz (the dominant frequency range of audio information) is higher than for other values of c; it is therefore efficient to set the value of c in this range to calculate the MFCCs. Given the log energies S_k (k = 1, 2, ..., K) output by the filter bank, the MFCCs are calculated as

$$c_n=\sum_{k=1}^{K}\log(S_k)\,\cos\Bigl[n\Bigl(k-\frac{1}{2}\Bigr)\frac{\pi}{K}\Bigr],\qquad n=1,2,\ldots$$
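As an illustrative sketch only, this extraction pipeline could be implemented with librosa (a tooling assumption; the implementation used here is not named), including the per-coefficient statistics that serve as feature vectors later:

```python
import numpy as np
import librosa

# Hypothetical file name; the recordings are 44.1 kHz WAV files.
y, sr = librosa.load("digit_0_take1_speakerA.wav", sr=44100)
y = np.append(y[0], y[1:] - 0.97 * y[:-1])        # pre-emphasis (high-pass) filter
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                            window="hamming")      # shape: (13, n_frames)

# Mean + standard deviation + min + max per coefficient -> one feature vector.
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           mfcc.min(axis=1), mfcc.max(axis=1)])
```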
Support Vector Machine (SVM) is a statistical machine learning technique that has been applied successfully in pattern recognition. The SVM classification method is based on the Structural Risk Minimization principle from computational learning theory.

Consider data that can be separated linearly. Each data point is denoted x_i ∈ R^d, and each class label is denoted y_i ∈ {+1, −1} for i = 1, 2, ..., n, where n is the number of data points. SVM looks for the best hyperplane that separates the data according to class, measuring the margin of candidate hyperplanes and selecting the one with the biggest margin. The margin is the distance between the hyperplane and the nearest data point from each class; the subset of the data set at this nearest distance is called the support vectors. Classes −1 and +1 can be completely separated by a hyperplane in dimension d, defined as

$$\mathbf{w}\cdot\mathbf{x}+b=0$$

Class −1 (negative samples) can be defined as the data that satisfy the inequality w · x_i + b ≤ −1, while class +1 (positive samples) satisfy w · x_i + b ≥ +1; here w is the normal vector of the hyperplane and b determines its position relative to the origin. The margin equals 2/‖w‖, so the maximum margin is obtained when ‖w‖ is minimal. Finding the biggest margin can therefore be formulated as the constrained optimization problem

$$\min_{\mathbf{w},b}\ \tfrac{1}{2}\|\mathbf{w}\|^{2}\quad\text{subject to}\quad y_i(\mathbf{w}\cdot\mathbf{x}_i+b)-1\geq 0.$$

One method for solving constrained optimization problems is the method of Lagrange multipliers. The problem can thus be formulated as

$$L(\mathbf{w},b,\alpha)=\tfrac{1}{2}\|\mathbf{w}\|^{2}-\sum_{i=1}^{n}\alpha_i\bigl[y_i(\mathbf{w}\cdot\mathbf{x}_i+b)-1\bigr]\quad\text{subject to}\quad\alpha_i\geq 0.$$

The primal problem is then converted into the dual problem

$$\max_{\alpha}\ \sum_{i=1}^{n}\alpha_i-\tfrac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j y_i y_j\,\mathbf{x}_i\cdot\mathbf{x}_j\quad\text{subject to}\quad\alpha_i\geq 0,\ \sum_{i=1}^{n}\alpha_i y_i=0.$$

Solving the dual problem yields the multipliers α_i with non-negative values. The value of w is then obtained as

$$\mathbf{w}=\sum_{i=1}^{n}\alpha_i y_i\mathbf{x}_i.$$

Data points for which α_i > 0 are the support vectors. Knowing a support vector x_s with label y_s, the value of b can be obtained as

$$b=y_s-\mathbf{w}\cdot\mathbf{x}_s.$$

With w and b known, the hyperplane equation is fully determined, and a new data point x is classified into class {+1, −1} as

$$f(\mathbf{x})=\operatorname{sign}(\mathbf{w}\cdot\mathbf{x}+b).$$

The SVM formulation for linearly separable data cannot be used for data that are not linearly separable. In that case, the best hyperplane can be found by transforming the data from the input space x into a feature space φ(x), in which the data can be separated linearly. The dimension of the feature space is higher than that of the input space, which can make computation in the feature space very expensive. This problem is solved by the kernel trick: using a kernel function, the transformation φ does not need to be known explicitly. Kernel functions that are often used are the linear kernel

$$K(\mathbf{x}_i,\mathbf{x}_j)=\mathbf{x}_i\cdot\mathbf{x}_j,$$

the polynomial kernel of degree D

$$K(\mathbf{x}_i,\mathbf{x}_j)=(\mathbf{x}_i\cdot\mathbf{x}_j+1)^{D},$$

and the Radial Basis Function (RBF) kernel

$$K(\mathbf{x}_i,\mathbf{x}_j)=\exp\bigl(-\gamma\,\|\mathbf{x}_i-\mathbf{x}_j\|^{2}\bigr),$$

where the variable γ is a hyperparameter.
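For illustration, training such a classifier with the RBF kernel can be sketched with scikit-learn; the data and the γ value below are placeholders, not the study's settings:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(270, 52))  # placeholder vectors (13 MFCCs x 4 statistics)
y_train = rng.integers(0, 10, 270)    # placeholder digit labels 0-9

clf = SVC(kernel="rbf", gamma=0.01, C=1.0)  # K(x, z) = exp(-gamma * ||x - z||^2)
clf.fit(X_train, y_train)
```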
Cross-validation is a method to assess the accuracy and validity of statistical models. The available dataset is divided into two parts: the first part is used for model building (Payam, Lei, & Huan, 2009), and the model built on the first part is used to predict the values in the second part. A valid model should show good prediction accuracy.

The procedure of cross-validation is as follows. First, the data are divided into three sets: training, testing, and validation.
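A sketch of 10-fold validation with scikit-learn, reusing the placeholder data and classifier settings from the previous sketch, might read:

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# X_train, y_train: placeholder arrays defined in the previous sketch.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(SVC(kernel="rbf", gamma=0.01), X_train, y_train, cv=cv)
print(f"mean accuracy = {scores.mean():.4f} +/- {scores.std():.4f}")
```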
METHODS
This research was conducted in several stages; for a better understanding, see Figure 6. Data were collected by recording the voices of six participants. Each participant pronounced the numbers '0' through '9'. The recording process was repeated five times per digit: three recordings without noise and two with noise, giving 300 voice recordings in total. The voices were recorded using various devices, such as smartphones, PCs, and laptops of different types and specifications. The recordings were saved at a sampling frequency of 44.1 kHz with a 16-bit depth in the Wave Audio (WAV) file format. Each recording was named by the number pronounced, the recording order, and the participant name. The best accuracy for classification based on the speaker was 87.00%, achieved using Mean + Standard deviation + Min as features; the worst was 52.67%, obtained using Max alone. The best accuracy for classification based on the number pronounced was 97.00%, achieved using Mean + Standard deviation + Min + Max; the worst was 60.67%, obtained using Standard deviation alone. The best average accuracy over both classifications was 91.83%, achieved using Mean + Standard deviation + Min + Max; the worst average accuracy was 60.17%, obtained using Max alone.
CONCLUSIONS
The experimental results are interesting: the feature combination with the highest accuracy in classification based on the speaker is Mean + Standard Deviation + Min (87.00%), while the feature combination with the highest accuracy in classification based on the numbers spoken is Mean + Standard Deviation + Min + Max (97.00%). Overall, the best result is obtained using the combination Mean + Standard Deviation + Min + Max.
Figure 2 Block Diagram to Get the Coefficients MFCC

Figure 3 First Step of Cross Validation

Figure 4 Second Step of Cross Validation

Figure 5 Third Step of Cross Validation

Figure 6 Diagram of Research Methods

Table 2 Experiment Result
"year": 2016,
"sha1": "f0a4393f728088c2f47424a217d4320a5ad1e0ca",
"oa_license": "CCBYSA",
"oa_url": "https://journal.binus.ac.id/index.php/comtech/article/download/2252/1673",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f0a4393f728088c2f47424a217d4320a5ad1e0ca",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Secondary Metabolites Produced during the Germination of Streptomyces coelicolor
Spore awakening is a series of actions that starts with purely physical processes and continues via the launching of gene expression and metabolic activities, eventually achieving a vegetative phase of growth. In spore-forming microorganisms, the germination process is controlled by intra- and inter-species communication. However, in the Streptomyces clade, which is capable of developing a plethora of valuable compounds, the chemical signals produced during germination have not been systematically studied before. Our previously published data revealed that several secondary metabolite biosynthetic genes are expressed during germination. Therefore, we focus here on the secondary metabolite production during this developmental stage. Using high-performance liquid chromatography-mass spectrometry, we found that the sesquiterpenoid antibiotic albaflavenone, the polyketide germicidin A, and chalcone are produced during germination of the model streptomycete, S. coelicolor. Interestingly, the last two compounds revealed an inhibitory effect on the germination process. The secondary metabolites originating from the early stage of microbial growth may coordinate the development of the producer (quorum sensing) and/or play a role in competitive microflora repression (quorum quenching) in their natural environments.
INTRODUCTION
A large variety of compounds is produced by various microorganisms by means of specialized biosynthetic pathways. Although the special (or secondary) metabolites (Hopwood, 2007; Baltz, 2008; van Keulen and Dyson, 2014) are not essential for growth and reproduction, they often provide the producing organism with a bioactive role (Keller et al., 2005). Reaching further than a cell itself physically can, the small diffusible molecules may give an advantage to their producer by enabling it to adapt effectively to extracellular conditions to some degree. They may provide defense (or attack), competition, signaling, or interspecies interactions, depending on the environmental cues, thus increasing the likelihood of survival in an inhospitable environment (Brachmann et al., 2013; Martinez et al., 2017). The bioactivity of the small molecules is mostly achieved by affecting transcription in receiving cells (Camilli and Bassler, 2006). Secondary metabolism is of special interest in Streptomyces, a clade of multicellular bacteria that occupies a high position in the developmental hierarchy of bacteria due to their advanced morphology and physiology. Streptomycetes have evolved a plethora of biosynthetic pathways to produce various secondary metabolites, especially signal molecules (see below), or antibiotics (van Keulen and Dyson, 2014). These compounds provide the organism with a competitive advantage, protection from unfavorable living conditions and/or facilitate interspecies interactions (Maxwell et al., 1989).
The genes for the biosynthesis of streptomycete secondary metabolites are mostly clustered and their expression is highly regulated (Bentley et al., 2002;Tanaka et al., 2013). The model S. coelicolor possesses the best annotated genome that encodes biosynthetic pathways for more than 20 secondary metabolites (Bentley et al., 2002). The chemical structure has so far been elucidated in less than 30 percent of the compounds, belonging to the following groups of natural substances: polyketides, pyrones, peptides, siderophores, γ-butyrolactones, butenolides, furans, terpenoids, fatty acids, oligopyrroles, and deoxysugars (van Keulen and Dyson, 2014). The remaining 70 percent are called "cryptic compounds" as they are not produced at standard laboratory conditions (Bentley et al., 2002;Ikeda et al., 2003;Ohnishi et al., 2008;Tanaka et al., 2013;van Keulen and Dyson, 2014). To activate these cryptic pathways, streptomycetes are cultivated under non-standard physical and nutritional conditions or co-cultured with other microorganisms (Wakefield et al., 2017). Genetic manipulations within the genes (Luo et al., 2013) or the transfer of the whole biosynthetic gene cluster into a heterologous producer (Kalan et al., 2013;Tanaka et al., 2013) are also commonly used strategies. The successful activation of the biosynthetic pathways often leads to biosynthesis of previously unknown compounds (Ikeda et al., 2003;Ohnishi et al., 2008;Gomez-Escribano et al., 2012;Tanaka et al., 2013). For example, a polyketide alkaloid, coelimycin P1 (so-called yellow pigment), is produced from the cpk cryptic gene cluster in S. coelicolor (Gomez-Escribano et al., 2012).
The complicated development of streptomycetes requires highly sophisticated control mechanisms mediated by multiple molecules linked to signaling cascades (Kelemen and Buttner, 1998; Claessen et al., 2006; Gao et al., 2012). A widely studied signaling system is quorum sensing, in which stimuli are spread within a population and induce appropriate responses (Phelan et al., 2011). Based on the signal assessment, the organism can adapt to its environment and coordinate further development in response to local population densities (Waters and Bassler, 2005). One of the assumptions made in this work is that cellular signaling is also employed in spore germination (see below). However, the nature of the signaling in this developmental phase remains unclear, as do the chemical characteristics and the possible regulatory effect of the produced substances.
Streptomyces undergo a cellular differentiation that resembles the fungal life cycle (Seipke et al., 2012). Their growth starts with germinating spores that develop into a vegetative mycelium of branching hyphae. Subsequent development of aerial hyphae is considered to be a cell response to nutrient depletion; most of the secondary metabolites are formed at this developmental stage (Sello and Buttner, 2008;Seipke et al., 2012). The aerial hyphae are dissected into chains of uninucleoid spores. Spores are subjected to maturation which ensures their survival in unfavorable conditions and allows them to spread into new niches.
The dormant state of spores is characterized by limited metabolic activity or its complete stagnation (McCormick and Flardh, 2012). Subsequent germination is the spore's transition into a metabolically active vegetative phase. Reactivation of the dormant exospore takes place in an aqueous environment. In addition to energy sources (e.g., trehalose) and various nutrients (Ranade and Vining, 1993), the dormant spores of streptomycetes also contain transcriptome which is a remnant of sporulation and spore maturation (Mikulik et al., 2002). The residual pool of mRNA appears to be necessary for the initial germination phase, serving as a template for the early synthesis of proteins, such as chaperones and hydrolases. Whereas chaperones are indispensable in the re-activation of present proteins upon their release from the trehalose milieu (Bobek et al., 2017), hydrolases reconstitute the thick hydrophobic spore cell wall (Bobek et al., 2004;Haiser et al., 2009).
Further development requires the re-activation of the transcriptional apparatus (Paleckova et al., 2006; Mikulik et al., 2008, 2011), controlled by the activity of a set of sigma factors, whose expression takes place from the very beginning of the process (Bobek et al., 2014; Strakova et al., 2014). Genome-wide expression data revealed that the activity of most metabolic pathways is stabilized after the first DNA replication that occurs between 120 and 150 min of germination of S. coelicolor (Bobek et al., 2014). After this period, morphologically observable changes, like the first germ tube emerging from the spore, occur (Kelemen and Buttner, 1998; Claessen et al., 2006; Ohnishi et al., 2008).
In the case of non-activated spores, it was found that about 10-20% of spores do not germinate even under optimal incubation conditions (Yoshida and Kobayashi, 1994). Solitary spores of S. viridochromogenes have been shown to germinate more slowly than spores in a dense population (Xu and Vetsigian, 2017), which indicates the existence of a germination activator released into the medium. On the other hand, the extract from the S. viridochromogenes supernatant has been shown to inhibit the germination of unactivated spores when added prior to incubation (Hirsch and Ensign, 1976a). The inhibitor present was later isolated (along with other congeners) and described as germicidin A (Petersen et al., 1993; Aoki et al., 2011; Ma et al., 2017). The launch of germination within a spore population is stochastic, as was shown not only in streptomycetes (Xu and Vetsigian, 2017) but also in other spore-forming bacteria (van Vliet, 2015). The probability of germination within a population differs between streptomycete strains; S. viridochromogenes and S. granaticolor exhibit fast and robust germination, whereas S. coelicolor and S. venezuelae show more complex behavior, with a fraction of germlings that stop growing soon after germination (Mikulik et al., 1977; Bobek et al., 2004; Xu and Vetsigian, 2017). The activity of early released compounds, germination activators and inhibitors, may affect the stochasticity of germination in order to adapt the germination strategy to environmental conditions (Petersen et al., 1993; Aoki et al., 2007, 2011; Ma et al., 2017). Since it is considered to be non-productive (Seipke et al., 2012), the initial developmental phase has hitherto not been given sufficient attention. It is nevertheless apparent from the genome-wide expression analysis of S. coelicolor's germinating spores (germlings) performed by Strakova (Strakova et al., 2013) that 163 genes involved in the biosynthesis of secondary metabolites are transcribed during germination (including cryptic ones). It is for this reason that we chose to focus on the biosynthetic activities of the germinating spores of S. coelicolor in this article. Secondary metabolites produced in this phase would possibly function as germinative signals in the frame of intercellular communication (Rutherford and Bassler, 2012; Brachmann et al., 2013) or may suppress competing microflora.
Preparation of Streptomyces Spores
Streptomyces coelicolor M145 was cultivated on cellophane discs on solid agar plates (0.4% yeast extract, 1% malt extract, 0.4% glucose, 2.5% bacterial agar, pH 7.2) at 28 °C for 14 days. Harvested dormant spores were filtered through cotton wool and used to screen for associated secondary metabolites. Spores mixed with 20% glycerol were stored frozen at −20 °C.
Germination and Microbial Growth
Spores were washed twice in 10 mL sterile distilled water and resuspended in 50 mL NMMP (Kieser et al., 2000), R3 (Shima et al., 1996), or AM (Bobek et al., 2004) liquid medium to a final spore concentration of 10⁸ ml⁻¹. Glucose, glycerol, or mannitol was used as a carbon source. To boost the synchronicity of the population, spores were incubated for 10 min at 50 °C, followed by 6-h germination at 37 °C (Hirsch and Ensign, 1976a; Kieser et al., 2000) before screening for produced secondary metabolites.

To prepare samples from the stationary phase of growth, S. coelicolor was further cultivated for 48 h at 29 °C in the same medium. The grown mycelium or supernatant was then used in the screening for produced secondary metabolites.
Solid Phase Extraction of Secondary Metabolites
Secondary metabolites were taken from the culture supernatants extracted using ethyl acetate (Rajan and Kannabiran, 2014), QuEChERS (Schenck and Hobbs, 2004), or solid phase extraction (Kamenik et al., 2010), which was found to be the most suitable and was carried out as follows. An Oasis HLB 3cc 60 mg cartridge (hydrophilic-lipophilic balanced sorbent, Waters, USA) was conditioned with 3 mL methanol (LC-MS grade, Biosolve, Netherlands), equilibrated with 3 mL water (prepared using Milli-Q water purifier, Millipore, USA) and then 3 mL culture supernatant (pH adjusted to 3 with formic acid, 98-100%, Merck, Germany) was loaded. Subsequently, the cartridge was washed with 3 mL water and absorbed substances were eluted with 1.5 mL methanol. The eluent was evaporated to dryness (Concentrator Plus, 2013 model, Eppendorf), reconstituted in 200 µL 50% methanol and centrifuged at 12,000 × g for 5 min.
LC-MS Analyses
LC-MS analyses were performed on the Acquity UPLC system with a 2996 PDA detection system (194–600 nm) connected to an LCT premier XE time-of-flight mass spectrometer (Waters, USA). Five µL of sample was loaded onto the Acquity UPLC BEH C18 LC column (50 mm × 2.1 mm I.D., particle size 1.7 µm, Waters) kept at 40 °C and eluted with a two-component mobile phase, A and B, consisting of 0.1% formic acid and acetonitrile, respectively, at a flow rate of 0.4 mL min⁻¹. The analyses were performed under a linear gradient program (min/%B) 0/5; 1.5/5; 15/70; 18/99 followed by a 1.0-min column clean-up (99% B) and 1.5-min equilibration (5% B). The mass spectrometer operated in the positive "W" mode with the capillary voltage set at +2,800 V; cone voltage, +40 V; desolvation gas temperature, 350 °C; ion source block temperature, 120 °C; cone gas flow, 50 L h⁻¹; desolvation gas flow, 800 L h⁻¹; scan time, 0.15 s; inter-scan delay, 0.01 s. The mass accuracy was kept below 6 ppm using lock spray technology with leucine enkephalin as the reference compound (2 ng µL⁻¹, 5 µL min⁻¹). MS chromatograms were extracted for [M+H]⁺ ions with a tolerance window of 0.03 Da and smoothed with the mean smoothing method (window size, 4 scans; number of smooths, 2). The data were processed by MassLynx V4.1 (Waters).
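For illustration only, the ion chromatogram extraction and mean smoothing described above can be sketched in Python with NumPy; the array layout and names are assumptions:

```python
import numpy as np

def extract_xic(mz_axis, intensity, target_mz, tol=0.03):
    """Sum intensity within +/- tol Da of the target m/z for each scan.

    mz_axis: (n_points,) common m/z grid; intensity: (n_scans, n_points).
    """
    mask = np.abs(mz_axis - target_mz) <= tol
    return intensity[:, mask].sum(axis=1)  # (n_scans,) extracted chromatogram

def mean_smooth(trace, window=4, passes=2):
    """Mean smoothing: window of 4 scans, applied twice, as described above."""
    kernel = np.ones(window) / window
    for _ in range(passes):
        trace = np.convolve(trace, kernel, mode="same")
    return trace
```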
Bioactivity Assays
Although albaflavenone is not commercially available, we verified its presence using LC-MS in the hexane extract from the supernatant after 48 h of S. coelicolor's growth in R3 medium with glycerol as the carbon source. Cells were centrifuged for 5 min (7,000 × g). The supernatant was extracted three times with 300 mL of n-hexane (Lach-Ner, Prague, Czech Republic). The cells were washed in distilled water and mycelial products were extracted with 200 mL of n-hexane. Cells were removed by filtration. After separation of the phases, the n-hexane layers were pooled (hereafter called the albaflavenone-hexane extract, Figure 1B).
Germicidin A standard was purchased from Cayman Pharma (the Czech Republic); chalcone standard was obtained from Sigma-Aldrich (Merck, Germany). Dimethyl sulfoxide (DMSO) was purchased from Lach-Ner (Czech Republic). On a six-sector culture microtiter plate, ONA medium (1.4% Oxoid nutrient agar, pH 7.2; Kieser et al., 2000) with a linear concentration gradient of germicidin A 0–8 µg mL⁻¹ (standard dissolved in sterile distilled water) or chalcone 0–8 µg mL⁻¹ (standard dissolved in DMSO), or the albaflavenone-hexane extract (concentration unknown), was poured into three sectors as follows. Pure ONA medium was poured into an inclined plate and allowed to solidify to form a wedge; the plate was then placed horizontally, and ONA medium containing the selected compound at a concentration of 8 µg mL⁻¹ was poured to form a complementary wedge. The remaining three sectors were filled with either pure ONA medium as the control cultivation for germicidin A, ONA medium with DMSO as the control for chalcone, or ONA medium with hexane as the control for albaflavenone. 10⁵ spores were spread on each sector and incubated for 48 h at 29 °C. The number of colony-forming units (CFU) was assessed and compared.
A Dehydrogenase Activity Test
Two milliliters of the albaflavenone-hexane extract was mixed with 5 mL R3 medium with glycerol and used for 6-h germination of 5 × 10⁶ spores. A negative control culture was performed in 5 mL R3 medium with glycerol with 2 mL pure hexane. The dehydrogenase activity test (as described in Burdock et al., 2011) was used to measure the metabolic activity of germinating spores by means of triphenyl tetrazolium chloride (TTC). After the germination course, cells were incubated in the presence of TTC and an electron-donating substrate for 1 h. The arising triphenyl formazan (TF) was extracted using ethanol and its concentration was determined colorimetrically by measuring the optical density at a wavelength of 484 nm. The absorbances were compared between the tested and negative control samples.
Scanning Electron Microscopy
Streptomyces spores were fixed with 3% glutaraldehyde overnight at 4 °C. The fixed spores were extensively washed and then allowed to sediment at 4 °C overnight onto circular coverslips treated with poly-L-lysine. The coverslips with attached spores were dehydrated through an alcohol series followed by absolute acetone and critical point dried from liquid CO₂ in a K850 Critical Point Dryer (Quorum Technologies Ltd, Ringmer, UK). The dried samples were sputter-coated with 3 nm of platinum in a Q150T Turbo-Pumped Sputter Coater (Quorum Technologies Ltd, Ringmer, UK). The final samples were examined in a FEI Nova NanoSEM scanning electron microscope (FEI, Brno, Czech Republic) at 5 kV using CBS and TLD detectors. The beam deceleration mode of the scanning electron microscope was used in some cases for minimization of charging artifacts.
In Silico Analysis of the Expressed Secondary Metabolite Biosynthetic Genes during Germination of S. coelicolor
Genes that are expressed in the consecutive time points of germination have been reported by Strakova (Strakova et al., 2013). From their dataset we selected genes whose products are involved in the biosynthesis of secondary metabolites by S. coelicolor, according to the StrepDB database (http://strepdb.streptomyces.org.uk), where the annotated S. coelicolor genes are categorized into metabolic groups. We updated the list of secondary metabolites with regard to newly published findings (Zhao et al., 2009;van Keulen and Dyson, 2014). The resulting list of genes is summarized in Table 1. Predicted secondary metabolites, whose respective genes are expressed during germination, include polyketides, pyrones, peptides, siderophores, terpenoids, oligopyrroles, and fatty acids.
Secondary Metabolites Produced by S. coelicolor during Germination and/or in the Stationary Phase
The fact that genes whose products are involved in secondary metabolite biosynthesis were transcribed during germination encouraged us to investigate whether germinating spores produce the respective compounds up to the 6th hour of their development. For this purpose, we performed an LC-MS analysis of the culture supernatants. In order to ensure that any detected compound is synthesized de novo during germination, we included dormant (non-activated) spores as negative control samples in the analysis. As a positive control testing the effectiveness of our detection method, samples from the sporulation phase were included too. The compounds detected only in the sporulation phase (after 48 h of cultivation) but absent from germination are listed in Table 2.
Our LC-MS measurements revealed, however, that S. coelicolor's germlings produce three different compounds with masses corresponding to the sesquiterpenoid antibiotic albaflavenone, the polyketide germicidin A, and chalcone (Table 3; Figures 1-3, respectively). Furthermore, the identity of germicidin A was confirmed by comparing the actual retention time with the original standard obtained from Cayman Pharma (Figure 2).
Actinorhodin Production
Actinorhodin compounds (actinorhodinic acid and γ-actinorhodin) were detected in both dormant and germinating spores, as well as in samples from the stationary phase of growth (Table 2). Streptomycetes produced these compounds after 48 h of cultivation in R3, AM, and NMMP medium, regardless of carbon source (glucose, glycerol, or mannitol). Although the suspensions of dormant spores were washed in distilled water, we found that dormant spores also contained these blue pigments. We also found actinorhodinic acid and γ-actinorhodin in the cell-free supernatant after spore germination in AM medium. All supernatants containing the blue pigments also exhibited a pH-dependent color change.
pH-Dependent Biosynthetic Activity of Germlings
Biosynthesis of albaflavenone involves the activity of cytochrome P450 (CYP170A1) which, depending on pH, can act either as a monooxygenase (pH 7.0–8.2) or as a farnesene synthase (pH 5.5–6.5) (Zhao et al., 2009). Whereas β-farnesene was not synthesized in cultures in R3, NMMP, and AM medium at pH 7.0–7.2, we showed that the germlings produce a substance corresponding to albaflavenone. To see whether β-farnesene is produced under more acidic conditions, we performed a germination experiment in R3 medium with glycerol at pH 6.0. Secondary metabolites were extracted by solid phase extraction, followed by LC-MS analysis. For comparison, we used an extract after 6 h of cultivation at pH 7.2 without changing the other conditions. Although direct production of β-farnesene was not found, it is clear from the total ion and base peak chromatograms that S. coelicolor produces a different spectrum of substances under different pH conditions (Figure 4). Further analysis was beyond the scope of this manuscript.
Biological Effects of Albaflavenone, Germicidin A, and Chalcone
Albaflavenone

Since albaflavenone was not commercially available, its possible effect on germination was tested using the albaflavenone-hexane extract (see Methods). A negative control, which contained pure hexane, was included in the experiment. No quantitative or phenotypic changes were observed under the tested conditions (Figure 5C). To verify this result, a dehydrogenase activity test was performed (Figure 5D). The metabolic activity of living cells present in the medium during germination was determined as a function of their dehydrogenase activity, proportional to the concentration of arising TF measured by the optical density at 484 nm (see Methods for details). The optical density increased over time during germination at the same rate in both tested and control samples; the albaflavenone-hexane extract created no observable effect.
The biological effect of the other two secondary metabolites on germination was examined using standards of germicidin A (Cayman Pharma) and chalcone (Sigma-Aldrich). Experiments were performed on six-section cultivation titration plates (Gama Group). Three fields in one column contained ONA medium with a gradient of the tested compound (0-8 µg mL−1), and pure medium without the tested substance was poured into the fields of the other column as negative controls. Results were evaluated in terms of the number of colony-forming units (CFU) and phenotypic changes.
Germicidin A
Germicidin A clearly inhibited the germination of S. coelicolor from a concentration of 4 µg mL−1 (Figure 5A). The average number of colony-forming units (germinating spores) was 20 on ONA medium with a linear gradient of germicidin A at 4 µg mL−1 and lower, whereas 60 colonies grew without germicidin A in the negative control. The tested colonies were of the same shape and size as their controls, and actinorhodin production was quantitatively the same.
Chalcone
Our initial experiments showed that chalcone suppressed germination of S. coelicolor at concentrations down to 8 µg mL−1 on solid medium (Figure 5B). The average number of CFU was 20 on medium with a chalcone gradient (0-8 µg mL−1), compared with 70 colonies on chalcone-free medium. The size of the colonies was inversely proportional to the chalcone concentration; the colonies were significantly smaller than the negative control, suggesting a slower germination rate and/or vegetative growth. Moreover, in the presence of chalcone, actinorhodin was not produced throughout the whole cell cycle. The effect of chalcone was additionally examined in R3 liquid medium. Whereas a concentration of 8 µg mL−1 proved to be subinhibitory, a chalcone concentration of 80 µg mL−1 completely suppressed development, as can be seen in the electron microscopy images taken in the 4th and 6th hour of the germination course (Figure 6). As the images show, the developing germ tubes disrupt in the presence of chalcone, leaving empty cell envelopes.
DISCUSSION
Although many secondary metabolites of streptomycetes have been discovered, they have most often been isolated from the stationary phase of growth, i.e., in the context of the formation of aerial mycelium (Janecek et al., 1997; Kieser et al., 2000; van Keulen and Dyson, 2014). However, our in silico search within the gene expression data (Strakova et al., 2013; Bobek et al., 2014) revealed a number of genes (including cryptic ones) responsible for the biosynthesis of secondary metabolites to be expressed during the course of S. coelicolor germination (see Table 1), especially genes responsible for the synthesis of desferrioxamines (sco2783-2784), the gray spore pigment (sco5314, sco5318, sco5320), or the yellow coelimycin P1 (sco6273-6276, sco6277-6287, the so-called cryptic cpk gene cluster) (Lakey et al., 1983; Gomez-Escribano et al., 2012). We therefore focused our work on whether germinating spores are capable of activating the respective biosynthetic pathways and producing any compound within the first 6 h of germination.
Metabolites that could be bound to the spore surface or present in the germination medium were identified by means of LC-MS analysis. Simultaneously, the secondary metabolites produced during the sporulation phase and those associated with dormant spores were also included in the analysis in order to determine whether the compounds found in the samples from germlings were synthesized de novo. The cultivations were carried out in three different liquid media (R3, NMMP, and AM, see Methods). The nutritionally rich medium R3 was chosen for the capacity of S. coelicolor to produce a number of structurally distinct secondary metabolites in it, such as actinorhodin (Shima et al., 1996) or coelimycin P1 (Gomez-Escribano et al., 2012).
In contrast to the R3 medium, the minimal liquid medium NMMP, a poorer medium in which streptomycetes produce fewer secondary metabolites (Hodgson, 1982), was also used, because NMMP enables testing the effects of various ions and nutrients on the production of secondary metabolites. The AM medium containing 20 amino acids was also implemented in our experiments, as it had been specifically designed for germination experiments and was used throughout the whole-genome expression analyses (Bobek et al., 2004; Strakova et al., 2013). It is known that the presence of different carbon and energy sources in the medium qualitatively affects secondary metabolism (Janecek et al., 1997). That is why the presence of various carbon sources (glucose, glycerol, and mannitol) in all three media types was tested. In accordance with previously published data (Kieser et al., 2000), both glycerol and mannitol proved to be more suitable carbon sources for secondary metabolite production in the R3 medium in our experiments. Mannitol and glucose exhibited a higher capacity for secondary metabolism when NMMP medium was used, and glycerol showed a higher capacity in AM medium. Our results also showed that both nutritionally richer media, R3 and AM, are better suited to germination and the production of secondary metabolites than the minimal NMMP medium, probably due to the presence of Ca2+ ions (Eaton and Ensign, 1980; Lakey et al., 1983), L-amino acids (Hirsch and Ensign, 1976b), or various carbon sources (Romero-Rodriguez et al., 2016) in the richer media.
The elution methods with ethyl acetate (Rajan and Kannabiran, 2014) or QuEChERS (Schenck and Hobbs, 2004) were not the most appropriate for the isolation of secondary metabolites. Therefore, solid-phase extraction (Kamenik et al., 2010) was applied, with optimization for streptomycete secondary metabolites. These extracts from supernatants of S. coelicolor cultures were used for the LC-MS analyses.
Secondary Metabolites of S. coelicolor Not Produced during Germination
Most of the secondary metabolites whose biosynthetic genes had previously been shown to be expressed during germination (Strakova et al., 2013) were not detected in samples from germination. These include 21 genes (including sco3230-3232, which encode CDA peptide synthetases I-III) from the cda gene cluster [encoding synthesis of the calcium-dependent antibiotic (CDA)], the cryptic gene cluster cpk (genes sco6273-6288, encoding synthesis of the polyketide antibiotic coelimycin P1), genes sco5314, sco5318, and sco5320 (encoding synthesis of the gray spore pigment), genes sco5877-5878, sco5881, sco5891-5894, and sco5898 from the so-called red gene cluster (encoding synthesis of undecylprodigiosin), and genes sco2783-2784 from the desABCD cluster (sco2782-2785, controlling the synthesis of desferrioxamines). Despite the respective gene expression, biosynthesis of the more complex secondary metabolites may not occur, since gene expression is only a prerequisite for biosynthesis and not evidence of it actually taking place.
On the other hand, we found two actinorhodin congeners, actinorhodinic acid and γ-actinorhodin (Bystrykh et al., 1996; Okamoto et al., 2009), in all tested developmental phases, i.e., the stationary phase and dormant and germinating spores. In germinating spores, expression of several of the genes involved (sco5072-5086, sco5088, and sco5090) had been found (Strakova et al., 2013). Despite the detected expression, we cannot exclude the possibility that the presence of actinorhodin originates from the stationary phase rather than from de novo synthesis during germination (the compounds are therefore not listed in Table 3). The reason is that actinorhodin (as well as other aromatic pigments derived from type II and type III polyketide synthases) is known to be bound to the spore envelopes throughout dormancy (Davis and Chater, 1990; Bystrykh et al., 1996; Funa et al., 1999; Tahlan et al., 2007). This was also confirmed by the detection of actinorhodin in our samples of dormant spores even after several washings.
Secondary Metabolites of S. coelicolor Produced during Germination
The latest study on the topic (Xu and Vetsigian, 2017) concludes that the germination of S. coelicolor M145 may be positively or negatively affected by unknown substances produced by the germlings themselves or by other Streptomyces species (e.g., S. venezuelae). The results presented here reveal that S. coelicolor produces three secondary metabolites during germination, belonging to the terpenoids (albaflavenone) and the polyketides (germicidin A, chalcone). As these compounds were not detected in dormant spore extracts, we assume that they are produced de novo during germination. They show a variety of biological effects and thus perhaps help S. coelicolor suppress competing microflora or coordinate its own development at this early stage. The biosynthetic pathways of the detected substances encompass only a few simple reaction steps that do not require complicated precursors, and their biosynthetic genes are expressed during germination, as can be found in the gene expression data (Strakova et al., 2013). In contrast, structurally complex metabolites (like the CDA) were not detected during germination (see above).
Albaflavenone
The tricyclic sesquiterpenoid albaflavenone has an aroma similar to geosmin (Gerber and Lechevalier, 1965; Gurtler et al., 1994). Its biosynthesis requires only two genes, sco5222-5223 (Moody et al., 2012). The expression of these genes can be suppressed by the cAMP-receptor protein, Crp, which also occurs in other bacteria (e.g., in Escherichia coli). The cAMP-Crp control system is a key regulator of germination, secondary metabolism, and further development of S. coelicolor (Derouaux et al., 2004; Gao et al., 2012; Bobek et al., 2017). The system also influences the expression of biosynthetic gene clusters in S. coelicolor that extend beyond albaflavenone, including actinorhodin, prodigiosin, CDA, and coelimycin (Gao et al., 2012).
So far, the production of albaflavenone in streptomycetes has been described only during stationary growth in S. albidoflavus (Gurtler et al., 1994), S. coelicolor, S. viridochromogenes, S. avermitilis, S. griseoflavus, S. ghanaensis, and S. albus (Moody et al., 2012). However, the expression data analysis shows that the gene sco5223 is activated during germination (Strakova et al., 2013), indicating the possible formation of this metabolite during the initial 6 h of cultivation. We indeed found a substance corresponding to albaflavenone in the germination sample in AM medium with glycerol. The compound was also detected in samples from the stationary phase (positive control in R3 medium with glycerol), but not in samples from dormant spores, indicating its germination-associated de novo synthesis.
Albaflavenone is not commercially available, which is why we used hexane extracts, in which the compound was detected by LC-MS, to test its biological activities; however, we did not observe any effect. On the other hand, it can be assumed that the albaflavenone produced during germination provides an advantage in a highly competitive soil environment, because it has a demonstrable antibacterial effect on Bacillus subtilis at a concentration of 8 µg mL−1 (Gurtler et al., 1994). Moreover, if albaflavenone were incorporated into the hydrophobic envelope of spores, as other terpenoids incorporate into lipophilic membrane layers, it would affect the permeability of the envelopes, leading to an intense water influx into the spores and thereby accelerating their germination. If we reason that the thickness of the hydrophobic spore envelope is not uniform (Lee and Rho, 1993), then water enters different spores with different intensity, and therefore, naturally, germination is a non-synchronous process (Hirsch and Ensign, 1976a; Xu and Vetsigian, 2017). Spores that have already germinated would produce albaflavenone as a signal that environmental conditions are appropriate for the growth of the whole population.
β-farnesene is a sesquiterpene related to albaflavenone with a wide range of bioactivities (Gibson and Pickett, 1983; Avé et al., 1987) that serves as a precursor of a number of biosynthetic pathways, including geosmin synthesis. The synthesis of both albaflavenone and β-farnesene depends on the type of activity carried out by cytochrome P450 (CYP170A1). It may function either as a P450 monooxygenase or as a P420 farnesene synthase. It has been shown that the farnesene-synthase activity predominates at pH 5.5-6.5 and in the presence of bivalent cations (Mg2+, Mn2+, Ca2+), while at pH 7.0-8.2 it functions as the monooxygenase, oxidizing epi-isozizaene first to albaflavenol and then to albaflavenone (Moody et al., 2012). The conformation and final enzymatic activity of CYP170A1 is thus affected by the pH of the environment. We presume that S. coelicolor may exploit the dual pH-dependent activity of the enzyme to detect optimal external conditions. Therefore, we tested whether the biosynthetic activity is dependent on the pH of the medium during germination. For the experiment we used the same R3 medium with a pH of either 7.2 or 6.0. Although we were not able to directly demonstrate the production of β-farnesene, we showed that the spectrum of detected substances differed significantly. Further experiments are required to confirm the expected pH-dependent signaling activity of the albaflavenone/β-farnesene system.
Germicidin A
The Gcs protein (a type III polyketide synthase, PKS III) is involved in germicidin biosynthesis in S. coelicolor (Chemler et al., 2012). However, expression of its gene sco7221 during germination was below the detection limit (Strakova et al., 2013). Germicidin biosynthesis could also be related to the activity of the genes sco7670-7671, whose expression is active during germination (Strakova et al., 2013).
Germicidin A production has previously been described in germinating spores and in the stationary growth stage of S. viridochromogenes (Hirsch and Ensign, 1978; Petersen et al., 1993). It has also been isolated after more than 24 h of submerged cultivation of S. coelicolor and other streptomycetes (Petersen et al., 1993; Aoki et al., 2011; Ma et al., 2017). The results of our work, however, show for the first time that germicidin A is produced by germlings of S. coelicolor (in R3 medium with glycerol or mannitol, and in AM medium with glucose or glycerol). In contrast, germicidin B was not produced by germinating S. coelicolor, which is consistent with the results reported in S. viridochromogenes (Petersen et al., 1993). On the other hand, both polyketides, germicidin A and germicidin B, were detected here in samples from the stationary phase (in both R3 medium with glycerol or mannitol and NMMP medium with glucose or mannitol).
Germicidins belong to a richly represented group of α-pyrone natural substances found in bacteria, fungi, plants, and animals (Schaberle, 2016). Pyrones have many biological effects, and quorum-sensing signal molecules can be found among them (Brachmann et al., 2013). Germicidin A is a known reversible inhibitor of spore germination; it prevents germination at the very low concentration of 40 pg mL−1, i.e., only 2,400 molecules per spore (Petersen et al., 1993). We verified this biological effect in S. coelicolor, where germicidin A had a marked adverse effect on germination already at a concentration in the medium as low as 4 µg mL−1. It is known that germicidin A affects the respiration of spores and mycelia by interacting with the membrane Ca2+-ATPase, inactivating the enzyme. By this mechanism, germicidin not only prevents spores from generating sufficient energy for germination but also inhibits hyphal growth (Eaton and Ensign, 1980; Grund and Ensign, 1985; Aoki et al., 2011). Germicidin A also exhibits antibacterial activity against gram-positive bacteria such as Bacillus subtilis, Arthrobacter crystallopoietes, and Mycobacterium smegmatis (Grund and Ensign, 1985; Aoki et al., 2011).
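The cited figure of roughly 2,400 molecules per spore at 40 pg mL−1 can be checked with simple arithmetic. In the sketch below, only the concentration comes from the text; the molecular weight of germicidin A and the spore density are assumed values chosen for illustration.

```python
# Back-of-envelope check of the "2,400 molecules per spore" figure for
# germicidin A at 40 pg/mL (Petersen et al., 1993). The molecular weight
# (C11H16O3) and the spore density are assumptions, not data from the study.
N_A = 6.022e23          # Avogadro's number, molecules per mol
MW = 196.2              # g/mol, assumed molecular weight of germicidin A
conc = 40e-12           # g/mL (40 pg/mL)
spores_per_ml = 5e7     # assumed spore density

molecules_per_ml = conc / MW * N_A
print(molecules_per_ml / spores_per_ml)   # about 2.5e3 molecules per spore
```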
Because of its inhibitory effect, the production of germicidin during germination under optimal conditions might, at first glance, seem surprising. Its production by germinating spores might help to coordinate germination within the population. Its self-regulating function could maintain a portion of spores in their dormant state for a prolonged period as a reserve in case the environment proves to be unfavorable or when germination occurs at higher spore densities. Conversely, the ungerminated spores can be further propagated in the environment and, after overcoming the reversible inhibition, can spread into new niches.
Chalcone
To our knowledge, the detection of chalcone in Streptomyces has not been reported before. Its tentative identification presented here is based on the accurate mass of analyzed supernatants from the stationary phase and from germination in R3 medium with glycerol or mannitol. Chalcones are intermediates of flavonoid biosynthesis, in which 1,3,6,8-tetrahydroxynaphthalene synthase plays the key role. One of its precursors is naringenin-chalcone, whose involvement in the biosynthesis of the flavonoid naringenin was described in S. clavuligerus (Alvarez-Alvarez et al., 2015). The enzyme, which belongs to PKS III, is closely related to the plant chalcone synthase (Izumikawa et al., 2003). Its gene, sco1206, is expressed in S. coelicolor during germination (Strakova et al., 2013).
Chalcone is apparently an instrument of interspecies interaction, as its antifungal, phytotoxic, and insecticidal effects have been reported (Diaz-Tielas et al., 2012). Chalcone could also function as a signaling molecule in a symbiotic relationship, such as 4,4′-dihydroxy-2′-methoxychalcone, which is produced by legumes and induces transcription of nod genes in symbiotic rhizobacteria. The products of these genes, Nod factors, are involved in the symbiosis in which rhizobacteria fix nitrogen for plants (Maxwell et al., 1989).
Interestingly, other flavonoids (quercetin, kaempferol, and myricetin) are known to stimulate pollen germination in Nicotiana tabacum L. (Ylstra et al., 1992). Therefore, we expected that chalcones produced by germinating spores would stimulate germination under favorable environmental conditions. Experimentally, however, we verified the opposite: chalcone remarkably inhibited germination of S. coelicolor. At a concentration of 300 µg mL−1 it completely suppressed growth, and at 8 µg mL−1 and lower the compound visibly inhibited spore germination, colony differentiation, and actinorhodin production on solid medium. Electron microscopy images taken from liquid cultivation revealed disrupted germ tubes. This finding correlates with a described activity of chalcone, which was shown to interfere with the cell membrane of Staphylococcus aureus (Sivakumar et al., 2009).
CONCLUSION
In this work, the production of secondary metabolites in germinating streptomycetes is systematically analyzed for the first time. Our investigation was based on the hypothesis that germinating spores exploit intercellular communication (quorum sensing) to support coordinated development at its early stage, as well as interspecies communication (quorum quenching) to suppress metabolic activities of competing microflora (Chen et al., 2000). This work builds on the previous transcriptomic analysis of germination in Streptomyces (Strakova et al., 2013), which showed the expression of genes from different antibiotic clusters. Here, using LC-MS, we detected three potentially important secondary metabolites (the sesquiterpenoid albaflavenone and the polyketides germicidin A and chalcone) that are synthesized during spore germination of S. coelicolor. All three detected compounds possess the capacity to suppress competing microflora at this early stage of development. Their biosynthetic pathways are simple, having only a few reaction steps that do not require complex precursors.
Albaflavenone had previously been detected only in the stationary phase of growth of certain streptomycetes (Gurtler et al., 1994; Moody et al., 2012). It exhibits an antibacterial effect and could serve as a germination signal for coordinated development or as a factor of interspecific communication (these suggestions are not proven here). In contrast, the two other compounds revealed inhibitory effects on germination that may explain the slower germination rate and lower synchronicity of the model S. coelicolor in comparison with other Streptomyces species, such as S. viridochromogenes. Our data are consistent with the known inhibitory effect of the supernatant of S. coelicolor on its own germination (Xu and Vetsigian, 2017).
The widespread autoregulator of germination, germicidin A, is known to be produced by germinating spores of S. viridochromogenes. Here we showed that the germlings of S. coelicolor are also capable of producing it. Chalcone is probably one of the precursors in the biosynthesis of a not-yet-described flavonoid in S. coelicolor. During germination it functions as a germination inhibitor and may serve as a means of interspecies communication.
AUTHOR CONTRIBUTIONS
MČ managed all experiments and evaluated data; MČ and KP prepared analytical samples; MČ and ZK performed LC-MS measurements; KŠ and NB performed cultivations and designed the germination experiments; OB and OK performed the electron microscopy; JB conceived the project and wrote the manuscript.
"year": 2017,
"sha1": "8f165402d7342f734914b42cefd08bae73cd4b68",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2017.02495/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f165402d7342f734914b42cefd08bae73cd4b68",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Quantum Mechanics with a Momentum-Space Artificial Magnetic Field
The Berry curvature is a geometrical property of an energy band which acts as a momentum space magnetic field in the effective Hamiltonian describing single-particle quantum dynamics. We show how this perspective may be exploited to study systems directly relevant to ultracold gases and photonics. Given the exchanged roles of momentum and position, we demonstrate that the global topology of momentum space is crucially important. We propose an experiment to study the Harper-Hofstadter Hamiltonian with a harmonic trap that will illustrate the advantages of this approach and that will also constitute the first realization of magnetism on a torus.
The Hamiltonian of a charged particle in a magnetic field is a familiar and fundamental result in quantum mechanics [1]. In condensed matter physics, this Hamiltonian serves as a basic model for electrons in atoms and solids in the presence of magnetic fields. However, the textbook magnetic Hamiltonian is only one half of the full story. In this Hamiltonian, the roles of momentum and position are inherently asymmetric; the magnetic vector potential is a function of position that re-defines the relationship between the canonical and physical momenta. Here we show that the effective Hamiltonian of a very general class of systems is a magnetic Hamiltonian in momentum space. We build on ideas from the last sixty years [2][3][4][5] to discuss how the Berry curvature of an energy band acts as an artificial magnetic field with the roles of momentum and position reversed. Furthermore, we propose a simple experiment that will demonstrate the practical importance of understanding the effective magnetic Hamiltonian in momentum space. This general approach opens up new research avenues in diverse branches of physics, including ultracold gases, topological insulators, photonic systems, spin-orbit coupling and the mathematical physics of topological spaces.
Energy bands in momentum space are characterised not only by an energy spectrum, but also by important geometrical and topological properties. In recent years, these properties have been the subject of intense scientific research in many fields. Bands with nontrivial topological and geometrical properties are present in many solid-state materials [5], but thanks to important recent advances, they can also now be created in ultracold gases [6][7][8][9] and photonic systems [10][11][12][13]. The topological invariants of energy bands can be used to understand the quantisation of conductance in the quantum Hall effect [14] and to classify topological insulators [15,16]. Topological invariants are closely related to local geometrical properties of energy bands, such as the Berry curvature [4,5]. The Berry curvature of a band can be nonzero when time-reversal or inversion symmetry is broken [5], as, for example, for electrons in a magnetic field or with spin-orbit coupling. The Berry curvature has many direct physical consequences. For example, it plays an important role in the anomalous Hall effect [3,17], the semiclassical dynamics of a wave packet [18] and the collective modes of an ultracold gas [19]. Berry phase effects have also been directly observed in graphene [20,21] and bulk Rashba semiconductors [22].
Even though it was implicitly suggested very early [3,4], the idea of the Berry curvature acting like a magnetic field in momentum space has so far been generally regarded as just a primitive analogy [5]. Here, we put this idea on a solid footing, demonstrating the remarkable power of this perspective and discussing some of its many consequences. We derive an effective magnetic Hamiltonian in momentum space which is of broad applicability and relevance. To illustrate our general discussion, we present two specific systems to which the effective magnetic Hamiltonian can be applied. First, we discuss two-dimensional (2D) Rashba spin-orbit coupling with a Zeeman field and an external potential, then we propose a simple experiment to study the Harper-Hofstadter model [23] with an additional harmonic trap. This experiment would be a natural extension of recent experiments in ultracold gases [8,9], photonic systems [12] and solidstate superlattices [24].
The duality between the effective momentum space Hamiltonian and the textbook magnetic Hamiltonian is of great benefit to the study of both energy bands and magnetism. In particular, the energy bands of a two-dimensional periodic system are defined in the 2D Brillouin zone, which has the topology of a torus. In real space, the magnetic field through a torus is quantised, counting the number of magnetic monopoles contained inside. A torus in a real space magnetic field has long been of mathematical and theoretical interest [25,26], but it has not previously been investigated experimentally. We show that this may now be possible, as the eigenstates of the Harper-Hofstadter model with a harmonic trap are momentum space Landau levels on a torus, for certain accessible parameter regimes. Alternatively, the effective Hamiltonian can be extended to energy bands with degeneracies, such as those in quantum spin Hall physics, graphene and topological insulators [15,16]. The artificial gauge field in momentum space then has interesting properties; it is non-Abelian and may be connected to the Yang-Mills gauge field of high energy physics.
We begin with the general derivation of the effective magnetic Hamiltonian in momentum space. We start from the single-particle Hamiltonian, H = H 0 + W . The first part of the Hamiltonian, H 0 , is either translationally invariant or periodic in real space. For example, H 0 could refer to an electron in a crystal, an atom with spin-orbit coupling, an ultracold atomic gas in an optical lattice or light in either a photonic crystal or a lattice of coupled resonators or waveguides. The second part of the Hamiltonian, W , is a weaker additional potential. This could be, for instance, an electric field for an electron or a harmonic trap or optical superlattice potential for atoms.
The eigenfunctions of H_0 can be written as
$$|\chi_{n,\mathbf{p}}(\mathbf{r})\rangle = \frac{e^{i\mathbf{p}\cdot\mathbf{r}}}{\sqrt{V}}\,|n\mathbf{p}\rangle,$$
where |np⟩ is the energy eigenstate for band index n and momentum p, and V is a normalisation factor. If H_0 is periodic, the eigenstate is the periodic Bloch function, u_{n,k}(r), and the momentum is the crystal momentum, k, defined in the Brillouin zone (we take ħ = 1 throughout). The normalisation, V, is the number of lattice sites, N. If instead H_0 is translationally invariant, the eigenstate |np⟩ is a position-independent wave function and V is the volume of the system. For simplicity, we focus on systems in 2D, although the extension to three dimensions is straightforward.
The energy bands are described by a band structure, E_n(p), and have geometrical properties encoded in the Berry connection, A_n(p), and Berry curvature, Ω_n(p) [4,5]:
$$\mathcal{A}_n(\mathbf{p}) = i\,\langle n\mathbf{p}|\nabla_{\mathbf{p}}|n\mathbf{p}\rangle, \qquad \mathbf{\Omega}_n(\mathbf{p}) = \nabla_{\mathbf{p}} \times \mathcal{A}_n(\mathbf{p}).$$
The additional potential, W, mixes different eigenstates, |np⟩. The first example of this was investigated in 1959 to study the anomalous velocity of an electron in an electric field in a solid [3]. We build on these ideas to derive the effective Hamiltonian in a much more general, simple and modern context. We expand the eigenstates of the full Hamiltonian, H, as
$$|\Psi\rangle = \sum_n \sum_{\mathbf{p}} \psi_n(\mathbf{p})\,|\chi_{n,\mathbf{p}}\rangle,$$
where ψ_n(p) are expansion coefficients. For a periodic H_0, this sum is taken over the first Brillouin zone; otherwise, the sum runs over infinite momenta. We substitute into the Schrödinger equation, i ∂|Ψ⟩/∂t = H|Ψ⟩, and apply ⟨χ_{n′,p′}| to obtain the equations of motion for the coefficients. To proceed, we expand W(r) as a power series in r, and repeatedly insert the completeness relation 1 = Σ_n Σ_p |χ_{n,p}⟩⟨χ_{n,p}|. Then we can use an identity which we have generalised from a previously known result [2] (Supplementary Information). We assume that the additional potential is sufficiently weak that it does not significantly mix energy bands and that the contribution from only one band n is non-negligible. A quantitative condition for this approximation will be discussed in the following. The effective quantum Hamiltonian in the single-band approximation is
$$H_{\mathrm{eff}} = E_n(\mathbf{p}) + W\big(i\nabla_{\mathbf{p}} + \mathcal{A}_n(\mathbf{p})\big). \qquad (5)$$
The Berry connection enters like a magnetic vector potential in momentum space, modifying the physical position as r → i∇_p + A_n(p), where i∇_p = R is the canonical position [3,17]. This is the reciprocal replacement to p → −i∇_r + eA(r): the correction of the physical momentum, p, by a magnetic vector potential, A(r), where e is the electric charge [1]. In both cases, the canonical variable becomes gauge-dependent, while the physical variable remains independent of gauge. The duality between momentum space magnetism and real space magnetism is also transparently demonstrated by comparing the effective Hamiltonian (5) to the textbook magnetic Hamiltonian of a charged particle,
$$H = \frac{\big(-i\nabla_{\mathbf{r}} + e\mathbf{A}(\mathbf{r})\big)^2}{2M} + V(\mathbf{r}), \qquad (6)$$
for an external potential, V(r), and particle mass, M. In particular, note that the roles of momentum and position are reversed. The energy band structure, E_n(p), acts like the external potential V(r), while W(i∇_p + A_n(p)) corresponds to the "kinetic energy". The band index n could be a spin index in the magnetic analogy. When W(r) is a harmonic trap, W(r) = ½κr², the effective momentum space Hamiltonian (5) is
$$H_{\mathrm{eff}} = E_n(\mathbf{p}) + \frac{\kappa}{2}\big(i\nabla_{\mathbf{p}} + \mathcal{A}_n(\mathbf{p})\big)^2. \qquad (7)$$
Comparing with equation (6), we see that the inverse trapping strength, κ⁻¹, is equivalent to the mass of the particle, M. Different types of external potential, W(r), are equivalent to different energy-momentum relationships in the real space magnetic Hamiltonian. The effective Hamiltonian (5) may also be generalised to systems such as graphene and topological insulators. Instead of making a single-band approximation, we could keep a sub-space of degenerate or nearly degenerate bands (Supplementary Information). The effective momentum space magnetic field then has a non-Abelian gauge structure [27].
The anomalous velocity [3] and semiclassical equations of motion [18] follow straightforwardly from Hamilton's equations, Ṙ = ∂H/∂p and ṗ = −∂H/∂R. The velocity will be
$$\dot{\mathbf{R}} = \nabla_{\mathbf{p}} E_n(\mathbf{p}) - \dot{\mathbf{p}} \times \mathbf{\Omega}_n(\mathbf{p}),$$
provided that we can either approximate the potential, W(r), as a locally uniform force [3], or approximate the Berry curvature as locally constant in momentum space. The contribution of the Berry curvature to the velocity is analogous to the Lorentz force in magnetism. This has important experimental consequences, for example, for a particle with 2D Rashba spin-orbit coupling in a Zeeman field [5,19,28]. This model is relevant for real materials [28], as well as for ultracold gases, where there is great interest in creating 2D artificial spin-orbit coupling [29]. The basic Hamiltonian is
$$H_0 = \frac{p^2}{2M} + \lambda\big(p_y\hat{\sigma}_x - p_x\hat{\sigma}_y\big) + \Delta\hat{\sigma}_z,$$
where σ̂_{x,y,z} are the Pauli matrices, Δ is the Zeeman field and λ is the spin-orbit coupling strength. The Hamiltonian, H_0, has two energy bands, E_±(p) = p²/2M ± √(λ²p² + Δ²), which are non-degenerate provided that Δ is non-zero. The Berry curvature is [28]
$$\mathbf{\Omega}_{\pm}(\mathbf{p}) = \mp\,\frac{\lambda^2\Delta}{2\,(\lambda^2 p^2 + \Delta^2)^{3/2}}\,\hat{z}.$$
With an external harmonic potential, W(r) = ½κr², a particle in the lowest band can be described by the effective Hamiltonian (7). For λ²M/Δ < 1, the lowest band has a single minimum at p = 0. For a gas condensed at this minimum, the Berry curvature acts as a momentum space magnetic field, splitting the dipole mode frequencies of a trapped ultracold gas [19] (Supplementary Information). The splitting of the dipole modes can be large, and would be directly observable within current experimental capabilities [19].
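As a numerical illustration of these expressions, the sketch below evaluates the Rashba-Zeeman band energies and the standard analytic form of the lower-band Berry curvature (with ħ = 1). The parameter values are arbitrary, and the overall sign of the curvature depends on the convention chosen for the Rashba term.

```python
# Sketch: Rashba-Zeeman band energies E±(p) and the lower-band Berry
# curvature, using the standard analytic expressions (hbar = 1).
import numpy as np

M, lam, Delta = 1.0, 0.5, 1.0    # mass, spin-orbit strength, Zeeman field

def bands(p):
    root = np.sqrt(lam**2 * p**2 + Delta**2)
    return p**2 / (2 * M) - root, p**2 / (2 * M) + root

def berry_curvature_lower(p):
    # |Omega(p)| = lam^2 * Delta / (2 * (lam^2 p^2 + Delta^2)^(3/2)), along z;
    # the sign depends on the sign convention of the Rashba coupling.
    return lam**2 * Delta / (2 * (lam**2 * p**2 + Delta**2)**1.5)

p = np.linspace(0.0, 3.0, 4)
print(bands(p)[0])                 # lower band; single minimum at p = 0
print(berry_curvature_lower(p))    # peaks at p = 0, width set by Delta/lam
# Consistency check: lam^2 * M / Delta = 0.25 < 1, single-minimum regime.
```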
While the Berry curvature considered in equation (5) is a local property of the energy band, the global topology of the momentum space can also play an important role. In the Rashba model discussed above, the basic Hamiltonian, H 0 , is translationally invariant and the momentum p is continuous and unconstrained. However, when H 0 is periodic, the momentum is defined over the Brillouin zone, which has the topology of a torus.
It is well known that real space magnetism on a torus has many interesting properties, such as the quantisation of magnetic flux due to the presence of magnetic monopoles. In the analogy with magnetism, this is equivalent to the quantisation of the Berry curvature, integrated over the entire Brillouin Zone, in units of the Chern number: the topological invariant underlying the quantum Hall effect [14]. In the following, we propose a simple experiment to study magnetism on a torus experimentally for the first time. We investigate the eigenstates of the Harper-Hofstadter Hamiltonian [23] with an external harmonic trap; such an experiment would be a natural extension of recent advances [8,9,12,24].
In the Harper-Hofstadter model, a particle hops on a 2D lattice in a perpendicular magnetic field, B = Bẑ. Choosing the magnetic vector potential in the Landau gauge, A(r) = Bx ŷ, the tight-binding Hamiltonian with a harmonic trap is
$$H = -J\sum_{m,n}\left(\hat{a}^{\dagger}_{m+1,n}\hat{a}_{m,n} + e^{i\phi}\,\hat{a}^{\dagger}_{m,n+1}\hat{a}_{m,n} + \mathrm{h.c.}\right) + \frac{\kappa}{2}\sum_{m,n} r_{m,n}^2\,\hat{a}^{\dagger}_{m,n}\hat{a}_{m,n}, \qquad (11)$$
where the first term is the Harper-Hofstadter Hamiltonian, H_0, J is the hopping amplitude, a is the lattice spacing, r_{m,n} is the distance of site (m,n) from the trap centre, and the â†_{m,n} (â_{m,n}) operators create (annihilate) a particle at lattice site (m,n). The hopping along ŷ is modified by a complex phase φ = 2παm (with x = ma), where α is the number of magnetic flux quanta per plaquette of the lattice. Without a harmonic trap, there is just the Harper-Hofstadter Hamiltonian, H_0, and the behaviour is governed by the value of α. For rational values of α = p/q, the tight-binding band splits into q magnetic sub-bands. The energy spectrum of the Harper-Hofstadter Hamiltonian is the well-known Hofstadter butterfly [23]. The magnetic vector potential, A(r), is not periodic, and the usual translation operators do not commute with H_0 [18]. Bloch's theorem can be applied only when we define new magnetic translation operators which do commute with H_0. For these to also commute amongst themselves, we define a larger magnetic unit cell of q plaquettes, containing an integer number of magnetic flux quanta. Then the Bloch states are magnetic Bloch states and the crystal momentum, k, is defined within the magnetic Brillouin zone (MBZ): −π/a < k_y ≤ π/a and −π/qa < k_x ≤ π/qa (for a magnetic unit cell of q plaquettes along x̂) [18].
We numerically diagonalise the full Hamiltonian, H, with the harmonic trap (11). The energy spectrum is shown in Figure 1 for low energies as a function of the magnetic flux, α. This was calculated for κa²/J = 0.02 on a lattice of 21 by 21 sites. The weak harmonic potential splits the Harper-Hofstadter bands into a complicated structure first noted in Ref. 30. Here we show that these eigenstates can be understood through the analogy between Berry curvature and magnetism.
Figure 1: (These numerical results were first obtained in Ref. 30, but a complete interpretation was not provided.) As shown in the inset, the harmonic trap splits the lowest Hofstadter band into a ladder of toroidal Landau levels, with an equal energy spacing of κ|Ω_0| = κa²/2πα. At increasing energies, higher bands are also split into ladders of toroidal Landau levels. As α → 0, the effective Hamiltonian breaks down and the states are those of a 2D simple harmonic oscillator on a lattice [30].

When the single-band approximation is valid, the physics of the full system (11) is described by the effective momentum space Hamiltonian (5). We focus on the regime α = 1/q ≪ 1 so that we can make two further simplifications. Firstly, with decreasing α = 1/q, the bands flatten compared to the hopping energy J. If the bandwidth is much smaller than κa², we can assume E_n(k) ≃ E_n. The effect of the band structure is then only to shift the overall energy. Secondly, when α = 1/q and q is odd, the Chern number of each band, except the middle band, is −1. For α ≪ 1, the Berry curvature of these bands is increasingly uniform, Ω_n(k) ≃ Ω_n. The uniform value of the Berry curvature, |Ω_0| = a²/(2πα), is found by noting that the Chern number is C_0 = (1/2π) Ω_0 A_BZ = −1, where A_BZ = (2π)²/qa² is the area of the MBZ [19]. Therefore, for α = 1/q ≪ 1, the effective Hamiltonian describes a particle in a uniform magnetic field in momentum space, with an additional overall energy shift.

The eigenstates of a particle in a real space uniform magnetic field are the infinitely degenerate Landau levels [1]. Introducing periodic boundary conditions restricts the particle to the surface of a torus [25,26]. Toroidal Landau levels are superpositions of the infinitely degenerate Landau levels such that the boundary conditions are satisfied. From the duality between momentum space magnetism and real space magnetism, we can deduce the properties of toroidal Landau levels in the magnetic Brillouin zone. The energy is unaffected by topology and is given by the well-known Landau level spectrum,
$$E_{n,\beta} = E_n + \kappa|\Omega_n|\left(\beta + \tfrac{1}{2}\right), \qquad (12)$$
where β = 0, 1, 2, … is the Landau level quantum number and κ|Ω_n| is the analogue of the cyclotron frequency, ω_c = e|B|/M. The degeneracy of toroidal Landau levels is finite and equal to the quantised number of magnetic flux quanta piercing the torus [25]. Counting the degeneracy of such states may therefore provide another experimental tool to directly measure the Chern number of non-degenerate energy bands. As shown in the inset of Fig. 1, the lowest Harper-Hofstadter band is split into a ladder of equispaced states. The energy spacing of these states is in excellent agreement with Eq. 12, where |Ω_0| = a²/(2πα). The result was also noted in Ref. 30, but its origin was not discussed there. These states are non-degenerate, reflecting the Chern number of the lowest band. Similar behaviour is also observed for p = 1; this will be discussed in a future publication. Note that, as shown in Fig. 1, this result only holds for sufficiently large α such that the single-band approximation is valid. For smaller values of α, one recovers the energy spacing of a 2D simple harmonic oscillator on a tight-binding lattice [30].
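A minimal numerical sketch of this calculation is given below (it is not the code used for Figure 1). It builds the trapped Harper-Hofstadter Hamiltonian (11) in the Landau gauge on a 21 × 21 lattice with α = 1/11 and κa²/J = 0.02, diagonalises it, and compares the low-lying level spacing with the predicted toroidal-Landau-level spacing κ|Ω_0| = κa²/2πα.

```python
# Sketch of the trapped Harper-Hofstadter model, Landau gauge: hopping along
# y picks up the Peierls phase exp(i*2*pi*alpha*m). Energies in units of J,
# lattice spacing a = 1, trap centred on the lattice.
import numpy as np

L = 21               # lattice sites per side
alpha = 1.0 / 11.0   # magnetic flux quanta per plaquette
kappa = 0.02         # trap strength kappa * a^2 / J

N = L * L
H = np.zeros((N, N), dtype=complex)
idx = lambda m, n: m * L + n
c = (L - 1) / 2.0    # trap centre

for m in range(L):
    for n in range(L):
        i = idx(m, n)
        H[i, i] = 0.5 * kappa * ((m - c) ** 2 + (n - c) ** 2)
        if m + 1 < L:                 # hopping along x
            j = idx(m + 1, n)
            H[j, i] = H[i, j] = -1.0
        if n + 1 < L:                 # hopping along y, with Peierls phase
            j = idx(m, n + 1)
            H[j, i] = -np.exp(1j * 2 * np.pi * alpha * m)
            H[i, j] = np.conj(H[j, i])

E = np.linalg.eigvalsh(H)
print(np.diff(E[:6]))                # ladder spacing of the lowest states
print(kappa / (2 * np.pi * alpha))   # predicted spacing kappa*a^2/(2*pi*alpha)
```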
Looking again at the inset of Fig. 1, one notes that at higher energies a second ladder of states cuts diagonally across the first ladder. These states can be identified as toroidal Landau levels in the second lowest Harper-Hofstadter band. The single-band approximation requires that κa² ≪ (E_n − E_{n′}). This is easily fulfilled for our choice of parameters: the splitting of the two lowest bands, for example, is (E_1 − E_0) ≃ J for α = 1/11 without a harmonic trap. The effective Hamiltonian applies to each band separately, without inter-band coupling. Increasing the energy, Landau levels of other bands also appear. However, our assumptions eventually break down: the inter-band transition energy decreases, and the Berry curvature and band structure become increasingly nonuniform.
We can also compare the form of the numerical eigenstates with those of theoretical Landau levels on a torus [26]. In the magnetic Brillouin zone, when |C_n| = 1, the states take the form given in the Supplementary Information, where H_β are Hermite polynomials and N_{l_{Ω_n}β} is a normalisation constant. The characteristic momentum scale is l_{Ω_n} = 1/√|Ω_n|, the analogue of the "magnetic length". Like a real space magnetic vector potential, the Berry connection is gauge-dependent. In writing down this wave function, we chose the Landau gauge for the momentum space Berry connection, A_n(k) = |Ω_n| k_x k̂_y, although we only directly compare quantities that are independent of this gauge choice with numerics. The excellent agreement between Landau levels on a torus and the numerical eigenstates is shown in Fig. 2, for the 9th and 48th numerical states. These correspond to the β = 8 toroidal Landau level in the first and second Harper-Hofstadter bands, respectively.
Our simple proposal is well suited to experiments as the Landau levels observed are remarkably robust. According to our numerics, the basic features of the lowest energy toroidal Landau levels survive for magnetic flux up to α = 1/5 and for κa²/J ≃ 0.5. This gives a large parameter regime over which the properties of the effective magnetic Hamiltonian may be investigated. Importantly, these results are also very insensitive to lattice size, due to strong localisation of the low energy eigenstates in real space (Fig. 2). This is because toroidal Landau levels are delocalised in momentum space, varying over a large characteristic momentum scale, l_{Ω_n}. Therefore, experiments can be performed for very small lattices, which is a key technical advantage. Experimentally, a harmonic trap can be added straightforwardly to an ultracold gas by using additional laser beams and/or magnetic fields. The eigenstates of the system can be probed directly in time-of-flight measurements of the momentum distribution. In photonic systems, a harmonic potential might be created by letting the cavity size vary in space, while the momentum space wave function is measured in the far-field emitted light [12]. It is important to remember that the eigenstates depend on the magnetic gauge chosen for the Harper-Hofstadter Hamiltonian (11). However, in both ultracold gases and photonic systems, there is a preferential magnetic gauge choice inherent in the experimental set-up [8,9,12].
The magnetic Hamiltonian in momentum space is the counterpart to an important textbook result. It is also applicable to many systems of current experimental interest. These also include quantum spin Hall models, graphene and topological insulators, where the gauge field becomes non-Abelian. In the future, the analogy between magnetism and energy bands should open up many new avenues of research. It also raises many interesting open questions concerning the role of interactions and the possibility of momentum-space electrodynamics.
We are grateful to N. R. Cooper for helpful comments and to P. Ghiggini for mathematical support. This work was partially funded by ERC through the QGBE grant and by the Autonomous Province of Trento, Call "Grandi Progetti 2012," project "On silicon chip quantum optics for quantum computing and secure communications -SiQuro".
We note that the magnetic Brillouin zone is defined from the Harper-Hofstadter Hamiltonian without a harmonic trap [9,10]. For clarity, we demonstrate this by taking the Fourier transform of the lattice creation and annihilation operators,
$$\hat{a}_{m,n} = \frac{1}{\sqrt{N}}\sum_{\mathbf{p}} e^{i(p_x m + p_y n)a}\,\hat{a}_{p_x,p_y},$$
and substituting them into H_0. After simplifying the algebra, the Harper-Hofstadter Hamiltonian in the full Brillouin zone becomes [11]
$$H_0 = \sum_{\mathbf{p}}\Big[-2J\cos(p_x a)\,\hat{a}^{\dagger}_{p_x,p_y}\hat{a}_{p_x,p_y} - J\big(e^{-ip_y a}\,\hat{a}^{\dagger}_{p_x+2\pi\alpha/a,\,p_y}\hat{a}_{p_x,p_y} + e^{ip_y a}\,\hat{a}^{\dagger}_{p_x-2\pi\alpha/a,\,p_y}\hat{a}_{p_x,p_y}\big)\Big].$$
This is not diagonal in the full Brillouin zone, as p_x is mixed with p_x + 2πα/a and p_x − 2πα/a. However, the Hamiltonian can be made diagonal by defining a new variable, k_x = p_x + j·2πα/a, where j is an integer. For α = 1/q, G = (2πα/a) k̂_x is a magnetic reciprocal lattice vector for the unit cell discussed in the text, and k is the magnetic crystal momentum. Note that the choice of the new variable, k, followed naturally from the magnetic gauge of H_0. We have deliberately picked the magnetic unit cell for which k = p + jG. (For other choices of magnetic unit cell, this relation would not generally take this simple form.) We can then introduce a unitary matrix, U_{n,k}(p), that transforms eigenstates between the full and magnetic Brillouin zones (Eq. 17); this matrix only has non-zero values for p = k − jG. Taking the inverse of Eq. 17 and applying it to Eq. 14 expresses the state in terms of the wave function coefficients ψ_n(k) in the magnetic Brillouin zone. If we make the single-band approximation, and assume that ψ_n(k) is only non-negligible in one band, we obtain |ψ_n(k)|² = Σ_j |ψ(p = k − jG)|². This relationship is used to calculate the numerical wave function in the magnetic Brillouin zone in the main text and below in Fig. S1.
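The folding relation above can be applied numerically as in the sketch below, which uses a random stand-in distribution purely to show the bookkeeping; the grid size and values are illustrative.

```python
# Sketch: fold a full-Brillouin-zone momentum distribution into the magnetic
# Brillouin zone via |psi_n(k)|^2 = sum_j |psi(p = k - j*G)|^2, with
# alpha = 1/q and G = 2*pi*alpha/a along k_x (a = 1). Illustrative input only.
import numpy as np

q = 11                               # alpha = 1/q
Nx = q * 8                           # k_x samples over the full Brillouin zone
prob_full = np.random.rand(Nx)       # stand-in for |psi(p_x)|^2 on a k_x grid
prob_full /= prob_full.sum()

# Axis 0 of the reshape runs over the q reciprocal-vector shifts j*G.
prob_mbz = prob_full.reshape(q, Nx // q).sum(axis=0)
print(prob_mbz.sum())                # probability is conserved (= 1)
```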
Further Details for Toroidal Landau Levels
Here we give further details and examples of toroidal Landau levels. As stated in the main text, for |C_n| = 1, these take the form of the theoretical Landau levels of a particle on a real space torus, translated to momentum space [12]. We note that the form of these levels depends on the Chern number and the dimensions of the magnetic Brillouin zone. The wave function obeys periodic boundary conditions and the "solenoid fluxes" are zero. In the corresponding normalisation constant, k varies continuously over infinite momenta and is not constrained to the Brillouin zone. (The Berry connection is again chosen in the Landau gauge specified in the main text.) The wave functions of this basis of Landau levels are characterised by a plane wave along k_y, and β nodes along k_x. In the same gauge, the toroidal Landau levels have a markedly different structure (Fig. S1 c & d). The toroidal Landau levels vary along both momentum directions, with the majority of nodes along k_y.
"year": 2014,
"sha1": "0824a8dcac67ceb4436c5d69c92f4c7cb4732e48",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1403.6041",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0824a8dcac67ceb4436c5d69c92f4c7cb4732e48",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
Modeling transcription factor combinatorics in promoters and enhancers
We propose a new approach (TFcoop) that takes into account cooperation between transcription factors (TFs) for predicting TF binding sites. For a given TF, TFcoop bases its prediction upon the binding affinity of the target TF as well as of any other TF identified as cooperating with it. The set of cooperating TFs and the model parameters are learned from ChIP-seq data of the target TF. We used TFcoop to investigate the TF combinations involved in the binding of 106 different TFs on 41 different cell types and in four different regulatory regions: promoters of mRNAs, lncRNAs and pri-miRNAs, and enhancers. Our experiments show that the approach is accurate and outperforms simple PWM methods. Moreover, analysis of the learned models sheds light on important properties of TF combinations. First, for a given TF and region, we show that the TF combinations governing the binding of the target TF are similar across cell types. Second, for a given TF, we observe that TF combinations differ between promoters and enhancers, but are similar for promoters of distinct gene classes (mRNAs, lncRNAs and miRNAs). Analysis of the TFs cooperating with the different targets shows an over-representation of pioneer TFs and a clear preference for TFs with a binding motif composition similar to that of the target. The negative or weak expression of dentinogenesis-associated genes could explain the inhibition of dentin and subsequent enamel formation in this neoplasm. Lastly, our models accurately distinguish promoters into classes associated with specific biological processes.
INTRODUCTION
Transcription factors (TFs) are regulatory proteins that bind DNA to activate or repress target gene transcription. TFs play a central role in controlling biological processes, and are often mis-regulated in diseases (22). Technological developments over the last decade have allowed the characterization of binding preferences for many transcription factors both in vitro (4,16) and in vivo (15). These analyses have revealed that a given TF usually binds similar short nucleic acid sequences that are thought to be specific to this TF, and that are conserved along evolution (5). In addition to enabling characterization of the sequence specificity of TF binding sites (TFBS), in vivo approaches such as ChIP-seq also have the potential to precisely identify the positions of these binding sites genome-wide, in a particular biological condition (cell type or treatment). While consortiums such as ENCODE (40) have generated hundreds of ChIP-seq datasets for different TFs under different conditions, it is not possible to provide data for every TF in every possible biological condition. Therefore, accurate computational approaches are needed to complement experimental results. Moreover, a biological explanation is missing: knowing where a TF binds in the genome does not explain why it binds there. Understanding TF binding involves developing biophysical or mathematical models able to accurately predict TFBSs.
Traditionally, TFBSs have been modeled by position weight matrices (PWMs) (43). These models have the benefit of being simple and easy to visualize, and they can be deduced from in vitro and in vivo experiments. As a result, several databases such as JASPAR (25), HOCOMOCO (21), and Transfac (44) propose position frequency matrices (PFMs, which can then be transformed into PWMs) for hundreds of TFs. These PWMs can be used to scan sequences and identify TFBSs using tools such as FIMO (12) or MOODS (20). However, to a certain extent, PWMs lack sensitivity and accuracy. While the majority of TFs seem to bind a unique TFBS motif, in vivo and in vitro approaches have revealed that some TFs recognize different motifs (17,32). In that case, one single PWM is not sufficient to identify all potential binding sites of a given TF. Moreover, while a PWM usually identifies thousands of potential binding sites for a given TF in the genome (45), ChIP-seq analyses have revealed that only a fraction of those sites are effectively bound (19). There may be different reasons for this discrepancy between predictions and experiments. First, PWMs implicitly assume that the positions within a TFBS independently contribute to binding affinity. Several approaches have thus been proposed to account for positional dependencies within the TFBS (see for example (27,47)). Other studies have focused on the TFBS genomic environment, revealing that TFs seem to have a preferential nucleotide content in the flanking positions of their core binding sites (9,23).
Second, beyond the primary nucleotide sequence, other structural or epigenetic data may also affect TF binding. For example, it is thought that TFs use DNA shape features (e.g., minor groove width and rotational parameters such as helix twist, propeller twist and roll) to distinguish binding sites with similar DNA sequences (36). The contributions of base and shape composition to TFBS recognition vary across TF families, with some TFs influenced mainly by base composition and others mostly by DNA shape (36). Some attempts have thus been made to integrate DNA shape information with PWMs (26). Other studies have investigated the link between TF binding and epigenetic marks, showing that many TFs bind regions associated with specific histone marks (10). However, it remains unclear whether these chromatin states are a cause or a consequence of TF binding (14). Similarly, ChIP-seq experiments have also revealed that most TFBSs fall within highly accessible (i.e., nucleosome-depleted) DNA regions (41). Consequently, several studies have proposed to supplement PWM information with DNA accessibility data to identify the active TFBSs in a given cell type (24,31,38). However, as for other epigenetic marks, DNA accessibility can also be a consequence of TF binding rather than its cause. For example, Sherwood et al. (39) used DNase-seq data to distinguish between "pioneer" TFs, which open chromatin dynamically, and "settler" TFs, which only bind already-opened chromatin.
Competition and cooperation between TFs (combinatorics) can also impact the binding capability of a given TF. As reviewed in Morgunova et al. (30), multiple mechanisms can lead to TF cooperation. In its simplest form, cooperation involves direct TF-TF interactions before any DNA binding. But cooperation can also be mediated through DNA, either with DNA providing additional stability to a TF-TF interaction (18), or even without any direct protein-protein interaction. Different mechanisms are possible for the latter. For example, the binding of one TF may alter the DNA shape in a way that increases the binding affinity of another TF (30). Another system is the pioneer/settler hierarchy described above, with settler TFs binding DNA only if adequate pioneer TFs have already bound to open the chromatin (39). Lastly, other authors have hypothesized a non-hierarchical cooperative system, with multiple concomitant TF bindings mediated by nucleosomes (29). This is related to the "billboard" system proposed for enhancers (3).
Recently, deep learning approaches such as DeepSea (33,48) have been proposed for predicting epigenetic marks (including TF binding sites) from raw sequence data. These approaches show higher prediction accuracy than PWM-based methods, but the biological interpretation of the learned neural network is not straightforward. Moreover, approaches such as DeepSea involve a very high number of parameters and hence require large amounts of training data.
In this paper, we propose a new approach (TFcoop) for modeling TFBSs that takes into account the cooperation between TFs. More formally, for a given cell type, regulatory region (for example 500bp around the TSS of a particular gene), and TF, we aim to predict whether this TF binds the considered sequence in the given cell type. Our predictor is a logistic model based on a linear combination of two kinds of variables: i) the binding affinity (i.e., PWM affinity score) of the TF of interest as well as of any other TF identified as cooperating with the target TF; and ii) the nucleotide composition of the sequence. The set of cooperating TFs and the model parameters are learned from ChIP-seq data of the target TF. This approach thus allows us to take into account the potential presence of cooperating TFs when predicting the presence or absence of the target TF. Another advantage is that it allows us to consider all available PWMs for a given TF, and therefore to handle alternative binding-site motifs. Lastly, the learned model can be readily analyzed and directly yields a list of potentially cooperating TFs. Variable selection (i.e., identification of cooperating TFs) is done via lasso penalization (42). Learning can be done using a moderate amount of data, which allows us to learn specific models for different types of regulatory sequences.
Using ChIP-seq data from the ENCODE project, we used TFcoop to investigate the TF combinations involved in the binding of 106 different TFs on 41 different cell types and in four different regulatory regions: promoters of mRNAs, long non-coding (lnc)RNAs and microRNAs, and enhancers (2,7,11,13). Our experiments show that this approach outperforms simple PWM methods, with accuracy and precision close to those of DeepSea (48). Moreover, analysis of the learned models sheds light on important properties of TF combinations. First, for a given TF and region, we show that the TF combinations governing the presence/absence of the target TF are similar across the different cell types. Second, for a given TF, we observe that TF combinations differ between promoters and enhancers, but are similar for promoters of all gene classes (mRNAs, lncRNAs, and miRNAs). Analysis of the composition of TFs cooperating with the different targets shows an over-representation of pioneer TFs (39), especially in promoters. We also observed that cooperating TFs are enriched for TFs whose binding is weakened by methylation (46). Lastly, our models can accurately distinguish promoters into classes associated with specific biological processes.
Data
Promoter, enhancer, long non-coding RNA and microRNA sequences. We predicted TF binding in both human promoters and enhancers. For promoters, sequences spanning ±500bp around gene starts (i.e. the most upstream TSS) of protein-coding genes, long non-coding RNAs and microRNAs were considered. Starts of coding and lncRNA genes were obtained from the hg19 FANTOM CAGE Associated Transcriptome (CAT) annotation (11,13). Starts of microRNA genes (primary microRNAs, pri-miRNAs) were from (7). For enhancers, sequences spanning ±500bp around the mid-positions of FANTOM-defined enhancers (2) were used. Altogether, our sequence datasets comprise 20,845 protein-coding genes, 1,250 pri-microRNAs, 23,887 lncRNAs, and 38,553 enhancer sequences.
Nucleotide and dinucleotide features. For each sequence, we computed nucleotide and dinucleotide relative frequencies as the number of occurrences in the sequence divided by the sequence length. Frequencies were pooled according to DNA reverse complementarity: we computed the joint frequency of A/T and of G/C, and likewise the frequencies of reverse-complementary dinucleotides (e.g. ApG and CpT) were computed together. This results in a total of 12 features (2 nucleotide and 10 dinucleotide frequencies).
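A minimal R sketch of this pooling, using Bioconductor's Biostrings package (everything except the Biostrings API is hypothetical):

```r
library(Biostrings)  # Bioconductor

# Compute the 12 reverse-complement-pooled (di)nucleotide frequencies of a sequence.
rc_pooled_freqs <- function(seq) {
  s  <- DNAString(seq)
  f1 <- oligonucleotideFrequency(s, width = 1) / length(s)        # A, C, G, T
  f2 <- oligonucleotideFrequency(s, width = 2) / (length(s) - 1)  # 16 dinucleotides
  nt <- c("A/T" = unname(f1["A"] + f1["T"]),
          "G/C" = unname(f1["G"] + f1["C"]))
  # pool each dinucleotide with its reverse complement (e.g. AG with CT)
  rc   <- as.character(reverseComplement(DNAStringSet(names(f2))))
  keys <- ifelse(names(f2) <= rc, names(f2), rc)  # one canonical key per RC pair
  di   <- tapply(f2, keys, sum)                   # 10 pooled dinucleotide frequencies
  c(nt, di)                                       # 12 features in total
}

rc_pooled_freqs("ACGTACGTTTAA")
```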
PWM. We used vertebrate TF PFMs from JASPAR (25), including all existing versions of each PFM, resulting in a set of 638 PFMs including 118 alternative versions. PFMs were transformed into PWMs as described in Wasserman and Sandelin (43). The PWM score used by TFcoop for a given sequence was computed as described in (43), keeping the maximal score obtained at any position of the sequence.
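As an illustration, the best-hit score of a PWM over a sequence can be computed along these lines in R with Biostrings; `pwm` is assumed to be a 4 x w matrix with rows A, C, G, T (e.g. derived from a JASPAR PFM), and whether both strands were scanned is our assumption:

```r
library(Biostrings)

# Maximum PWM score over all positions of a sequence, on both strands.
best_hit_score <- function(pwm, seq) {
  scan_one <- function(s) {
    starts <- 1:(length(s) - ncol(pwm) + 1)
    max(PWMscoreStartingAt(pwm, s, starting.at = starts))
  }
  s <- DNAString(seq)
  max(scan_one(s), scan_one(reverseComplement(s)))
}
```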
ChIP-seq data. We collected ChIP-seq data from the ENCODE project (40) for human immortalized cell lines, tissues, and primary cells. Experiments were selected when the targeted TF was represented by a PWM in JASPAR. We thus studied 409 ChIP-seq experiments covering 106 distinct TFs and 41 different cell types. The most represented TF is CTCF, with 69 experiments, while 88% of the experiments were performed on immortalized cell lines (mainly GM12878, HepG2 and K562). The detailed list of all experiments used is given in the Supplementary materials. For each ChIP-seq experiment, regulatory sequences were classified as positive or negative for the TF targeted by the ChIP. We used Bedtools v2.25.0 (34) to detect intersections between ChIP-seq binding sites and regulatory sequences (both mapped to the hg19 genome). Each sequence that intersects at least one ChIP-seq binding region was classified as a positive sequence; the remaining sequences formed the negative set. The number of positive sequences varies greatly between experiments and sequence types. The mean and standard deviation of the number of positive sequences are 2,661 (±1,997) for mRNAs, 1,699 (±1,151) for lncRNAs, 216 (±176) for microRNAs, and 1,516 (±1,214) for enhancers.
Expression data. To control for the effect of expression in our analyses, we used ENCODE CAGE data restricted to the 41 cell lines. Expression per cell line was calculated as the mean of the expression observed over all corresponding replicates. For microRNAs, we used the small RNA-seq ENCODE expression data collected for 3,043 mature microRNAs in 37 cell lines (corresponding to 403 ChIP-seq experiments). The expression of microRNA genes (i.e. pri-microRNAs) was calculated as the sum of the expression of the corresponding mature microRNAs.
Logistic model
We propose a logistic model to predict the regulatory sequences bound by a specific TF. Contrary to classical approaches, we consider not only the score of the PWM associated with the target TF, but also the scores of all other available PWMs. The main idea behind this is to unveil the TF interactions required for effective binding of the target TF. We also integrate into our model the nucleotide and dinucleotide compositions of the sequences, as the environment of a TFBS is thought to play a major role in binding affinity (9,23).
For each ChIP-seq experiment, we learn different models to predict the sequences bound by the target TF in four regulatory regions (promoters of mRNAs, lncRNAs and pri-miRNAs, and enhancers). For a given experiment and regulatory region, our model aims to predict the response variable $y_s$ by the linear expression

$$y_s = \alpha + \sum_{m \in Motifs} \beta_m \cdot Score_{m,s} + \sum_{n \in Nucl} \beta_n \cdot Rate_{n,s} + \varepsilon_s,$$

where $y_s$ is the boolean response variable representing TF binding on the given sequence $s$ ($y_s = 1$ for TF binding, 0 otherwise); $Score_{m,s}$ is the score of motif $m$ on sequence $s$; $Rate_{n,s}$ is the frequency of (di)nucleotide $n$ in sequence $s$; $\alpha$ is a constant; $\beta_m$ and $\beta_n$ are the regression coefficients associated with motif $m$ and (di)nucleotide $n$, respectively; and $\varepsilon_s$ is the error associated with sequence $s$. The $Motifs$ and $Nucl$ sets respectively contain the 638 JASPAR PWMs and the 12 (di)nucleotide frequencies.
To perform variable selection (i.e. to identify cooperating TFs), we used lasso regression, minimising the prediction error under a constraint on the l1-norm of β (42). The weight of the lasso penalty is chosen by cross-validation, minimising the prediction error with the R package glmnet (35). As the response variable is boolean, we used logistic regression, which gives an estimate of the probability that each sequence is bound. We evaluated the performance of the model using 10-fold cross-validation: in each validation loop, 90% of the sequences (training data) are used to learn the β parameters and the remaining 10% (test data) are used to evaluate the predictive performance of the model.
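The core of this learning step can be sketched as follows with glmnet (matrix and vector names are hypothetical; `X` holds the 650 variables per sequence, `y` the ChIP-seq labels):

```r
library(glmnet)

# X: n_sequences x 650 matrix (638 PWM scores + 12 (di)nucleotide frequencies)
# y: 0/1 vector: does the target TF bind the sequence in this ChIP-seq experiment?
set.seed(1)
cvfit <- cv.glmnet(X, y, family = "binomial", alpha = 1)  # alpha = 1 -> lasso

# Selected variables (cooperating PWMs and (di)nucleotides) = nonzero coefficients
w <- as.matrix(coef(cvfit, s = "lambda.min"))
selected <- rownames(w)[w != 0]

# Predicted binding probability on held-out sequences (X_test is hypothetical)
p <- predict(cvfit, newx = X_test, s = "lambda.min", type = "response")
```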
Alternative approaches
We compared the predictive accuracy of our model to four other approaches.
Best hit approach. The traditional way to identify TF binding sites consists of scanning a sequence with the corresponding PWM and scoring each position. Positions with a score above a predefined threshold are considered potential TFBSs. A sequence is then considered bound if it contains at least one potential TFBS.
TRAP score. An alternative approach proposed by Roider et al. (37) is based on a biophysically inspired model that estimates the number of TF molecules bound to a given sequence. In this model, the whole sequence contributes to a global affinity measure, which makes it possible to detect low-affinity binding (37). We used the R package tRap (35) to compute the affinity score of the 638 PWMs for all sequences. As proposed in (37), we used default values for the two parameters (R0, which depends on motif width, and λ = 0.7).
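For intuition, the occupancy model underlying TRAP can be sketched as follows (this re-implements the published formula, not the tRap package API; `energies` is a hypothetical vector of position-wise mismatch energies E_i derived from the PWM):

```r
# Expected number of TF molecules bound to a sequence under the TRAP model
# (Roider et al.): each position i contributes an occupancy p_i in [0, 1].
trap_affinity <- function(energies, R0, lambda = 0.7) {
  x <- R0 * exp(-energies / lambda)
  sum(x / (1 + x))
}
```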
DNA shape. In addition to PWMs, Mathelier et al. (26) considered four DNA shape features to improve binding-site identification: helix twist, minor groove width, propeller twist, and DNA roll. Second-order values of these DNA shape features are also used to capture dependencies between adjacent positions. Each sequence is thus characterized by the best-hit score of a given PWM plus the first- and second-order DNA shape values at the best-hit position. The approach, based on a gradient boosting classifier, requires a first training step with foreground (bound) and background (unbound) sequences to learn classification rules; the classifier is then applied to the set of test sequences. We used the same 10-fold cross-validation scheme as in our approach. We applied two modifications to speed up the method, which was designed for shorter sequences. First, in the PWM optimization step of the training phase, we reduced the sequences to ±50bp around the position of the highest ChIP-seq peak for positive sequences and ±50bp around a random position for negative sequences. Second, after this first step we also reduced the sequences used to train and test the classifiers to ±50bp around the position at which the (optimized) PWM obtains its best score.
DeepSEA. Zhou and Troyanskaya (48) proposed a deep learning approach for predicting the binding of chromatin proteins and histone marks from DNA sequences with single-nucleotide sensitivity. Their deep convolutional network takes 1000bp genomic sequences as input and predicts the states of several chromatin marks in different tissues. We used the predictions provided by the DeepSEA server (http://deepsea.princeton.edu/): the coordinates of the analyzed promoter and enhancer sequences were submitted to the server, and the predictions associated with each sequence were retrieved. Only the predictions related to the ChIP-seq data used in our analyses were considered (214 ChIP-seq datasets in total).
Computational approach
Given a target TF, the TFcoop method identifies the TFBS combination that is indicative of the TF's presence in a regulatory region. We first considered the promoter region of all mRNAs (defined as the 1000bp centered on the gene start). TFcoop is based on a logistic model that predicts the presence of the target TF in a particular promoter using two kinds of variables: PWM affinity scores and (di)nucleotide frequencies. For each promoter sequence, we computed the affinity score of the 638 JASPAR PWMs (redundant vertebrate collection) and the frequency of every mono- and dinucleotide in the promoter. These variables were then used to train a logistic model that aims to predict the outcome of a particular ChIP-seq experiment in mRNA promoters. Namely, every promoter sequence with a ChIP-seq peak is considered a positive example, while the other sequences are considered negative examples (see below). In the experiments below, we used 409 ChIP-seq datasets from ENCODE and learned different models, each targeting one TF in one cell type. Given a ChIP-seq experiment, the learning process involves selecting the PWMs and (di)nucleotides that help discriminate between positive and negative sequences, and estimating the model parameters that minimize the prediction error. Note that the learning algorithm can select any predictive variable, including the PWM of the target TF. See Material and methods for more details on the data and the logistic model.
As explained above, positive sequences are promoters overlapping a ChIP-seq peak in the considered ChIP-seq experiment. We used two different procedures for selecting the positive and negative examples; each procedure actually defines a different prediction problem. In the first case, we kept all positive sequences and randomly selected the same number of negative sequences among all sequences that do not overlap a ChIP-seq peak. In the second case, we used an additional dataset that measures gene expression in the same cell type as the ChIP-seq data. We then selected all positive sequences with a non-zero expression level and randomly selected the same number of negative sequences among all sequences that do not overlap a ChIP-seq peak but have an expression level similar to that of the selected positive sequences. Hence, in this case (hereafter called the expression-controlled case), we learn a model that predicts the binding of a target TF in a promoter knowing that the corresponding gene is expressed. On the contrary, in the first case we learn a model that predicts binding without any knowledge about gene expression.
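A sketch of the expression-controlled sampling of negatives (data frame and column names are hypothetical, and the greedy nearest-expression matching is our assumption about how "similar expression level" could be enforced):

```r
# promoters: data frame with columns `chip_peak` (logical) and `expr` (CAGE expression)
pos  <- which(promoters$chip_peak & promoters$expr > 0)
pool <- which(!promoters$chip_peak)

neg   <- integer(length(pos))
avail <- rep(TRUE, length(pool))
for (i in seq_along(pos)) {
  d <- abs(promoters$expr[pool] - promoters$expr[pos[i]])
  d[!avail] <- Inf
  j <- which.min(d)          # unbound promoter with the closest expression
  neg[i]   <- pool[j]
  avail[j] <- FALSE          # sample without replacement
}
# training set: promoters[pos, ] labeled 1, promoters[neg, ] labeled 0
```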
Classification accuracy and model specificity
We ran TFcoop on the 409 ChIP-seq datasets and for the two classification cases. The accuracy of each model was assessed by cross-validation, measuring the area under the Receiver Operating Characteristic (ROC) curve. For comparison, we also measured the accuracy of the classical approach that discriminates between positive and negative sequences using only the affinity score of the PWM associated with the target TF. In addition, we estimated the accuracy of the TRAP method, which uses a biophysically inspired model to compute PWM affinity (37), and that of the approach proposed in (26), which integrates DNA shape information with PWMs. As shown in Figure 1 and Supp. Figures 1 and 2, TFcoop outperforms these PWM-based approaches on many TFs. [Figure 1: Accuracy on mRNA promoters. (a) Example ROC curves for a ChIP-seq experiment targeting TF USF1 in cell type A549. (b) Violin plots of the area under the ROC curves obtained on the 409 ChIP-seq experiments: Best hit (red), TRAP (blue), DNAshape (green), TFcoop without expression control (purple), and TFcoop with expression control (orange); ROC curves for Best hit, TRAP and DNAshape were computed in the non-expression-controlled case.] Next, we ran TFcoop using the TRAP scoring approach instead of the standard PWM scoring method but did not observe better results (data not shown), despite the fact that TRAP slightly outperforms the standard method when used standalone (Figure 1b and Supp. Figure 1a). We also ran TFcoop with tri- and quadri-nucleotide frequencies in addition to dinucleotide frequencies; although a consistent AUC improvement was observed, the increase was very slight most of the time (Supp. Figure 3). Lastly, we compared TFcoop accuracy to that of the deep learning approach DeepSEA (48) and observed very close results (see Supp. Figure 4).
We then sought to take advantage of the relative redundancy of target TFs in the set of 409 ChIP-seq experiments to investigate the specificity of the learned models. Namely, we compared pairs of models learned from ChIP-seq experiments targeting (i) the same TF in the same cell type, (ii) the same TF in different cell types, (iii) different TFs in the same cell type, and (iv) different TFs in different cell types. In these analyses, we used the model learned on one ChIP-seq experiment A to predict the outcome of another ChIP-seq experiment B, and compared the results to those obtained with the model directly learned on B. More precisely, we measured the difference in the area under the ROC curve (AUC) between the model learned on A and applied to B and the model learned and applied on B. As shown in Figure 2, models learned on the same TF (whether or not in the same cell type) have overall smaller AUC differences than models learned on different TFs. [Figure 2: Model specificity on mRNA promoters. Distribution of AUC differences obtained when using a model learned on a first ChIP-seq experiment to predict the outcome of a second ChIP-seq experiment. Pairs of experiments: same TF and same cell type (red), same TF but different cell type (yellow), different TFs but same cell type (light blue), different TFs and different cell types (blue). For each pair of ChIP-seq experiments A-B, we measured the difference between the AUC achieved on A using the model learned on A and the AUC achieved on A using the model learned on B. AUC differences were measured in the non-expression-controlled case (a) and in the expression-controlled case (b).]
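The AUC differences can be computed along these lines with the pROC package (model and data names are hypothetical, reusing cv.glmnet fits as in the earlier sketch; for simplicity the sketch scores experiment A's sequences with both models, whereas AUCs should ideally be estimated on held-out folds):

```r
library(pROC)

scoreA_own   <- predict(fitA, newx = XA, s = "lambda.min", type = "response")
scoreA_cross <- predict(fitB, newx = XA, s = "lambda.min", type = "response")

auc_own   <- auc(yA, as.numeric(scoreA_own))    # model learned and applied on A
auc_cross <- auc(yA, as.numeric(scoreA_cross))  # model learned on B, applied on A
delta_auc <- as.numeric(auc_own) - as.numeric(auc_cross)  # small if interchangeable
```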
We then analyzed cell and TF specificity more precisely. Cell specificity refers to the ability of a model learned on one TF in one cell type to predict the outcome of the same TF in another cell type. Conversely, TF specificity refers to the ability of a model learned on one TF in one cell type to predict the outcome of another TF in the same cell type. Cell and TF specificities were evaluated by the shift between the associated distributions of AUC differences in Figure 2: cell specificity was assessed by the shift between the red and yellow distributions, while TF specificity was assessed by the shift between the red and light blue distributions. We used a standard t-test to measure these shifts. Low p-values indicate a large distribution shift (hence high cell/TF specificity), while high p-values indicate a small shift (hence low specificity). Our results indicate very low cell specificity (p-values 0.91 and 0.95 in the non-controlled and expression-controlled cases, respectively) and high TF specificity (1·10⁻⁶¹ and 3·10⁻⁸³). The fact that TF specificity is slightly higher in the expression-controlled case suggests that, in the non-controlled case, part of the TF combinations that help discriminate between bound and unbound sequences is common to several TFs; indeed, the majority of ChIP-seq peaks lie in open chromatin regions, so these shared combinations likely discriminate open from closed chromatin rather than the binding of a specific TF (see also the enhancer analysis below). Finally, the low cell specificity means that the general rules governing TFBS combinations in promoters do not dramatically change from one tissue to another. This is important in practice because it allows a model learned on a specific ChIP-seq experiment to be used to predict TFBSs of the same TF in another cell type.
Analysis of TFBS combinations in promoters
We next analyzed the different variables (PWM scores and (di)nucleotide frequencies) that were selected in the 409 learned models. Overall, 95% of the variables correspond to PWM scores. Although only 5% of the selected variables are (di)nucleotide frequencies, almost all models include at least one of these features (see Supp. Figure 7).
We then looked at the presence of the PWM associated with the target TF in its model. As mentioned earlier, the learning algorithm does not use any prior knowledge and selects the variables that best help predict the ChIP-seq experiment, without necessarily selecting the PWM of the target TF. In fact, our analysis shows that, for 75% of the models, at least one version of the target PWM was selected. However, it is important to note that similar PWMs tend to have correlated scores; hence, another PWM may be selected instead of the target. To overcome this bias, we also considered all PWMs similar to the target PWM. We used the Pearson correlation between PWM scores over all promoters to measure similarity, and set a threshold of 0.75 to define the list of similar PWMs. With this threshold, 90% of the models include the target or a similar PWM.
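A sketch of this similarity computation (names hypothetical; `S` is the promoters x PWMs matrix of best-hit scores):

```r
# Pearson correlation between the score vectors of all PWM pairs over all promoters
pwm_cor <- cor(S)   # 638 x 638 correlation matrix

# PWMs considered "similar" to the target PWM (threshold 0.75)
similar <- colnames(S)[pwm_cor["TARGET_PWM", ] >= 0.75]
```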
Next, following the analyses of Levo et al. (23) and Dror et al. (9), we used our models to investigate the link between the nucleotide composition of the target PWM and that of the TFBS flanking region. First, we did not observe a significant link between target PWM composition and the (di)nucleotide variables selected in the models (Kolmogorov-Smirnov test p-value = 0.448; see Supp. Figure 14). However, the (di)nucleotide composition of the target PWM exhibited a strong resemblance to that of the other selected PWMs (see Figure 3). [Figure 3: Pearson correlation between the nucleotide composition of the target PWM and the mean composition of the selected PWMs (with positive and negative coefficients in red and blue, respectively) in the 409 models. Grey: correlation achieved by randomly selecting the same number of PWMs for each model.] Specifically, except for the dinucleotide TpT, the nucleotide and dinucleotide frequencies of the target PWM were strongly correlated with those of the PWMs selected with a positive coefficient, but not with those selected with a negative coefficient. This is in accordance with the findings of Dror et al. (9), who showed that TFBS flanking regions often have a nucleotide composition similar to that of the TFBS itself. We next evaluated the possibility of clustering the 409 learned models using the selected variables. As shown in Supp. Figure 5, the models can be partitioned into a few different classes. In Supp. Figure 5, models were clustered into 5 classes with a k-means algorithm; Supp. Figure 7 reports the most used variables in these different classes. We can first observe that, in agreement with our analysis of model specificity, models associated with the same TF tend to cluster together. For example, the 4th class of our clustering is exclusively composed of CTCF models. This clustering seems to be essentially driven by the nucleotide composition of the PWMs belonging to the models (see Supp. Figure 15a). Note that we did not observe any enrichment of the classical TF structural families (bHLH, zinc finger, etc.) in the different classes (data not shown).
Pioneer TFs are thought to play an important role in transcription by binding to condensed chromatin and enhancing the recruitment of other TFs (39). As shown in Figure 4 and Supp. Figure 7, pioneer factors are clearly the most represented TFs among the selected variables of all models (regardless of model class), whereas they represent less than 14% of all TFs. [Figure 4: Pioneer TF distribution among selected PWMs in the different models. We kept one model for each target PWM to avoid bias due to over-representation of the same PWM in certain classes. Grey represents the distribution of all PWMs associated with a family in Sherwood et al. (39) (159 of 520 non-redundant PWMs).] These findings are in agreement with their activity: pioneer TFs occupy previously closed chromatin and, once bound, allow other TFs to bind nearby (39). Hence the binding of a given TF requires the prior binding of at least one pioneer TF. We also observed that TFs whose binding is weakened by methylation (46) are enriched in all models (Supp. Figure 16a). This result may explain how CpG methylation can negatively regulate the binding of a given TF in vivo while methylation of its specific binding site has a neutral or positive effect in vitro (46): regardless of the methylation status of its own binding site, the binding of a TF can be influenced in vivo by the sensitivity of its partners to CpG methylation.
TFBS combinations in lncRNA and pri-miRNA promoters
We then ran the same analyses on the promoters of lncRNAs and pri-miRNAs, using the same set of ChIP-seq experiments. Results are globally consistent with those observed on mRNA promoters (see Figure ?? for the expression-controlled case). Overall, models show good accuracy and specificity on lncRNAs. Models are less accurate and less specific for pri-miRNAs, but this likely results from the very low number of positive examples available for these genes in each ChIP-seq experiment (Supp. Figure 13), which impedes both the learning of the models and the estimation of their accuracy.
Next, we sought to compare the models learned on mRNA promoters to those learned on lncRNA and pri-miRNA promoters. For this, we interchanged the models learned on the same ChIP-seq experiment, i.e. we used the model learned on mRNA promoters to predict the outcome on lncRNA and pri-miRNA promoters. One striking fact illustrated by Figure ?? is that models learned on mRNA promoters and those learned on lncRNA promoters are almost perfectly interchangeable. This means that the TFBS rules governing the binding of a specific TF in a promoter are similar for both types of genes. We obtained consistent results when we used the models learned on mRNAs to predict the ChIP-seq outcomes on pri-miRNA promoters (Figure ??). Accuracy is even better than that obtained by models directly learned on pri-miRNA promoters, illustrating that the poor performance achieved on pri-miRNA promoters likely results from the small number of learning examples available for these genes.
TFBS combinations in enhancers
We next applied the same approach to the 38,554 enhancers defined by the FANTOM consortium (2), using the same ChIP-seq experiments as for promoters. All enhancer sequences overlapping a ChIP-seq peak in the considered experiment were taken as positive examples. As for promoters, we used two strategies to select the negative examples: in the first case, we did not apply any control on the expression of the negative enhancers, while in the second case we used CAGE expression data to ensure that negative enhancers have globally the same expression levels as positive enhancers.
As observed for promoters, TFcoop outperforms classical PWM-based approaches on many TFs (see Figure ??) and achieves results close to those of DeepSEA (48) (Supp. Figure 4). However, the analysis of model specificity reveals somewhat different results from those observed for promoters. Globally, models have good TF specificity: models learned on the same TF have more similar prediction accuracy than models learned on different TFs. However, in contrast to promoters, cell specificity is high in the non-controlled case (p-value 2·10⁻⁴⁵, see peak shift in Figure ??), although much lower in the expression-controlled case (p-value 1.6·10⁻¹²). Additionally, TF specificity seems slightly higher in the expression-controlled case than in the non-controlled case (p-values 1.7·10⁻¹⁰² vs. 1·10⁻¹¹⁴). This is in accordance with our hypothesis, formulated for promoters, that part of the TF combination learned by TFcoop in the non-controlled case actually differentiates between closed and open chromatin marks. Moreover, this also seems to indicate that these TF combinations are cell-type specific, while the remaining combinations are more general (as illustrated by the 1.6·10⁻¹² p-value measured in the expression-controlled case). The fact that cell-type specificity is more apparent for enhancers than for promoters in the non-expression-controlled case (2·10⁻⁴⁵ for enhancers vs. 0.91 for promoters) is in accordance with the lower ubiquity of enhancers (2) and the fact that, contrary to promoters, most enhancers are expressed in a cell-specific manner (as illustrated in Supp. Figure 9). We next analyzed the different TFBS combinations of the enhancer models. As for promoters, we observed that the selected PWMs tend to have a (di)nucleotide composition similar to that of the target PWM (Figure ??). Moreover, the models can also be partitioned into a few different classes according to the selected variables (Supp. Figures 11 and 12). These classes mostly correspond to the nucleotide composition of the target and selected PWMs (Supp. Figure 18). Pioneer TFs are also over-represented among the selected PWMs, but surprisingly less so than for promoters (Figure ?? and Supp. Figure 12).
Next, we sought to compare the models learned on enhancers to those learned on promoters, using the models learned in the expression-controlled case. First, these models have globally similar prediction accuracy (see Figure ??). However, a pairwise comparison of the enhancer and promoter models learned on each ChIP-seq experiment shows that prediction accuracy is only moderately correlated (see Supp. Figure 10; Pearson correlation 0.33). Moreover, if we interchange the two models learned on the same ChIP-seq experiment, we observe that the model learned on promoters is generally not as good on enhancers as it is on promoters, and vice-versa (Figure ??). [Figure caption fragment: Bottom: Promoter models are interchangeable. For each ChIP-seq experiment, we computed the AUC of the model learned and applied on mRNAs (pink), learned and applied on lncRNAs (yellow-green), learned and applied on pri-miRNAs (blue), learned on mRNAs and applied to lncRNAs (green), and learned on mRNAs and applied to pri-miRNAs (purple).] Hence, while the rules learned on enhancers (promoters) in a given cell type are valid for enhancers (promoters) of other cell types, they do not apply to promoters (enhancers) of the same cell type. Note that the AUCs of models learned on promoters and applied to enhancers are greater than those of models learned on enhancers and applied to promoters (Figure ??). This result might be explained by the existence of promoters able to exert enhancer functions (6,8). Note that, conversely, the FANTOM definition of enhancers precludes potential promoter functions (2).
Using TFcoop scores for describing regulatory sequences
We next explored whether TFcoop scores could be used to provide meaningful descriptions of regulatory sequences. This was assessed in two ways. First, we used the TFcoop models to cluster mRNA promoters and searched for over-represented gene ontology (GO) terms in the inferred clusters. We randomly selected one model for each TF and used the 106 selected models to score the 20,846 mRNA promoter sequences; each promoter sequence was thus described by a vector of length 106. We next ran a k-means algorithm to partition the promoters into 5 clusters and searched for over-represented GO terms in each cluster. For comparison, we ran the same procedure using two other descriptions of the promoter sequences: the classical PWM scores of the same 106 selected TFs (promoters again described by vectors of length 106), and the (di)nucleotide frequencies of the promoters (vectors of length 12). Globally, the same GO terms appear over-represented in the different gene clusters for the three clusterings: defense response, immune system process, cell cycle, metabolic process, and developmental process. We noticed that the p-values obtained with the TFcoop scores were invariably better than those obtained with the two other descriptions. To avoid any clustering bias, we repeated the k-means clustering several times with various numbers of clusters: for each approach we ran 3 clusterings for each number of clusters between 3 and 10 (resulting in 24 clusterings per approach) and computed over-representation p-values for the 5 GO terms in each cluster. As shown in Figure ??, the TFcoop scores substantially and systematically outperform the other scoring functions, indicating that the classification obtained with this score is more accurate for functionally annotating promoters. Next, we used the TFcoop models to discriminate between mRNA promoters and enhancers. We randomly split the sets of promoters and enhancers into training and test sets, and learned a K-nearest neighbor (KNN) classifier discriminating between promoter and enhancer sequences on the basis of the scores of the TFcoop models learned on promoters. As above, we also used the classical PWM scores of the same 106 selected TFs and the (di)nucleotide frequencies of the sequences. We repeated the procedure with a number of neighbors (K) varying between 1 and 20 and computed the number of errors obtained by each approach on the test set (Figure ??). Here again, the TFcoop description outperforms the other descriptions, with an error rate around 2% for TFcoop vs. 15% and 25% for the other approaches. This result confirms the existence of DNA features distinguishing enhancers from mRNA promoters (1,2) and identifies TF combinations as potent classifiers.
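Both analyses can be sketched in a few lines of R (matrix names are hypothetical; `P` and `E` hold the 106 TFcoop model scores for promoters and enhancers respectively, and the GO enrichment step itself is omitted):

```r
library(class)  # knn()

# (1) Cluster promoters on their TFcoop score profiles, for GO-term analysis
cl <- kmeans(P, centers = 5, nstart = 20)$cluster

# (2) KNN discrimination between promoters and enhancers
X  <- rbind(P, E)
y  <- factor(rep(c("promoter", "enhancer"), c(nrow(P), nrow(E))))
tr <- sample(nrow(X), round(0.8 * nrow(X)))          # random train/test split
pred <- knn(train = X[tr, ], test = X[-tr, ], cl = y[tr], k = 10)
mean(pred != y[-tr])                                 # test error rate
```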
Identifying TFs responsible for gene expression change
As a final test, we sought to use TFcoop to identify the TFs responsible for gene expression changes in various gene expression experiments. For this, we used the compendium collected by Meng et al. (28). The interest of this compendium is that each dataset corresponds to a particular TF whose activity has been modified (repressed or enhanced); hence the TF responsible for the deregulation (hereafter called the "responsible TF") is known. In each experiment, we selected the top 500 genes with the highest positive log fold change and compared the score distributions of the responsible TF in the top 500 promoters and in all other promoters with a Kolmogorov-Smirnov test. This was done using the classical PWM scoring function and the TFcoop scores. Of the 21 experiments, 5 responsible TFs achieved enrichment p-values below 1% with the classical PWM scoring function, while this number rises to 13 with the TFcoop score (see Supp. Table 1).
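The enrichment test amounts to a two-sample Kolmogorov-Smirnov comparison (vector names hypothetical):

```r
# score: TFcoop (or classical PWM) scores of the responsible TF on all promoters
# top:   indices of the promoters of the 500 most up-regulated genes
ks.test(score[top], score[-top])  # shift of the score distribution in the top 500
```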
One striking fact, however, is that numerous TFs (not solely the responsible TF) appear to be enriched in the top 500 promoters (Supp. Table 1). This provides an interesting way to assess our models: if the enrichment of another TF reflects its cooperation with the responsible TF (and if our models are meaningful), then the responsible TF should be present among the selected variables of that TF's model. For each experiment, we therefore enumerated all TFs enriched in the top 500 promoters and checked whether the responsible TF was present in their models. We used a Fisher exact test to assess whether this happens more often than expected in the different experiments (Supp. Table 1). Of the 18 testable experiments, 13 yield a p-value below 5%, indicating that the responsible TF is often involved in the TF combinations associated with the TFs enriched in the top 500 promoters.
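The final check is a simple 2x2 Fisher exact test per experiment (counts are hypothetical placeholders):

```r
# Rows: TF enriched in the top 500 promoters (yes/no)
# Cols: responsible TF among the TF's selected model variables (yes/no)
tab <- matrix(c(n_enriched_with, n_enriched_without,
                n_other_with,    n_other_without),
              nrow = 2, byrow = TRUE)
fisher.test(tab)
```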
DISCUSSION
In this paper, we proposed a method that takes TF combinations into account to predict whether or not a target TF binds a given regulatory sequence. Our approach is based on a logistic model learned from ChIP-seq experiments on the target TF. Once learned, the model can be used to predict TF binding on any other sequence of the same type (promoter or enhancer). A cross-validation study showed that the approach is effective and outperforms classical approaches on many TFs. It is important to note that TFcoop combinations do not necessarily reflect cooperation only, but also competition. For instance, a TF A competing with a TF B may be useful for predicting the binding of B and would thus appear in the TF B model even though A and B do not cooperate.
We distinguished two prediction problems associated with two situations, depending on whether the aim is to predict binding in any promoter/enhancer or solely in expressed promoters/enhancers. For expressed promoters/enhancers, our experiments showed that the learned models have high TF specificity and quite low cell-type specificity. On the other hand, for the problem of predicting binding in both expressed and non-expressed promoters/enhancers, the learned models are less TF specific and more cell-type specific (especially for enhancers). These results are in accordance with a two-level model of gene regulation: (i) a cell-type-specific level that deposits specific chromatin marks on the genome, and (ii) a non-, or poorly, cell-type-specific level that regulates TF binding in all DNA regions associated with appropriate marks. [Figure caption: Using TFcoop scores for describing regulatory sequences. (a) GO term enrichment obtained with different promoter descriptions. Promoters were described using three different representations, TFcoop scores (red), (di)nucleotide frequencies (green), and classical PWM scores (blue), and then partitioned several times with different k-means runs and different class numbers (see main text). For each clustering we identified the best p-value (Fisher exact test) associated with 5 GO terms ("defense response", "immune system process", "cell cycle", "metabolic process", "developmental process") in any cluster. (b) Classification errors achieved with KNN classifiers discriminating between promoter and enhancer sequences. Boxplots describe the errors obtained using TFcoop scores (red), (di)nucleotide frequencies (green), and classical PWM scores (blue), for different numbers of neighbors (K).] An important property highlighted by our models is that the rules governing TF combinations are very similar in the promoters of the three gene types analyzed (mRNA, pri-miRNA and lncRNA), but different between promoters and enhancers. This is further confirmed by our experiments on discriminating between promoter and enhancer sequences, which showed that the scores produced by TFcoop models allow accurate classification of the two types of sequences. Our results thus argue for a prominent role of transcription factor binding as the fundamental determinant of regulatory activity able to distinguish enhancers and promoters (1). Furthermore, as promoters and enhancers produce different RNA molecules (1,2), our results also suggest that the production of enhancer RNAs (eRNAs) on the one hand, and that of mRNAs, lncRNAs, and miRNAs on the other hand, requires specific and distinct subsets of TFs.
Our approach could be improved in several ways. A quite straightforward improvement would be to use the DNAshape score developed by Mathelier et al. (26) instead of the classical PWM score. This could improve TFcoop accuracy for several TFs, especially for TFs such as CTCF for which TFcoop fails to outperform classical PWM scoring. More profoundly, one drawback of TFcoop is that the logistic model enables us to learn only a single TF combination for each target TF. However, we can imagine that certain TFs may be associated with two or more different TF combinations depending on the promoter/enhancer they bind. A solution for this would be to learn a discrimination function based on several logistic models instead of a single one.
SUPPLEMENTARY DATA
Supplementary data are available at: http://www.lirmm.fr/∼brehelin/TFcoop. Models, data and R code (R Markdown file) for reproducing some of the experiments described in the paper are available at the same address.
"year": 2017,
"sha1": "72b89c0e43516f8d19fee65bb8e67cd6d59ebd12",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12864-018-5408-0",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "91b34b2579e5a3ca8adc4c8faaf27e71275dfd51",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
Different Proteins Regulated Apoptosis, Proliferation and Metastasis of Lung Adenocarcinoma After Radiotherapy at Different Time
Introduction:
The biological changes that occur in lung cancer cells after irradiation are important for reducing recurrence and metastasis of lung cancer. To optimize radiotherapy of lung adenocarcinoma, our study systematically explored the mechanisms underlying the biological behavior of residual A549 and XWLC-05 cells after irradiation. Methods: Colony formation assay, cell proliferation assay, cell migration assay, flow cytometry, BALB/c-nu mouse xenograft models and Western blotting of pan-AKT, p-Akt380, p-Akt473, PCNA, DNA-PKcs, KU70, KU80, CD133, CD144, MMP2 and P53 were used to assess biological changes after irradiation with 0, 4 and 8 Gy at 0-336 hr after irradiation in vitro, and with 20 Gy in vivo in the transplantation group, irradiated transplantation group, and residual tumor 0-, 7-, 14-, 21- and 28-day groups. Results: The proliferative capacity and radiosensitivity of residual XWLC-05 cells were greater than those of A549 cells after radiation in vivo and in vitro. MMP-2 showed statistically significant differences in vitro and in vivo and increased with the migratory ability of cells in vitro. PCNA and P53 showed statistically significant differences between XWLC-05 and A549 cells, and their changes paralleled the proliferation of residual cells within the first 336 hr after irradiation in vitro. Pan-AKT increased after irradiation, and the residual tumor 21-day group (1.5722) differed significantly from the transplantation group (0.9763, p=0.018) and the irradiated transplantation group (0.8455, p=0.006) in vivo. Pan-AKT rose to its highest level 21 days after the residual tumor reached 0.5 mm². MMP2 differed significantly between the transplantation group (0.4619) and the residual tumor 14-day group (0.8729, p=0.043). P53 differed significantly between the residual tumor 7-day group (0.6184) and the residual tumor 28-day group (1.0394, p=0.007). DNA-PKcs differed significantly between the residual tumor 28-day group (1.1769) and the transplantation group (0.2483, p=0.010), the irradiated transplantation group (0.1983, p=0.002) and the residual tumor 21-day group (0.2017, p=0.003), and between the residual tumor 0-day group (0.5992) and the irradiated transplantation group (0.1983, p=0.027) and the residual tumor 21-day group (0.2017, p=0.002). KU80 and KU70 showed no statistically significant differences at any time point. Conclusion: Different proteins regulated apoptosis, proliferation and metastasis of lung adenocarcinoma after radiotherapy at different times. MMP-2 might regulate the metastatic ability of XWLC-05 and A549 cells in vitro and in vivo. PCNA and P53 may play important roles in the proliferation of XWLC-05 and A549 cells within the first 336 hr after irradiation in vitro; after that, P53 may regulate cell proliferation through the PI3K/AKT pathway. DNA-PKcs may play a more important role in DNA damage repair than KU70 and KU80 after 336 hr in vitro, because it rose more rapidly than KU70 and KU80 after irradiation. Different cells have different time rhythms of apoptosis, proliferation and metastasis after radiotherapy. The time rhythm of cells after irradiation should be considered, and more attention should be paid to resisting cancer cell proliferation and metastasis.
Introduction
Lung cancer is among the malignant tumors with the highest incidence, and its mortality is the highest of all tumors. 1,2 Surgery, radiation therapy and chemotherapy are the main treatment strategies for advanced NSCLC.
Radiotherapy kills cancer cells by inducing DNA damage directly or indirectly. 3 It is an important treatment for patients with early-stage or advanced lung cancer. 2 With technical developments, radiotherapy has become increasingly precise. However, recurrence and metastasis of lung cancer still occur after irradiation.
Zheng et al 4 reported that X-ray irradiation enhanced the metastatic ability of tongue squamous cell carcinoma (TSCC) cells in a dose-dependent manner. Residual tumors after chemotherapy grew more slowly than before, but their metastatic ability was significantly enhanced in the hepatocellular carcinoma cell lines MHCC97L and HepG2 and in tumors transplanted into nude rats. 5 The biological behavior of lung cancer cells may change after radiotherapy, but the specific changes and mechanisms remain to be studied.
Our study used the XWLC-05 and A549 human lung adenocarcinoma cell lines to compare the biological behavior of residual XWLC-05 and A549 cells after irradiation. By exposing XWLC-05 and A549 cells to different doses of X-rays, we recorded and analyzed the changes in radiation sensitivity, cell proliferation, migration, cell cycle regulation, apoptosis, adenocarcinoma stem cells and protein expression at different times. This study aims to guide the comprehensive treatment of lung cancer, especially individualized radiotherapy.
The PI3K/AKT pathway and PCNA play important roles in cancer metabolism and proliferation. 6 DNA-PKcs, KU70 and KU80 play important roles in the repair of DNA damage. CD133, CD144, MMP2 and P53 can affect cancer stem cells and metastasis. Most reports have not explored the dynamic protein changes at different times after irradiation, and the mechanisms of lung cancer recurrence and metastasis remain unknown. If the time rhythm of cancer cell proliferation and metastasis after irradiation were established, the best time could be chosen to reduce cancer cell proliferation or transformation after irradiation. We therefore chose pan-AKT, p-Akt380, p-Akt473, PCNA, DNA-PKcs, KU70, KU80, CD133, CD144, MMP2 and P53 to assess the mechanisms of recurrence and metastasis related to lung cancer cell growth, metastasis and stem cells after irradiation. We found that different proteins play important roles in apoptosis, proliferation and metastasis of lung adenocarcinoma at different times after radiotherapy. The delivery time of radiotherapy should be long enough to resist cancer cell proliferation, and pathway inhibitors combined with radiotherapy should be used at a reasonable time, which may improve the effect of such combined treatment.
A549 and XWLC-05 Cell Culture and Irradiation
The XWLC-05 cell line is a human lung adenocarcinoma cell line provided by the No. 1 Affiliated Hospital of Kunming Medical University. 7-10 The A549 cell line (Cell Center, Yunnan, China) and the XWLC-05 cell line were maintained in RPMI-1640 (HyClone, USA) containing 10% fetal bovine serum (FBS; Gibco, USA). Cells were incubated at 37°C in a humidified incubator with 5% CO₂ and treated with radiation doses of 4 or 8 Gy. The X-ray dose rate was 2.2 Gy/min, delivered at the Irradiation (IR) Center of the Third Affiliated Hospital of Kunming Medical University.
A549 and XWLC-05 Xenograft Models and Tissue Cell Suspension
Male BALB/c-nu mice (aged 3-4 weeks, weighing 10-15 g) were obtained from Hunan SJA Laboratory Animal Company Limited (Hunan, China). All mice were T-cell deficient and kept in a specific pathogen-free (SPF) animal laboratory of Kunming Medical University. After 7 days of adaptive feeding, A549 or XWLC-05 cells in the logarithmic growth phase were concentrated and adjusted with PBS to 2×10⁸ cells/mL using a Moxi Z mini automated cell counter (Orflo, USA). We injected 100 μL (2×10⁷ cells) of A549 or XWLC-05 cells under the left axillary skin of each mouse. Tumor diameters were measured daily (tumor volume (cm³) = ab²/2, where a is the maximum diameter and b the minimum diameter 11). Transplantation tumor mice were killed or irradiated when the tumor volume reached 1 cm³ (transplantation group and irradiated transplantation group). Irradiated transplantation tumor mice were killed when the tumor diameter reached 1 cm again. The irradiated transplantation tumors were then separated and cut into 1 mm³ pieces, and 2 pieces were injected under the left axillary skin of another 3-4-week-old male BALB/c-nu mouse using a trocar. In this way, we established the residual tumor mouse models. Residual tumor mice were killed on days 0, 7, 14, 21 and 28 after the tumors reached 0.5 cm³. All tumors were separated and weighed. In a pilot experiment, we tested 4, 6, 10, 15 and 20 Gy of local irradiation to determine the dose the mouse models could tolerate. Because higher doses caused more pronounced changes in vitro, we chose 20 Gy, whose outcomes would be more comparable to those of hypofractionated radiotherapy in humans.
Tumors were separated and cut into pieces, which were then digested with 0.5% collagenase IV (Biosharp, China) at 37°C for 1.5 hr. After filtration through sterilized nylon mesh (200 mesh; Huaihe, China), the cell suspensions were centrifuged (1000 rpm, 5 min) and washed 3 times with phosphate-buffered saline (PBS; HyClone, USA). Red blood cells were removed with a 7 μm pre-separation filter (Miltenyi Biotec GmbH, Bergisch Gladbach, Germany). In this way, tissues were dissociated into cell suspensions for flow cytometry. All operations were performed in a sterile environment, and all instruments and containers were sterilized.
The mouse experiments were approved by the ethics committee of Kunming Medical University. Animal experiments followed the welfare guidelines of the national standard "Laboratory Animal - Requirements of Environment and Housing Facilities" (GB 14925-) and conformed to the "Yunnan Administration Rule of Laboratory Animal". Mice were randomly assigned to the transplantation group, irradiated transplantation group, and residual tumor 0-, 7-, 14-, 21- and 28-day groups, with 5 mice in each group.
Cell Proliferation Assay
Briefly, 20 μL of CCK8 solution (5 mg/mL) was added to each well, and the cells were incubated for another 4 hr at 37°C. The optical density was measured at 450 nm using a microplate reader (Bio-Rad, Hercules, CA, USA). The viability index was calculated as the experimental OD value/the control OD value. Three independent experiments were performed in quadruplicate.
Colony Formation Assay
To analyze colony formation, single cells were plated in 10 cm dishes before irradiation with 0, 0.5, 1, 2, 4, 6 or 8 Gy of 6 MV X-rays; the corresponding cell numbers were 100, 100, 200, 400, 1000, 4000 and 8000, respectively. The cells were cultured undisturbed for 13 days. Colonies containing 50 or more cells were counted after fixation with 99% methanol for 30 min and staining with 0.1% crystal violet for 20 min. SF (surviving fraction) = number of colonies/(cells inoculated × plating efficiency). We used the SF values to calculate D0 (mean lethal dose) with the single-hit multitarget model and the α/β of the cells with the linear-quadratic (LQ) model.
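For reference, the standard forms of these two survival models are as follows (assuming the conventional radiobiology parameterizations, which the text does not spell out):

$$S(D) = 1 - \left(1 - e^{-D/D_0}\right)^{N} \;\approx\; N\,e^{-D/D_0} \quad (D \gg D_0) \qquad \text{(single-hit multitarget)}$$

$$S(D) = e^{-(\alpha D + \beta D^2)} \qquad \text{(linear-quadratic)}$$

where $D$ is the dose, $D_0$ the mean lethal dose, $N$ the extrapolation number, and $\alpha/\beta$ the dose at which the linear and quadratic contributions to cell killing are equal.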
Cell Apoptosis Analysis by Flow Cytometry
An Annexin V-FITC/PI double-staining assay (BD, USA) was used to detect apoptosis of A549 and XWLC-05 cells at 12, 24, 48, 96 and 192 hr after 4 Gy or 8 Gy of 6 MV X-ray irradiation. The cells were washed twice with cold PBS and then resuspended in Annexin V binding buffer at a concentration of 1×10⁶ cells/mL. Then, 100 μL of the suspension (1×10⁵ cells) was transferred to a 5 mL culture tube.
Next, 5 μL of FITC-Annexin V and 5 μL of PI were added, with gentle vortexing and incubation for 15 min at room temperature (25°C) in the dark. Then, 400 μL of binding buffer was added to each tube and the samples were analyzed by flow cytometry within 1 hr; tumor cell suspensions were processed in the same way.
Cell Cycle Analysis by Flow Cytometry
Attached cells were harvested at 12, 24, 48, 96 and 192 hr after 4 Gy or 8 Gy irradiation for cell cycle measurement using DAPI (Sysmex Partec, Germany). The cells were washed twice with cold PBS, 1 mL of DAPI solution was added to each tube, and the samples were analyzed by flow cytometry within 1 hr; cell suspensions from tumor tissues were processed in the same way.
CD44 and CD133 Surface Expression by Flow Cytometry
Attached A549 and XWLC-05 cells were collected at 12, 24, 48, 96 and 192 hr after a single 4 Gy or 8 Gy irradiation for detection of CD44 and CD133 surface expression. For CD133 staining, 1×10⁶ cells were stained with 20 μL of CD133 mouse mAb (CST, USA) for 60 min at room temperature in the dark. Then, 2.5 μL of Alexa Fluor 647-conjugated anti-mouse IgG F(ab')2 (Molecular Probes) was added and incubated for 30 min at room temperature in the dark, after which the samples were washed 3 times with PBS. For single CD133 staining of tumor tissue suspension cells, 20 μL of PE-conjugated human CD133 antibody (R&D, USA) was added to 1×10⁶ cells (100 μL) of tumor suspension for 30 min at RT, and the samples were then washed 3 times with PBS.
For CD44 staining, 1×10⁶ cells were stained with 20 μL of APC-conjugated anti-human CD44 (clone G44-26; BD, USA) and incubated for 30 min at RT in the dark. The samples were washed 3 times with PBS; tumor tissue suspension cells were processed in the same way for single CD44 staining.
For CD133 and CD44 co-staining of tumor tissue suspension cells, 20 μL of PE-conjugated human CD133 antibody (R&D, USA) and 20 μL of APC-conjugated anti-human CD44 (clone G44-26) were added to 1×10⁶ cells (100 μL) of tumor suspension for 30 min at RT, and the samples were then washed 3 times with PBS. All stained samples were analyzed by flow cytometry (Beckman, USA) within 1 hr.
Cell Migration Assay
Cell migration was measured using a transwell assay at 24, 48, 96 and 192 hr after a single 4 Gy or 8 Gy irradiation. Irradiated A549 and XWLC-05 cells were seeded into transwells (Corning Incorporated, NY, USA) at a density of 6×10⁴ cells/well in 200 μL of RPMI-1640 supplemented with 10% fetal bovine serum (FBS; Gibco, USA) and incubated for 24 hr at 37°C. Cells were fixed with 100% methanol for 30 min and stained with 0.1% crystal violet for 20 min. Migrated cells on the lower surface of the filter were counted per filter in random microscopic fields at 40× magnification. The reported values are the means of three independent experiments.
Statistical Analysis
The data are presented as the mean ± SD for normally distributed parameters and as the median for non-normally distributed parameters. Multivariate analysis of variance (for normally distributed parameters) and the Kruskal-Wallis H test (for non-normally distributed parameters) were used to compare independent samples. All analyses were performed with SPSS 22.0, and differences were considered significant if p < 0.05.
Different Effects on Cell Proliferation and Apoptosis of Residual A549 or XWLC-05 Cells During Radiation
We calculated the surviving fraction (SF = number of colonies/(cells inoculated × plating efficiency)) and used it to estimate D0 (mean lethal dose) with the single-hit multitarget model (in its high-dose form S = N·e^(−D/D0), where N is the extrapolation number) and the α/β of the cells with the linear-quadratic (LQ) model (BED = n·d·[1 + d/(α/β)]). With increasing radiation dose, SF decreased gradually, and the survival fraction of A549 cells was higher than that of XWLC-05 cells in vitro (Figure 1A). D0 reflects the radiosensitivity of cells: a higher D0 means lower radiosensitivity. The D0 of A549 cells was 3.224 Gy and that of XWLC-05 cells was 2.447 Gy, so A549 cells were less radiosensitive than XWLC-05 cells. Radiation causes reversible sublethal damage in cancer cells, and a lower α/β indicates a better capacity for repairing sublethal damage. The α/β of A549 cells was 19.92, while that of XWLC-05 cells was 9.18.
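A minimal sketch of how such parameters can be estimated from the measured surviving fractions in R (the data vectors `D` and `SF` and the starting values are hypothetical and must be adapted to the data):

```r
# Single-hit multitarget fit: estimates D0 (mean lethal dose) and N (extrapolation number)
fit_mt <- nls(SF ~ 1 - (1 - exp(-D / D0))^N,
              start = list(D0 = 2, N = 2))

# Linear-quadratic fit on log-survival: estimates alpha and beta, hence alpha/beta
fit_lq <- nls(log(SF) ~ -(a * D + b * D^2),
              start = list(a = 0.3, b = 0.03))
ab_ratio <- coef(fit_lq)[["a"]] / coef(fit_lq)[["b"]]
```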
Radiation suppressed the proliferation of A549 and XWLC-05 cells within 96 hr in a time-dependent manner. There were no significant differences in proliferation between 8 Gy and 4 Gy irradiation (p>0.05; Figure 1B and C).
In vivo, radiation decreased tumor volumes for several days, after which they increased again. XWLC-05 tumors grew faster than A549 tumors both before and after irradiation (Figure 1D). The A549 transplantation group (0.196, p=0.000) and the A549 residual tumor group (0.075, p=0.033) differed significantly from the XWLC-05 residual tumor group (0.547).
In vivo and in vitro, the apoptosis rates of A549 and XWLC-05 tumor cells increased after irradiation (Figure 1E-G). The apoptosis rates of the two irradiated groups increased after radiation and were higher after 8 Gy than after 4 Gy. The highest apoptosis rates in A549 and XWLC-05 cells were seen at 192 hr and 48 hr after radiation, respectively. The 48-hr group (8.56) differed significantly from the 192-hr group (18.4950, p=0.041) after irradiation. In vivo, the apoptosis rates of A549 and XWLC-05 tumor cells increased after irradiation, but there were no statistically significant differences between comparable tumor groups (p>0.05).
Different Effects on Cell Cycle of Residual A549 or XWLC-05 Cells During Radiation
The effect of radiation on the cell cycle was assessed using a DAPI assay. The cell cycle distributions of A549 and XWLC-05 cells are shown in Figure 2A; the highest G2/M-phase peak may have occurred at 96 hr after irradiation. In vivo, the percentages of G1-, S-, and G2/M-phase cells in XWLC-05 and A549 tumors increased after irradiation without statistically significant differences (p = 0.213), and G1-, S-, and G2/M-phase fractions at different time points showed no statistically significant differences after irradiation (p ≥ 0.05).
Differences in Stem Cells of XWLC-05 or A549 Cells During Radiation
CD44 and CD133 are biomarkers of lung cancer stem cells. CD44 and CD133 expression in A549 and XWLC-05 cells increased with time (Figure 3A-F).
The percentages of CD44+ or CD133+ cells in the 8-Gy groups were higher than in the 4-Gy groups, but there were no statistically significant differences between any groups (p > 0.05) (Figure 3A-D). The percentages of CD44+, CD133+, and CD44+CD133+ cells (Figure 3E and F) in XWLC-05 tumors increased after irradiation, whereas the A549 group increased only slightly; however, there were no statistically significant differences between any in vivo A549 and XWLC-05 groups (p > 0.05). CD44+ cells differed significantly between the transplantation group (18.7150) and the irradiated transplantation group (2.6645; p = 0.034), and CD133+ cells differed significantly between the irradiated transplantation group (3.3860) and the residual tumor 0-day group (15.795; p = 0.034), but there were no statistically significant differences for CD44+CD133+ cells. Considering that CD44+CD133+ cells are more representative than CD44+ or CD133+ cells alone, these results suggest that differences in stem cells between in vivo XWLC-05 and A549 cells during radiation may not be significant.
Differences in Migration Ability of Residual A549 or XWLC-05 Cells in vivo
The migration ability of A549 and XWLC-05 cells (Figure 4A and B) changed after radiation in a time-dependent manner. The migratory ability of A549 cells was stronger than that of XWLC-05 cells during 24 to 96 hr after 4 Gy radiation. At 336 hr after 4-Gy or 8-Gy radiation, the migratory ability of XWLC-05 cells was stronger than that of A549 cells. The migratory ability of both XWLC-05 and A549 cells after 8 Gy radiation was significantly weaker than after 4 Gy radiation at 336 hr after radiation. The migration ability of residual A549 or XWLC-05 cells was also increased in vivo.
Discussion
This is the first study to compare the biological behavior and molecular biology of Xuanwei lung adenocarcinoma XWLC-05 cells and conventional lung adenocarcinoma A549 cells after radiation in vitro. In our study, we found that the proliferative capacity and radiosensitivity of residual XWLC-05 cells were greater than those of A549 cells after radiation in vivo and in vitro. MMP-2 might regulate the metastatic ability of XWLC-05 and A549 cells in vitro and in vivo. PCNA and p53 may play important roles in the proliferation of residual XWLC-05 and A549 cells within the first 336 hr after irradiation in vitro; PCNA showed no statistically significant differences at any time point in vivo. AKT plays an important role in cell proliferation in vivo, and DNA-PKcs may play a more important role than Ku70 and Ku80 in DNA damage repair in vivo. Some researchers have found that the proliferation and migration potentials of cancers (cervical cancer C33A and CaSki cells, 12 colorectal cancer SW480, SW620 and HCT116 cells, 13 hepatocellular carcinoma MHCC97L, HepG2, Hep3B and Huh7 cells, 14 glioblastoma U87 cells, 15 glioma U251 cells, 16 and tongue squamous cell carcinoma Tca-8113 cells 4 ) were enhanced after radiation. p53 promotes apoptosis and regulates the cell cycle in response to cellular stress signals, and transcriptional activation by p53 is critical for the induction of apoptosis. 17 Transcriptional activation by p53 upregulates BAD during DNA damage induced by radiation. p53 can translocate from the nucleus into the cytoplasm and interact with BAD; p53 also translocates to the mitochondrial outer membrane and increases its permeability, inducing apoptosis. 18 Targeted therapies for p53-deficient lung cancer genotypes are feasible and significant. 19 In our study, we found that p53 expression in XWLC-05 and A549 cells was enhanced after radiation, consistent with the cell apoptosis observed after radiation. PCNA, owing to its role in proliferation, has been widely used as a tumor marker for cancer cell progression and patient prognosis. [20][21][22] Our study found that PCNA expression in A549 cells decreased below unirradiated control levels after radiation, whereas at 192 hr after radiation, PCNA expression in A549 cells was enhanced. The change in PCNA expression in A549 cells was not completely consistent with cell proliferation, which was suppressed 12 to 336 hr after radiation. Likewise, the change in PCNA expression in XWLC-05 cells was not consistent with the proliferation of XWLC-05 cells, which was suppressed within 96 hr after radiation and increased 360 hr after radiation. It is well known that proteins can be expressed in different parts of the cell, and different expression sites have different effects on cell biological behavior. Thus, the proliferation of lung adenocarcinoma cells after radiation was not consistent with the changes in PCNA expression.
CSCs are known to be resistant to radiation and chemotherapy, and they also drive tumor metastasis and relapse. Surface markers of CSCs are expressed in a variety of human malignant tumors, including tumors of the lung, colon, pancreas, brain, breast, and others. [23][24][25][26][27] In our study, we examined the expression of CD133 and CD44. We found that expression of both CD133 and CD44 on A549 and XWLC-05 cells increased after radiation, but the percentages of CD133+ or CD44+ A549 cells did not change appreciably after radiation in vivo or in vitro. The rates of CD44+ or CD133+ cells in the 8-Gy groups were higher than in the 4-Gy groups. Furthermore, the percentage of CD44+ and CD133+ cells in XWLC-05 tumor cells in the irradiated transplantation group differed significantly (p < 0.05) from that in A549 tumor cells. Mi Hyun Kim et al 28 reported that the expression of CD133 and CD44 in human breast cancer cells increased after irradiation. Glioblastoma multiforme, after 50 Gy irradiation, displayed increased proliferation, infiltration, and migration in mouse models, with enhanced expression of the cancer stem cell marker CD44. 29 Some studies have found that increased expression of CD44 is associated with disease progression and poor prognosis in non-small cell lung cancer. 30 Clinical studies have found that the expression of CD133 in residual cancer cells after chemoradiotherapy is associated with distant recurrence and poor disease-free survival. 31 In our study, CD44 and CD133 expression was not consistent with the migration of lung adenocarcinoma cells after radiation. In fact, the main effect of radiation is to kill ordinary cancer cells while largely sparing tumor stem cells, so the proportion of cancer stem cells increases relatively in the short term after radiation; the proliferation and differentiation of cancer stem cells then lead to an absolute increase in their proportion in the long term after radiation. The CD133 and CD44 expression of lung adenocarcinoma cells was significantly increased after radiation, which correlated with cell migration.
Irradiated XWLC-05 cells showed greater metastatic potential than A549 cells in vitro, and MMP-2 may be linked to the metastasis of residual XWLC-05 and A549 cells in our study. The migratory ability of cancer has been shown to increase after radiation, not only in vitro but also in clinical studies: radiotherapy of advanced carcinomas of the bladder and the uterine cervix led to increased metastatic disease. 32,33 In contrast to other studies, we found that the proliferation and metastatic potential of residual tumors first decreased and then increased. Our result is consistent with Li Tao's study, 34 which found that the migration ability of hepatocellular carcinoma first decreased and then increased. Li Tao also found that the metastatic potential of residual hepatocellular carcinoma was dose dependent; however, we did not observe this phenomenon in our study. As liver cancer cells and lung cancer cells have their own characteristics, their migration responses to radiation and dose may differ. Speake 35 found that MMP expression was increased in rectal cancer cells (HT1080 and HCT cell lines) after radiation. Cheng's 14 study also demonstrated radiation-enhanced invasion capability in HCC cells through upregulation of MMP-9, which the authors hypothesized was correlated with metastasis after radiotherapy in clinical practice. In our study, we found that radiation induced a temporary increase in the expression of MMP-2 in A549 and XWLC-05 cells, whereas by 336 hr after radiation, MMP-2 expression in A549 and XWLC-05 cells had decreased to unirradiated control levels. MMP-2 expression was significantly inhibited within 96 hr after radiation and increased 336 hr after radiation. Li Tao's study 34 found that the expression of MMP-2 in hepatocellular carcinoma was inconsistent with migration potential. Tumor migration may also be affected by other factors, such as E-cadherin, nm23, and VEGF, and the distant metastasis of adenocarcinoma may be more closely related to angiogenesis factors.
There were no significant differences in cell cycle changes between residual XWLC-05 and A549 cells. Shuning Yang 36 reported that the highest rate of A549 cells arrested in G2/M phase was seen 48 hr after radiation, which then gradually declined. In our study, A549 and XWLC-05 cells were arrested in G2/M phase after radiation, but the proportion of G2/M-phase cells decreased over time and eventually fell even below that of the unirradiated control. However, we found that the highest proportion of G2/M-phase A549 cells was seen 12 hr after radiation, whereas that of XWLC-05 cells was seen 48 hr after radiation. These results were not completely consistent with Shuning Yang's study, likely because of differences in experimental conditions and time points. In vivo, there were no statistically significant differences between the two cell types after irradiation; this may be because the interval before cell cycle testing was too long to detect significant radiation-induced cell cycle changes.
XWLC-05 cells had greater proliferative and metastatic abilities than A549 cells after radiation, and XWLC-05 cells were also more radiosensitive than A549 cells. This suggests that hyperfractionated radiotherapy could be considered to reduce radiation damage to normal tissues while increasing the killing of Xuanwei adenocarcinoma cells. Sufficient treatment should be ensured to kill more stem cells of Xuanwei adenocarcinoma. Treatments that inhibit Xuanwei lung cancer metastasis, such as chemotherapy, radiotherapy, surgery, targeted therapy, and biotherapy, should be considered for suitable Xuanwei lung cancer patients at early stages. Our study did not fully characterize the proteins and pathways underlying the biological behavior of XWLC-05 and A549 cells after radiation, but the duration of radiotherapy delivery should be prolonged sufficiently to counter cancer cell proliferation. Pathway inhibitors combined with radiotherapy should be used at an appropriate time, which may improve the effect of the combination. More exploration of the proteins and pathways associated with proliferation and migration after radiation is needed.
Conclusion
These results show systematic biological differences and changes between XWLC-05 and A549 cells after radiation. Different proteins appear to act at different time points. MMP-2 might regulate the metastatic ability of XWLC-05 and A549 cells. PCNA and p53 may play important roles in the proliferation of residual XWLC-05 and A549 cells within the first 336 hr after irradiation in vitro; after that, p53 may regulate cell proliferation through the PI3K/AKT pathway after irradiation in vitro. DNA-PKcs may play a more important role in DNA damage repair than Ku70 and Ku80 after 336 hr in vitro. More attention should be paid to the timing of radiotherapy to counter cancer cell proliferation and metastasis, and radiotherapy should be delivered over a sufficient period.
| 2020-04-09T09:13:35.981Z | 2020-04-01T00:00:00.000 | {
"year": 2020,
"sha1": "a73b7615f77bf42cc1f02d279a8f41ff35e92c4a",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=57211",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ba9613d024251d8089f8af0401d7a2322108c0b6",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251937253 | pes2o/s2orc | v3-fos-license | Association Between Bio-Fermentation Derived Hyaluronic Acid and Healthcare Costs Following Knee Arthroplasty
Background Limiting access to intra-articular knee injections, including hyaluronic acid (HA), has been advocated as a cost-containment measure in the treatment of knee osteoarthritis. The association between presurgical injections and post-surgical complications such as early periprosthetic joint infection and revision remained to be investigated. This study evaluated pre- and post-surgical costs and rates of post-surgical complications in knee arthroplasty (KA) patients with or without prior HA use. Methods Commercial and Medicare Supplemental Claims Data (IBM MarketScan Research Databases) from January 1, 2012 to December 31, 2018 were used to identify unilateral KA patients. Those who completed a course of bio-fermentation derived HA (Bio-HA) as the first-line HA therapy comprised the test group (n = 4091), while the control group did not use HA prior to KA (n = 118,659). Using multivariable regression with propensity score (PS) weighting, overall healthcare costs, readmission rates, and revision rates were assessed at six months following KA. Results Healthcare costs following KA were significantly lower for the Bio-HA group ($10,021 ± $22,796) than the No HA group ($12,724 ± $32,966; PS p < 0.001). Bio-HA patients had lower readmission rates (8.9% vs 14.0%; PS p < 0.001) and inpatient costs per readmitted patient ($43,846 ± $50,648 vs $50,533 ± $66,150; PS p = 0.005). There were no differences in revision rate for any reason (Bio-HA: 0.78% vs No HA: 0.67%; PS p = 0.361) or with PJI (Bio-HA: 0.42% vs No HA: 0.33%; PS p = 0.192). Costs in the six months up to and including the KA were similar for both groups (Bio-HA: $49,759 ± $40,363 vs No HA: $50,532 ± $43,183; PS p = 0.293). Conclusion Bio-HA use prior to knee arthroplasty did not appear to increase overall healthcare costs in the six months before and after surgery. Allowing access to HA injections provides a non-surgical therapeutic option without increasing cost or risk of post-surgical complications.
Introduction
The treatment options for knee osteoarthritis (OA) range from non-invasive medical management, to minimally invasive therapies such as intra-articular injections, to surgical interventions such as knee arthroplasty (KA), 1 depending on the stage and progression of OA, along with the patient's pain levels and function. Primary KA is recognized as providing substantial improvement in pain, function, and overall health-related quality of life in the majority of patients, 2 but the appropriateness of non-arthroplasty treatments continues to be debated. 1,3 The costs of non-arthroplasty treatments leading up to KA, such as those for opioids, physical therapy, hyaluronic acid (HA) injections, and corticosteroid injections, have been examined. [4][5][6][7][8] Some of those studies concluded that limiting the use of some of these interventions may help reduce healthcare costs, [4][5][6] despite evidence that HA contributed less than 2% of knee OA-related costs among KA patients. Moreover, others have found that HA was associated with a large reduction in knee OA-related healthcare costs for patients who progressed to total knee arthroplasty. 8 HA patients have also been found to have a 15-month delay in the peak healthcare spend related to the KA; 7 however, it is unclear whether HA impacts post-surgical costs.
HA has been shown to improve joint function and mobility, 9,10 particularly with higher molecular weight HA. [11][12][13][14][15] HA products with an average molecular weight of at least 3000 kDa were found to provide more favorable efficacy results than products with an average molecular weight of less than 3000 kDa. 11 Unlike low molecular weight HA, high molecular weight HA also exceeded the minimal clinically important improvement threshold or minimal important difference for pain relief. 12,14 There are several mechanisms by which HA injections may provide clinical benefit in knee OA, with chondroprotection being the most frequently reported. 15 HA therapy is also reported to stimulate proteoglycan and glycosaminoglycan synthesis and to provide anti-inflammatory, mechanical, subchondral, and analgesic effects. The safety profile differences between bio-fermentation derived HA (Bio-HA) and avian-derived HA are also unclear, 11,15 as others have found that the potential risk of localized reactions may not be greater for HA of avian origin. 16 In terms of clinical outcomes, patients who used Bio-HA have been observed to have a longer time to KA than patients who used non-Bio-HA therapies. 17 Previous researchers have questioned whether presurgical injections may be associated with complications after total knee arthroplasty (TKA), including early periprosthetic joint infection (PJI) and revision. [18][19][20][21] Claims database studies have observed an association between intra-articular injections and short-term PJI within six months post-KA, [18][19][20] albeit without evaluating the type of injection or adjusting for potentially confounding surgical factors. For example, prior knee arthroscopy has also been implicated in increased risk of PJI after KA. 22 On the other hand, others have found no association between the administration of HA within four months of TKA and the risk of PJI up to 24 months after surgery, after considering differences in patient mix and many other risk factors. 21 The costs of treating PJI are high, as revision KA for PJI patients has the longest length of stay and incurs higher costs than revision for any other diagnosis except periprosthetic fracture. 23 With suggestions that HA injections prior to KA add pre-surgical costs to the healthcare system and increase the post-surgical risk of PJI, we tested the hypothesis that KA patients with prior Bio-HA use incur higher healthcare costs both before and after KA and have higher rates of readmissions and revisions.
Methods
Commercial and Medicare Supplemental Claims Data (IBM MarketScan Research Databases, IBM Corporation, Somers, NY) from January 1, 2012 to December 31, 2018 were used to compile a dataset of knee OA patients who underwent KA with or without the use of HA injections prior to KA. The commercial portion of the dataset is constructed by collecting data from employers and health plans, encompassing employees, spouses, and dependents covered by employer-sponsored private health insurance. The Medicare Supplemental portion consists of data from retirees with Medicare supplemental insurance paid by employers, including Medicare-covered and employer-paid portions, as well as out-of-pocket expenses. The dataset contains service-level claims for inpatient and outpatient services and outpatient prescription drugs, including claims for mail-order prescriptions and specialty pharmacies. The data are fully de-identified, with a unique identifier for each enrollee to allow tracking of beneficiary-level claims over time.
Knee OA patients were identified by International Classification of Diseases (ICD) diagnosis codes (Supplementary Table 1), from which unilateral KA patients were included in the study based on Current Procedural Terminology (CPT) procedure codes 27446 and 27447. The inclusion criteria were as follows: 1) at least 18 years old; 2) at least six months of claim history pre- and post-KA with continuous enrollment; and 3) known laterality for the KA (using the modifier code for left or right side). The test group was then determined based on the use of bio-fermentation derived HA (Euflexxa; Ferring Pharmaceuticals Inc., Parsippany, NJ) prior to the KA ("Bio-HA group"), while the control group comprised those without any prior HA use ("No HA group") (Supplementary Table 1). Since the dataset included claims starting January 1, 2012 and patients needed to have at least six months of claim history both pre-KA and post-KA, the index KA procedures occurred between July 1, 2012 and June 30, 2018. Bio-HA had to be the first-line HA therapy used, with no other HA between enrollment or the start of the data period and the KA. Included Bio-HA patients were also required to have exactly three Bio-HA injections within a six-week period, in the same knee that was subsequently operated on (matching Bio-HA and KA laterality). The exclusion criteria were as follows: 1) unknown laterality for the KA; 2) no pharmacy benefits in the six months pre- and post-KA; 3) non-Bio-HA treatment prior to KA; 4) Bio-HA without concurrent CPT 20610 or 20611 or without a concurrent knee OA diagnosis; and 5) revision KA prior to primary KA. For the Bio-HA patients, additional exclusion criteria were as follows: 1) unknown laterality for any of their Bio-HA injections during the index Bio-HA treatment; 2) an incomplete course of Bio-HA treatment during the index Bio-HA treatment (fewer than three injections per knee within a six-week period); 3) an exceeded course of Bio-HA treatment during the index Bio-HA treatment (more than three injections per knee within a six-week period); and 4) non-matching laterality between the knee with the completed index Bio-HA treatment and the KA. A total of 419,714 KA patients were initially identified, of whom 163,051 did not have the requisite six-month claim history both before and after the KA (Figure 1). Of the remaining 256,643 patients, a final group of 4091 Bio-HA patients and 118,659 No HA patients met the remaining inclusion and exclusion criteria, with stepwise exclusion of 32,576 patients without pharmacy benefits, 56,049 patients with prior non-Bio-HA use, 332 patients with prior non-knee OA Bio-HA use, 37,815 with unknown or incomplete laterality (KA or Bio-HA) or incomplete or exceeded Bio-HA treatment courses, and 7121 with bilateral KA.
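As a minimal illustration of the most algorithmic of these rules, the completed Bio-HA course (exactly three injections in the operated knee within six weeks, all before the index KA), the following Python sketch applies that filter to toy data; the table schema and values are hypothetical, not the MarketScan layout:

import pandas as pd

# Hypothetical injection and surgery tables; patient 1 completes a course,
# patient 2 does not (only two pre-KA injections, spanning more than 6 weeks)
inj = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "side": ["RT", "RT", "RT", "LT", "LT"],
    "inj_date": pd.to_datetime(
        ["2015-01-05", "2015-01-19", "2015-02-02", "2016-01-04", "2016-05-02"]),
})
ka = pd.DataFrame({
    "patient_id": [1, 2],
    "ka_side": ["RT", "LT"],
    "ka_date": pd.to_datetime(["2015-08-03", "2016-09-12"]),
})

# Keep only injections given before the KA and in the operated knee
merged = inj.merge(ka, on="patient_id")
pre_op = merged[(merged["inj_date"] < merged["ka_date"])
                & (merged["side"] == merged["ka_side"])]

# Completed course: exactly three injections spanning no more than six weeks
course = pre_op.groupby("patient_id")["inj_date"].agg(
    n="count", first_inj="min", last_inj="max")
completed = course[(course["n"] == 3)
                   & (course["last_inj"] - course["first_inj"] <= pd.Timedelta(weeks=6))]
print(completed.index.tolist())  # [1]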
Based on a follow-up of six months after KA, overall healthcare costs (adjusted to the January 2019 medical service consumer price index) were evaluated. This included all medical and facility claims from physician offices, urgent care, inpatient hospitals, outpatient hospitals, emergency rooms, ambulatory surgical centers, pharmacies, and outpatient facilities. The six-month period was based on a previously observed association between intra-articular injections and short-term PJI within six months post-KA. [18][19][20] Readmission rates were determined during this follow-up, along with the corresponding primary procedure performed and the costs of inpatient services. Revision for any reason and revision with PJI as a diagnosis at six months were also assessed. The overall healthcare costs in the six months up to and including the KA were also compared between the Bio-HA and No HA groups. Univariate analysis was conducted using the t-test for cost comparisons and the Chi-square test for rate comparisons. Multivariable statistics with and without propensity score (PS) weighting were performed to compare costs, readmission rates, and revision rates. The multivariable analyses for both the PS- and non-PS-weighted models adjusted for age, gender, census region, Charlson comorbidity index, diabetes, obesity, heart disease, renal failure, and year of KA, along with use of knee imaging, physical therapy (with knee OA diagnosis), knee brace, knee arthroscopy, and intra-articular corticosteroids in the six months prior to KA. In the PS-weighted model, the PS was used to reweight the populations by creating a pseudo-population in which treatment assignment is independent of the observed covariates. The propensity score for the use of Bio-HA before KA was based on the probability of receiving Bio-HA, accounting for the above variables. Patients who did not fall within the overlapping range of PS in the Bio-HA (n = 2) and No HA (n = 79) groups were excluded from the multivariable regression analysis with PS weighting.
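The PS-weighting step can be illustrated with a short Python sketch using synthetic data; the covariate set is deliberately reduced and all variable names are illustrative, not the study's actual model:

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "female": rng.integers(0, 2, n),
    "charlson": rng.poisson(1.0, n),
})
# Treatment assignment depends on covariates (confounding by construction)
p_treat = 1 / (1 + np.exp(-(-3 + 0.03 * df["age"] - 0.2 * df["charlson"])))
df["bio_ha"] = rng.random(n) < p_treat
df["cost_post"] = rng.gamma(2.0, 5000.0, n) - 1500 * df["bio_ha"]  # toy outcome

# Propensity score: probability of Bio-HA given covariates
X = sm.add_constant(df[["age", "female", "charlson"]].astype(float))
df["ps"] = sm.Logit(df["bio_ha"].astype(int), X).fit(disp=0).predict(X)

# Restrict to the overlapping PS range of the two groups, as in the study
lo = max(df["ps"][df["bio_ha"]].min(), df["ps"][~df["bio_ha"]].min())
hi = min(df["ps"][df["bio_ha"]].max(), df["ps"][~df["bio_ha"]].max())
df = df[df["ps"].between(lo, hi)]

# Inverse-probability weights build a pseudo-population in which treatment
# is independent of the measured covariates
w = np.where(df["bio_ha"], 1 / df["ps"], 1 / (1 - df["ps"]))

fit = sm.WLS(df["cost_post"].astype(float),
             sm.add_constant(df[["bio_ha", "age", "female", "charlson"]].astype(float)),
             weights=w).fit()
print(f"PS-weighted adjusted cost difference for Bio-HA: {fit.params['bio_ha']:.0f}")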
Results
The Bio-HA and No HA groups had similar patient characteristics at the time of KA (Table 1). Regarding comorbidities, most patients had a Charlson comorbidity score of 0: 54.5% of the Bio-HA cohort had a score of 0, 27.5% a score of 1-2, 9.0% a score of 3-4, and 9.0% a score of at least 5; 52.4% of the No HA cohort had a score of 0, 30.4% a score of 1-2, 8.9% a score of 3-4, and 8.3% a score of at least 5. Overall healthcare costs following KA were significantly lower for the Bio-HA group ($10,021 ± $22,796) than for the No HA group ($12,724 ± $32,966) (Figure 2), which corresponded to 20% lower adjusted costs for the Bio-HA group both without and with PS weighting (p < 0.001 for both) (Table 2). Fewer Bio-HA patients were readmitted with inpatient visits following KA (8.9% vs 14.0%; univariate p < 0.001). The odds for the Bio-HA group were 38% lower without PS weighting (p < 0.001) and 39% lower with PS weighting (p < 0.001). The corresponding mean inpatient costs per readmitted patient were also significantly lower, by about $6700, for the Bio-HA group ($43,846 ± $50,648 vs $50,533 ± $66,150; PS p = 0.005). Contralateral KA was the most frequent primary procedure performed in both groups when readmitted, comprising 30.0% of readmissions in the Bio-HA group (or 2.9% of all readmitted and non-readmitted patients) and 50.6% of readmissions in the No HA group (or 7.6% of all readmitted and non-readmitted patients) (Table 3). Knee arthrotomy was the second most frequent primary procedure performed in both groups when readmitted, representing 6.8% of readmissions in the Bio-HA group (or 0.66% of all readmitted and non-readmitted patients) and 4.0% of readmissions in the No HA group (or 0.60% of all readmitted and non-readmitted patients). There was no significant difference in short-term revision for any reason between groups (Bio-HA: 0.78% vs No HA: 0.67%) (univariate p = 0.409; without PS weighting p = 0.340; with PS weighting p = 0.361). No differences in short-term revision with PJI were observed (Bio-HA: 0.42% vs No HA: 0.33%) (without PS weighting p = 0.328; with PS weighting p = 0.192). Costs in the six months up to and including the KA, including the cost of HA, were also not higher for the Bio-HA group ($49,759 ± $40,363 vs $50,532 ± $43,183; univariate p = 0.233; without PS weighting p = 0.204; with PS weighting p = 0.293).
Discussion
Our study found lower post-surgical healthcare costs for KA patients with prior use of Bio-HA, driven primarily by fewer readmissions and lower inpatient costs. There were also no increased risks for Bio-HA patients in terms of short-term revision for any reason or revision with PJI. Bio-HA patients did not have added costs leading up to and including the KA compared with No HA patients, despite the added cost of the Bio-HA itself.
The present study found that fewer Bio-HA patients were readmitted within six months, correlating with a reduced per-patient inpatient cost of approximately $6700. Conflicting studies have reported on the association of intra-articular knee injection use before KA with PJI risk. [18][19][20][21] A study of Medicare TKA patients identified a higher incidence of PJI at 3 and 6 months after TKA when a knee injection was performed less than 3 months before the TKA, but the authors were unable to differentiate between specific types of injections. 19 Bedard et al examined PJI outcomes in a cohort of 83,684 TKA patients and found that patients who received pre-operative injections within 6 or 7 months before the TKA had greater odds of developing PJI within 6 months post-TKA than patients who did not receive an injection. 18 However, the analysis was unable to identify the anatomic site of injection or the specific type of injection. Another study reported that preoperative CS or HA injections within three months before the TKA increased the risk of PJI, but there was no difference in risk with multiple injections compared with a single injection. 20 Conversely, Kurtz et al found that intra-articular HA injections given within four months prior to TKA were not associated with the risk of PJI up to 24 months after surgery. 21 Moreover, the present study found no increased revision risk at 6 months for the Bio-HA group compared with the No HA group. The risk of revision with PJI was also comparable in both groups, suggesting that there was no increased PJI risk requiring surgical intervention for the Bio-HA group. Contralateral KA was also found to be the most frequent procedure performed in both groups, although the rate appeared to be lower for the Bio-HA group. This may be a key driver for the lower costs post-KA in the Bio-HA group. The study design did not permit an assessment of any potential causal relationship between Bio-HA and delaying contralateral KA. However, HA patients have been reported to be associated with a delay to their first KA. 7,8,17,24,25 Some researchers have suggested that limiting access to knee injections, including HA, prior to KA will reduce healthcare costs. 5,18 However, we did not find higher costs in the six months leading up to and including the KA procedure for the Bio-HA group than for the No HA group, despite the added HA costs in the Bio-HA group. Concoff et al also reported lower median knee OA-related healthcare costs for patients who received HA before their TKA ($860.24) compared with those who did not receive HA before their TKA ($2,659.49). 8 It may be that the Bio-HA patients were using fewer opioids and corticosteroids and therefore not experiencing the associated adverse events after completing their Bio-HA treatment course, but this needs to be investigated further for our cohort. The use of HA has been reported by others to reduce the need for analgesic or rescue medication or corticosteroids at six months, 10,26,27 and this effect may even extend out to 12 months. 28 Wilson and coworkers also found that TKA patients who received two or more HA injections in the year before TKA had significantly lower odds of becoming chronic opioid users, in terms of filling opioid prescriptions for at least 120 pills or at least 10 opioid prescriptions within 90 days to 1 year postoperatively. 29 Niazi et al further observed a 54% reduction in the number of opioid users at 6 months after receiving a HA injection. 30 Almost 80% of HA patients also did not require additional corticosteroid injections during that six-month period.
Limitations
Information derived from administrative claims data could not determine the severity of OA prior to KA. The impact, however, is minimized by the study design because the inclusion of only KA patients introduced a proxy restriction to the most severe patients who needed surgical intervention. This study only evaluated short-term costs within six months following KA; it is unclear if lower costs for the Bio-HA group will continue with longer follow-up. Our study was limited to Bio-HA patients, and the findings may not be generalizable to patients who are treated with HA products other than Bio-HA prior to KA. Conflicting data have been reported on the relative risk of localized reactions for Bio-HA and avian-derived HA, 11,15,16 which could have an impact on the costs. Due to the observational nature of the study design, we cannot attribute causation of the differences to Bio-HA. However, our study demonstrated that prior use of Bio-HA did not correspond to an increase in six-month pre-arthroplasty costs or six-month revision rates. Moreover, the Bio-HA patients were observed to have lower overall healthcare and inpatient costs at six months post-surgery. It is unclear whether post-surgical costs may be associated with pre-surgical costs, as this was not adjusted for in the comparisons of post-surgical costs. Despite this uncertainty, the present study did not find any significant differences in costs in the six months up to and including the KA between the Bio-HA and No HA groups. While the study population included patients over 65 years, these results may not be generalizable to Medicare patients because the study cohort had private insurance with Medicare supplemental insurance. The study did not exclude patients enrolled in capitated health plans or Health Maintenance Organizations (HMOs), nor did it differentiate between those with and without Medicare supplemental insurance, as the perspective was the cost to the insurer rather than patient costs. Moreover, payments vary depending on the program type and whether the beneficiary has multiple coverages and other special insurance situations. However, the dataset did not contain sufficient information to account for this, and it is unclear if the findings would differ between those patients.
Conclusions
With the ongoing debate about healthcare cost containment by limiting HA use, the analysis from the present study did not support the hypothesis that Bio-HA use prior to knee arthroplasty is associated with increased overall healthcare costs in the six months before and after surgery. Instead, these patients were found to have fewer readmissions and lower inpatient costs in the six-month post-operative period compared with patients without prior HA therapy. It was also notable that there were no differences in overall healthcare costs in the six months leading up to and including the surgery, despite additional HA costs for the Bio-HA patients. In addition, no added risks for Bio-HA patients in terms of short-term revision for any reason or revision with PJI were observed. Limiting access to HA injections may divert patients to other non-surgical treatments that may not provide equivalent clinical benefits and/or may introduce additional risks.
Ethics Statement/Ethics Waiver
This study was based on a data set that is available for purchase. The data set was de-identified, did not involve human subject research, and therefore did not require oversight by an institutional review board. Studies involving the use of publicly available databases that do not include personal identifying information have been determined by an IRB (Exponent) to be exempt, with a waiver of approval. Such studies meet the criteria stipulated in the Code of Federal Regulations, Title 45, Part 46, section 46.101(b)(4), which states that "…(b) Unless other required by department or agency heads, research activities in which the only involvement of human subjects will be in one or more of the following categories are exempt from this policy: … (4) Research, involving the collection or study of existing data, documents, records, pathological specimens, or diagnostic specimens, if these sources are publicly available or if the information is recorded by the investigator in such a manner that subjects cannot be identified, directly or through identifiers linked to the subjects."
Funding
Exponent, Inc. received funding from Ferring Pharmaceuticals Inc. for this study. Two of the authors (FN, WNN) are employees of the study sponsor and were involved in the design of the study, interpretation of data, critical revision of the manuscript, and the decision to submit the report for publication. The study sponsor was not involved in the initial collection and analysis of the data.
Disclosure
K Ong is an employee and shareholder of Exponent, a scientific and engineering consulting firm. Exponent has been paid fees for K Ong's consulting services on behalf of Ferring Pharmaceuticals, during the conduct of the study; Exponent has been paid fees for K Ong's consulting services on behalf of Bioventus, Medtronic, Stryker Orthopaedics, Sanofi, Pacira Pharmaceuticals, Paradigm Spine, St. Jude Medical, Relievant Medsystems, International Society for the Advancement of Spine Surgery, SI-Technology, LLC; Exponent has been paid fees for K Ong's litigation consulting services on behalf of Zimmer Biomet, Joerns Healthcare, SpineFrontier, Ethicon, DJO, Ossur, Karl Storz Endoscopy-America, Rex Medical, Smith & Nephew and Covidien, outside the submitted work. S Kurtz reports grants from Ferring Pharmaceuticals, during the conduct of the study; grants from Stryker, Zimmer Biomet, Invibio, Wright Medical Technology, Ceramtec, Celanese, Ferring Pharmaceuticals, Formae, Lima Corporate, SINTX Technologies, Orthoplastics, Osteal Therapeutics, 3Spine, DJO Global, Carbofix, DePuy Synthes; is an employee and shareholder of Exponent, Inc, a scientific and engineering consulting firm. Exponent has been paid fees by companies and suppliers for S Kurtz consulting services on behalf of such companies and suppliers outside the submitted work.
| 2022-08-31T15:12:40.905Z | 2022-08-01T00:00:00.000 | {
"year": 2022,
"sha1": "2da2d6574c1eb3d1d46b610249c1b03d07a9e665",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=83433",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "83d16fe3a33a0bfb512fa685ac3016d3d2629268",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
252897116 | pes2o/s2orc | v3-fos-license | International Center of Excellence for Malaria Research for South Asia and Broader Malaria Research in India
ABSTRACT. The Malaria Evolution in South Asia (MESA) International Center of Excellence for Malaria Research (ICEMR) conducted research studies at multiple sites in India not only to record blood-slide positivity over time but also to study broader aspects of the disease. From the Southwest of India (Goa) to the Northeast (Assam), the MESA-ICEMR invested in research equipment, operational capacity, and trained personnel to observe frequencies of Plasmodium falciparum and Plasmodium vivax infections, clinical presentations, treatment effectiveness, vector transmission, and reinfections. With Government of India partners, Indian and U.S. academics, and trained researchers on the ground, the MESA-ICEMR team contributes information on malaria in selected parts of India.
INTRODUCTION
Malaria has been a recurring global health problem for centuries, with at least 29 countries currently showing malaria infections. 1 India, one of the countries affected, has a history of taking strong measures to counter the spread of malaria. As far back as 1958, India launched its National Malaria Eradication Program, with significant successes in subsequent decades. 2,3 Unfortunately, India experienced a resurgence of malaria beginning in the early 1970s 4,5 that was often attributed to operational difficulties and resistance to the primary mosquito-control insecticide dichlorodiphenyltrichloroethane. The National Institute of Malaria Research (NIMR), created in 1977, and the National Vector-Borne Disease Control Program of India (now called the National Center for Vector Borne Diseases Control) subsequently created in 2003, under the umbrella of the Ministry of Health and Family Welfare, Government of India, focused additional resources to curb malaria in India. Today, with an advancing research base and a fast-growing economy, India has also funded malaria research at premier academic research institutions across the country. Government hospitals and medical colleges are receiving support to improve clinical descriptions of malaria. There is a growing interest in bridging clinical and basic science to advance even further opportunities for improving malaria control and treatment across India. 6,7 With support from the U.S. NIH, and in collaboration with the Indian Council of Medical Research, the Malaria Evolution in South Asia (MESA) International Center of Excellence for Malaria Research (ICEMR) was organized to perform basic science and clinical research in India. The goal was to assess our present understanding of malaria pathogenesis, virulence, and transmission at multiple locations across India. In this framework, the MESA-ICEMR has witnessed new and evolving Government of India initiatives and policies that have been advanced to check transmission and treatments ( Figure 1). Given larger global pressures to control malaria throughout the world, in cooperation with a large number of stakeholders, India made a commitment to eliminate malaria completely by 2030. To this effect, a National Framework for Malaria Elimination in India 2016-2030 was set up with the goal of eliminating malaria within 15 years and, subsequently, to maintain a malaria-free status in the country. 8 The framework proposes a multipronged approach, including streamlining programs that include malaria control initiatives, research strategies, surveillance, active and passive reporting of malaria cases, and monitoring of drug resistance. There remain concerns regarding fragmented communication among research entities, unintegrated results from various laboratories, ineffective reporting to the national program, and insufficient engagement by policymakers. All of this has inspired the need for a common platform to synthesize existing initiatives.
A single platform, called the Malaria Elimination Research Alliance (MERA)-India and led by scientists and clinicians within the Indian Ministry of Health, was established by the Government of India in 2019 to identify research priorities. 9 The U.S.-based MESA-ICEMR is not formally a part of MERA-India, but is uniquely poised to cooperate with the MERA mission because some of our Indian collaborators are part of MERA. MESA research today continues to have some synergistic overlap with the broad categories that MERA has defined, which include vector biology and community behavior. Upcoming MESA research, which has been delayed by the COVID pandemic, is in line with research priorities now also identified by MERA. The MESA-ICEMR team meets regularly with national and international malaria health-care experts from across the world who come as expert advisors or as collaborators. This, in turn, has led to many regular science meetings between U.S. NIH MESA-ICEMR scientists, the MESA-ICEMR partners in India, and representatives of key malaria control agencies in the Government of India. This has also facilitated the exchange of information between basic scientists and malaria control stakeholders at both the state and the national levels.
STAKEHOLDERS AT STUDY SITES
For sampling diverse malaria cases across the country, the MESA-ICEMR team established several malaria study sites in India during the past decade. These include research laboratories at Goa Medical College (GMC), Goa; NIMR-Goa, Goa; the Regional Medical Research Center-Dibrugarh (RMRC-Dibrugarh), Dibrugarh, Assam; and field sites at Shalini Hospital-Krishi Gram Vikas Kendra, Ranchi, Jharkhand; and Acharya Vinoba Bhave Rural Hospital, Wardha. At the field sites, patients with malaria are recruited into the study, blood samples are collected, and data are recorded. The MESA-ICEMR laboratory at GMC-Goa also serves as a central site where new scientists and staff undergo training in patient recruitment and laboratory techniques, and where visiting scientists and collaborators gather for ICEMR-related scientific meetings. Moreover, research collaborations with government organizations such as NIMR-Delhi and public institutions such as the Indian Institute of Technology-Bombay, Mumbai, have also been established.
Goa Medical College-Goa. GMC is the only government tertiary health-care facility in a peri-urban setting in the small state of Goa. With both Plasmodium falciparum and Plasmodium vivax malaria endemic in the state, GMC treats both the local and migrant populations in large enough numbers to allow for adequate sampling of both mild and severe malaria infections caused by either Plasmodium species. In Goa, the MESA-ICEMR program was able to ascertain a demographic profile of both P. falciparum and P. vivax malaria, with diagnostic and clinical indicators, over a 4-year period. 10 As expected, the increase in the number of positive malaria cases coincided with the onset of the rainy season. The clinical data collected, and the severity indicators noted, allow for better diagnosis in the future. Moreover, alignment with global definitions of malaria disease severity enables clinicians in India to understand the epidemiology and pathogenesis of infection more completely. To gain further insight into severe malaria for India, and beyond India, the MESA-ICEMR team studied host-receptor interactions in both P. falciparum- and P. vivax-infected red blood cells, particularly P. falciparum erythrocyte membrane protein 1 on the surface of infected erythrocytes. These proteins varied in their ability to bind to endothelial protein C receptors and to inhibit activated protein C-endothelial protein C receptor interactions. 11 Together with high parasite biomass, such variations in parasite binding to blood vessels can influence disease severity. 12 With the relatively high volume of non-severe P. vivax cases seen annually at the GMC, the experimental approaches used in the MESA-ICEMR research laboratory at the GMC have led to insightful observations and results. MESA scientists initially focused on comparing cryopreservation efficacy between Indian P. vivax isolates and previously used Brazilian isolates. 13 The MESA-ICEMR team developed a modified counting method (using a microscope reticle) to increase the accuracy of measuring both P. vivax parasitemia and reticulocytemia, which are frequently low. 14 Even these simple advancements have improved the accuracy of parasite counting in patient samples at field sites, research laboratories, and clinical settings. Previously published data from Southeast Asia showed that P. vivax prefers to infect reticulocytes ex vivo. 15 Interestingly, we observed parasitemia for Indian P. vivax isolates that was greater than reticulocytemia, suggesting that Indian isolates differ in their reticulocyte preference. 16 Plasmodium vivax has proved to be more challenging to culture in in vivo or even ex vivo assays compared with P. falciparum, mainly because P. vivax predominantly infects reticulocytes, which are difficult to isolate. 17 MESA-ICEMR partners established a novel method of enrichment of P. vivax-infected reticulocytes and also found that P. vivax infection greatly decreases the osmotic stability of the infected reticulocyte. 18,19 Studies involving P. vivax continue to be a high priority for the MESA-ICEMR, which has a strong interest in understanding variations in disease severity in malaria.
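As a simple illustration of the arithmetic behind such counts (the numbers below are invented, not MESA-ICEMR data), parasitemia and reticulocytemia are expressed as percentages of the total erythrocytes scored across reticle grids:

# Illustrative reticle-assisted counts (made-up numbers)
infected_rbc = 42       # parasitized erythrocytes counted
reticulocytes = 35      # reticulocytes counted in the same fields
total_rbc = 10_000      # total erythrocytes scored

parasitemia_pct = 100 * infected_rbc / total_rbc
reticulocytemia_pct = 100 * reticulocytes / total_rbc
print(f"Parasitemia: {parasitemia_pct:.2f}%")          # 0.42%
print(f"Reticulocytemia: {reticulocytemia_pct:.2f}%")  # 0.35%
# Parasitemia exceeding reticulocytemia, as seen for Indian isolates,
# suggests infection is not strictly confined to reticulocytes.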
NIMR-Goa. The MESA-ICEMR laboratory at NIMR-Goa, built with a fully functional, state-of-the-art insectary, was crucial for the controlled vector infection studies conducted in India using both P. falciparum and P. vivax. In initial work, the MESA-ICEMR team identified a new vector, Anopheles subpictus, as a contributor to perennial transmission. 20 Membrane feeding experiments were optimized using Anopheles stephensi mosquitoes hatched directly from field larvae, 21 in contrast to earlier published work with colonized An. stephensi laboratory colonies. 22,23 These experiments focused on comparing Plasmodium infections induced in field-derived and colonized An. stephensi mosquitoes, and the subsequent sporozoite production. 21 The MESA-ICEMR team was the first to report optimized P. vivax sporozoite production in An. stephensi at an Indian field site, and the techniques learned will be beneficial to other vector biologists both inside and outside India. 24 Investigations involving sporozoites in P. vivax liver-stage assays and transmission-blocking experiments are currently underway.
Indian Institute of Technology-Bombay, Mumbai. As a part of the ongoing coordinated effort by all ICEMRs to identify shared antibody responses to a number of parasite antigens, the MESA-ICEMR group initially teamed up with the Indian Institute of Technology-Bombay, Mumbai, to determine the serological profiles of 200 Indian patients infected with P. falciparum, P. vivax, or both. Using protein arrays, seroreactivity was measured against recombinant P. falciparum and P. vivax antigens. This study 25 indicates that seroreactivity to P. falciparum antigens was pronounced and comparable to that seen in endemic areas such as Africa. Of 248 seropositive P. falciparum antigens, merozoite surface protein 10 (MSP10), heat shock protein 70 (HSP70), PfEMP1-trafficking protein (PTP5), Apetala 2 (AP2), apical membrane antigen 1 (AMA1), and SNARE protein (SYN6) showed strong reactivity to patient serum IgG. For P. vivax patient sera, however, early transcribed membrane proteins (ETRAMPs), merozoite surface proteins (MSPs), Apicomplexan Apetala 2 (ApiAP2), the sexual-stage antigen s16, and rhoptry neck protein 3 (RON3) were the key antigens that showed strong seroreactivity. This study 25 also identified different antigens from severe and uncomplicated patient sera that showed strong seroreactivity. These protein array data from India, coupled with similar evidence from other NIH ICEMRs around the world, identified key antigens that could be used as a measure of exposure, especially in low-transmission settings. 25 The MESA-ICEMR team will probe and identify additional parasite antigens that could be used for malaria diagnosis in rapid diagnostic tests, and to provide a better understanding of host-parasite interactions. 26
RMRC-Dibrugarh. In addition to MESA-ICEMR studies in Goa, a second highly active partnership is with a Government of India research laboratory at the RMRC (Indian Council of Medical Research), Dibrugarh, in the northeastern state of Assam. Assam and other neighboring states of India share 4,000 km of international borders with surrounding countries and provide a potential route for entry of novel forms of malaria drug resistance into India. 27 These northeastern states also harbor high levels of P. falciparum malaria. The hilly terrain, vast forests, tribal inhabitants, poor infrastructure, and frequent floods present challenges for the diagnosis and treatment of malaria infections. These environmental conditions not only hamper malaria research but also interfere with effective malaria interventions, such as the distribution of treated bed nets and effective indoor spraying. To gain a deeper perspective on the complexity of malaria transmission in these areas, longitudinal population studies with extensive follow-up would be most informative. The MESA-ICEMR team, with partners in Assam, set up a large cohort study in the high-transmission region of the Karbi Anglong District (manuscript in preparation). The goal was to assess the impact of epidemiological and socioeconomic factors on progression of disease in individuals and households over a 2-year period. The MESA-ICEMR study observed a greater malaria prevalence in the community, a high burden of asymptomatic cases, and some delayed parasite clearance (manuscript in preparation) compared with earlier reports. 28,29 Cohort studies to assess additional epidemiological factors in high malaria transmission settings, and investigations into delayed parasite clearance in treated individuals, are currently being planned.
Dissemination of relevant evidence from northeastern India in publications, and in presentations at Indian scientific meetings and symposia, has enabled the MESA-ICEMR team to regularly bridge the gap between field practices and laboratory malaria research in India. Inter-site communication within the MESA-ICEMR has ensured that work is not duplicated, and continual contact between collaborators ensures that imaginative new ideas and techniques are transferred quickly from the laboratory to field sites. Continued close interactions among practicing clinicians, university medical departments, and state-level health committees remain a priority as the MESA-ICEMR team monitors malaria trends across India.
POTENTIAL INFLUENCE ON MALARIA RESEARCH POLICIES
India is a large, populous country, and malaria in India is complex and heterogeneous. The heterogeneity includes region-specific variations in malaria transmission intensity, P. falciparum-to-P. vivax ratios, vector species distribution, and varying sensitivity to artemisinin combination therapy (ACT). This heterogeneity requires that subnational malaria control policies be tailored to particular regions. For example, India has adopted two different first-line ACT regimens, one for Northeast India and another for the rest of India. 30,31 To ensure an adequate strategy that is responsive to both short-term challenges (reduction in transmission burden) and long-term opportunities (malaria eradication), policymakers need an accurate description of the current malaria situation. The MESA-ICEMR research activities in India, described in detail in an accompanying article ("Malaria Presentation across NIH South Asia ICEMR Sites" 32 ), can provide information on the current malaria situation and thus have some influence on regional malaria control policies.
Study outcomes from both our current research and planned research can provide complementary data for evidence-based policymaking at the regional level (Table 1). The MESA-ICEMR surveillance studies constantly monitor the role of environmental (rainfall, temperature, and humidity), sociodemographic (e.g., occupation, migration), and entomological factors that influence transmission at our study sites, as well as the local impact of current malaria control measures. 10,20,33 In addition, our basic science research aims to understand the mechanistic underpinnings of clinical phenotypes, both in general and specific to India. The MESA-ICEMR findings will be informative to those developing blood-stage vaccines and will provide tools for evaluating their efficacy. The surface of the invasive merozoite form of the parasite is directly exposed to the immune system in the bloodstream, and the molecules that mediate invasion have been proposed as vaccine candidates. 34 India is a major player in malaria vaccine development. 35 There is no continuous-culture system for the growth of P. vivax, but the MESA-ICEMR team has used a collection of live patient isolates to develop invasion assays. This will aid in mapping parasite invasion ligand expression and their interactions with red blood cell receptors. Furthermore, the invasion assay platform will allow the MESA-ICEMR to test directly whether potential vaccine candidates, such as recombinant invasion ligands, 36 and both parasite- and host-targeted antibodies are capable of blocking invasion. For P. vivax, for which very little is known about invasion, the MESA-ICEMR team will continue to establish much-needed methods essential to blocking invasion. Such fundamental initiatives are unachievable in other settings.
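As a simple, hypothetical illustration of how such blocking experiments are commonly quantified (this is not the MESA-ICEMR protocol), percent invasion inhibition can be computed from the final parasitemia in antibody-treated versus control wells:

import numpy as np

control = np.array([2.10, 1.95, 2.25])  # % parasitemia, triplicate control wells
treated = np.array([0.55, 0.62, 0.50])  # % parasitemia, wells with candidate antibody

inhibition = 100 * (1 - treated.mean() / control.mean())
print(f"Invasion inhibition: {inhibition:.1f}%")  # ~73% in this toy example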
Efforts to monitor ACT sensitivity at the MESA-ICEMR study sites in India are still at a preliminary stage. Future expansion of these research efforts will provide a more detailed view of the effectiveness of current antimalarial regimens in India, and of whether the current treatment policies need to be updated.
EMERGING RESEARCH AND IMPLICATIONS FOR FUTURE DIRECTIONS
Future research activities of the MESA-ICEMR will emphasize malaria core interventions (Table 1). The cohort study in Assam (manuscript in preparation) revealed the importance of community engagement, which includes assessment of ongoing malaria prevention practices and subsequent education to prevent infections in the community. The MESA-ICEMR team will continue to assess and inform on the relevance of community engagement by the National Malaria Control Program and identify ways to improve it based on community feedback. The MESA-ICEMR team, along with local partners, will increasingly track variables such as health-seeking behavior and LLIN use as malaria transmission decreases and more Indian states enter the elimination phase.
The MESA-ICEMR team will expand its entomological surveillance capacity and laboratory vector studies to include membrane feeding and parasite development studies. A high priority will be placed on poorly characterized Indian vectors that tested positive for P. falciparum and P. vivax in surveys conducted in their natural settings. This could build on earlier work pointing to underappreciated vectors possibly contributing to regional transmission, as we have reported previously. 20 The MESA-ICEMR, operating in South Asia, has a strong interest in P. vivax, which has historically been much less explored than P. falciparum. In particular, a full understanding of the molecular basis of P. vivax pathogenesis is needed, and the MESA-ICEMR team aims to pursue investigations on P. vivax pathogenesis, red blood cell invasion, and reticulocyte tropism. Plasmodium vivax has been shown to cause severe malaria and deaths in South Asia, and it is important to determine how much of this is a result of particular pathogenic strains of P. vivax and/or region-specific host vulnerability. [37][38][39] Plasmodium vivax-specific mechanisms of pathogenesis remain largely unknown. The MESA-ICEMR study site in Goa at GMC has a high proportion of P. vivax infections (70% monoinfections), of which 5% lead to severe disease manifestations. GMC thus provides an important opportunity to study questions relating to P. vivax pathogenesis. The major clinical complications of patients with severe P. vivax at GMC are anemia, jaundice, and respiratory distress. 10 The MESA-ICEMR team will test associations of Indian P. vivax case data with severe disease, including parasite biomass, reticulocyte preference, endothelial activation, and plasma biomarkers of coagulation and inflammation. The MESA-ICEMR will also explore liver-stage infections of P. vivax, particularly the parasite's capacity to produce hypnozoites. Such research, in patient samples and in laboratory models, has the potential to yield diagnostic tools for identifying latent hypnozoites and to improve our understanding of relapsing infections. Improving technical approaches to understanding P. vivax liver-stage biology is a high priority for the MESA-ICEMR.
RESEARCH CAPACITY-BUILDING AND TRAINING
The MESA-ICEMR has extensive technical resources available through its collaborators, both within India and abroad. 40 During the past decade, the MESA-ICEMR team has worked side by side with Indian government institutions to broaden scientific inquiries relevant to malaria, and to do so through close engagement with affected local communities, especially hospitals and medical colleges. By building advanced laboratories in existing Indian institutions, the MESA-ICEMR has helped bridge gaps between medical experts and basic scientists.
Even as MESA-ICEMR-mediated India-US collaborations have added fresh perspectives and innovations in malaria research for both sides, the MESA-ICEMR teams have also jointly helped build secondary expertise in statistical analysis, in novel molecular probes such as bead-based antibody detection, and in state-of-the-science insectaries. Lastly, the MESA-ICEMR team worked early and aggressively with the rapidly emerging biotechnology companies in India to gain high-quality, cost-effective services for primer design and DNA sequencing. This included local Indian startups and Indian branches of established companies such as Eurofins, Illumina, nanochip technology, and more. A deep internal understanding of our questions and scientific needs allowed us to assess the quality of work outsourced in India and set an example for others of what molecular research is possible in India without large movements of patient samples abroad. MESA-ICEMR staff have varied educational backgrounds, with both basic and advanced science degrees from national and international educational institutions. Most newly hired staff train in laboratory techniques, including P. falciparum culturing, genomics, and serology, which are well established at the central MESA-ICEMR GMC site. In addition, our research scientists also have the opportunity to visit parent laboratories in Seattle (University of Washington and Seattle Children's Hospital) or Boston (Harvard T.H. Chan School of Public Health). MESA-ICEMR scientists in India have also attended international workshops and conferences on malaria, and regularly upgrade their technical knowledge and skills.
New protocols, which include community surveys at various sites, are under ethical board review at most sites, but progress has been hindered by the COVID-19 pandemic. We have planned community surveys in conjunction with local community members and relevant government health workers to encourage cooperation among local inhabitants. We have found local involvement to be invaluable for achieving community survey objectives (unpublished data). These surveys will be region-specific and will focus on assessing health-seeking behaviors, disease surveillance, the presence of asymptomatic infections, and more in the local communities.
CONCLUSION
The MESA-ICEMR team and its Indian partners have developed infrastructure to survey, assess, and conduct malaria research in select parts of India. Although India has numerous resources to monitor national malaria control campaigns, there are components of the MESA-ICEMR activities that can add value with fresh public health questions and unique technical approaches, including deeper scientific queries into asymptomatic infections and mechanisms that initiate antimalarial resistance.
The team will also introduce administrative structures for sound and safe research. The MESA-ICEMR can provide Indian authorities with an independent assessment of the malaria burden at each MESA-ICEMR study site, 8 and can corroborate the impact of changes in Indian malaria research policy in depth at select sites (Figure 1).
"year": 2022,
"sha1": "44a688bc737bf070470fb122c771b444c672b2d5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "36bdaa6a4e6c4a1c36cb24088e95f3791fc5a22e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A methodology for calculating the passenger comfort benefits of railway travel
A comfortable environment in railway passenger coaches can be regarded as a resource for social consumption during the transport process, and railway passenger comfort benefits (RPCBs) can be regarded as a special generalized cost. In this paper, we select a series of objective and subjective indexes to formulate a quantitative method of calculating the RPCB that takes ticket fares into account. The method includes three steps: make the initial data dimensionless, calculate the weight of each index, and finally calculate the RPCBs. The proposed method was validated with data collected from two trains: G13 from Beijing South to Shanghai Hongqiao and T109 from Beijing to Shanghai. Questionnaire surveys were also conducted on both trains. After data processing, the results show that there is a linear relationship between the RPCB and ticket fare, with a correlation coefficient of 0.9616.
Introduction
With rapid economic growth and the expanding high-speed rail network in China, people are more willing to travel by train, and thus there is increasing demand for passenger comfort in train travel. Railway passenger comfort is determined by a combination of physical and psychological factors. A comfortable environment in railway coaches can be regarded as a resource for social consumption, and railway passenger comfort benefits (RPCBs) can be regarded as a special generalized cost.
From the perspective of passengers, they choose to travel by train because they can rest and handle their business in a comfortable passenger coach. Moreover, during and after the journey, passengers will need less time to recover from travel fatigue. This means that providing a comfortable transport service is one of the most important measures to attract passengers. Comfort-related indexes have been applied to evaluate passenger transport service quality [1][2][3].
In China, Railway Construction Project Economic Evaluation Methods and Parameters (Third Edition, in Chinese) [4] provides guidance for investment projects of new railway lines. It points out that the improvement in passenger comfort benefits is one of the most important evaluation indexes that need to be calculated. But the improvement value is neglected in practice, because the calculation approach is incomplete and the basic data are hard to obtain. Most current theoretical research is devoted to establishing the index system that influences passenger comfort [5,6], exploring the comfort range of one exact factor [7], introducing high technologies and facilities that help obtain passenger comfort parameters accurately and easily [8,9], or studying the relationship between passenger comfort and transportation management [10]. There are no quantitative calculation approaches for passenger comfort benefits, so it is important to develop a new approach that can be used in both theoretical research and engineering practice.
In this paper, we focus on the quantitative calculation approach and application of RPCBs considering ticket fares. The remainder of this paper is organized as follows: In Sect. 2 we give a brief literature review of passenger comfort. In Sect. 3 we introduce the data that will be used to formulate the quantitative calculation approach. In Sect. 4 we give a detailed formulation of the proposed quantitative calculation approach considering the passenger ticket fare. In Sect. 5, to test the performance of the methodology, a numerical experiment is carried out using the Beijing-Shanghai ordinary-speed and high-speed rail lines in China as examples. Finally, Sect. 6 presents the major conclusions and an outline of future research.
2 Literature review
Previous research on passenger comfort
In the 1970s, researchers began to establish the index system that influences passenger comfort and tried to obtain assessment results with questionnaire surveys or other methods. Oborne and Clarke [11] carried out a questionnaire survey in Swansea and discussed how to obtain quantitative assessment data from such surveys effectively. Ref. [12] studied techniques for passenger comfort assessment, which included two parts: transportation system aspects, such as riding comfort, local comfort, and organizational comfort, and behavioral aspects. Moreover, Ref. [13] presented an overview of passenger comfort, discussing the concept of comfort and its relationship to the passenger's other travel experiences, and introducing factors that influence comfort, including temperature, ventilation, illumination, photic stimulation, pressure changes on the ear, travel length, and task impairment. Richards et al. [14] argued that an individual's reaction to a vehicle environment depends on both the physical inputs and the individual's characteristics, which means that both objective and subjective factors should be considered when evaluating passenger comfort. Vink et al. [15] used more than 10,000 internet trip reports and 153 passenger interviews to gather opinions about aspects that need to be improved in order to design a more comfortable aircraft interior. They found clear relationships between comfort and legroom, hygiene, crew attention, and seat/personal space; passengers rated newer planes significantly better than older ones, indicating that attention to design for comfort was effective, while rude flight attendants and bad hygiene reduced the comfort experience drastically. Nan [5] considered that the factors affecting passenger comfort include space per capita, travel time, the in-vehicle environment, and service levels, and that passengers with different occupations and travel purposes have different demands for comfort. Wang [6] divided the comfort evaluation factors into two parts, objective factors and subjective factors, and established a comfort evaluation model based on mixed-parameter support vector machines.
Other research focused on exploring the comfort range of factors that influence passenger comfort, for example, vibration, temperature, noise, and air pressure. Refs. [16,17] studied the relationship between vibration and passenger comfort; both whole-body vibration in humans and vibration in vehicles were involved. Ciloglu et al. [18] investigated and assessed the whole-body vibration and dynamic seat comfort of aircraft seats using average weighted vibration, vibration dose values, and transmissibility data. Ref. [19] gave a brief review of both field and laboratory studies on human reaction to vibration. The laboratory-based studies were used to predict comfort levels for passenger vehicles, and some suggestions were made for keeping vibration in passenger transport vehicles within an acceptable range. Thermal comfort (the temperature factor) in passenger coaches has been studied for a long time. To define a testing and calculation model for thermal comfort assessment of a bus HVAC design and to compare the effects of changing parameters on passenger thermal comfort, Pala and Oz [20] carried out combined theoretical and experimental work during the heating period inside a coach; temperatures, air humidity, and air velocities were measured to investigate the effects of fast transient conditions on passengers' physiology and thermal comfort, and graphs of passengers' thermal sensation and thermal discomfort level were used to evaluate the study. Ref. [21] indicated that solar radiation, poor interior insulation, and the non-uniformity of the average radiant temperature affect thermal comfort in vehicles, and the most popular methods for assessing thermal comfort were reviewed. For a better understanding of thermal comfort in a passenger coach considering spectral solar radiation, Moon et al. [22] used commercial software (ANSYS Fluent V. 13.0) to predict thermal and flow fields under the operating conditions of a heating, ventilation, and air-conditioning system. They found that the estimated temperature near the driver and passengers increased by approximately 1-2°C when considering the spectral solar radiation, and that the predicted mean vote considering the spectral radiation was higher than that of the case without it. In addition to thermal comfort research on passenger coaches, some researchers also explored thermal comfort in transport terminals. For example, to find out the thermal perception, preference, and comfort requirements of passengers and terminal staff in three airport terminals in the UK, researchers [23] monitored the indoor environmental conditions in different terminal areas and conducted questionnaire-guided interviews with 3,087 terminal users. They found that the neutral and preferred temperatures for passengers were lower than for employees and considerably lower than the mean indoor temperature, and that passengers demonstrated higher tolerance to the thermal conditions and consistently a wider range of comfort temperatures, whereas the limited adaptive capacity of staff allowed for a narrower comfort zone. Furthermore, Ref. [24] indicated that besides temperature, other climate parameters such as humidity and air movement are also important to passenger thermal comfort.
In that study, air temperature had the largest weight for comfort predictions; humidity and air draft also had significant effects and should not be neglected. Noise annoyance is another important factor influencing passenger comfort. Park et al. [25] evaluated noise annoyance in passenger coaches of a high-speed train; the evaluation was undertaken under different conditions such as stationary noise, unsteady sudden variations in sound, and short-term noise. Ahmadpour et al. [26] analyzed whether the factors underlying passengers' experience of comfort differ from those of discomfort. The results showed no significant differences between the comfort and discomfort ratings on the predefined factors. Another factor influencing comfort in a passenger coach is air pressure. Schwanitz et al. [7] conducted a questionnaire survey with 262 passengers, which revealed that pressure variations are rated less important for riding comfort than climatic and spatial aspects (study 1). A laboratory experiment (study 2) in the pressure chamber at the DLR Institute of Aerospace Medicine with 31 subjects investigated the effects of systematic pressure variations on discomfort, in order to characterize air pressure variations inside trains and reduce pressure discomfort for railway passengers while trains pass through tunnels. A similar comparative test was conducted later [27]: a field study on the high-speed railway track Cologne-Frankfurt/Main as well as a simulation study in the pressure chamber TITAN (DLR Institute of Aerospace Medicine) with 31 subjects to investigate pressure comfort for passengers. They found that besides the attributes of instantaneous pressure changes, preceding pressure events also significantly influenced current discomfort. The findings in the two papers may help design engineers improve train and tunnel design.
To obtain passenger comfort parameters accurately and easily, advanced technologies and facilities have also been applied. For example, a triaxial accelerometer [8] was used to measure acceleration and a global positioning system (GPS) was used for position detection; after data collection, the authors designed an embedded system (hardware, firmware, and software) to assess the dynamic motion factors that affect comfort in public transportation systems. Lin [9] conducted an experiment using the High-Speed Train Generalized Comfort Research Platform in the Southwest Jiaotong University Rail Laboratory. A Delphi-AHP method was used to determine the weight of each factor; the experiment was then designed to test and verify the correctness and feasibility of the passenger comfort evaluation process. Silveira et al. [28] compared the behavior of two types of shock absorbers used on passenger vehicles: symmetrical (linear) and asymmetrical (nonlinear). The final results showed that the asymmetrical system with nonlinear characteristics tends to have a smoother and more progressive performance for both vertical and angular movements. A lower level of acceleration is essential for improved ride comfort, so the use of asymmetrical systems for vibration and impact absorption can be a more advantageous choice for passenger vehicles.
Higher passenger comfort has also been considered a necessary operational factor in traffic and transportation management, because managers want to provide better service to attract more passengers. To determine whether bus passenger comfort is influenced by driving style, especially the difference expected after training drivers in economical (fuel-efficient) driving, a field study [10] found that after driver training, passengers experienced greater comfort, which means a better driving style helps improve comfort. To ensure passenger comfort and cargo safety, Tezdogan et al. [29] used a sea-keeping analysis approach to calculate the operability index of high-speed passenger ships. The evaluation parameters included the dynamic responses of the ship to regular waves, the wave climate of the sea around the ship's route, and the assigned missions of the vessel. Ref. [30] analyzed the influence that perceptions of safety and comfort of service have on passengers' choice of river transport, using hybrid choice models incorporating latent variables. The results indicated that older workers attach less importance to hull condition and safety; comfort was more valued by young workers and by users with a higher educational level; and the space between seats and strategies to improve the behavior of other users significantly increased the perceived comfort of the service provided. Márquez et al. [31] used an online survey with 244 respondents to determine which factors contribute to comfort when riding a bicycle and found that comfort is influenced by factors related to bicycle components (specifically the frame, saddle, and handlebar), as well as environmental factors (type of road, weather conditions) and the cyclist (position, adjustments, body parts). Respondents indicated that comfort was a concern when riding a bicycle in most situations and believed that comfort was compatible with performance.
After summarizing and analyzing the previous research, three rules are adopted in this work: (i) the index system affecting passenger comfort includes both objective and subjective factors; (ii) comfort thresholds are defined for each individual factor; and (iii) measurement devices such as vibration sensors, air pressure sensors, thermometers, and automated passenger counters are used to obtain the basic objective parameters of the passenger coaches, while questionnaire surveys provide the initial data for the subjective indicators.
Method of calculating the improved railway passenger comfort benefits in China
The calculation approach for the improved railway passenger comfort benefits (IRPCBs) in the Economic Evaluation Methods and Parameters for Construction Project [4] refers to the whole passenger transport process. It can also be seen as the difference in passenger comfort benefit before and after the construction of a new railway project. In the approach, both subjective and objective factors are used. The objective factors include space per capita, vibration, noise, pressure changes, and temperature in a railway passenger coach. Subjective factors are the subjective feelings of people in a railway passenger coach, such as seat comfort, interior decoration, information services, food services, and health conditions. The IRPCB, $V_{\mathrm{IRPCB}}$, is defined in terms of the following quantities: $P_{ij}$ is the set of origin-destination pairs OD($i,j$) related to the new railway project; $m$ indexes the passenger coach types; $Q_{ij,m}^{d}$ is the number of transfer passengers among different transport modes and among different transport routes within one mode; $P_{0}^{m}$ is the set of passenger transport routes without the newly built project; $P_{1}^{m}$ is the set of passenger transport routes with the new project; $l$ is a generalized line for path $P_{0}^{m}$ or $P_{1}^{m}$, representing both an actual line and a connecting line between the starting point and the arrival point; $T_{ij,m,l}$ represents the time on line $l$ between any OD($i,j$); $SP_{ij,m,l}$ is the space per capita on line $l$ between any OD($i,j$); $\vartheta_{ij,m,l}$ represents the additional generalized cost per hour of the other influential factors except space per capita on line $l$ between any OD($i,j$), which is generally ignored in practice; and $E$ and $b$ are factors that need to be calibrated during the calculation process. Analyzing this formula reveals the following disadvantages: (i) except for space per capita, the other objective factors are not considered, so it is not a precise formula; (ii) it is usually hard to apply in practice because indexes such as $\vartheta_{ij,m,l}$, $E$, and $b$ are difficult to obtain.
Index adoption and initial data measurement
According to actual practice and theoretical research, both objective and subjective indicators have crucial impacts on RPCBs. Objective indicators are physical factors during the transport process, for example, area per capita in the passenger coach, vibration, noise, pressure changes, and temperature. Subjective indicators are related to the passengers' subjective feelings, such as seat comfort, interior decoration of the passenger coach, information services, and catering services.
Objective indicators
We obtained the data for the objective indicators using measurement devices such as vibration sensors, air pressure sensors, temperature sensors, noise testers, tape measures, and automated passenger counters, together with passenger coach design drawings. The following six indicators were measured.
(1) Area per capita in passenger coach
Area per capita in the passenger coach is a factor that directly affects passenger travel comfort during rail transport, and a larger area per capita means higher travel comfort. The specific standards are clearly defined in the design specifications for each transport mode. For example, according to statistical data, the area per capita in China is 0.57 m² when the seat utilization rate is 100% [5,6]. In addition, the area per capita in railway passenger coaches in other countries is listed in Table 1. The data on area per capita in the passenger coach, $x_1^m$, were obtained for the different types of railway coaches by combining tape measurements, automated passenger counting, and passenger coach design drawings.
(2) Vibration
Vibration exists in every transport mode and is a major factor affecting passenger comfort. Vibration is generally divided into lateral and vertical vibration, and humans are more sensitive to lateral vibration. In this paper, we let the weight of lateral vibration be 0.7 and the weight of vertical vibration be 0.3 [6,18]. The vibration is used to formulate the operation stability index $x_2^m$.
We used vibration sensors to obtain the vibration acceleration and frequency in the passenger coach. The sensors recorded data during the whole testing process, and the averaged values of vibration acceleration and frequency were applied in the calculation.
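The extracted text does not preserve the exact formula for $x_2^m$, but a minimal sketch of how the 0.7/0.3 lateral/vertical weighting might be applied to the averaged sensor readings is given below; the function name, the use of time-averaged accelerations, and the sample values are all our assumptions, not the paper's.

```python
import numpy as np

# Minimal sketch of the operation-stability index x2 (assumed form):
# apply the 0.7/0.3 lateral/vertical weights to the time-averaged
# accelerations recorded by the vibration sensors.

def vibration_index(lateral_acc, vertical_acc):
    """Combine lateral and vertical acceleration records (m/s^2)
    into a single operation-stability index."""
    mean_lateral = np.mean(lateral_acc)
    mean_vertical = np.mean(vertical_acc)
    return 0.7 * mean_lateral + 0.3 * mean_vertical

# Hypothetical sensor records sampled during a trip
lat = np.array([0.8, 1.1, 0.9, 1.3])   # lateral accelerations
vert = np.array([0.5, 0.6, 0.4, 0.7])  # vertical accelerations
print(vibration_index(lat, vert))
```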
(3) Pressure changes
The air pressure in the railway passenger coach changes during train operation, and intense pressure fluctuations can cause passenger discomfort and even bodily harm, such as a ruptured eardrum. In this paper, the rate of air pressure change, $x_3^m$ (in Pa/s), is introduced [5,6]; it is derived from the maximum pressure change $\Delta p$, the temperature $T$ (in °C) in the railway coach, the passenger coach volume $V$ (in m³), and the molar gas constant $R$. Table 3 shows the design values of the maximum pressure change and maximum pressure change rate in some countries.
Pressure and temperature were measured by pressure sensors and temperature sensors, and the passenger coach volume was obtained from coach design drawings.
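The exact expression relating $x_3^m$ to $\Delta p$, $T$, $V$, and $R$ was lost in extraction, so the sketch below instead illustrates a simpler, assumed approach: estimating the maximum pressure-change rate directly from the pressure-sensor time series. The sample readings are hypothetical.

```python
import numpy as np

# Hedged sketch: estimate the maximum air-pressure change rate (Pa/s)
# from a time series of pressure-sensor readings, as a stand-in for the
# paper's (unrecovered) formula. The largest finite-difference rate is
# compared against the 200 Pa/s comfort limit used later in the paper.

def max_pressure_change_rate(pressures, timestamps):
    """pressures in Pa, timestamps in s; returns max |dp/dt| in Pa/s."""
    rates = np.abs(np.diff(pressures) / np.diff(timestamps))
    return rates.max()

p = np.array([101325.0, 101290.0, 101150.0, 101100.0])  # hypothetical readings
t = np.array([0.0, 1.0, 2.0, 3.0])                      # sample times (s)
print(max_pressure_change_rate(p, t))
```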
(4) Noise
Higher speed usually causes more noise in the passenger coach. Noise pollution has potentially adverse impacts on passengers' health, such as hearing impairment, headache, and neurasthenia. During train operation, noise and speed have a linear relationship: when the speed increases by 10 km/h, the noise level increases by 1-2 dB. Each transport mode has its own maximum noise level limit. For train operation, if the train speed is 80 km/h, the noise level in the passenger coach should be limited to 68 dB. The International Union of Railways (UIC) requires that noise in passenger trains not exceed 65 dB [5,6]. The average operating speed of the TGV-A in France is over 300 km/h, yet the noise in its passenger coaches is 66 dB. We used noise testers to obtain the actual noise level in the railway passenger coaches.
(5) Temperature
In recent years, passenger coaches have generally been equipped with air-conditioning systems. According to passengers' feedback, there is a great temperature difference between the passenger coach and the external environment. Poor air-conditioning and ventilation are sources of passenger discomfort, for example, dizziness, sneezing, fatigue, memory loss, and muscle and joint pain. The ISO 7730 temperature setting standard is widely used in European countries and requires the human thermal comfort temperature range to be 21-24°C. The ASHRAE 55-92 standard used in the USA specifies a thermal comfort temperature range of 20-23.6°C. Furthermore, the predetermined temperature boundaries of human skin sensation for hot and cold conditions are 20-25°C. The comfort temperature range is 17-28°C in China [5,6,9]. We used temperature testers to obtain the average temperature in the passenger coaches, and the test data were also used to calculate the air pressure change rate.
(6) Passenger travel time
Ergonomics research [6,9] shows that if the travel time exceeds 6 h, passengers will feel uncomfortable. Sometimes the train may be delayed, which can also cause passenger discomfort. We obtained the travel time from the 12306 Railway Passenger Service Center.
Subjective indicators
There are five kinds of subjective indicators of interest: health conditions, interior decoration of passenger coaches, information services, seat comfort, and catering services. The five main indicators include 16 subindexes $u_j$, $j = 1, 2, \ldots, 16$, as shown in Table 4. We collected the initial passenger evaluation data by conducting questionnaire surveys in the passenger coaches. There are $m$ kinds of ticket fare, so the comprehensive subjective indicator $x_7^m$ can be formulated as
$$x_7^m = \sum_{j=1}^{16} a_j^m \bar{h}_j^m, \qquad \bar{h}_j^m = \frac{1}{N_m}\sum_{i=1}^{N_m} h_{m,j}^i,$$
where $a_j^m$ is the weight of each subindex for the $m$-th kind of ticket fare, $N_m$ is the number of passengers surveyed, and $h_{m,j}^i$ is the score given by surveyed passenger $i$ to subindex $j$, ranging from 0 to 100. If $0 \le h_{m,j}^i < 60$, the surveyed passenger regards this item as a source of discomfort; if $60 \le h_{m,j}^i \le 100$, the item is seen as a source of comfort. Similarly, if $0 \le x_7^m < 60$, the comprehensive subjective index is regarded as an uncomfortable factor; if $60 \le x_7^m \le 100$, it is a comfortable factor for passengers.
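As an illustration of the formula above, the following sketch computes $x_7^m$ for one fare class from a survey matrix; the survey scores and the equal weights are hypothetical placeholders.

```python
import numpy as np

# Sketch of the comprehensive subjective indicator x7 for one fare
# class m: average each of the 16 subindex scores over the N_m surveyed
# passengers, then take the weighted sum.

def subjective_indicator(scores, weights):
    """scores: (N_m, 16) array of 0-100 ratings h_{m,j}^i;
    weights: length-16 array a_j^m summing to 1."""
    h_bar = scores.mean(axis=0)        # average score per subindex
    return float(weights @ h_bar)      # x7 = sum_j a_j * h_bar_j

rng = np.random.default_rng(0)
scores = rng.uniform(50, 95, size=(100, 16))  # 100 hypothetical questionnaires
weights = np.full(16, 1 / 16)                 # equal weights, for illustration
x7 = subjective_indicator(scores, weights)
print(x7, "comfortable" if x7 > 60 else "uncomfortable")
```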
The weighting coefficient $a_j^m$ can be obtained using the analytic hierarchy process (AHP) [32]. Here, we used a two-phase AHP to construct a judgment matrix that meets the consistency requirement [33,34]. The procedure is as follows.
(i) Constructing the subindex judgment scaling matrix. In the first phase, after pairwise comparison of all subindexes through a three-scale method (with values of 0, 1, and 2), we built a comparison matrix to calculate the ranking index of the subindexes. In this section, we use $i, j$ to denote subindexes. $H = (h_{i,j})$, $h_{i,j} \in \{0, 1, 2\}$, is the judgment scale set, in which $h_{i,j} = 0$ means that subindex $i$ is less important than subindex $j$, $h_{i,j} = 1$ means that subindex $i$ is as important as subindex $j$, and $h_{i,j} = 2$ means that subindex $i$ is more important than subindex $j$.
(ii) Constructing the subindex judgment matrix. In the second phase, we constructed the judgment matrix using the range method. If $a_{ij}$ is the ratio of the importance of subindex $i$ to that of subindex $j$, then $1/a_{ij}$ is the ratio of the importance of $j$ to that of $i$. According to the range method,
$$a_{ij} = a_b^{\,(r_i - r_j)/(r_{\max} - r_{\min})},$$
where $r_i = \sum_j h_{i,j}$ is the ranking index of subindex $i$ from the first phase, $A = (a_{ij})$ constitutes the consistency judgment matrix, and $a_b$ is the given relative importance between the pair of extreme range elements, based on a certain standard, generally assigned the constant value $a_b = 9$ [35].
Finally, we conducted a consistency test on the obtained weights. The final results for the 16 subindex weights $a_j^m$ are presented in Table 7.
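A compact sketch of the two-phase AHP, under the reconstruction given above, is shown below for a hypothetical 4-subindex example. Because the range-method matrix $A$ is consistent by construction, the weights can be read off the normalized principal eigenvector; the phase-1 matrix here is invented for illustration.

```python
import numpy as np

# Two-phase AHP sketch. Phase 1: a three-scale (0/1/2) comparison
# matrix H gives ranking indexes r_i = sum_j h_ij. Phase 2: the range
# method converts r into a judgment matrix A = a_b^((r_i - r_j)/R) with
# base importance a_b = 9; A is consistent by construction, so the
# weights are the normalized principal eigenvector of A.

def two_phase_ahp(H, a_b=9.0):
    r = H.sum(axis=1)                           # ranking index per subindex
    R = r.max() - r.min()                       # range of the ranking index
    A = a_b ** ((r[:, None] - r[None, :]) / R)  # consistency judgment matrix
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                 # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum()

# Hypothetical phase-1 matrix: 2 = row index more important than column,
# 1 = equally important, 0 = less important.
H = np.array([[1, 2, 2, 2],
              [0, 1, 2, 2],
              [0, 0, 1, 2],
              [0, 0, 0, 1]], dtype=float)
print(two_phase_ahp(H))
```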
RPCB calculation approach
Field research on travelers in Shanghai [36] showed that, under the same traveling time conditions, more than 86% of passengers were willing to pay more for a more comfortable traveling environment. The RPCB is regarded as an objective factor when passengers choose their travel mode. Shi et al. [37] considered that there is a directly proportional relationship between passenger comfort and travel expenses, but that with increasing travel time and travel cost, the RPCB growth rate decreases. According to the pricing strategy for passenger tickets in China, a longer distance generally means a higher ticket cost but a lower unit ticket price. Figure 1 shows the relationship between travel time and passenger ticket cost (a), and the relationship among travel time, passenger ticket cost, and RPCB (b).
Property. The RPCB is related to both travel time and passenger ticket cost, so the RPCB can be formulated as the composite function $B(c) = F(c, f(c))$, where $t = f(c)$ relates travel time to ticket cost and $F(c, t)$ relates the RPCB to ticket cost and travel time.
Proof. Figure 1a shows that, in general, a longer travel distance means more travel time, so we use time in place of distance. If $t_3 - t_2 = t_2 - t_1$, then $c_3 - c_2 < c_2 - c_1$; the relationship between travel time and passenger ticket cost can therefore be formulated as an increasing function $t = f(c)$ whose cost increments shrink for equal time increments. Figure 1b shows that, for a fixed travel time $t_2$, if ticket price $c_3 > c_1$, then $F(c_3, t_2) > F(c_1, t_2)$; and if $t_3 - t_2 = t_2 - t_1$, then $c_3 - c_2 < c_4 - c_3$. The relationship among the RPCB, travel time, and passenger ticket cost can thus be described by a function $F(c, t)$ that is increasing in the ticket cost.
Remark. Combining $t = f(c)$ and $F(c, t)$, the composite function mentioned above describes the relationship between the RPCB and the passenger transport ticket fare.
Passenger ticket cost and comfort degree vary with seat type. For example, there are three ticket types on China's high-speed trains: business-class, first-class, and ordinary-class seats. The RPCB loss due to train delay varies for passengers in different seats, because first-class passengers spend more money on tickets and expect better service and a higher RPCB. The passenger comfort benefit is thus scaled up or down by the ticket fare.
We collected the passenger ticket price information from the Railway Customer Service Center of China (RCSCC). There are $m$ types of ticket fares, such as 1,748 CNY (business-class seat), 933 CNY (first-class seat), and 553 CNY (second-class seat) on one high-speed train from Beijing South to Shanghai Hongqiao. Although the ticket prices differ, the passengers on one train experience the same vibration, pressure change, temperature, and operation time. The area per capita in the passenger coach, however, is quite different, because different seat types correspond to different train facilities, equipment settings, and service.
To achieve the non-dimensionalization of the initial data, we divide all objective and subjective indexes into three categories: (1) area per capita in the passenger coach; (2) vibration, noise, pressure changes, temperature, and operation time; and (3) the comprehensive subjective index. For the $m$ types of seats, the quantitative calculation process of the RPCB is organized in three steps.
Step 1: Non-dimensionalization of the initial data. There are seven kinds of initial data $x_n^m$, $n = 1, 2, \ldots, 7$, which need to be non-dimensionalized.
(i) Area per capita in passenger coach. The dimensionless index $X_1^m$ is computed from the measured area per capita $x_1^m$ relative to the comfort threshold, which we set at $x_1^m \ge 0.57$ m². If $X_1^m > 0$, it is a comfort factor for passengers; otherwise, it does not help improve the final RPCB.
(ii) Vibration, noise, pressure changes, temperature, and operation time. For each of these indexes, a comfort range $[\underline{x}_n^m, \overline{x}_n^m]$, $\forall m$, $\forall n = 2, 3, \ldots, 6$, is specified, and the dimensionless index $X_n^m$ is positive when the measured value lies within this range. A positive $X_n^m$ helps increase the RPCB; otherwise, it decreases the final RPCB. In this paper, we let the comfortable range of railway vibration be $0 \le x_2^m \le 7$, meaning that if $x_2^m > 7$, the vibration becomes an intolerable factor for passengers in the railway coach. The comfortable range of the air pressure change rate is $0 \le x_3^m \le 200$ Pa/s, the comfortable noise range is $0 \le x_4^m \le 65$ dB, the comfortable temperature range in the passenger coach is $17 \le x_5^m \le 28$ °C, and the comfortable railway travel time range is $0 < x_6^m \le 6$ h.
(iii) Comprehensive subjective index. The subjective index increases the RPCB if $60 < x_7^m \le 100$; otherwise, it decreases the RPCB if $0 \le x_7^m \le 60$.
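The exact non-dimensionalization formulas did not survive extraction; the sketch below therefore uses simple assumed forms that reproduce the stated sign conventions (positive on the comfortable side of the threshold or range, negative otherwise). The forms and sample values are illustrative only.

```python
# Hedged sketch of Step 1 with assumed non-dimensionalization forms.

def threshold_index(x, s=0.57):
    """Area per capita: positive above the 0.57 m^2 threshold (assumed form)."""
    return (x - s) / s

def range_index(x, lo, hi):
    """Range-type indexes (n = 2..6): positive inside [lo, hi],
    negative outside (assumed form)."""
    return min(x - lo, hi - x) / (hi - lo)

def subjective_index(x7):
    """Subjective index: positive above the 60-point comfort boundary
    (assumed form)."""
    return (x7 - 60.0) / 60.0

print(threshold_index(0.75))          # area per capita of 0.75 m^2
print(range_index(55.0, 0.0, 65.0))   # 55 dB noise vs. the 0-65 dB range
print(subjective_index(72.3))         # hypothetical survey result
```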
Step 2: Calculate the weight of each index. The two-phase AHP is used to obtain the weight of each index $n$. The final results for the seven index weights are presented in Table 7.
Step 3: Calculate the RPCB. We use the product of the passenger ticket price $P_m$ (CNY) and the comprehensive evaluation coefficient $g_m$ to represent the passenger comfort benefit value, i.e., $\mathrm{RPCB}_m = P_m \cdot g_m$, which is a per-passenger (unit) result.
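Putting Steps 2 and 3 together, a minimal sketch is shown below. The relation $\mathrm{RPCB}_m = P_m \cdot g_m$ follows the text; the assumption that $g_m$ is the AHP-weighted sum of the seven dimensionless indexes, and all numerical values, are ours.

```python
import numpy as np

# Sketch of Steps 2-3: RPCB_m = P_m * g_m, assuming (not stated in the
# recovered text) that the comprehensive evaluation coefficient g_m is
# the AHP-weighted sum of the seven dimensionless indexes X_n^m.

def rpcb(P_m, X, w):
    """P_m: ticket price (CNY); X: length-7 array of dimensionless
    indexes; w: length-7 AHP weights summing to 1."""
    g_m = float(w @ X)      # comprehensive evaluation coefficient
    return P_m * g_m        # per-passenger comfort benefit (CNY)

X = np.array([0.3, 0.4, 0.6, 0.2, 0.5, 0.1, 0.2])  # hypothetical indexes
w = np.full(7, 1 / 7)                              # illustrative weights
print(rpcb(553.0, X, w))   # e.g., a second-class seat on G13
```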
5 Numerical experiment and results discussion
Initial data
Over the past twelve years, China has built many railway lines to meet the huge and growing passenger transportation demand. This has resulted in the largest ordinary-speed railway (OSR) and high-speed railway (HSR) networks in the world, with nearly 120,000 km of lines in operation by the end of 2015. In addition, continuous high investment has driven the expansion of China's railway network. Thus, the evaluation of passenger comfort benefits for these new railway projects becomes important. The basic operation information of the two trains G13 and T109 (G13 is a high-speed train, and T109 is an ordinary train) was collected from the China Railway Passenger Service Network (www.12306.com) to test the proposed RPCB calculation method, including train operation time, seat types, and ticket prices. Detailed information is shown in Table 5.
There are four kinds of blocks in G13, namely business-class seat (BS), principal seat (PS), first-class seat (FCS), and second-class seat (SCS), and four kinds of blocks in T109, namely luxury soft sleeper (LSS), soft sleeper (SS), hard sleeper (HS), and ordinary seat (OS). The test period was the train operation time: 4.92 h for G13 and 15.2 h for T109. We arranged eight persons for the G13 group and eight for the T109 group. In each group, four persons used devices to collect the basic operation data; each of them had a vibration sensor, an air pressure sensor, a temperature sensor, a noise tester, a tape measure, and an automated passenger counter. The data for these objective indexes are presented in Table 6. The other four persons conducted the questionnaire survey. According to the number of passengers paying for different seats, different numbers of questionnaires were delivered: 50 in BS, 100 in PS, 100 in FCS, and 200 in SCS; and 50 in LSS, 100 in SS, 200 in HS, and 300 in OS. Finally, we obtained 423 valid questionnaires for G13 and 605 for T109. Due to space limitations, we present only the final data in Table 6, not the initial survey data. Only passengers traveling from Beijing to Shanghai were chosen as respondents. For each kind of seat, we present only the average values $\bar{h}_j^m$ in Table 7; the first row shows the average surveyed data $\bar{h}_j^m$ and the corresponding weights $a_j^m$ for the 16 subindexes.
Calculation results and discussion
Following the proposed RPCB calculation process, we obtained the calculation results shown in Table 8.
The business-class seat in G13 provides the most comfortable traveling conditions but has the most expensive fare. The RPCB for ordinary-speed train seats is generally lower than that for high-speed train seats. The RPCB of LSS is higher than that of SCS and FCS, because the services of the luxury soft sleeper (LSS) in T109 are better than those of SCS and FCS; passengers have a relatively independent personal space compared with the other seats and fewer disturbing factors during travel, so they need less time to recover from travel fatigue. The negative values for both the hard sleeper and the ordinary seat mean that there is no comfort for passengers, and passengers will need more time to recover from tiredness. Furthermore, the high-speed railway has an advantage in time saving. Figure 2a compares the RPCB and passenger ticket fare between the high-speed train and the ordinary-speed train; Fig. 2b shows the linear relationship between the RPCB and ticket fare, with correlation coefficient $R^2 = 0.9616$, which can be formulated as a linear function of the ticket fare $c$.
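For reproducibility, the sketch below shows how such a linear fit and its $R^2$ can be computed; the eight (fare, RPCB) pairs are hypothetical stand-ins for the Table 8 values, which are not reproduced in the extracted text, so the fitted coefficients will not match the paper's.

```python
import numpy as np

# Least-squares fit of RPCB against ticket fare, mirroring Fig. 2b.
# The data points below are hypothetical placeholders for Table 8.

fare = np.array([1748.0, 933.0, 553.0, 327.0, 730.0, 499.5, 322.5, 177.5])
rpcb = np.array([620.0, 310.0, 150.0, 60.0, 210.0, 90.0, -20.0, -35.0])

slope, intercept = np.polyfit(fare, rpcb, 1)   # least-squares line
pred = slope * fare + intercept
ss_res = np.sum((rpcb - pred) ** 2)
ss_tot = np.sum((rpcb - rpcb.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                       # coefficient of determination
print(f"RPCB = {slope:.3f} * fare + {intercept:.1f}, R^2 = {r2:.4f}")
```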
Conclusion and further work
By considering the passenger ticket fare, a quantitative calculation approach for the railway passenger travel comfort benefit was formulated. After choosing the objective and subjective indexes, we used measurement devices to collect objective-index data from two types of railway trains: the high-speed train G13 from Beijing South to Shanghai Hongqiao and the ordinary-speed train T109 from Beijing to Shanghai. Questionnaires were also delivered to obtain the initial data for the subjective indexes. We then processed the initial data in three steps and analyzed the results, finding a linear relationship between the RPCB and ticket fare. The presented RPCB calculation process can be used to calculate the economic benefit of scale by multiplying by the total number of passengers. Furthermore, the passenger comfort benefit calculation process is suitable for other transport modes, although the indexes may differ, so exploring other applications of the calculation process will be meaningful and necessary future work.
"year": 2018,
"sha1": "13605298b5a7243c5d0570d6efcde4f4500ce24b",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40534-018-0157-y.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "2a7b69ab31b18e30dc3e2ba2bc30cb16d4f97a95",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
A Review of Japanese Greenhouse Cucumber Research from the Perspective of Yield Components
Japanese cucumbers are unique in terms of their production methods, such as steamed cultivation, in which the greenhouse is closed to increase the temperature and humidity. Moreover, Japan has strict standards for fruit size. As a result, most research on greenhouse cucumbers in Japan has focused on pests, diseases, and fruit quality, and few studies have focused on increasing yields. We therefore aimed to contribute to a yield improvement in Japanese greenhouse cucumber production by considering environmental factors and training methods based on the yield components. Here, we discuss different training systems, pinching and lowering methods, and the effects of environmental factors such as temperature, humidity, CO2 concentration, irrigation, and nutrition on yield and yield components. This paper also proposes future areas of research for Japanese greenhouse cucumbers.
Introduction
Cucumber (Cucumis sativus L.) is an important vegetable in Japan and has a long cultivation history. However, the planted area and yield have fallen to 10,300 ha and 548,100 t, respectively (MAFF, 2020 <https://www.maff.go.jp/j/tokei/kouhyou/sakumotu/sakkyou_yasai/>). This represents a 23% decrease in planted area (13,400 ha) and a 19% decrease in yield (674,600 t) compared with 15 years ago. Furthermore, the productivity of Japanese cucumbers is low, at 3.4 kg·m⁻² and 10.7 kg·m⁻² for summer-autumn and winter-spring cultivation, respectively (MAFF, 2020 <https://www.maff.go.jp/j/tokei/kouhyou/sakumotu/sakkyou_yasai/>). This differs greatly from the productivity of 72.8 kg·m⁻² in the Netherlands, an advanced facility-growing country (FAOSTAT, 2018). This may be due to the slow spread of high-productivity technologies such as soilless nutrient cultivation and environmental control technology, as well as a lack of progress in developing suitable high-yield varieties. However, the most important problem may be the unique Japanese cultivation system known as "steamed cultivation", a characteristic cultivation method used in Japanese greenhouse cucumber production. Steamed cultivation was referred to as a humid culture condition in a previous study (Hirama et al., 2011); greenhouses are closed to increase the temperature and humidity (30°C and above 70% relative humidity) before noon to promote the development of lateral branches and fruit development. However, this cultivation method promotes plant diseases and may increase the workload because of the difficult work environment. Furthermore, there are various training methods, such as pinching, lowering, and renewed lowering cultivation, which require a lot of time for trimming, training, harvesting, and adjustment. Moreover, Japanese cucumbers must meet strict standards for fruit size: young cucumbers 22 cm long and weighing 100 g are preferred. Against this background, past research on greenhouse cucumbers in Japan has mainly been conducted to control diseases such as downy mildew and powdery mildew, maintain fruit quality (Sakata et al., 2006; Yoshioka et al., 2014), and avoid bent fruit, bottle-gourd-shaped fruit, and poor texture and gloss (Kanahama and Saito, 1989; Hirama et al., 2007; Sakata et al., 2011). In contrast, there is little information aimed at productivity improvements, such as increasing yield.
Therefore, the purpose of this study was to reconsider previous studies on the environmental control and training methods used in greenhouse cucumber production in Japan from the perspective of yield components. We also aimed to acquire knowledge to support future yield improvement in greenhouse cucumbers by referring to greenhouse production in the Netherlands, which has been actively working to improve productivity.
Yield components
In recent years, in major cucumber production areas in Japan, there have been cases where 40 kg·m⁻² per year has been achieved by improving environmental control and cultivation techniques. To achieve higher yields in the future, it is necessary to conduct quantitative analysis from the perspective of dry matter production. Higashide and Heuvelink (2009) evaluated the differences in tomato productivity by analyzing the yield components, as shown in Figure 1. The yield components have a hierarchical structure, with the elements in the lower layer determining the elements in the upper layer; for example, the fruit yield fresh weight can be expressed as the fruit yield dry weight divided by the fruit dry matter content. In this way, it is possible to clarify the factors contributing to differences in yield.
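A worked example of this decomposition may make the hierarchy concrete. The sketch below chains intercepted light, light use efficiency, fruit partitioning, and fruit dry matter content into a fresh yield; all numbers are hypothetical and chosen only to land near the 40 kg·m⁻² figure mentioned above.

```python
# Worked example of the yield-component decomposition in Figure 1,
# following Higashide and Heuvelink (2009): fresh fruit yield equals
# total dry matter times the fraction partitioned to fruit, divided by
# the fruit dry matter content. All numbers below are hypothetical.

intercepted_light = 1000.0   # MJ/m^2 of intercepted light over the season
lue = 2.5                    # g dry matter per MJ (light use efficiency)
fruit_fraction = 0.6         # fraction of dry matter partitioned to fruit
fruit_dmc = 0.035            # fruit dry matter content (g DW / g FW)

total_dry_matter = intercepted_light * lue            # g/m^2
fruit_dry_yield = total_dry_matter * fruit_fraction   # g/m^2
fruit_fresh_yield = fruit_dry_yield / fruit_dmc       # g/m^2

print(fruit_fresh_yield / 1000, "kg/m^2")  # ~42.9 kg/m^2 (illustrative)
```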
According to studies by Higashide et al. (2012a) and Iwasaki et al. (2014), the light extinction coefficient of cucumber, as defined by Monsi and Saeki (1953), ranged from 0.8 to 1.69, while that of tomato was in the range of 0.6 to 1.0 (Higashide and Heuvelink, 2009; Higashide et al., 2012b), so cucumber has a higher light extinction coefficient than tomato. This indicates that the amount of intercepted light varies even under the same leaf area index (LAI) management.
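To illustrate, the Monsi-Saeki relation gives the intercepted fraction of incoming light as $1 - e^{-k \cdot \mathrm{LAI}}$; the short sketch below compares the extinction coefficients cited above at the same LAI (the LAI value of 3.0 is an arbitrary illustration).

```python
import numpy as np

# Monsi-Saeki light extinction law: the fraction of incoming light
# intercepted by a canopy is 1 - exp(-k * LAI). With the coefficients
# cited above, cucumber and tomato intercept different amounts of
# light at the same LAI.

def fraction_intercepted(k, lai):
    return 1.0 - np.exp(-k * lai)

lai = 3.0  # illustrative leaf area index
for crop, k in [("cucumber, k=0.8", 0.8),
                ("cucumber, k=1.69", 1.69),
                ("tomato, k=0.6", 0.6)]:
    print(crop, f"{fraction_intercepted(k, lai):.2f}")
```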
The fruit distribution rate of cucumber varies throughout the growing season owing to different female flowering rates and the occurrence of fruit abortion in different cultivars. Indeed, Heuvelink and Marcelis (1989) and Marcelis (1992, 1993a, b, 1994) developed models to predict variable fruit distribution. This suggests that the number of fruit set and the fruit distribution rate interact with each other. Data on which yield components are affected by cultivar and cultivation system, in cucumbers as well as in other crops, can be used effectively for further technological development.
Environmental factors
In Japanese cucumber production, the relationship between environmental control and yield is not yet clear. Research has been conducted on the relationship between environmental factors and the occurrence of lateral branches, but very little quantitative analysis has been carried out, making it difficult to detect which yield component has affected the yield. In other words, it is difficult to evaluate the factors that result in a high yield because LAI and the number of nodes vary with different plant densities and trimming methods. Here, we discuss the results of research on each environmental factor.
Temperature
Temperature is closely related to the growth rate, having a strong influence on the rate of increase in the number of nodes and the rate of fruit growth (Marcelis, 1993c; Marcelis and Baan, 1993). This indicates that temperature affects intercepted light through increased LAI in the yield components.
The development rate is related to the day and night temperatures, and therefore to the 24-h mean temperature, as observed for cucumber (van Uffelen, 1989). The high temperature and humidity during steamed cultivation in the morning are not suitable for workers. Kawashiro et al. (2010) found that the workload was greatly reduced by maintaining a target temperature of 25°C in a greenhouse for 2 h from 9:30 to 11:30 a.m., which is the harvest time, and then 33°C for 2 h until 13:30. In that study, the same yield (16.8 kg·m⁻²) and quality were obtained as in steamed cultivation by maintaining the temperature at the same overall level; the maximum temperature was 28°C, and harvesting in the greenhouse could be made more comfortable. Ehara et al. (2017) investigated the effect of a higher afternoon temperature followed by a quick temperature drop in the early evening on the growth of cucumbers. They found that this treatment reduced the days required for harvesting (from 13.6 to 12.8 days and from 13.0 to 12.0 days for fruits that flowered on December 2 and February 2, respectively) and increased the average fruit weight (from 89.7 g to 93.6 g) owing to an increased fruit surface temperature, which promoted fruit growth. The treatment suppressed leaf growth and lateral branching and reduced the number of harvested fruits in the low-sunshine period; therefore, no difference in total yield was observed. This result suggests that dry matter production depends mostly on the average daily temperature. At different temperatures, Marcelis (1994) estimated that the whole-plant potential vegetative growth rate of cucumber increased from 7.8 to 9.3 g·d⁻¹ when the temperature increased from 18 to 24°C. Temperature greatly influences the leaf appearance rate (Baker and Reddy, 2001; Marcelis, 1993c). Therefore, methods that increase the average temperature are useful for securing LAI and increasing the amount of integrated light by accelerating leaf development. Similarly, high-temperature management to increase the number of nodes (and thereby the number of fruits) could also be considered.
However, in Japan, the temperature is controlled below the optimum to reduce energy costs. This low-temperature management may reduce plant vigor and make it impossible to secure sufficient LAI in spring, when solar radiation improves. Furthermore, growth characteristics at low temperatures have not yet been modeled. Therefore, it is necessary to investigate the effect of critical temperatures on dry matter production and to manage temperature appropriately while balancing cost-effectiveness.
Humidity
Humidity levels generally affect diseases and pests; high humidity promotes downy mildew and brown spot, and low humidity promotes powdery mildew (Sun et al., 2017; Whipps and Budge, 2000). On the other hand, extremely low humidity (excessive transpiration) leads to a decrease in photosynthetic rate and leaf area. There is no literature indicating that increased humidity directly promotes photosynthesis and improves yields. Nevertheless, at actual production sites, there have been several cases where suitable humidity control affected stomatal conductance and improved yields by promoting photosynthesis.
Very few studies have investigated the relationship between humidity and dry matter production in greenhouse cucumber production. Bakker (1991) observed no clear effects of vapor pressure deficit (VPD), varied from 0.3 to 0.9 kPa, on fruit dry matter content in tomato and cucumber. However, the VPD in Japan is expected to exceed this range because of the higher temperatures. Sakiyama et al. (2002) reported the effect of humidity conditions (RH 40, 60, and 80%; 1.1 to 3.4 kPa) at 35°C on young plants: the relative growth rates of dry matter production and leaf area were higher at RH 60 and 80% than at 40%, which suggests that an increase in LAI contributed to the increase in the amount of integrated light and to dry matter growth. Therefore, it is desirable to maintain the relative humidity at 60% or more at higher temperatures. On the other hand, Hirama et al. (2002) reported that the number of harvested fruits increased (from 43.0 to 50.2 per plant) owing to the enhancement of second-order and subsequent lateral branching at 25°C and 40% RH compared with 30°C and 60% RH, suggesting that steamed cultivation (high humidity and temperature) may not be the optimal cultivation method in terms of environmental conditions. Based on these results, it is necessary to analyze the effect of humidity on dry matter production in cucumbers.
CO2
CO2 enrichment is commonly used in greenhouse cucumber production, and its effectiveness is known to be superior to that for tomatoes and bell peppers (Nederhoff, 1994). Nederhoff (1994) found that, in the 200 to 1,100 ppm CO2 concentration range, cucumbers had an increased fruit distribution rate, and light use efficiency (LUE) also increased by 10 to 15%. Kawashiro et al. (2009) reported that CO2 application at 500 ppm (7-h application) or 1,000 ppm (3-h application) increased fruit yield by 39 to 55% compared with non-application; applying 500 ppm CO2 for 7 h was more effective at increasing cucumber yield than 1,000 ppm for 3 h. In this research, a higher LAI, net assimilation rate, and relative growth rate were observed, suggesting that the synergistic effect of the increase in intercepted light caused by the increased LAI and the increase in LUE due to CO2 supply induced higher total dry matter (TDM) and yield. In addition, Iwasaki et al. (2014) analyzed the yield increase with CO2 supply (800 ppm) and humidity control (RH 70-80% when the temperature was above 25°C) based on the yield components in three different cultivars. The increase in LUE (11-19%) was due to an increased photosynthetic rate, and the improved fruit distribution was due to a reduction in fallen and aborted fruit.
These results suggest that CO2 supply is necessary to increase yield in cucumber production. However, in many Japanese greenhouse cucumber production scenarios, it is not cost-effective to supply high CO2 on cloudy days or before the plant canopy has formed. Therefore, to achieve high productivity in cucumbers, it is important to create an optimal environment for dry matter production through an appropriate supply of CO2 along with ensuring a suitable amount of light interception.
Irrigation & Nutrition
Nutrient and water management are closely, if indirectly, related to dry matter production. In soil cultivation, even if the same nutrient and water management is applied with similar control of environmental factors such as temperature, humidity, and CO2, growth will differ because of differences in the physical, chemical, and biological properties of the soil, which makes it difficult to conduct research while maintaining the same water content under different soil conditions. Moreover, growers are reluctant to irrigate actively because of concerns about diseases caused by high humidity, and in many cases they do not irrigate properly. Insufficient irrigation not only reduces photosynthesis through stomatal closure but also reduces light interception by decreasing leaf area and lateral branch development. Although management with pF meters and soil moisture sensors using the time domain reflectometry or amplitude domain reflectometry method is increasing, there is no clear standard for soil conditions. Therefore, it will be important in the future to develop a model that can quantitatively determine the water management required by plants based on environmental and biological information.
On the other hand, the use of bloomless rootstocks results in a low silicon content, which makes plants more susceptible to diseases and pests (Hasama, 1992; Hasama et al., 1993). It is therefore necessary to determine a range of silicon fertilization that maintains the silicon content without causing bloom on the fruit, or to select varieties that do not produce bloom. In recent years there has been growing interest in the hydroponic cultivation of cucumbers, but most cultivars have been selected and bred under soil cultivation with an emphasis on tolerance to water deficit, nutrient stress, and diseases rather than on yield. It is therefore necessary to breed and evaluate cultivars suited to hydroponic cultivation in order to improve yield and productivity.
Training methods
In Japanese cucumber cultivation there are two major training methods, pinching and lowering; pinching is the most common because of the structural limitation of greenhouses with low eaves. In this method, the lateral branches developing from the axillary buds are repeatedly pinched to increase the number of lateral branches, which is important for increasing yield. However, pinching practice differs among producers and cultivars, which causes significant differences in plant vigor and LAI, leading to differences in yield. Lowering cultivation, on the other hand, is often used in employment-type production, such as in large greenhouses, because the training method is easy to learn. A high rate of female flower setting (a so-called predominantly female type) is required because the method involves setting fruits on the main stem for a long period. Isomura et al. (2001) selected cultivars of the predominantly female type with low lateral branch development, compared the harvest time of lowering cultivation with that of pinching cultivation, and found that the time required per fruit was shorter for lowering than for pinching (8.0 to 11.7 versus 8.8 to 12.3 seconds, respectively), so that lowering was less labor-intensive. In addition, Ota et al. (2005) stated that it is easier to maintain plant vigor in lowering cultivation under low temperature and low irradiance, and that the method allows greenhouse scale to be expanded through simple plant management; lowering cultivation has the disadvantage, however, that the total working time is about 10% greater than in pinching cultivation. By testing different training methods, Hirama et al. (2011) showed that a lowering method retaining some lateral branches, and a three-branch method without training, gave larger numbers of fruits and higher marketable yields and were less labor-intensive than the pinching method. From the viewpoint of yield components, the increase in the number of fruits due to the larger number of nodes, and the increase in intercepted light due to the higher LAI obtained by extending the lateral branches, explain the increase in yield. Higashide et al. (2012a) analyzed the effects of the lowering and pinching methods on dry matter production and yield in short-term cultivation in terms of yield components, and found that pinching gave a higher yield than lowering, which they attributed to a higher fraction of dry matter partitioned to fruit and a higher total dry matter. They also found relationships between total dry matter and light use efficiency over the whole cultivation period, and between intercepted light and total dry matter up to 40 days after planting. Furthermore, Iwasaki et al. (2014) investigated the difference in yield between pinching and lowering cultivation for Japanese and Dutch cultivars in terms of yield components. The Dutch cultivars yielded more than the Japanese cultivars because of a lower light extinction coefficient and a higher fruit distribution arising from differences in sex expression; in addition, yield was higher in pinching than in lowering cultivation owing to a lower light extinction coefficient and superior light use efficiency.
Recently, an increasing number of producers have adopted the "renewal lowering method", in which newly developed lateral branches are extended after fruit growth has been promoted on the trained branches. Although this method has been shown to be effective in improving fruit quality and regulating the harvest workload, it is still unclear whether it is advantageous in terms of dry matter production and yield potential. Further research into this cultivation system is therefore needed.
As described above, yield and workability in greenhouse cucumber differ among the various cultivation methods, and the most suitable method has not been determined. In terms of yield components, however, it is possible to clarify these differences in yield, and the pinching method is more efficient than the lowering method with respect to the light extinction coefficient and fruit distribution. These results must nevertheless be interpreted with the caveats that the plants were cultivated for a short period and that non-predominantly female cultivars were used.
In other words, the initial increase in LAI is directly related to yield in short-term cultivation; however, the planting densities used by Higashide et al. (2012a) were 1.48 plants·m−2 for the lowering method and 0.99 plants·m−2 for the pinching method, suggesting that the lowering method may not be suitable for short-term cultivation. In addition, Japanese cultivars have a low rate of female flowering on the main stem, and Iwasaki et al. (2014) reported that they are suited to pinching cultivation. In the future, it will therefore be necessary to use predominantly female cultivars and to examine training methods for long-term cultivation. In Japan, short-term cultivation with two crops per year is common, and for dry matter production it is important to ensure canopy formation, i.e., an increase in LAI in the early stages of cultivation; however, no reports have investigated changes in LAI over time. More detailed studies of training methods that track changes in LAI over time together with the yield components are therefore needed.
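Because the light extinction coefficient k and LAI recur in the comparisons above, it helps to recall how they determine light interception through the Beer-Lambert law. The sketch below (our illustration; the k values are hypothetical, not those measured in the cited studies) shows why a canopy with a lower k needs a larger LAI to intercept the same fraction of light:

```python
import math

def light_interception_fraction(k: float, lai: float) -> float:
    """Fraction of incident light intercepted by a canopy,
    following the Beer-Lambert law: f = 1 - exp(-k * LAI)."""
    return 1.0 - math.exp(-k * lai)

for k in (0.6, 0.8):
    for lai in (1.0, 2.0, 3.0):
        f = light_interception_fraction(k, lai)
        print(f"k = {k:.1f}, LAI = {lai:.0f}: intercepted fraction = {f:.2f}")
# A canopy with a lower k lets light penetrate deeper into the stand, which
# can raise light use efficiency even though interception per unit LAI drops.
```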
Future research
We have discussed the influence of environmental factors and cultivation methods in terms of yield components. Environmental control has so far been directed at enhancing the lateral branching of non-predominantly female plants. Furthermore, Dutch tomato cultivars have been improved to have a higher LUE (Higashide and Heuvelink, 2009). Similar breeding approaches can be expected to be effective for improving the productivity of Japanese cucumber, and it will be necessary to re-evaluate the responses of the various types of Japanese cucumber (Sakata et al., 2011) to environmental factors on the basis of dry matter production.
On the other hand, various simulation models have been developed to predict and analyze the many factors that affect growth and yield, and they are useful for quantitative analysis. Several such models predict yield and dry matter production in tomato (Dayan et al., 1993; Vanthoor et al., 2011; Lin et al., 2019), and efforts have been made to compare the potential yields obtained from these models with measured yields in order to analyze the yield gap (Van Ittersum et al., 2013).
However, yield models may require important parameters that are difficult to estimate in an actual greenhouse environment, such as the potential growth rate and the fruit distribution rate. Machine learning may therefore be useful for obtaining more accurate predictions. In fact, several machine-learning models have been developed to predict yield and dry matter production in tomato (Lin and Hill, 2008; Ehret et al., 2011; López-Aguilar et al., 2020), and a product that predicts disease (Plantect, Bosch, Japan) has also been developed. At the same time, it is extremely difficult to obtain accurate data for building such prediction models; in Japan in particular, the diversity of cultivation methods, greenhouse structures, monitoring sites, and environmental control instruments makes accurate data collection difficult. It is therefore necessary to develop a model for Japanese greenhouse cucumber production that achieves high prediction accuracy even when relatively little data are available, and to fine-tune its parameters for each greenhouse. Simulation models may also require plant information such as leaf area. Advances in sensing technology have made it easier to obtain physiological and morphological information; for example, real-time monitoring of the canopy photosynthetic rate has been developed (Shimomoto et al., 2020). It is therefore necessary to measure and analyze the causal relationships between growth and the environment in cucumber in more detail. Cucumber has a higher growth rate than other fruit vegetables, which suggests that estimating its dry matter production should be easier than for tomatoes and bell peppers. In addition, to obtain higher yields in the future, it will be essential not only to breed new cultivars but also to construct new cultivation systems supported by quantitative analyses of dry matter production. We conclude that consistent management of cultivation, labor, and distribution based on such evaluation is essential for the future of Japanese greenhouse cucumber production.
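As a concrete illustration of the machine-learning route discussed above, the following sketch fits a regression model to synthetic daily records (our toy example; the feature set, the response formula, and all numbers are invented for illustration, and a real application would use measured greenhouse data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical daily records: solar radiation (MJ/m2), mean temperature (C),
# mean VPD (kPa), and CO2 concentration (ppm).
X = np.column_stack([
    rng.uniform(5, 25, n), rng.uniform(15, 32, n),
    rng.uniform(0.3, 3.0, n), rng.uniform(400, 1000, n),
])
# Toy response: daily dry matter gain (g/m2), for illustration only.
y = (3.0 * X[:, 0] * (1 - np.exp(-X[:, 3] / 600))
     - 2.0 * np.abs(X[:, 1] - 25) - 1.5 * X[:, 2]
     + rng.normal(0, 2, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE on held-out days:", mean_absolute_error(y_te, model.predict(X_te)))
```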
Literature Cited
Bakker, J. C. 1991. Analysis of humidity effects on growth and production of glasshouse fruit vegetables. Dissertation, Agricultural University, Wageningen.
Baker, J. T. and V. R. Reddy. 2001. Temperature effects on phenological development and yield of muskmelon. Ann. Bot. 87: 605-613. | 2021-05-22T00:02:58.613Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "1a695b5aca55ccb1d5c8da7efc6d7c21c4e7d8fc",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/hortj/90/3/90_UTD-R017/_pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "27835f2787ef2d22202f74a9e84de875d9db2041",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
259837463 | pes2o/s2orc | v3-fos-license | QCD effective charges from low-energy neutrino structure functions
We present a new perspective on the study of the behavior of the strong coupling $\alpha_s(Q^2)$ -- the fundamental coupling underlying the interactions between quarks and gluons as described by Quantum Chromodynamics (QCD) -- in the low-energy infrared (IR) regime. We rely on the NNSF$\nu$ determination of neutrino-nucleus structure functions, valid for all values of $Q^2$ from the photoproduction to the high-energy region, to define an effective charge following the Gross-Llewellyn Smith (GLS) sum rule. As a validation, our predictions for the low-energy QCD effective charge are compared to experimental measurements provided by JLab.
Introduction. The study of (anti-)neutrino-nucleus interactions plays a crucial role in the interpretation of ongoing and future neutrino experiments, which will ultimately also help improve our general understanding of the strong interactions as described by Quantum Chromodynamics (or QCD in short). Different types of interactions occur depending on the neutrino energies E_ν probed. The one of particular relevance to QCD is inelastic neutrino scattering, which occurs at energies above the resonance region, for E_ν ≳ O(10) GeV and when the invariant mass of the final state satisfies W ≳ 2 GeV. In such a regime, inelastic neutrino scattering comprises nonperturbative and perturbative regimes, referred to as shallow-inelastic scattering (SIS) and deep-inelastic scattering (DIS), respectively.
The main observables of interest in neutrino inelastic scattering are the differential cross-sections, which are expressed directly as linear combinations of structure functions F_{i,A}^{ν/ν̄}(x, Q²), with x the Bjorken variable, Q² the momentum transfer, and A the atomic mass number of the proton/nuclear target. In the DIS regime, the neutrino structure functions are factorized as a convolution between the parton distribution functions (PDFs) and hard-partonic cross-sections. The latter are calculable to high order in perturbation theory, while the former have to be extracted from experimental data. On the other hand, in the SIS regime, in which nonperturbative effects dominate, theoretical predictions of neutrino structure functions do not admit a factorised expression in terms of PDFs. Various theoretical frameworks have been developed to model these low-Q² neutrino structure functions, e.g. [1], but all of them present limitations.
In [2] we presented the first determination of neutrino-nucleus structure functions and their associated uncertainties that is valid across the entire range of Q² relevant for neutrino phenomenology, dubbed NNSFν. The general strategy comes down to dividing the Q² range into three distinct but interconnected regions, corresponding respectively to low, intermediate, and large momentum transfers. At low momentum transfers, Q² ≲ Q²_dat, where nonperturbative effects occur, we parametrize the structure functions in terms of neural networks (NN) based on the information provided by experimental measurements, following the NNPDF approach [3]. In the intermediate region, Q²_dat ≲ Q² ≲ Q²_thr, the NN is fitted to the DIS predictions to ensure a smooth matching between the two regimes. Finally, at large momentum transfers, Q² ≳ Q²_thr, the NN predictions are replaced by the pure perturbative DIS computations.
Such a framework allows us to provide more reliable predictions of the low-energy neutrino structure functions; we refer the reader to [2] for more details. The NNSFν enables a robust, model-independent evaluation of inclusive inelastic neutrino cross-sections for energies from a few tens of GeV up to the several EeV relevant for astroparticle physics [4], and in particular fully covers the kinematics of present [5,6] and future [7,8] LHC neutrino experiments.
Aside from its relevance to neutrino physics, the NNSFν framework may also potentially be used as a tool to strengthen our understanding of the nonperturbative regions of QCD, owing to its predictions in the low-energy regime. It is commonly understood that studying the theory of the strong interactions in the infrared (IR) regime is necessary to understand both high-energy and hadronic phenomena, thereby also providing sensitivity to a variety of Beyond the Standard Model (BSM) scenarios. One aspect that deserves a closer look in studying long-range QCD dynamics is the behavior of the strong coupling α_s, owing to its special role as an expansion parameter for first-principles calculations. In the perturbative regime, the uncertainty in the value of α_s is known at the sub-percent level (∆α_s/α_s = 0.85% [9,10]). At low Q², however, its determination is subject to large uncertainties, mainly because of the lack of theoretical frameworks that can correctly accommodate the nonperturbative effects.
A number of approaches have been explored in the literature to study the coupling in the nonperturbative regime, including lattice QCD and the Anti-de Sitter/Conformal Field Theory (AdS/CFT) duality implemented using QCD's light-front quantization. In the following, we use Grunberg's effective charge approach, defined from the Gross-Llewellyn Smith (GLS) sum rule. In perturbative QCD, the effective charge can be calculated from the perturbative series of an observable (usually defined in terms of a sum rule) truncated at first order in α_s. The reason for such a truncation is that at leading order the observable is independent of the renormalization scheme (RS). One of the main advantages of the effective charge with respect to other approaches is that several experiments measure the effective coupling α_s^eff(Q²), to which the theoretical computations can be compared.
Here we first briefly review the Gross-Llewellyn Smith sum rule and verify that it is satisfied by the neutrino structure function predictions of the NNSFν determination. We then use the NNSFν framework to compute the effective charge defined from the sum rule and compare the results to experimental measurements extracted at JLab [11-14].
The Gross-Llewellyn Smith sum rule. The neutrino structure function xF₃^{νN} must satisfy the Gross-Llewellyn Smith (GLS) sum rule [15], whose unsubtracted dispersion relation has to be equal to the number of valence quarks inside the nucleon N. This dispersion relation can also be extended to neutrino-nucleus interactions, in which case the GLS sum rule takes the form of Eq. (1) below, where n_f is the number of active flavors at the scale Q². The terms inside the parentheses on the right-hand side represent the perturbative contribution to the leading-twist part, whose coefficients c_k have been computed up to O(α_s⁴). The ∆HT term instead represents the power-suppressed nonperturbative corrections; see [16] for a recent review. Notice that the form of the perturbative part in Eq. (1) is convenient because, as opposed to many observables in pQCD, it depends neither on x nor on the mass number A.
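The displayed equation (1) referred to in the paragraph above was lost in extraction. A standard form of the GLS sum rule consistent with that description (our reconstruction, with the conventional normalization; the precise form should be checked against the original) is

```latex
\mathrm{GLS}(Q^2) \equiv \frac{1}{2}\int_0^1 \mathrm{d}x\,
  \left[F_3^{\nu A}(x,Q^2)+F_3^{\bar{\nu} A}(x,Q^2)\right]
  = 3\left(1-\frac{\alpha_s}{\pi}
    -\sum_{k=2}^{4} c_k(n_f)\left(\frac{\alpha_s}{\pi}\right)^{k}\right)
  + \Delta_{\mathrm{HT}}(Q^2). \tag{1}
```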
The low-energy experimental data from which the NNSFν neutrino structure functions were determined do not provide measurements in the low-x region, and therefore the evaluation of Eq. (1) largely depends on the modeling of the small-x extrapolation region. In our predictions, the behavior at small x is inferred from the medium- and large-x regions via the preprocessing factor x^{1−α_i}, whose exponents are fitted to the data. In addition, because of the large uncertainties governing the small-x region, we have to truncate the integration at some value x_min. The truncated sum rule, given in Eq. (2) below for different values of the lower integration limit x_min and different nucleon/nuclear targets, should however converge to the pQCD prediction in the limit x_min → 0. In Fig. 1 we display the results of computing the truncated GLS sum rule of Eq. (2) using our NNSFν predictions. The results are shown for the lower integration limits x_min = 10⁻³, 10⁻⁴ and for the nuclei A = 1, 56. For reference, we compare the NNSFν calculations with the NLO fit to the CCFR data [17], the CERN-WA-047 measurements [18], and the exact pure QCD predictions. All the results except those of NNSFν are the same in all the panels, since they are independent of both x and A. In the case of the QCD predictions, the Q² dependence is entirely dictated by the running of the strong coupling α_s(Q²). The error bars on the NNSFν results represent the 68% confidence level intervals from the fit of N_rep = 200 replicas. Based on these comparisons, we conclude that there is in general good agreement between the different results. In particular, the NNSFν and pure QCD predictions are in perfect agreement when the lower integration limit is taken to be x_min = 10⁻³. Even more remarkably, the slope of the GLS sum rule, which in the QCD computation is purely dictated by the running of the strong coupling α_s(Q²), is correctly reproduced by the NNSFν predictions. The agreement in central values slightly worsens when the lower integration limit is lowered to x_min = 10⁻⁴; this deterioration can also be seen in the increase of the uncertainties. As alluded to earlier, such behavior is expected because NNSFν has no direct experimental constraints below x ≈ 10⁻³. These observations hold for the different nuclei considered.
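The truncated sum rule, Eq. (2), is then the same integral with the lower limit raised to x_min (again our reconstruction of the lost display):

```latex
\mathrm{GLS}(x_{\min},Q^2) = \frac{1}{2}\int_{x_{\min}}^{1} \mathrm{d}x\,
  \left[F_3^{\nu A}(x,Q^2)+F_3^{\bar{\nu} A}(x,Q^2)\right],
\qquad
\lim_{x_{\min}\to 0}\mathrm{GLS}(x_{\min},Q^2)=\mathrm{GLS}(Q^2). \tag{2}
```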
QCD effective charges. In order to fully understand the short- and long-range interactions, knowing the strong coupling α_s in the nonperturbative domain (or, equivalently, in the IR regime) is crucial. Further arguments can be put forth that knowing the IR behavior of α_s is necessary to fully understand the mechanism of dynamical chiral symmetry breaking [9,10]. However, studying the strong coupling in the IR domain is very challenging, since standard perturbation theory cannot be used. In the following section, we explore an attempt to extend the perturbative domain, using our NNSFν framework to provide predictions for the low-energy strong coupling α_s(Q²).
In the framework of perturbative QCD, the strong coupling, which at leading order can be approximated as α_s(Q²) = 4π/[β₀ ln(Q²/Λ²)], diverges at the Landau pole as Q² → Λ². Such divergence is not an inconsistency of perturbative computations per se, since the pole is located in a region far beyond the range of validity of perturbative QCD. Rather, its origin is the absence of nonperturbative terms in the series that cannot be captured by high-order perturbative approximations; that is, the Landau singularity cannot be cured by simply adding more terms to the perturbative expansion. The Landau pole is, however, unphysical (the value of Λ² being defined by the renormalization scheme), as supported by the fact that observables measured in the domain Q² < Λ² display no sign of discontinuity or unphysical behavior.
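The divergence of the leading-order formula quoted above is easy to exhibit numerically. In the short sketch below, the values Λ ≈ 0.3 GeV and n_f = 4 are illustrative choices of ours, not parameters used in the NNSFν analysis:

```python
import math

def alpha_s_lo(q2_gev2: float, lam2_gev2: float = 0.09, nf: int = 4) -> float:
    """Leading-order running coupling: alpha_s = 4*pi / (beta0 * ln(Q^2/Lambda^2)).
    Lambda^2 = 0.09 GeV^2 (Lambda ~ 0.3 GeV) is an illustrative value only."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * math.pi / (beta0 * math.log(q2_gev2 / lam2_gev2))

for q in (10.0, 2.0, 1.0, 0.5, 0.35):
    print(f"Q = {q:4.2f} GeV -> alpha_s = {alpha_s_lo(q**2):6.3f}")
# alpha_s grows without bound as Q^2 -> Lambda^2 (the Landau pole), and the
# formula is meaningless for Q^2 <= Lambda^2, where the logarithm changes sign.
```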
Several approaches have been explored to study the low-energy running of the coupling, each with its advantages, justifications, and caveats. A prominent one, Grunberg's effective charge approach, which we pursue here, provides a definition of the coupling that behaves as α_s^pQCD at large Q² but remains finite at small values of Q². Since the regime is extended down to small Q², the effective charge incorporates nonperturbative contributions that appear as higher twists. Such an effective charge is explicitly defined in terms of physical observables that can be computed in the perturbative QCD domain. An example that has been very well studied in the literature is the effective charge α_s^Bj(Q²) defined from the polarized Bjorken sum rule [19,20]. This observable has important advantages: it has a simple perturbative series, and it is a non-singlet quantity, implying that some ∆-resonance contributions cancel out.
In the following study, we use the effective charge α_s^GLS(Q²) defined from the GLS sum rule introduced above. Following Grunberg's scheme, the effective charge α_s^GLS(Q²) is defined from the leading-order truncation of Eq. (1), as written in Eq. (3) below. In the perturbative domain, Λ² ≪ Q², we expect the effective charges from the Bjorken and GLS sum rules to be equivalent, α_s^GLS(Q²) = α_s^Bj(Q²), up to O(α²_MS̄). In addition, at zero momentum transfer we expect α_s^GLS(Q² = 0) = α_s^Bj(Q² = 0) = π. The latter kinematic limit originates from the fact that cross-sections are finite quantities: when Q² → 0, x = Q²/(2Mν) → 0 as well, so the support of the integrand in Eq. (1) must also vanish. It is important to emphasize that Eq. (3) is directly related to the right-hand side of Eq. (1): both the short-distance effects (those within the parentheses of Eq. (1)) and the long-distance nonperturbative QCD interactions (represented by the ∆HT term) are incorporated into the expression of the effective coupling α_s^GLS(Q²). Based on these comparisons, we can infer that the NNSFν predictions and the JLab experimental measurements agree very well down to Q ∼ 0.5 GeV. As Q → 0, the effective coupling α_s^GLS/π measured at JLab converges to 1, as required by the kinematic limit, while our prediction converges to ∼0.6. This result might improve somewhat if the structure functions were forced to satisfy the sum rules during the fit and if more experimental measurements were available to constrain the small-x region. As before, decreasing the lower integration limit x_min induces a significant increase in the uncertainties.
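Concretely, inverting the leading-order truncation GLS = 3(1 − α_s^GLS/π) gives the definition referred to as Eq. (3), together with the kinematic limit discussed above (our reconstruction of the lost displays):

```latex
\alpha_s^{\mathrm{GLS}}(Q^2) \equiv \pi\left(1-\frac{\mathrm{GLS}(Q^2)}{3}\right),
\qquad
\alpha_s^{\mathrm{GLS}}(0)=\alpha_s^{\mathrm{Bj}}(0)=\pi, \tag{3}
```

the limit following from GLS(Q²) → 0 as Q² → 0.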
Conclusions and outlook. In the first part of this manuscript we reviewed a new framework, referred to as NNSFν, for the determination of neutrino-nucleus structure functions in the inelastic regime. In particular, we stressed its capability to provide predictions for low-energy neutrino interactions. As a verification of the methodology, we compared the GLS sum rule computed from these predictions with experimental measurements, finding very good agreement.
In the second part, we used the NNSFν determination as a tool to study the running of the coupling α_s(Q²), which encodes both the perturbative dynamics at large momentum transfers and the nonperturbative dynamics underlying color confinement at small momentum transfers. Standard perturbative computations yield erroneous results for the coupling at low Q², predicting a divergence at an unphysical pole. Owing to the lack of a theoretical formalism that correctly accounts for the nonperturbative effects, studying the strong coupling in the IR regime is a challenging task.
A prominent approach that resolves the ambiguity in defining the strong coupling in the nonperturbative regime is the use of effective charges, defined directly from an observable computable at leading order in perturbation theory. In our study we defined the effective charge α_s^GLS(Q) from the GLS sum rule, which at large momentum transfers reproduces the perturbative computations and at low momentum transfers is expected to converge to π. Our predictions are comparable to the experimental measurements, within uncertainties, down to Q ∼ 0.5 GeV. However, they do not fully satisfy the kinematic limit α_s(Q = 0)/π = 1 at zero momentum transfer. This convergence issue might be resolved by requiring the neutrino structure functions to satisfy the sum rules during the fit; further investigation is needed in that direction in order to fully understand the Q ∼ 0 behavior.
"year": 2023,
"sha1": "8ba10835f05861e0dc4b672e8accacbff2a06a99",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8ba10835f05861e0dc4b672e8accacbff2a06a99",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119304133 | pes2o/s2orc | v3-fos-license | Orlicz-Sobolev nematic elastomers
We extend the existence theorems in [Barchiesi, Henao \& Mora-Corral; ARMA 224], for models of nematic elastomers and magnetoelasticity, to a larger class in the scale of Orlicz spaces. These models consider both an elastic term where a polyconvex energy density is composed with an unknown state variable defined in the deformed configuration, and a functional corresponding to the nematic energy (or the exchange and magnetostatic energies in magnetoelasticity) where the energy density is integrated over the deformed configuration. In order to obtain the desired compactness and lower semicontinuity, we show that the regularity requirement that maps create no new surface can still be imposed when the gradients are in an Orlicz class with an integrability just above the space dimension minus one. We prove that the fine properties of orientation-preserving maps satisfying that regularity requirement (namely, being weakly 1-pseudomonotone, $\mathcal H^1$-continuous, a.e.\ differentiable, and a.e.\ locally invertible) are still valid in the Orlicz-Sobolev setting.
Introduction
Motivated by the modelling of nematic elastomers, Barchiesi & DeSimone [4] analyzed the minimization of functionals of the form
I(u, n) = ∫_Ω W_mec(Du(x), n(u(x))) dx + ∫_{u(Ω)} |Dn(y)|^2 dy,   (1.1)
where Ω ⊂ R^3, u ∈ W^{1,p}(Ω, R^3) for some p > 3, n ∈ H^1(u(Ω), S^2), and
W_mec(F, n) = W((α^{−1} n ⊗ n + √α (I − n ⊗ n)) F)   (1.2)
for a certain α > 0 and some polyconvex energy function W. Functionals with a similar structure appear also in models describing nematic mesogens within the Landau-de Gennes theory, and in magnetoelasticity and plasticity; see, e.g., [6,12,18,28,5]. The major difficulties are that I depends on the composition of the two unknowns and that the nematic director n is defined on the domain u(Ω), which is itself determined only as part of the solution of the variational problem. The analysis is based on the inverse function theorem for Sobolev maps due to Fonseca & Gangbo [18], which is valid for W^{1,p} maps from a domain in R^n to R^n when p > n. Using the results on the Sobolev regularity of the inverse obtained in [20,21,22,23], both the local invertibility theorem of Fonseca & Gangbo and the analysis of Barchiesi & DeSimone were generalized by Barchiesi, Henao & Mora-Corral [5] to a suitable class of maps in W^{1,p}(Ω, R^3) for all p > 2. The importance of relaxing the hypothesis on the integrability exponent p is that, on the one hand, it is related to the coercivity that the stored energy function W is assumed to possess and, on the other hand, the analysis should ideally depend as little as possible on the behaviour of W at infinity (for physical reasons). Here the less restrictive condition (1.3)-(1.4) below (satisfied, e.g., by A(t) := t^2 log^α t for any α > 1) is shown to be also sufficient to establish the existence of minimizers for functionals like I(u, n) in (1.1). In the paper [26], the authors investigated the minimal analytic assumptions on a map u : Ω → R^n that guarantee continuity, differentiability a.e. and the Lusin (N) condition. As far as the condition (N) is concerned, the n-absolute continuity introduced by Malý in [30] plays an important role. It turns out that this condition is satisfied by a function u ∈ W^{1,1}(Ω) whenever its weak partial derivatives are in the Lorentz space L^{n,1}(Ω). In particular, they characterize the space L^{n,1} in terms of an Orlicz integrability condition. This condition is exactly the one stated in [9]; see Theorem 2.6. We will prove this condition on manifolds of dimension n − 1.
Our result, on the one hand, enlarges the class of maps in which the minimization problem can be set. On the other hand, it sheds new light on results on invertibility of maps and interpenetration of matter. In fact, we can consider the class of Orlicz-Sobolev maps and define accordingly the notion of zero surface energy (E(u) = 0, see Definition 2.15). This, in turn, when imposed together with the positivity of the Jacobian determinant, is equivalent to the requirement that Det Du = det Du (where Det Du denotes the distributional determinant, see Definition 2.14) and that u preserves orientation in the topological sense.
Theorem 1.1. Let A be a Young function satisfying (1.4) and suppose that u ∈ W^{1,A}(Ω, R^n) satisfies det Du ∈ L^1_loc(Ω). Then the following are equivalent:
• E(u) = 0 and det Du > 0 a.e.;
• (adj Du) u ∈ L^1_loc(Ω, R^n), det Du(x) ≠ 0 for a.e. x ∈ Ω, det Du = Det Du, and deg(u, B(x, r), ·) ≥ 0 for every x ∈ Ω and a.e. r ∈ (0, dist(x, ∂Ω)).
This article explains the new ideas and the results in the literature of Orlicz-Sobolev spaces that are required to generalize the analysis of [5] (full detail of the proofs is not given, since that would render the article unnecessarily long, given the technical difficulties). Section 2 is for notation and preliminaries. Section 3 proves that weakly monotone maps having the integrability (1.3)-(1.4) are continuous at every point outside an H^1-null set (in the classical sense, not only in the sense of quasi-continuity). The functional class of orientation-preserving Orlicz-Sobolev maps creating no surface, proposed for the modelling of nematic elastomers, is defined and studied in Section 4. Concretely, maps in this class are proved to be 1-pseudomonotone [19]; to have a precise representative that satisfies Lusin's condition and is H^1-continuous and a.e. differentiable; to be, in a certain sense, open and proper; and to be locally invertible around almost every point, the local inverses and their minors being Sobolev and sequentially weakly continuous. The main existence theorem, for functionals, such as (1.1), defined both in the reference and in the deformed configuration, is stated finally in Section 5.
2 Notation and preliminaries
2.1 General notation
We will work in dimension n ≥ 3, and Ω is a bounded open set of R^n. Vector-valued and matrix-valued quantities will be written in boldface. Coordinates in the reference configuration will be denoted by x, and in the deformed configuration by y.
The characteristic function of a set A is denoted by χ A . Given two sets U, V of R n , we will write U ⊂⊂ V if U is bounded andŪ ⊂ V . The open ball of radius r > 0 centred at x ∈ R n is denoted by B(x, r); unless otherwise stated, a ball is understood to be open. The (n − 1)-dimensional sphere in R n centred at x 0 , with radius r, is denoted by S(x 0 , r) or S r (x 0 ).
Given a square matrix A ∈ R n×n , the adjugate matrix adj A satisfies (det A)I = A adj A, where I denotes the identity matrix. The transpose of adj A is the cofactor cof A. If A is invertible, its inverse is denoted by A −1 . The inner (dot) product of vectors and of matrices will be denoted by ·. The Euclidean norm of a vector x is denoted by |x|, and the associated matrix norm is also denoted by |·|. Given a, b ∈ R n , the tensor product a ⊗ b is the n × n matrix whose component (i, j) is a i b j . The set R n×n + denotes the subset of matrices in R n×n with positive determinant.
The Lebesgue measure in R^n is denoted by L^n, and the (n − 1)-dimensional Hausdorff measure by H^{n−1}. The abbreviation a.e. stands for almost everywhere or almost every; unless otherwise stated, it refers to the Lebesgue measure L^n. For 1 ≤ p ≤ ∞, the Lebesgue L^p, Sobolev W^{1,p} and bounded variation BV spaces are defined in the usual way. So are the functions of class C^k, for k a positive integer or infinity, and their versions C^k_c of compact support. The set of (positive or vector-valued) Radon measures is denoted by M. The conjugate exponent of p is written p′. We do not identify functions that coincide a.e.; moreover, an L^p or W^{1,p} function may eventually be defined only at a.e. point of its domain. We will indicate the domain and target space, as in, for example, L^p(Ω, R^n), except if the target space is R, in which case we will simply write L^p(Ω). Given S ⊂ R^n, the space L^p(Ω, S) denotes the set of u ∈ L^p(Ω, R^n) such that u(x) ∈ S for a.e. x ∈ Ω. The space W^{1,p}_loc(Ω) is the set of functions u defined in Ω such that u|_A ∈ W^{1,p}(A) for any open A ⊂⊂ Ω; we will analogously use the subscript loc for other function spaces. Weak convergence (typically, in L^p or W^{1,p}) is indicated by ⇀, while ⇀* is the symbol for weak* convergence in M or in BV. The supremum norm in a set A (typically, a sphere) is indicated by ‖·‖_{∞,A}, while ⨍_A denotes the integral over A divided by the measure of A. The identity function in R^n is denoted by id. The support of a function is indicated by spt.
The distributional derivative of a Sobolev function u is written Du, which is defined a.e. If u is differentiable at x, its derivative is denoted by Du(x), while if u is differentiable everywhere, the derivative function is also denoted by Du. Other notions of differentiability, which carry different notations, are explained in Section 2.4 below.
If µ is a measure on a set U , and V is a µ-measurable subset of U , then the restriction of µ to V is denoted by µ V . The measure |µ| denotes the total variation of µ.
Given two sets A, B of R^n, we write A ⊂ B a.e. if L^n(A \ B) = 0, while A = B a.e. means A ⊂ B a.e. and B ⊂ A a.e. An analogous meaning is given to the expression H^{n−1}-a.e. With △ we denote the symmetric difference of sets: A △ B := (A \ B) ∪ (B \ A). In the proofs of convergence we will continuously pass to subsequences, which will not be relabelled.
Orlicz-Sobolev spaces
We follow the presentation in [7] and refer the reader to [27,37,38] for a comprehensive treatment. A Young function A : [0, ∞) → [0, ∞] is a convex, non-decreasing function with A(0) = 0. A Young function A is said to satisfy the ∆_2-condition near infinity if it is finite-valued and there exist constants C > 2 and t_0 > 0 such that A(2t) ≤ C A(t) for every t ≥ t_0. The Young conjugate Ã of A is defined by Ã(s) := sup{st − A(t) : t ≥ 0} for s ≥ 0. It is known that the Young conjugate of Ã is A itself. An N-function A is a convex function from [0, ∞) into [0, ∞) which vanishes only at 0 and such that lim_{s→0+} A(s)/s = 0 and lim_{s→∞} A(s)/s = ∞. Let E be a measurable subset of R^n. The Orlicz space L^A(E) built upon a Young function A is the Banach function space of those real-valued measurable functions u on E for which the Luxemburg norm
‖u‖_{L^A(E)} := inf{λ > 0 : ∫_E A(|u(x)|/λ) dx ≤ 1}
is finite. If A satisfies the ∆_2-condition near infinity (and E has finite measure), then L^A(E) coincides with the set of measurable functions u on E such that ∫_E A(|u(x)|) dx < ∞. Given an open set Ω ⊂ R^n and a Young function A, the Orlicz-Sobolev space W^{1,A}(Ω) is defined as
W^{1,A}(Ω) := {u ∈ L^A(Ω) : u is weakly differentiable and |Du| ∈ L^A(Ω)}.
The space W^{1,A}(Ω), equipped with the norm
‖u‖_{W^{1,A}(Ω)} := ‖u‖_{L^A(Ω)} + ‖Du‖_{L^A(Ω)}
for u ∈ W^{1,A}(Ω), is a Banach space.
Lorentz spaces
Given a measure space (X, µ) and 1 ≤ q < p < ∞, the distribution function µ_u of a measurable function u on X is defined by µ_u(t) := µ({x ∈ X : |u(x)| > t}) for t ≥ 0. The nonincreasing rearrangement u* of u is defined by u*(s) := inf{t ≥ 0 : µ_u(t) ≤ s} for s ≥ 0. The Lorentz space L^{p,q}(X) is defined as the class of all measurable functions u on X for which the norm
‖u‖_{L^{p,q}(X)} := ( ∫_0^∞ (s^{1/p} u*(s))^q ds/s )^{1/q}
is finite. For more on Lorentz spaces see, e.g., [39].
Approximate differentiability and geometric image
The density D(E, x) of a measurable set E ⊂ R^n at a point x ∈ R^n is defined as
D(E, x) := lim_{r→0+} L^n(E ∩ B(x, r)) / L^n(B(x, r)),
whenever the limit exists.
Definition 2.1. Let u : Ω → R^n be measurable and let x_0 ∈ Ω. a) We say that y_0 ∈ R^n is the approximate limit of u at x_0 if D({x ∈ Ω : |u(x) − y_0| ≥ ε}, x_0) = 0 for every ε > 0. In this case, we write ap lim_{x→x_0} u(x) = y_0. We say that u is approximately continuous at x_0 if u is defined at x_0 and ap lim_{x→x_0} u(x) = u(x_0). b) We say that u is approximately differentiable at x_0 if u is approximately continuous at x_0 and there exists F ∈ R^{n×n} such that
ap lim_{x→x_0} |u(x) − u(x_0) − F(x − x_0)| / |x − x_0| = 0.
In this case, F is uniquely determined, called the approximate differential of u at x_0, and denoted by ∇u(x_0). c) We denote the set of approximate differentiability points of u by Ω_d or, when we want to emphasize the dependence on u, by Ω_{u,d}.
Given a measurable u : Ω → R n that is approximately differentiable a.e., for any E ⊂ R n and y ∈ R n , we denote by N E (y) the number of x ∈ Ω d ∩ E such that u(x) = y. We will use the following version of Federer's [16] area formula, the formulation of which is taken from [33, Prop. 2.6].
Proposition 2.2. Let u : Ω → R^n be measurable and approximately differentiable a.e. Then, for any measurable set E ⊂ Ω and any measurable function ϕ : R^n → R,
∫_E ϕ(u(x)) |det ∇u(x)| dx = ∫_{R^n} ϕ(y) N_E(y) dy,
whenever either integral exists. Moreover, given ψ : E → R measurable, the function ψ̄ : R^n → R defined by
ψ̄(y) := Σ_{x ∈ E ∩ Ω_d : u(x) = y} ψ(x)
is measurable and satisfies
∫_E ψ(x) |det ∇u(x)| dx = ∫_{R^n} ψ̄(y) dy,
whenever the integral on the left-hand side exists.
We recall the definition of a.e. invertibility. Definition 2.3. A map u : Ω → R^n is said to be one-to-one a.e. in a measurable set E ⊂ Ω if there exists an L^n-null set N ⊂ E such that u|_{E \ N} is injective.
Now we present the notion of the geometric image of a set (see [33,11,22]) in the context of Orlicz spaces.
Definition 2.4. Let u ∈ W^{1,A}(Ω, R^n) and suppose that det Du(x) ≠ 0 for a.e. x ∈ Ω. Define Ω_0 as the set of x ∈ Ω for which the following are satisfied: a) u is approximately differentiable at x and det ∇u(x) ≠ 0; and b) there exist w ∈ C^1(R^n, R^n) and a compact set K ⊂ Ω of density 1 at x such that u|_K = w|_K and ∇u|_K = Dw|_K.
In order to emphasise the dependence on u, the notation Ω u,0 will also be employed. For any measurable set E of Ω, we define the geometric image of E under u as u(E ∩ Ω 0 ), and denote it by im G (u, E).
The set Ω 0 is of full measure in Ω. Indeed, the Calderón-Zygmund theorem shows that property a) is satisfied a.e., while standard arguments, essentially due to Federer [16, Thms. 3.1.8 and 3.1.16] (see also [33,Prop. 2.4] and [11,Rk. 2.5]), show that property b) is also satisfied a.e. Note also that u is well defined at every x ∈ Ω 0 , because of Definition 2.1 b).
In this case, the linear map L|_{T_{x_0}S} : T_{x_0}S → R^n is uniquely determined; it is called the tangential approximate derivative of u at x_0 and is denoted by ∇u(x_0).
Growth at infinity, continuity and Lusin's condition
The focus of this paper is on functions A whose growth at infinity is at least such that
∫_1^∞ (t / A(t))^{1/(n−2)} dt < ∞.   (2.8)
The condition is satisfied, in particular, when A(t) = t^p for every p > n − 1 and when A(t) = t^{n−1} log^α t for every α > n − 2.
Orlicz spaces are intermediate between L p spaces. In particular, L n−1 contains L A for any A satisfying (2.8) (see [36] or [29]).
As pointed out in [7, Rmk. 3.2], condition (2.8) is enough to ensure that maps defined on (n − 1)-dimensional C^1 manifolds and having W^{1,A} regularity necessarily have a continuous representative, with gradients in the Lorentz space L^{n−1,1}.
Proposition 2.6. Let S ⊂ R^n be a C^1 differentiable manifold of dimension n − 1. If an N-function A satisfies (2.8) and the ∆_2-condition at infinity, then every u ∈ W^{1,A}(S) has a continuous representative and Du is of class L^{n−1,1}. Moreover, there exists a constant C, depending only on A, S, and n, bounding the corresponding norms.
Proof. Using local charts, S may be assumed, without loss of generality, to be a bounded open subset of R^{n−1}. The embedding into C(S) is proved in [9, Thm. 1b] under an assumption stated with m = n − 1; by [10, Lemma 2.3] applied to Ã and q = m′ (taking into account that the Young conjugate of Ã is A), that condition (2.9) is equivalent to (2.8). Define ϕ accordingly and note that ϕ is non-increasing because of (2.2). From [26, Cor. 2.4] it follows that Du is of class L^{n−1,1}.
The following convention will be used throughout the paper.
Convention 2.7. If u : Ω → R^n is measurable and u|_∂U ∈ W^{1,A}(∂U, R^n) for some C^1 open set U ⊂⊂ Ω and some N-function A satisfying (2.8) and the ∆_2-condition at infinity, then in expressions like u(∂U) or u|_∂U we shall be referring to the continuous representative of u|_∂U in W^{1,A}(∂U, R^n), which exists thanks to Proposition 2.6. Moreover, we will usually write u ∈ W^{1,A}(∂U, R^n) instead of u|_∂U ∈ W^{1,A}(∂U, R^n). The following condition, which will play an important role in the paper, is adopted in the formulation below.
where ν(x) denotes the outward unit normal to ∂U at x. Remark 2.9. a) By u(E) we refer to the image of E under the continuous representative of u|_∂U in W^{1,A}(∂U, R^n), due to Convention 2.7.
b) We are mostly interested in the facts that H^{n−1}(u(∂U)) < ∞ and that H^{n−1}(u(E)) = 0 for every H^{n−1}-null set E ⊂ ∂U. In particular, L^n(u(∂U)) = 0 and L^n(u(∂U ∩ Ω_0)) = 0, where Ω_0 is the set of Definition 2.4.
A class of good open sets
In the following definition, given a nonempty open set U ⊂⊂ Ω with a C^2 boundary, we call d : Ω → R the signed distance function to ∂U (positive inside U) and set U_t := {x ∈ U : d(x) > t} for small t > 0. Here Ω_0 is the set of Definition 2.4, ∇(u|_{∂U_t}) denotes the tangential derivative of u along ∂U_t, ν_t denotes the unit outward normal to U_t for each t ∈ (0, ε), and ν the unit outward normal to U.
The following result can be proved as in [33, Lemma 2.9]. It is a consequence of Fubini's theorem and the compact embedding of W^{1,A} into the space of continuous functions (see [9, Corollary 1], which is proved for strongly Lipschitz domains and can be used in our setting, via local charts, since the manifolds ∂U_t have no boundary).
Lemma 2.11. For a.e. t ∈ (0, ε) we have u|_{∂U_t} ∈ W^{1,A}(∂U_t, R^n) and, for a subsequence (depending on t), u_j → u uniformly on ∂U_t as j → ∞.
Degree for Orlicz-Sobolev maps
We assume that the reader has some familiarity with the topological degree for continuous functions (see, e.g., [13,17]). Let U be a bounded open set of R n and let φ : ∂U → R n be continuous. By Tietze's theorem, it admits a continuous extensionφ :Ū → R n . We define the degree deg(φ, U, ·) : R n \ φ(∂U ) → Z of φ on U as the degree deg(φ, U, ·) : R n \ φ(∂U ) → Z ofφ on U . This definition is consistent since the degree only depends on the boundary values (see, e.g., [13, Th. 3.1 (d6)]).
The following formula for the distributional derivative of the degree will be widely used (see, e.g., [33]).
Proposition 2.12. Let A be an N-function satisfying (2.8) and the ∆_2-condition at infinity. Let U ⊂ R^n be a C^1 open set. Suppose that u is the continuous representative of a function in W^{1,A}(∂U, R^n). Then, for all g ∈ C^1(R^n, R^n),
∫_{R^n} div g(y) deg(u, U, y) dy = ∫_{∂U} g(u(x)) · (cof Du(x)) ν(x) dH^{n−1}(x),
where ν is the unit outward normal to U.
Proof. As mentioned in [33, Prop. 2.1, Rmk. 2], for the formula to be valid is enough to know that u ∈ W 1,n−1 (∂U, R n ), that u has a continuous representative and that L n (u(∂U )) = 0. That W 1,A (∂U, R n ) ⊂ W 1,n−1 (∂U, R n ) follows from the fact that L A (∂U ) ⊂ L n−1 (∂U ). Functions in W 1,A (∂U, R n ) satisfy the remaining two conditions thanks again to Proposition 2.6 and Remark 2.9.(b).
The concept of topological image was introduced by Šverák [40] (see also [33]). Definition 2.13. Let A be an N-function satisfying (2.8) and let U ⊂⊂ R^n be a nonempty open set with a C^1 boundary. If u ∈ W^{1,A}(∂U, R^n), we define im_T(u, U), the topological image of U under u, as the set of y ∈ R^n \ u(∂U) such that deg(u, U, y) ≠ 0.
Due to the continuity of deg(u, U, y) with respect to y, the set im T (u, U ) is open and ∂ im T (u, U ) ⊂ u(∂U ). In addition, as deg(u, U, ·) is zero in the unbounded component of R n \ u(∂U ) (see, e.g., [13,Sect. 5.1]), it follows that im T (u, U ) is bounded.
Distributional determinant
We present the definition of the distributional determinant (see [2] or [32]). With ⟨·, ·⟩ we indicate the duality product between a distribution and a smooth function.
Definition 2.14. Let u ∈ W^{1,1}(Ω, R^n) satisfy (adj Du) u ∈ L^1_loc(Ω, R^n). The distributional determinant of u is the distribution Det Du defined as
⟨Det Du, φ⟩ := −(1/n) ∫_Ω (adj Du(x)) u(x) · Dφ(x) dx,  φ ∈ C^∞_c(Ω).
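As a sanity check of this definition, consider the smooth map u(x, y) = (x², y) in dimension n = 2 (a worked example of ours, not taken from the paper), for which the distributional and pointwise determinants coincide:

```latex
Du=\begin{pmatrix}2x&0\\0&1\end{pmatrix},\qquad
(\operatorname{adj}Du)\,u
  =\begin{pmatrix}1&0\\0&2x\end{pmatrix}\begin{pmatrix}x^2\\y\end{pmatrix}
  =\begin{pmatrix}x^2\\2xy\end{pmatrix},\qquad
\frac{1}{n}\operatorname{div}\big((\operatorname{adj}Du)\,u\big)
  =\frac{2x+2x}{2}=2x=\det Du.
```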
Surface energy
The following concepts were defined in [20]:
Definition 2.15. Let u : Ω → R^n be measurable and approximately differentiable a.e. Suppose that det ∇u ∈ L^1_loc(Ω) and cof ∇u ∈ L^1_loc(Ω, R^{n×n}). For f ∈ C^1_c(Ω × R^n, R^n), define E(u, f) as in (2.11) and set E(u) := sup{E(u, f) : f ∈ C^1_c(Ω × R^n, R^n), ‖f‖_∞ ≤ 1}. In equation (2.11), Df(x, y) denotes the derivative of f(·, y) evaluated at x, while div f(x, y) is the divergence of f(x, ·) evaluated at y.
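The displayed formula (2.11) was lost in extraction; in the notation of [20,21,22] the functional is expected to take the following form (our reconstruction, to be checked against those references):

```latex
\mathcal{E}(u,f):=\int_\Omega\big[\operatorname{cof}\nabla u(x)\cdot Df(x,u(x))
 +\det\nabla u(x)\,\operatorname{div}f(x,u(x))\big]\,\mathrm{d}x,
\quad f\in C^1_c(\Omega\times\mathbb{R}^n,\mathbb{R}^n). \tag{2.11}
```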
It was proved in [21,22] that if u is one-to-one a.e., det ∇u > 0 a.e. and E(u) < ∞, then E(u) can be expressed in terms of the H^{n−1} measures of two (n − 1)-rectifiable sets, Γ_V(u) and Γ_I(u), defined as follows: • A point y_0 belongs to Γ_V(u) if the approximate limit of u^{−1}(y) as y approaches y_0 from one side of Γ_V(u) lies in the interior of Ω, and either there are almost no points of im_G(u, Ω) on the other side of Γ_V(u) or the approximate limit of u^{−1}(y) coming from the other side lies on the boundary of Ω.
• A point y 0 belongs to Γ I (u) if the approximate limits of u −1 (y) coming from the two sides of Γ I (u) exist, are different, and both lie in the interior of Ω.
The motivation there was the modelling of fracture, a context in which Γ_V(u) ∪ Γ_I(u) corresponds to the surface created by the deformation, as seen in the deformed configuration. In that case E(u) gives the area of this created surface.
Weak monotonicity
The following definition of weak monotonicity was introduced by Manfredi [31] (see, e.g., [42] for earlier related definitions; the subscript + stands for positive part). The definition asks for a weak version of the minimum and maximum principles to be satisfied for every open Ω′ ⊂⊂ Ω. We shall work with maps for which those minimum and maximum principles are satisfied only for open sets in U_u; in particular, given any x in Ω, we will only be able to assume that they hold for a.e. r ∈ (0, dist(x, ∂Ω)) and not for every such radius. This possibility was taken into account in the notion of weak pseudomonotonicity of Hajłasz & Malý [19] (which, in fact, is more general than what we need: we will only consider the case K = 1).
Definition 2.17. A map u ∈ W^{1,1}(Ω) is said to be weakly K-pseudomonotone, K ≥ 1, if for every x ∈ Ω and a.e. 0 < r < dist(x, ∂Ω),
ess osc_{B(x,r)} u ≤ K ess osc_{S(x,r)} u,
where the oscillation on the left is essential with respect to the Lebesgue measure and the oscillation on the right is essential with respect to the (n − 1)-dimensional Hausdorff measure.
H^1-continuity of pseudomonotone Orlicz-Sobolev maps
In the paper [7] the authors develop continuity properties of weakly monotone Orlicz-Sobolev functions. In our analysis, we improve their estimate concerning the Hausdorff dimension of the set of points where the function is not continuous. Also, since in the following sections this estimate will be needed for maps whose restrictions to balls B(x, r) satisfy the weak minimum and maximum principles only for a.e. r (instead of for every r), we show that their arguments remain valid under this milder monotonicity condition. We take the chance for a slight generalization and obtain the oscillation estimates assuming only that the maps are pseudomonotone. Given a continuous, increasing function h : [0, ∞) → [0, ∞) such that h(0) = 0, the h-Hausdorff measure H^{h(·)}(E) of a set E ⊂ R^n is defined as
H^{h(·)}(E) := lim_{δ→0+} inf { Σ_{i=1}^∞ h(diam E_i) : E ⊂ ∪_{i=1}^∞ E_i, diam E_i < δ }.
where A_{n−1} is the Young function given by (3.2). In particular, L^n(E) = 0.
As a consequence, for all σ > 0 we can find an open set U ⊂ Ω such that E ⊂ U and ∫_U |f(x)| dx < σ, using the absolute continuity of the integral of |f|. Fix ε > 0 and let E_ε be defined accordingly. We will prove that H^{h(·)}(E_ε) = 0. By Vitali's covering theorem, for any δ > 0 there exist disjoint balls B(x_i, r_i) ⊂ U with r_i < δ such that the balls B(x_i, 5r_i) cover E_ε. Using that A_{n−1} is increasing and the definition of h(r), it is straightforward to show that h(5r) ≤ 5^n h(r) for all r > 0. We then proceed with the estimate of Σ_i h(5r_i), and the conclusion follows by letting δ → 0 and then σ → 0. Since H^{h(·)}(E_ε) = 0 for all ε > 0, we conclude that H^{h(·)}(E) = 0.
One part of the proof of [7, Thm. 3.1] consists in obtaining the estimate (3.14) below and the a.e. differentiability of Orlicz-Sobolev maps from the Gehring oscillation estimate (3.8) (stated in [7] as Eq. (4.15)). In order to make this connection more explicit we state it as a separate proposition: the estimate (3.14) bounds ess osc_{B_r} f whenever B_{2r} ⊂⊂ Ω; moreover, there exists a representative of f that is differentiable a.e.
Remark 3.4. As explained in [7, Rmk. 3.2], another way of seeing that weakly monotone maps with ∫ A(|∇f|) dx < ∞ for some A satisfying (2.9) are a.e. differentiable is by recalling that maps with this integrability have gradients in the Lorentz space L^{n−1,1} (thanks to [26]; see Proposition 2.6 above) and that weakly monotone maps with ∇f ∈ L^{n−1,1} were proved to be a.e. differentiable in [35, Thm. 1.2].
Proposition 3.5. Let u and E be as above and let x_0 ∈ E. Then there exists λ > 0 such that the oscillation bound below holds for a.e. t < dist(x_0, ∂Ω).
Proof. By [7, Prop. 4.3], A_{n−1} satisfies the ∆_2-condition at infinity. Hence the bound holds with some fixed positive t_0 and C′. Integrating over the interval [0, r], with h defined as in (3.2), the result follows by applying Lemma 3.1 to f(x) := A(|Du(x)|).
Remark 3.6. It follows from (3.5) that the corresponding bound holds for every Borel set E ⊂ R^n. This will allow us to define, in Section 4, a precise representative of u that is continuous outside an H^1-null set. This improves the result that u is H^{h(·)}-continuous with h(s) = s log^{−γ}(1/s), for all γ > n − 2 − α, in [7, Example 5.1(iii)]. More generally, neither Proposition 3.5 nor the H^1-continuity are a consequence of [7, Thm. 3.6]. Indeed, in order to obtain the H^1-continuity from [7, Thm. 3.6] we would need (3.21) to hold with h(s) = s and some continuous function σ satisfying tσ(t) → ∞, but it can be shown that for any such σ the integral in (3.21) is not convergent near 0.
Orientation-preserving functions creating no new surface
Our analysis is set up in the following functional class, for a given N -function A satisfying (2.8) and the ∆ 2 -condition at infinity. Definition 4.1. We define A as the set of u ∈ W 1,A (Ω, R n ) such that det Du ∈ L 1 loc (Ω), det Du > 0 a.e. and E(u) = 0.
Intuitively, the maps that satisfy det Du > 0 a.e. and E(u) = 0 are those for which ∂u(Ω) = u(∂Ω) (recall the interpretation of E(u) as the area of the surface created by u, mentioned after Definition 2.15). It can be seen, using the density of linear combinations of functions with separated variables, that E(u) = 0 if and only if
Div[(adj Du)(g ∘ u)] = ((div g) ∘ u) det Du for all g ∈ C^∞_c(R^n, R^n).
This is a regularity requirement: the identity is satisfied by C^2 maps u, thanks to Piola's identity. It is closely related to the well-known equation Det Du = det Du, satisfied by all W^{1,n} maps. In fact, for maps in W^{1,p} with p > n − 1 it was proved in [5, Corollary 4.7] that det Du > 0 a.e. and E(u) = 0 if and only if det Du(x) ≠ 0 for a.e. x ∈ Ω, Det Du = det Du, and deg(u, B, ·) ≥ 0 for every ball B belonging to U_u. The condition deg(u, B, ·) ≥ 0 for all B is known in topology to be the right way to express that u preserves orientation. Along these lines, it was proved in [22, Thm. 7.2] that without the regularity requirement E(u) = 0, the condition det Du > 0 a.e. is insufficient to ensure the preservation of orientation and the positivity of the Brouwer degree, even if Det Du = det Du.
Fine properties
Recall the notation N from Section 2.4.
Remark 4.3. The statement in [5, p. 773] that the conditions that Det Du = det Du and det Du > 0 a.e. are enough to ensure that the components of u are weakly monotone is incorrect. The construction in [22,Thm. 7.2] constitutes a counterexample. We were not able to determine whether the stronger condition that E(u) = 0 renders the conclusion true.
It is well known (see, e.g., [24, Ch. 2]) that weak monotonicity implies regularity properties. In particular, for W^{1,p}-maps with p > n − 1, a representative of u is continuous H^{n−p}-a.e. (if p ≤ n) and differentiable a.e. In our case, we get that u is continuous H^{h(·)}-a.e., where h is defined in (3.2). However, we will not deal with the representative normally used in the theory of monotone maps (see, e.g., [40,31,41,19,24]) but rather with the one defined in [33, Th. 7.4], which we explain in the following paragraphs: for every x_0 outside a certain exceptional set NC, the average ⨍_{B(x_0,r)} u(x) dx converges, as r ↓ 0, to some u*(x_0) ∈ R^n.
c) The map û, defined everywhere in Ω by means of u*, is such that û(x) = u(x) for every x ∈ Ω_0 and û(x) ∈ im_T(u, x) for every x ∈ Ω. Moreover, it is continuous at every point of Ω \ NC, differentiable a.e., and such that L^n(û(N)) = 0 for every N ⊂ Ω with L^n(N) = 0.
Proceeding as in Part (b) of the proof of [33, Thm. 7.4], it can be seen that ũ is continuous at every point of Ω \ NC (using (4.2) instead of [33, Lemma 7.3(i)]). One consequence of this continuity is that P is contained in NC and, hence, ũ(x) = û(x) for every x ∈ Ω.
That û satisfies Lusin's property can be proved as in [33, Th. 10.1] (with a slightly shorter proof, since Det Du = det Du).
That NC is an H^1-null set will be proved at the end. At this point, let us show how to obtain the a.e. differentiability of û under the assumption that L^n(NC) = 0. Let x_1 ∈ Ω \ NC be a Lebesgue point of A(|Du|) and let x_2 ∈ Ω \ NC satisfy B(x_1, 2(|x_2 − x_1| + ρ)) ⊂ Ω for some ρ > 0. Let A_{n−1} be the Young function given by (4.4). Using (3.14) (with radius |x_2 − x_1| + ρ) we find that the corresponding oscillation bound holds for every r ∈ (0, ρ) and a.e. z ∈ B(0, 1). Since û is continuous outside NC, letting ρ ↓ 0 we bound the resulting lim sup. From this point onwards the a.e. differentiability can be obtained exactly as in the proof of [5, Prop. 5.9]. We now show how to adapt Part (c) of the proof of [33, Thm. 7.4] in order to obtain that H^1(NC) = 0. Set E as in Lemma 3.1, where u_i denotes the i-th component of û. By (3.20) and Proposition 3.5, it suffices to show that NC ⊂ E. With this aim, observe that for every x_0 in NC there exists λ > 0 such that diam im_T(û, B(x_0, r)) > λ whenever B(x_0, r) ∈ U_û, because im_T(u, x) is contained in im_T(û, B(x_0, r)). By Definition 2.10 and Convention 2.7, the restriction û|_{∂B(x_0,r)} may be assumed to be continuous. Since im_T(û, B(x_0, r)) is a compact set whose boundary is contained in û(∂B(x_0, r)), there exist x_1 and x_2 on ∂B(x_0, r) such that |û(x_2) − û(x_1)| > λ. By Definition 2.10, almost every point of ∂B(x_0, r) belongs to Ω_0. Since û|_{∂B(x_0,r)} is continuous, without loss of generality we may assume that x_1 and x_2 belong to Ω_0. By Definitions 2.4 and 2.1, points in Ω_0 are points of approximate continuity of û. As a consequence, there exist measurable sets A_1, A_2 ⊂ B(x_0, r) of density 1/2 at x_1 and x_2, respectively, on which û stays close to û(x_1) and û(x_2). Since this is true for every r such that B(x_0, r) ∈ U_û, we conclude that x_0 ∈ E, completing the proof.
Openness and properness
We begin by noting that equality (4.1) implies an openness property for u: for every U ∈ U u , im T (u, U ) = im G (u, U ) a.e.
Local invertibility
Definition 4.8. Let u ∈ A. We denote by U^in_u the class of U ∈ U_u such that u is one-to-one a.e. in U (see Definition 2.3), and by U^{N,in}_u the class U^N_u ∩ U^in_u. Define Ω_in as the set of points around which u is locally a.e. invertible: x ∈ Ω_in if and only if there exists r > 0 such that u is one-to-one a.e. in B(x, r). It does not depend on the particular representative of u (as explained after Def. 4.4 in [5]).
The local invertibility theorem of Fonseca & Gangbo [18] for W 1,p maps with p > n was generalized, under the assumption E(u) = 0, to all p > n − 1. Here it is shown to hold also in the Orlicz-Sobolev case under the growth condition (2.8).
Proposition 4.9. For every u ∈ A the set Ω in is of full measure in Ω.
Proof. It can be proved that every x 0 ∈ Ω whereû is differentiable and det Dû(x 0 ) > 0 belongs to Ω in , with the same arguments as in [5,Proposition 4.5.(d)].
Equality (4.6) makes it possible to define the local inverse on an open domain. Definition 4.10. Let u ∈ A and U ∈ U^in_u. The inverse (u|_U)^{−1} : im_T(u, U) → R^n is defined a.e. as (u|_U)^{−1}(y) := x for each y ∈ im_G(u, U), where x ∈ U ∩ Ω_0 satisfies u(x) = y.
A careful inspection of the proofs shows that [23, Thm. 3.3] remains valid in the class A of Orlicz-Sobolev maps with positive Jacobian, zero surface energy and an integrability above W^{1,n−1}. (Use is made in [23] of the stronger invertibility condition INV of Müller & Spector; this condition holds for every U ∈ U_u^in thanks to (4.6).) Proposition 4.11. Let u ∈ A and U ∈ U_u^in. Then conclusions c1) and c2) hold. If, in addition, the sequence {det D(u_j|_B)^{-1}}_{j∈N} is equiintegrable in V, then the convergence in c1) holds in the weak topology of W^{1,1}(V, R^n), and the convergence in c2) holds in the weak topology of L^1(V).
Proof. Part a): Let U be a set in U_u^N and K a compact subset of im_T(u, U). By Proposition 4.7 there exists δ > 0 such that the required inclusion holds for a.e. t ∈ (0, δ). By the embedding of Proposition 2.6, the weak continuity of minors of [3, Thm. 4.11], and [22, Lemma 8.2], for a.e. such t there exists a subsequence for which (cof Du_j)ν_t ⇀ (cof Du)ν_t in L^1(∂U_t, R^n), where ν_t is the unit exterior normal to U_t. That K ⊂ im_T(u_j, U_t) ⊂ im_T(u_j, Ω) then follows by Lemma 2.11 and the homotopy invariance of the degree (as in [5, Lemma 3.6]).
Part b):
The same proof as in [5, Thm. 6.3(b)] remains valid. It is necessary to take into account that if a map is differentiable at a given point then the condition of regular approximate differentiability, used in [5], is automatically satisfied. Also, the proof uses [5, Prop. 2.6 and Lemma 2.24], which have to be replaced by Proposition 4.5 and Lemma 2.11 (their Orlicz counterparts). The other main conclusions in [5] are the lower semicontinuity for Div-quasiconvex integrals (under the constraint of incompressibility) of Proposition 7.6; the lower semicontinuity for the model for plasticity of [12, 18]; the existence of minimizers in Theorem 8.6 for the Landau-de Gennes model for nematic elastomers of [6]; and Theorem 8.9 for the magnetostriction model of [28], where minimizers (u, m) are sought. All of these results (not only the existence of minimizers for (1.1), stated in Theorem 5.1) can be proved under the milder coercivity condition (2.8) considered in this paper, using the results of Sections 3 and 4.
"year": 2018,
"sha1": "dd99b5458808ca4618623faad6c6d95fa990c06f",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.na.2019.04.012",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "dd99b5458808ca4618623faad6c6d95fa990c06f",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Semantic Measures for the Comparison of Units of Language, Concepts or Instances from Text and Knowledge Base Analysis
Semantic measures are widely used today to estimate the strength of the semantic relationship between elements of various types: units of language (e.g., words, sentences, documents), concepts or even instances semantically characterized (e.g., diseases, genes, geographical locations). Semantic measures play an important role to compare such elements according to semantic proxies: texts and knowledge representations, which support their meaning or describe their nature. Semantic measures are therefore essential for designing intelligent agents which will for example take advantage of semantic analysis to mimic human ability to compare abstract or concrete objects. This paper proposes a comprehensive survey of the broad notion of semantic measure for the comparison of units of language, concepts or instances based on semantic proxy analyses. Semantic measures generalize the well-known notions of semantic similarity, semantic relatedness and semantic distance, which have been extensively studied by various communities over the last decades (e.g., Cognitive Sciences, Linguistics, and Artificial Intelligence to mention a few).
Introduction
Semantic measures (SMs) are widely used today to estimate the strength of the semantic relationship between elements such as units of language, concepts or even semantically characterized instances, according to information formally or implicitly supporting their meaning or describing their nature. They are based on the analysis of semantic proxies from which semantic evidences can be extracted. These evidences are expected to directly or indirectly characterize the meaning/nature of the compared elements. The semantic likeness of terms or concepts is sometimes better understood as the probability of a mental activation of one term/concept when another term/concept is discussed. Notice that the notion of SM is not framed in the rigorous mathematical definition of measure. It should instead be understood as any theoretical tool or function which enables the comparison of elements according to semantic evidences. SMs are therefore used to estimate the degree of the semantic relatedness i of elements through a numerical value.
Two broad types of semantic proxies can be used to extract semantic evidences. The first type corresponds to unstructured or semi-structured texts ii (e.g., plain texts, dictionaries). These texts contain informal evidences of the semantic relationship(s) between units of language. Intuitively, the more two words are related semantically, the more frequently they will co-occur in texts. For instance, the word coffee is more likely to co-occur with the word sugar than with the word cat, and, since it's common to drink coffee with sugar, most will agree that the pair of words coffee/sugar is more semantically coherent than the pair of words coffee/cat. It is therefore possible to use simple assumptions regarding the distribution of words to estimate the strength of the semantic relationship between two words based on the assumption that words semantically related tend to co-occur.
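This distributional intuition can be sketched in a few lines of code. The toy corpus, the use of each sentence as a co-occurrence window, and pointwise mutual information (PMI) as the relatedness score are all illustrative assumptions rather than a reference implementation:

from collections import Counter
from itertools import combinations
import math

# Toy corpus; each sentence acts as a crude co-occurrence window.
corpus = [
    "i drink coffee with sugar".split(),
    "coffee and sugar on the table".split(),
    "the cat sleeps on the table".split(),
]

doc_freq = Counter()   # number of sentences in which each word appears
pair_freq = Counter()  # number of sentences in which each pair co-occurs
for sentence in corpus:
    words = set(sentence)
    doc_freq.update(words)
    for pair in combinations(sorted(words), 2):
        pair_freq[pair] += 1

def pmi(w1, w2):
    """Pointwise mutual information over sentence-level co-occurrence."""
    n = len(corpus)
    p_xy = pair_freq[tuple(sorted((w1, w2)))] / n
    if p_xy == 0:
        return float("-inf")  # never observed together
    return math.log(p_xy / ((doc_freq[w1] / n) * (doc_freq[w2] / n)))

print(pmi("coffee", "sugar"))  # positive: the words tend to co-occur
print(pmi("coffee", "cat"))    # -inf: never observed together

On larger corpora the same scheme is usually refined with sliding windows, smoothing and normalization, but the ordering of the two scores above already reflects the coffee/sugar versus coffee/cat intuition.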
The other type of semantic proxy from which semantic evidences can be extracted is more general. It encompasses a large range of computer-readable and understandable resources, from structured vocabularies to highly formal knowledge representations (KRs). Contrary to the first type of semantic proxy (i.e., texts), proxies of this type are structured and explicitly model knowledge about the elements they define. As an example, in a knowledge representation defining the concepts Coffee and Sugar, a specific relationship will explicitly state that Coffee - can be drunk with - Sugar. SMs based on knowledge analysis rely on techniques used to take advantage of semantic graphs (e.g., thesauri, taxonomies, lightweight ontologies), or even highly formal KRs such as ontologies based on (description) logic.
A large diversity of measures exists to estimate the similarity or the dissimilarity between specific data structures (e.g., vectors, matrices, graphs) and data types (e.g., numbers, strings, dates). The specificity of SMs lies in the fact that they are based on the analysis of semantic proxies, so that the semantics is taken into account in the definition of the function which will be used to drive the comparison of elements. As an example, the measures used to compare two words according to their sequences of characters cannot be considered as SMs: only the characters of the words and their ordering are taken into account, not their meaning. Therefore, according to such measures, the two words foal and horse will be regarded as unrelated words.
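The foal/horse example can be made concrete with a purely lexical measure. The following sketch uses the standard Levenshtein edit distance, a textbook dynamic program included here only for illustration:

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("foal", "horse"))  # 4 edits: lexically almost unrelated
print(levenshtein("foal", "foam"))   # 1 edit: lexically close, semantically not

The two outputs invert the semantic judgment: foal/horse are close in meaning but far in characters, while foal/foam are the opposite, which is precisely why such measures are not SMs.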
From gene analysis to recommendation systems, SMs have recently found a broad field of applications and are today essential to leverage data mining, data analysis, classification, knowledge extraction, textual processing or even information retrieval based on text corpora or formal KRs. Due to their essential roles in numerous treatments requiring the meaning of compared elements (i.e., semantics) to be taken into account, the study of SMs has always been an interdisciplinary effort. Psychology, Cognitive Sciences, Linguistics, Natural Language Processing, Semantic Web, and Biomedical Informatics are among the most active communities contributing to the study of SMs. Due to this interdisciplinary nature of SMs, the last decades have been very prolific in contributions related to the notions of semantic relatedness, semantic similarity or semantic distance, to mention a few. Before defining the technical terminology required to further introduce SMs, let's focus on their large diversity of applications.
Semantic Measures in Action
SMs are used to solve problems in a broad range of applications and domains. They make it possible to take advantage of the knowledge encompassed in unstructured/semi-structured text corpora and KRs to compare things. They are therefore essential tools for the design of numerous algorithms and treatments in which semantics matters. Diverse practical applications which involve SMs are presented in this section. Three domains of application are considered in particular: (i) Natural Language Processing, (ii) Knowledge Engineering/Semantic Web and Linked Data, and (iii) Biomedical Informatics and Bioinformatics. Additional applications related to information retrieval and clustering are also briefly considered. The list of usages of SMs presented in this section is far from being exhaustive and only gives an overview of the large diversity of perspectives they open. Therefore, as a supplement to this list, an extensive classification of contributions related to SMs is proposed in appendix 1. This classification underlines the broad range of applications of SMs and highlights the large number of communities involved; it can thus be used to gain more insight on their usages in numerous contexts.
Natural Language Processing
Linguists have, quite naturally, been among the first to study SMs, with the aim of comparing units of language (e.g., words, sentences, paragraphs, documents). The estimation of word/concept relatedness plays an important role to detect paraphrases, e.g., duplicate content and plagiarism (Fernando & Stevenson 2008), to generate thesauri or texts (Iordanskaja et al. 1991), to summarize texts (Kozima 1993), to identify discourse structure, and to design question answering systems (Bulskov et al. 2002; Freitas et al. 2011; C. Wang et al. 2012), to mention a few. The effectiveness of SMs to resolve both syntactic and semantic ambiguities has also been demonstrated multiple times, e.g., (Sussna 1993; Resnik 1999; Patwardhan et al. 2003).
Several surveys relative to usages of SMs and to the techniques used for their design for natural language processing can be found in (Curran 2004;S. M. Mohammad & Hirst 2012).
Knowledge Engineering, Semantic Web and Linked Data
Communities associated with Knowledge Engineering, Semantic Web and Linked Data play an important role in the definition of methodologies and standards to formally express machine-understandable KRs. They extensively study the problems associated with the expression of structured and controlled vocabularies, as well as ontologies, i.e., formal and explicit specifications of a shared conceptualisation defining a set of concepts, their relationships and axioms to model a domain i (Gruber 1993). These models rely on structured KRs in which the semantics of the concepts (classes) and relationships (properties) are rigorously and formally defined in an unambiguous way. Such KRs are therefore proxies of choice to compare the concepts and the instances of the domain they model. As we will see, a taxonomy of concepts, which is the backbone of most if not all KRs, is particularly useful to estimate the degree of similarity of two concepts.
SMs are essential to integrate heterogeneous KRs and more generally for data integration. They play an important role to find correspondences between ontologies (ontology alignment i ), in which similar concepts defined in different ontologies must be found (Euzenat & Shvaiko 2007). SMs are also used for the task of instance matching, with the aim of finding duplicate instances across data sources. Applications providing inexact search capabilities based on KR analysis have also been proposed, e.g., (Hliaoutakis 2005; Varelas et al. 2005; Hliaoutakis et al. 2006; Kiefer et al. 2007; Sy et al. 2012). SMs have also been successfully applied to learning tasks using Semantic Web technologies (D'Amato 2007). Their benefits for taking advantage of the Linked Data paradigm in the definition of recommendation systems have also been stressed in (Passant 2010; Harispe, Ranwez, et al. 2013a).
Biomedical Informatics & Bioinformatics
A large number of SMs have been defined for biomedical or bioinformatics studies. Indeed, in these domains, SMs are commonly used to take advantage of biomedical ontologies to study various types of instances (genes, proteins, drugs, diseases, phenotypes) which have been semantically characterized through a KR, e.g., ontologies or controlled vocabularies ii . Several surveys relative to usages of SMs in the biomedical domain can be found; we orient the reader to (Pesquita, Faria, et al. 2009; Guzzi et al. 2012).
The Gene Ontology (GO) (Ashburner et al. 2000) is the example of choice to highlight the large success encountered by ontologies in biology iii . Indeed, the GO is extensively used to conceptually annotate gene products on the basis of experimental observations or automatic inferences. These annotations are used to formally characterize gene products regarding their molecular functions, the biological processes they are involved in or even their cellular locations. Thus, using SMs, these annotations make possible the automatic comparison of gene products not on the basis of particular gene properties (e.g., sequence, structural similarity, gene expression) but rather on the analysis of biological aspects formalized by the GO. Therefore, genes can further be analysed by considering their representation in a multi-dimensional semantic space expressing our current understanding of particular aspects of biology. In such cases, conceptual annotations bridge the gap between global knowledge of biology (e.g., organisation of molecular functions or cellular components) and fine-grained understanding of specific instances (e.g., the specific role of a gene at the molecular level). SMs make it possible to take advantage of this knowledge to analyse instances, here genes, and open interesting perspectives to infer new knowledge about them.
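The simplest possible instance of such an annotation-based comparison is a set-overlap score over GO terms. In the sketch below the gene names and annotation profiles are placeholders, and plain Jaccard overlap deliberately ignores the GO taxonomy that real GO-based SMs exploit:

def jaccard(annotations_a: set, annotations_b: set) -> float:
    """Set-overlap similarity between two annotation profiles."""
    if not annotations_a and not annotations_b:
        return 1.0
    return len(annotations_a & annotations_b) / len(annotations_a | annotations_b)

gene_x = {"GO:0006915", "GO:0008219", "GO:0005634"}  # hypothetical profiles
gene_y = {"GO:0006915", "GO:0005634", "GO:0016020"}
gene_z = {"GO:0007165"}

print(jaccard(gene_x, gene_y))  # overlapping profiles: 0.5
print(jaccard(gene_x, gene_z))  # disjoint profiles: 0.0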
Various studies have highlighted the relevance of SMs for assessing the functional similarity of genes, building gene clusters, validating and studying protein-protein interactions, analysing gene expression, evaluating gene sets' coherence (Diaz-Diaz & Aguilar-Ruiz 2011) or recommending gene annotations, to mention a few. A dedicated survey of SMs applied to the GO is also available. i The reader interested in ontology alignment may also consider the related problems of schema matching and mapping (Bellahsène et al. 2011). The classification of the elementary matching approaches proposed by (Euzenat & Shvaiko 2007) is also an interesting starting point for a broad overview of the large diversity of measures and approaches proposed for alignment tasks. ii Biology and biomedicine are heavy users of ontologies and controlled vocabularies; e.g., BioPortal, a portal dedicated to ontologies related to biology and the biomedical domain, references hundreds of ontologies (Whetzel et al. 2011). iii More than 11k citations between 2000 and 2013!
Information Retrieval
SMs are used to overcome limitations of information retrieval techniques based on plain lexicographic term matching: simple models consider that a document is relevant to a query only if the terms specified in the query are used in the document. SMs can be used to take into account the meaning of words by going beyond syntactic search, and can therefore be used to refine such models, e.g., synonyms will no longer be considered as totally different words. SMs have successfully been used in the design of ontology-based information retrieval systems and for query expansion, e.g., (Hliaoutakis 2005; Hliaoutakis et al. 2006; Baziz et al. 2007; Saruladha, Aghila & Raj 2010b; Sy et al. 2012).
SMs based on KRs also open interesting perspectives for the field of information retrieval as they make it possible to analyse and query non-textual resources, e.g., genes annotated by concepts.
GeoInformatics
GeoInformatics actively contributes to the study of SMs. In this domain, SMs have, for instance, been used to compute the similarity between locations according to semantic characterizations of their geographic features, e.g., estimating the semantic similarity of tags defined in the OpenStreetMap Semantic Network (Ballatore et al. 2012). Readers interested in the applications of SMs in this field may also refer to the various references proposed in Appendix 1, e.g., (Akoka et al. 2005; Rodríguez et al. 2005; Formica & Pourabbas 2008; Janowicz et al. 2008).
Organization of this Survey
This contribution proposes both a general introduction to SMs and a technical survey regarding a specific type of measures based on KR analysis. It is organized as follows: Section 2 introduces general notions related to SMs. Several cognitive models defined to better understand human appreciation of similarity are briefly presented. As we will see, these cognitive models play an essential role in the design of SMs and are critical for a deep understanding of the technical aspects of the measures. Several mathematical notions related to the notions of distance and similarity are also introduced. They are needed to formally define SMs in mathematical terms by taking into consideration key mathematical contributions related to distance and similarity. In this section, the reader is also introduced to the commonly adopted terminology associated to SMs; the notions of semantic similarity, dissimilarity, distance, relatedness, or even taxonomical distance will be defined.
Based on the introduction of the broad notion of SMs, section 3 presents a classification of the large diversity of strategies proposed for the definition of SMs. The proposed classification relies on the analysis of:
- the type of compared elements (units of language, concepts/classes, instances semantically characterized);
- the canonical form used to represent these elements;
- the semantic proxy which is used to extract the semantics associated to the compared elements, i.e., corpora of texts or KRs.
According to the type of semantic proxy on which the comparison is based, three families of SMs are further distinguished:
- distributional measures, which mainly analyse corpora of texts;
- knowledge-based measures, which take advantage of structured knowledge to extract the semantics on which the SMs rely;
- hybrid measures, which take advantage of both text corpora and KRs.
Section 4 is dedicated to the practical computation and evaluation of SMs. Several software solutions for the computation and the analysis of measures are presented. We also discuss the protocols and methodologies commonly used to assess the accuracy and the performance of measures in specific usage contexts.
Section 5 is dedicated to a technical and in-depth presentation of a specific type of SMs based on KR analysis. In this section we focus on SMs based on graph analysis, a highly popular approach used to compare structured terms, concepts, groups of concepts or even instances defined in KRs such as ontologies.
In the light of this study, section 6 distinguishes some of the challenges faced by SMs designers and the scientific communities contributing to the topic. A general conclusion ends this article.
General Notions and Definitions
SMs have been studied through various notions and not always in rigorous terms. Some definitions are even still subject to debate, and not all communities agree on the semantics carried by the terminology they use. Thus, the literature related to the topic manipulates notions of semantic similarity, relatedness, distance, taxonomic distance and dissimilarity (I let your creativity speak); these notions deserve to be rigorously defined. This reflects the difficulty of confining semantic similarity, as perceived by humans, within formal (and partial) logico-mathematical models.
This section first introduces generalities related to the domain and a more precise definition of the notion of SM is proposed. The main models of similarity defined in cognitive sciences are next introduced. As we will see, they play an important role to understand the (diversity of) approaches adopted to design SMs. Several mathematical definitions and properties related to distance and similarity are next presented. These definitions will be used to distinguish mathematical properties of interest for the characterization and the study of SMs.
Semantic Measures: Definition
The human cognitive system is sensitive to similarity, which explains why the capacity to estimate the similarity of things is essential in numerous treatments. It is indeed a key element to initiate the process of learning, in which the capacity to recognize similar situations i , for instance, helps us to build our experience, to activate mental traces, to make decisions, and to innovate by applying experience gained in previously solved problems to similar problems ii (Holyoak & Koh 1987; Ross 1987; Novick 1988; Ross 1989; Vosniadou & Ortony 1989; Gentner & Markman 1997). According to the theories of transfer, the process of learning is also subject to similarity since new skills are expected to be easier to learn if they are similar to skills already learned (Markman & Gentner 1993). Similarity is therefore a central component of memory retrieval, categorization, pattern recognition, problem solving, reasoning, as well as social judgment, e.g., refer to (Markman & Gentner 1993; Hahn et al. 2003; Goldstone & Son 2004) for associated references.
In this context, the goal of SMs is easy to understand: they aim to capture the strength of the semantic interaction between elements (e.g., words, concepts) regarding their meaning. Are the words car and auto more semantically related than the words car and mountain? Most people will agree to say yes. This has been proved in multiple experiments; inter-human agreement on semantic similarity ratings is high, e.g., (Rubenstein & Goodenough 1965; Miller & Charles 1991; Pakhomov et al. 2010) iii .
Appreciation of similarity is obviously subject to multiple factors. Our personal background is an example of such a factor; e.g., elderly persons and teenagers will probably not associate the same score of semantic similarity to the two concepts Phone and Computer iv . However, most of the time, a consensus regarding the estimation of the strength of the semantic link between elements can be reached; this is what makes the notion of SMs intuitive and meaningful i .
The majority of SMs try to mimic the human capacity to assess the degree of relatedness between things according to semantic evidences. However, strictly speaking, SMs evaluate the strength of the semantic interactions between things according to the analysis of semantic proxies (texts, KRs), nothing more. Therefore, not all measures aim at mimicking human appreciation of similarity. Indeed, in some cases, SMs' designers only aim to compare elements according to the information defined in a semantic proxy, no matter whether the results produced by the measure correlate with human appreciation of semantic similarity/relatedness. This is, for instance, often the case in the design of SMs based on KRs. In these cases, the KR can be likened to our brain, and the SM can be regarded as our capacity to take advantage of our knowledge to compare things. The aim is therefore to be coherent with the knowledge expressed in the semantic proxy considered, without regard to the coherence of the knowledge modelled. As an example, a SM based on a KR built by animal experts will not consider Sloth and Monkey to be similar, even if most people think sloths are monkeys.
i Cognitive models based on categorization consider that humans classify things, e.g., experiences of life, according to their similarity to some prototype, abstraction or previous examples (Markman & Gentner 1993). ii Similarity is here associated with the notion of generalization and is measured in terms of the probability of inter-stimulus confusion errors (Nosofsky 1992). iii As an example, considering three benchmarks, (Schwartz & Gomez 2011) observed 73% to 89% human inter-agreement between scores of semantic similarity associated to pairs of words. iv Smartphones are today kinds of computers and very different from the first communication device patented in 1876 by Bell.
Given that SMs aim at comparing things according to their meaning captured from semantic evidences, it's difficult to further define the notion of SMs without defining the concepts of Meaning and Semantics.
Taking the risk of disappointing the reader, this section will not face the challenge of demystifying the notion of Meaning. As stressed by (Sahlgren 2006), "Some 2000 years of philosophical controversy should warn us to steer well clear of such pursuits". The reader can refer to the various theories proposed by linguists and philosophers. In this contribution, we only consider that we are dealing with the notion of semantic meaning proposed by linguists: how meaning is conveyed through signs or language. Regarding the notion of semantics, it can be defined as the meaning or interpretation of any lexical units, linguistic expressions or semantically characterized instances according to a specific context. Definition: Semantic Measures are mathematical tools used to estimate quantitatively or qualitatively the strength of the semantic relationship between units of language, concepts or instances, through a numerical or symbolic description obtained according to the comparison of information formally or implicitly supporting their meaning or describing their nature.
It is important to stress the diversity of the domain (in a mathematical sense) on which SMs can be used. They can be used to drive word-to-word, concept-to-concept, text-to-text or even instance-to-instance comparisons. In this paper we will therefore, as much as possible, refer to any element of the domain of measures through the generic term element. An element can therefore be any unit of language (e.g., word, text), a concept/class, or an (abstract) instance semantically characterized in a KR (e.g., gene products, ideas, locations, persons). i Despite some hesitations and interrogations regarding the notion of similarity, it is commonly admitted that semantic measure design is meaningful. Examples of authors questioning the relevance of notions such as similarity are numerous, e.g., "Similarity, ever ready to solve philosophical problems and overcome obstacles, is a pretender, an impostor, a quack." (Goodman 1972) or "More studies need to be performed with human subjects in order to discover whether semantic distance actually has any meaning independent of a particular person, and how to use semantic distance in a meaningful way" (Delugach 1993); see also (Murphy & Medin 1985; Robert L. Goldstone 1994; Hahn & Ramscar 2001).
We formally define a SM as a function sm : E_k × E_k → S, with E_k the set of elements of a given type t_k, the types being the various kinds of elements which can be compared regarding their semantics (e.g., words, concepts, sentences, texts, web pages, instances annotated by concepts, …), and S a totally ordered set of values. This expression can be generalized to take into account the comparison of elements of different types. This could be interesting to evaluate entailment of texts or to compare words and concepts, to mention a few. However, in this paper, we restrict our study to the comparison of pairs of elements of the same nature (which is already a vast subject of research). We stress that SMs must implicitly or explicitly take advantage of semantic evidences. As an example, as we said in the introduction, measures comparing words through their syntactical similarity cannot be considered to be SMs; recall that semantics refers to evidences regarding the meaning or the nature of compared elements.
The distinction between approaches that can and cannot be assimilated to SMs is sometimes narrow; there is no clear border distinguishing non-semantic from semantically-augmented approaches, but rather a continuum of approaches. Some explanation can be found in the difficulty of clearly characterizing the notion of Meaning. For instance, someone could argue that measures used to evaluate the lexical distance between words, such as edit distances, capture semantic evidences regarding the meaning of words. Indeed, the sequence of characters associated to a word derives from its etymology, which is sometimes related to its meaning, e.g., words created through morphological derivation, such as subset from set.
Therefore, the notion of SM is sometimes difficult to distinguish from measures used to compare specific data structures. This fine line can also be explained by the fact that some SMs compare elements which are represented, in order to be processed by computer algorithms, through canonical forms corresponding to specific data structures for which specific (non-semantic) similarity measures have been defined. As an example, pure graph similarity measures can be used to compare instances semantically characterized through semantic graphs.
In some cases, the semantics of the measure is therefore not captured by the function used to compare the canonical forms of the compared elements. It is rather the process of mapping an element (e.g., word, concept) from a semantic space (text, KR) to a specific data structure (e.g., vector, set) which makes the comparison semantically enhanced. This is, however, an interesting paradox: the rigorous semantics of the notion of SM is hard to define; this is one of the challenges this contribution tries to face.
Semantic Relatedness and Semantic Similarity
Among the various notions associated to SMs, this section defines the two central notions of semantic relatedness and semantic similarity, which are among the most used in the literature related to SMs. Several authors have already distinguished them in different communities, e.g., (Resnik 1999; Pedersen et al. 2007). Based on these works, we propose the following definitions.
Definition Semantic relatedness: strength of the semantic interactions between two elements without restriction regarding the types of semantic links considered.
Definition Semantic similarity: specialises the notion of semantic relatedness, by only considering taxonomical relationships in the evaluation of the semantic strength between two elements.
In other words, semantic similarity measures compare elements regarding the properties they share and the properties which are specific to them. The two concepts Tea and Cup are therefore highly related despite the fact that they are not similar: the concept Tea refers to a Drink and the concept Cup refers to a Vessel. Thus, the two concepts only share few of their constitutive properties. This highlights a potential interpretation of the notion of similarity, which can be understood in terms of substitution, i.e., evaluating the implications of substituting one of the compared elements for the other: Tea by Coffee, or Tea by Cup.
In some specific cases, communities such as linguistics will consider a more complex definition of the notion of semantic similarity for words. Indeed, word-to-word semantic similarity is sometimes evaluated not only considering (near-)synonymy, or the lexical relations which can be considered as equivalent to the taxonomical relationships for words, e.g., hyponymy and hypernymy, or even troponymy for verbs. Indeed, in some contributions, linguists also consider that the estimation of the semantic similarity of two words must take into account other lexical relationships such as antonymy. In other cases, the notion of semantic similarity refers to the way the elements are compared, not the semantics associated to the results of the measure. As an example, designers of SMs relying on KRs sometimes use the term semantic similarity to denote measures based on a specific type of semantic relatedness which only considers meronymy, e.g., a partial ordering of concepts defined by part-whole relationships. The semantics associated to the scores of relatedness computed from such restrictions differs from semantic similarity. Nevertheless, technically speaking, as we will see, most approaches defined to compute semantic similarities based on KRs can be used on any restriction of semantic relatedness considering a type of relationship which is transitive i , reflexive ii and antisymmetric iii (e.g., part-whole relationships). In this paper, for the sake of clarity, we consider that only taxonomical relationships are used to estimate the semantic similarity of compared elements.
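The restriction to taxonomical relationships can be illustrated on a toy is-a hierarchy. The mini taxonomy below and the Wu & Palmer-style depth-based score are illustrative choices among many; note how Tea/Coffee obtain a higher similarity than Tea/Cup even though Tea and Cup are strongly related through non-taxonomical links:

parents = {  # child -> parent, is-a links only
    "Tea": "Drink", "Coffee": "Drink", "Drink": "Entity",
    "Cup": "Vessel", "Vessel": "Entity",
}

def ancestors(concept):
    """Path from the concept up to the root of the taxonomy."""
    path = [concept]
    while concept in parents:
        concept = parents[concept]
        path.append(concept)
    return path

def depth(concept):
    return len(ancestors(concept))

def wu_palmer(a, b):
    """2 * depth(lcs) / (depth(a) + depth(b)), using is-a links only."""
    anc_b = set(ancestors(b))
    lcs = next(c for c in ancestors(a) if c in anc_b)  # least common subsumer
    return 2 * depth(lcs) / (depth(a) + depth(b))

print(wu_palmer("Tea", "Coffee"))  # ~0.67: both are Drinks
print(wu_palmer("Tea", "Cup"))     # ~0.33: related, but not similar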
Older contributions relative to SMs do not stress the difference between the notions of similarity and relatedness. The reader should be warned that in the literature, authors sometimes introduce semantic similarity measures which estimate semantic relatedness and vice versa. In addition, despite the fact that the distinction between the two notions is commonly admitted by most communities, it is still common to observe improper use of both notions.
A large terminology refers to the notion of SMs, and contributions related to the domain often refer to the notions of semantic distance, closeness, nearness or taxonomical distance, etc. The following subsection attempts to clarify the semantics associated to the terminology commonly used in the literature. i A binary relation R over a set X is transitive if, for all x, y, z ∈ X, (x R y) and (y R z) imply (x R z). ii A binary relation R on a set X is reflexive if x R x for all x ∈ X. iii A binary relation R on a set X is antisymmetric if, for all x, y ∈ X, (x R y) and (y R x) imply x = y.
The Diversity of Types of Semantic Measures
We have so far introduced the broad notion of SMs. We also distinguished the two notions of semantic relatedness and semantic similarity. A large terminology has been used in the literature to refer to the notion of SM. We define the meaning of the terms commonly used (the list may not be exhaustive):
- Semantic relatedness, sometimes called proximity, closeness or nearness, refers to the notion introduced above.
- Semantic similarity has also already been defined. In some cases, the term taxonomical semantic similarity is used to stress the fact that only taxonomical relationships are used to estimate the similarity.
- Semantic distance: generally considered as the inverse of semantic relatedness; all semantic interactions between the compared elements are considered. These measures respect (most of the time) the mathematical properties of distances. These properties will be introduced in subsection 2.3.1. Distance is also sometimes denoted farness.
- Semantic dissimilarity is understood as the inverse of semantic similarity.
- Taxonomical distance also corresponds to the semantics associated to the notion of dissimilarity. However, these measures are expected to respect the properties of distances.
Figure 1 presents a chart in which the various notions related to SMs are structured through semantic relationships. Most of the time, the notion considered to be the inverse of semantic relatedness is denoted semantic distance, without regard to whether the measure respects the mathematical properties characterizing a distance. Therefore, we introduce the term semantic unrelatedness to denote the set of measures whose semantics is the inverse of the one carried by semantic relatedness measures, without necessarily respecting the properties of a distance. This is, to our knowledge, a notion which has never been used in the literature i .
Figure 1: Semantic graph defining the relationships between the various types of semantics which can be associated to SMs in the literature. Black (plain) relationships correspond to taxonomical relationships (i.e., subclass-of); inverse relationships refer to the semantic interpretation associated to the score of the measure, e.g., semantic similarity and dissimilarity measures have inverse semantic interpretations.
i Our aim is not here to make the terminology heavier but rather to be rigorous in the characterization of measures.
Cognitive Models of Similarity
In this subsection, we provide a brief overview of the psychological theories of similarity by introducing the main models proposed by cognitive sciences to study and explain (human) appreciation of similarity. Here we do not discuss the notion of semantic similarity; the process of similarity assessment should be understood in a broad sense, i.e., as a way to compare objects, stimuli.
Cognitive models of similarity generally aim to study the way humans evaluate the similarity of two mental representations according to some kind of psychological space. They are based on assumptions regarding the KR from which the similarity will be estimated. Indeed, as stated by several authors, the notion of similarity, per se, can be criticized as a purely artificial notion. In (Goodman 1972), the notion of similarity is defined as "an imposture, a quack" as, objectively, everything is equally similar to everything else. This can be disturbing but, conceptually, two random objects have infinitely many properties in common and infinitely many distinguishing properties i , e.g., a flower and a computer are both smaller than 10m, 9.99m, 9.98m… An important notion to understand, which has been underlined by cognitive sciences, is that differential similarities emerge only when some predicates are selected or weighted more than others. This observation doesn't mean that similarity is not an explanatory notion, but rather that the notion of similarity is heavily framed in psychology. Similarity assessment must therefore not be understood as an attempt to compare object realizations through the evaluation of their properties, but rather as a process aiming to compare objects as they are understood by the agent which rules the estimation of the similarity (e.g., a person). The notion of similarity therefore only makes sense with respect to a partial mental representation on which the estimation will be based.
Contrary to real objects, representations of objects do not contain infinitely many properties. As an example, our mental representations of things only capture a limited number of dimensions of the object which is represented. Therefore, the philosophical worries regarding the soundness of similarity vanish when one considers that similarity aims at comparing partial representations of objects, e.g., human mental representations of objects (Hahn 2011). The similarity is thus estimated between mental representations. Considering that these representations are defined by a human agent, the notion of similarity may thus be understood as how similar objects appear to us. Given this existential requirement of representations to compare things and to consider similarity as a meaningful notion, much of the history of research on similarity in cognitive sciences focuses on the definition of models of mental representation of objects.
The central role of cognitive sciences regarding the study of similarity lies in the design of cognitive models of both mental representations and similarity. These models are further used to study how humans store their knowledge and interact with it to compare objects represented as pieces of knowledge. These models are next tested against our understanding of human appreciation of similarity. Indeed, evaluations of human appreciation of similarity help us to distinguish constraints/expectations on the properties an accurate model should have, which is essential to reject hypotheses and improve the models. As an example, studies have demonstrated that appreciation of similarity is sometimes asymmetric: the similarity between a painter and his portrait is commonly expected to be greater than the inverse - isn't it? Therefore, the expectation of asymmetric estimations of similarity is incompatible with the mathematical properties of a distance, which is symmetric by definition. Models based on distance axioms have therefore to be revised or to be used with moderation. In this context, the introduction of cognitive models of similarity will be particularly useful to understand the foundations of some approaches adopted for the definition of SMs.
Cognitive models of similarity are commonly organized into four different approaches: (i) spatial models, (ii) feature models, (iii) structural models and (iv) transformational models. We briefly introduce these four models; a more detailed introduction can be found in (Goldstone & Son 2004) and (Schwering 2008). A captivating talk introducing cognition and similarity, on which this introduction is based, can also be found in (Hahn 2011).
Spatial Models
The spatial models, also named geometric models, rely on one of the most influential theories of similarity in cognitive sciences. They are based on the notion of psychological distance and consider objects (here, perceptual effects of stimuli or concepts for instance) as points in a multi-dimensional metric space. Spatial models consider similarity as a function of the distance between the mental representations of the compared objects. These models derive from Shepard's spatial model of similarity, in which objects are represented in a multi-dimensional space. The locations of the objects are defined by their dimensional differences (Shepard 1962).
In his seminal and highly influential work on generalization, (Shepard 1987) provides a statistical technique in the form of Multi-Dimensional Scaling (MDS) to derive the locations of objects represented in a multi-dimensional space. Indeed, MDS can be used to derive potential spatial representations of objects from proximity data (similarities between pairs of objects). Based on these spatial representations of objects, Shepard derived the universal law of generalization, which demonstrates that various kinds of stimuli (e.g., Morse code signals, shapes, sounds) exhibit the same lawful relationship between distance (in an underlying MDS) and perceived similarity (in terms of confusability): the similarity between two stimuli was defined as an exponentially decaying function of their distance i .
By demonstrating a negative exponential relationship between similarity and generalization, Shepard established the first sound model of mental representation on which cognitive sciences would base their studies on similarity ii . The similarity is in this case assumed to be inversely related to the distance which separates the perceptual representations of the compared stimuli (Ashby & Perrin 1988). The similarity defined as a function of distance is therefore constrained by the axiomatic properties of distance (properties which will be detailed in the following section).
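A minimal numerical sketch of the spatial model follows. The 2-D coordinates of the stimuli are invented for illustration; in practice they would be derived from proximity data, e.g., by MDS, and similarity then decays exponentially with distance as in Shepard's law:

import math

space = {"robin": (0.0, 0.0), "sparrow": (0.4, 0.1), "penguin": (2.0, 1.5)}

def distance(a, b):
    (x1, y1), (x2, y2) = space[a], space[b]
    return math.hypot(x2 - x1, y2 - y1)  # Euclidean psychological distance

def shepard_similarity(a, b):
    """Shepard's law: similarity as exp(-distance)."""
    return math.exp(-distance(a, b))

print(shepard_similarity("robin", "sparrow"))  # close points, high similarity
print(shepard_similarity("robin", "penguin"))  # distant points, low similarity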
A large number of geometric models have been proposed. They have long been among the most popular in cognitive sciences. However, despite their intuitive nature and large adoption, geometric models have been subject to intense criticism due to the constraints defined by the distance axioms. Indeed, several empirical analyses have questioned and challenged the validity of the geometric framework (i.e., both the model and the notion of psychological distance), by underlining inconsistencies with human appreciation of similarity, e.g., violations of the symmetry, triangle inequality and identity of the indiscernibles axioms, e.g., (Tversky 1977; Tversky & Gati 1982) i . i The similarity between two stimuli is here understood as the probability that a response to a stimulus will be generalized to the other (Shepard 1987). With sim(x, y) the similarity between two stimuli and d(x, y) their distance, we obtain the relation sim(x, y) = e^(−d(x, y)), that is, d(x, y) = −log(sim(x, y)), a form of entropy. ii A reason which also explains the success encountered by spatial models is to be found in their central role in another highly successful formal model provided by psychology studies: the generalized context model of classification (Nosofsky 1986). Nosofsky showed that a classification task, i.e., the probability that an item belongs to a category, can be explained as a function of the sum of the similarities between the item to categorize and all known items of a category, normalized by the sum of the similarities between the item to categorize and all the other items (Hahn 2011).
Feature Models
This approach, introduced by (Tversky 1977) to answer the limitations of the geometric models, proposes to characterize an object through a set of features, considering that a feature "describes any property, characteristic, or aspect of objects that are relevant to the task under study" (Tversky & Gati 1982). Feature models evaluate the similarity of two stimuli according to a feature-matching function F which makes use of their common and distinct features: sim(a, b) = F(A ∩ B, A \ B, B \ A), with A and B the sets of features characterizing the compared stimuli a and b. The function F is expected to be non-decreasing, i.e., the similarity increases when common (resp. distinct) features are added (resp. removed). The feature model is therefore based on the assumption that F is monotone and that common and distinct features of compared objects are sufficient for their comparison. In addition, an important aspect is that the feature-matching process is expressed in terms of a matching function as defined in set theory (i.e., binary evaluation).
The similarity is further derived as a parameterized function of the common and distinct features of the compared objects. Two models, the contrast model (CM) and the ratio model (RM), have initially been proposed by Tversky. They can be used to compare two objects a and b represented through the sets of features A and B:

sim_CM(a, b) = θ f(A ∩ B) − α f(A \ B) − β f(B \ A)
sim_RM(a, b) = f(A ∩ B) / (f(A ∩ B) + α f(A \ B) + β f(B \ A))

The symmetry of the measures produced by the two models can be tuned according to the parameters α and β. This enables the design of asymmetric measures. In addition, one of the major constructs of the feature model is the function f, which is used to capture the salience of a (set of) feature(s).
The salience of a feature is defined as a notion of specificity: "the salience of a stimulus includes intensity, frequency, familiarity, good form, and informational content" (Tversky 1977). Therefore, the operators ∩ and \ are based on feature matching, and the function f evaluates the contribution of the common or distinct features (distinguished by the previous operators) to the estimated similarity. Notice that the concept of feature salience implicitly opens the possibility of designing measures which do not respect the identity of the indiscernibles, i.e., which enable non-maximal self-similarity. i Note that recent contributions propose to answer these inconsistencies by generalizing the classical geometric framework through quantum probability. Compared objects are represented in a quantum model in which they are not seen as points or distributions of points, but as entire subspaces of potentially very high dimensionality, or probability distributions over these spaces (Pothos et al. 2013).
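The two models reconstructed above translate directly into code. In this sketch the salience function f is taken to be plain set cardinality, the simplest admissible choice, and the feature sets are invented to illustrate the asymmetry:

def contrast(A: set, B: set, theta=1.0, alpha=0.5, beta=0.5):
    """Tversky's contrast model: theta*f(A&B) - alpha*f(A-B) - beta*f(B-A)."""
    f = len  # salience as cardinality (illustrative choice)
    return theta * f(A & B) - alpha * f(A - B) - beta * f(B - A)

def ratio(A: set, B: set, alpha=0.5, beta=0.5):
    """Tversky's ratio model: f(A&B) / (f(A&B) + alpha*f(A-B) + beta*f(B-A))."""
    f = len
    common = f(A & B)
    return common / (common + alpha * f(A - B) + beta * f(B - A))

portrait = {"face", "frame", "painted"}
painter = {"face", "hands", "alive", "painted"}

# With alpha != beta the comparison becomes asymmetric, as discussed above.
print(ratio(portrait, painter, alpha=0.9, beta=0.1))  # ~0.65
print(ratio(painter, portrait, alpha=0.9, beta=0.1))  # ~0.51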
Structural Alignment Models
Structural models are based on the assumption that objects are represented by structured representations. Indeed, a strong criticism of the feature model was that (features of) compared objects are considered to be unstructured, contrary to evidence suggesting that perceptual representations are well characterized by hierarchical systems of relations, e.g., (Markman & Gentner 1993; Gentner & Markman 1994).
Structural alignment models are structure mapping models in which the similarity is estimated using matching functions which evaluate the correspondences between the compared elements (Markman & Gentner 1993; Gentner & Markman 1994). The process of similarity assessment is here expected to involve a structural alignment between two mental representations in order to distinguish matches: the larger the number of correspondences, the more similar the objects are considered to be. In some cases, the similarity is estimated in a manner equivalent to analogical mapping (Markman & Gentner 1990), and similarity is expected to involve mappings between both features and relations.
Another example of a structural model was proposed by (R.L. Goldstone 1996), who modelled similarity as an interactive activation and mapping model using connectionist activation networks based on mappings between representations.
Transformational Models
Transformational models assume that similarity is defined by the transformational distance between mental representations (Hahn et al. 2003). The similarity is framed in Representational Distortion (Chater & Hahn 1997) and is expected to be assessed based on the analysis of the modifications required to transform one representation into another. The similarity, which can be explained in terms of Kolmogorov complexity theory (Li & Vitányi 1993), is therefore regarded as a decreasing function of transformational complexity (Hahn et al. 2003).
Unification of Cognitive Models of Similarity
Several studies have highlighted correspondences and deep parallels between the various cognitive models. (Tenenbaum & Griffiths 2001) propose a unification of spatial, feature-based and structure-based models through a framework relying on a generalization of Bayesian inference (see (Gentner 2001) for criticisms). Alternatively, (Hahn 2011) proposes an interpretation of the models in which the transformational model is presented as a generalization of the spatial, feature-based and structure-based models.
In this section, we have presented several cognitive models proposed to explain and study (human) appreciation of similarity. These models are characterized by particular interpretations and assumptions on the way knowledge is characterized, mentally represented, and processed. Despite several meaningful initiatives for the unification of the cognitive models in order to develop frameworks generalizing existing models, we have stressed that one of the fundamental differences between the models relies on their mathematical properties, e.g., symmetry, triangle inequality... The next section proposes an overview of the mathematical notions required to rigorously manipulate the notions of similarity and distance. Several mathematical properties which can be used to characterize semantic measures are next introduced.
From Distance Metrics and Similarities to Semantic Measures
Are SMs mathematical measures? What are the specific properties of a distance or a similarity measure? Do semantic similarity measures correspond to similarity measures in the way mathematicians understand them? As we have seen in section 2.1, contributions related to SMs most of the time do not rely on formal definitions of the notions of measure or distance. Indeed, contributions related to SMs generally rely on the commonly admitted and intuitive expectations regarding these notions, e.g., similarity (resp. distance) must be higher (resp. lower) the more the two compared elements have in common i . However, the notions of measure and distance have been rigorously defined in mathematics through specific axioms from which particular properties derive. These notions have been expressed for well-defined objects (element domains). Several contributions rely on these axiomatic definitions, and interesting results have been demonstrated according to them. This section briefly introduces the mathematical background relative to the notions of distance and similarity. It will help us to rigorously define and better characterize SMs in mathematical terms; this is a prerequisite to clarify the fuzzy terminology commonly used in studies related to SMs.
For more information on the definition of measures, distance and similarity, the reader can refer to: (i) the seminal work of (Deza & Deza 2013) -Encyclopedia of Distances, (ii) the work of (Hagedoorn & others 2000) chapter 2, a theory of similarity measures, and (iii) the definitions proposed by (D'Amato 2007). Most of the definitions proposed in this section have been formulated based on these contributions, and more particularly based on (D'Amato 2007). Therefore, for convenience, we will not systematically refer to them. In addition, contrary to most of the definitions presented in these works, we here focus on highlighting the semantics of the various definitions according to the terminology introduced in section 2.1.
Mathematical Definitions and Properties of Distance and Similarity
For the definitions presented hereafter, we consider a set X which defines the elements of the domain we want to compare, and a totally ordered set (S, ≤) of values, with m the least element of S (m ≤ s for all s ∈ S) and M the greatest element of S (s ≤ M for all s ∈ S).
Definition Distance: a function d : X × X → ℝ is a distance on X if, for all x, y ∈ X, the function is: non-negative, d(x, y) ≥ 0; symmetric, d(x, y) = d(y, x); and reflexive, d(x, x) = 0. To be considered as a distance metric, i.e., a distance in a metric space, the distance must additionally respect two properties: the identity of the indiscernibles, also known as the strictness property, minimality or self-identity, that is, d(x, y) = 0 if and only if x = y; and the triangle inequality, i.e., the distance between two points must be the shortest distance along any path: d(x, z) ≤ d(x, y) + d(y, z) for all x, y, z ∈ X. Despite the fact that some formal definitions of similarity have been proposed, e.g., (Hagedoorn & others 2000; Deza & Deza 2013), contrary to the notion of distance, there is no axiomatic definition of similarity that sets the standard in mathematics. The notion of similarity appears in different fields of mathematics, e.g., figures with the same shape are denoted similar (in geometry), similar matrices are expected to have the same eigenvalues, etc. In this paper, we consider the following definition.
Definition Similarity: a function sim : X × X → ℝ is a similarity on X if, for all x, y ∈ X, the function sim is non-negative (sim(x, y) ≥ 0), symmetric and reflexive, i.e., sim(x, y) = sim(y, x) and sim(x, y) ≤ sim(x, x).
Definition Normalized function: any function on X × X with values in S (e.g., similarity, distance) is said to be normalized if its values lie in [0, 1], i.e., its minimal value is 0 and its maximal value is 1.
Notice that a normalized similarity can be transformed to a distance considering multiple approaches (Deza & Deza 2013). Inversely, a normalized distance can also be converted to a similarity. Some of the approaches used for the transformations are presented in appendix 5.
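Two of the transformations alluded to above can be sketched as follows; the linear form is the simplest for normalized functions with values in [0, 1], and the logarithmic one is another commonly used decreasing mapping (both are illustrative choices):

import math

def sim_to_dist(sim: float) -> float:
    """Linear conversion of a normalized similarity into a normalized distance."""
    return 1.0 - sim

def dist_to_sim(dist: float) -> float:
    return 1.0 - dist

def sim_to_dist_log(sim: float) -> float:
    """Unbounded variant: maps sim=1 to 0 and sim->0 to infinity."""
    return -math.log(sim) if sim > 0 else math.inf

assert sim_to_dist(0.75) == 0.25
assert dist_to_sim(sim_to_dist(0.6)) == 0.6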
As we have seen, distance and similarity measures are formally defined in mathematics as functions with specific properties. They are most of the time defined considering real-valued codomains. They are extensively used to demonstrate results and develop proofs. However, the benefits of fulfilling some of these properties, e.g., the triangle inequality for distance metrics, have been subject to debate among researchers. As an example, (Jain et al. 1999) stress that the mutual neighbour distance used in clustering tasks does not satisfy the triangle inequality but performs well in practice, concluding that "This observation supports the viewpoint that the dissimilarity does not need to be a metric".
A large number of properties not presented in this section have been distinguished to further characterize distance or similarity functions, e.g., (Deza & Deza 2013). These properties are important, as specific theoretical proofs require the studied functions to fulfil particular properties. However, as we have seen, the definition of SMs proposed in the literature is not framed in the mathematical axiomatic definitions of distance or similarity. In some cases, such a distortion of the terminology creates difficulties in bridging the gap between the various communities. As an example, in the encyclopaedia of distances, (Deza & Deza 2013) do not distinguish the notions of distance and dissimilarity, which is the case in the literature related to SMs (refer to section 2.1.3). In this context, the following section defines the terminology commonly adopted in the study of SMs, with regard to the mathematical properties already introduced.
Flexibility of Semantic Measures Regarding Mathematical Properties
Notice that we didn't introduce the precise and technical mathematical definition of a measure proposed by measure theory. This can be disturbing considering that this paper extensively refers to the notion of SM. The notion of measure we use is indeed not framed in the rigorous mathematical definition of the concept of measure. It refers to any "measuring instrument" which can be used to "assess the importance, effect, or value of (something)" (Oxford dictionary 2013); in our case, any function answering the definitions of semantic distance/relatedness/similarity… proposed in section 2.1.
Various communities have used the concepts of similarity or distance without considering the rigorous axiomatic definitions proposed in mathematics but rather using their broad intuitive meanings i . To be in accordance with most contributions related to SMs, and to facilitate the reading of this paper, we will not limit ourselves to the mathematical definitions of distance and similarity.
The literature related to SMs generally refers to a semantic distance as any (non-negative) function designed to capture the inverse of the strength of the semantic interactions linking two elements. Such functions must respect the following: the higher the strength of the semantic interactions between two elements, the lower their distance. The axiomatic definition of a distance (metric) may not be respected. A semantic distance is most of the time what we define as a function estimating semantic unrelatedness (please refer to the organization of the measures proposed in section 2.1.3). However, to be in accordance with the literature, we will use the term semantic distance to refer to any function designed to capture semantic unrelatedness.
We will explicitly state whether the function respects the axiomatic definition of a distance (metric) when required.
Semantic relatedness measures are functions whose semantics is the inverse of that associated to semantic unrelatedness: the higher the strength of the semantic interactions between two elements, the higher the function will estimate their semantic relatedness.
In this paper, the terminology we use (distance, relatedness, similarity) refers to the definitions presented in sections 2.1.2 and 2.1.3. To be clear, the terminology refers to the semantics of the functions, not to their mathematical properties. However, we further consider that SMs must be characterized through mathematical properties. Table 1 and Table 2 summarize some of the properties which can be used to formally characterize any function designed to capture the intuitive notions of semantic distance and relatedness/similarity. These properties will be used in the next sections to characterize some of the measures we will consider. They are essential to further understand the semantics associated to the measures and to distinguish SMs which are adapted to specific contexts and usages.
[Table 1 and Table 2, not reproduced here: properties (e.g., non-negativity) which can be used to characterize any function which aims to estimate the notion of distance or of similarity/relatedness between two elements.]
We have so far introduced the cognitive models used to study the notion of similarity, as well as the formal mathematical definitions of the notions of distance and similarity. Several mathematical properties which can be used to characterize SMs have also been presented. Before we look at the classification of SMs, we will first introduce the notion of knowledge representation (KR).
A Brief Introduction to Knowledge Representations
Here we use the term computational or formal knowledge representation (KR in short) to refer to any computational model or artefact used to formally express knowledge in a machine-understandable form. This section does not aim to introduce the reader to knowledge engineering or to present the various computational models which can be used to this end. We only aim to give an overview of the notion of KR in order to introduce the SMs which take advantage of them. For more information, the reader can refer to some of the seminal contributions related to the topic, e.g., (Minsky 1974; Sowa 1984; Davis et al. 1993; Gruber 1993; Borst 1997; Studer et al. 1998; Baader 2003; Guarino et al. 2009; Robinson & Bauer 2011; Hitzler et al. 2011).
Generalities
From simple taxonomies and terminologies to complex KRs based on logics, a large spectrum of approaches has been proposed to express knowledge. Figure 2 presents several approaches which can be used to express KRs, going from weak semantic descriptions of terms and linguistic relationships to more refined and complex conceptualizations associated to strong semantics. The challenging problem tackled by the field of knowledge engineering, how to formally express knowledge, has gained a lot of attention in artificial intelligence. Such a success is naturally explained by the large implications of formal expressions of knowledge in computer science: they give computers and algorithms an access door to our knowledge. Therefore, over the last decades, contributions relative to computational KRs have been numerous. Ontology is probably the most famous and mysterious word of this domain. It has been overused to the point of becoming a propaganda tool and, to be honest, it is today difficult to find two knowledge engineers who will give the same definition of an ontology. This is no offense to the seminal contributions which focused on the demystification of the notion of ontology, e.g., (Guarino et al. 2009), and should reassure those who have been lost in translation: ontologies are today a concrete reality in both academic and industrial fields. Indeed, based on (Gruber 1993; Borst 1997), an ontology is often defined in highly abstract terms as "a formal, explicit specification of a shared conceptualization" (Studer et al. 1998). However, despite its popularity, this definition relies on the informal and rarely questioned definition of conceptualization (Guarino et al. 2009).
As we will see, a conceptualization can partially be seen as a formal expression of a set of concepts and instances within a domain, as well as of the relationships between these concepts/instances. For others, a conceptualization of a domain relies on the definition of its vocabulary. In this section, based on (Guarino et al. 2009), we adopt a specific definition of the notions of ontology and conceptualization; these notions are commonly accepted in knowledge engineering but may differ from usages in other communities.
As already said, we will use the general term KR to refer to any formal and machine-understandable representation of knowledge, e.g., the whole range of structured and logic-based approaches which appear in Figure 2. This choice has been motivated by the difficulty of considering several formal KRs as ontologies (and by the will to differentiate them from conceptualizations), e.g., lexical databases or term-oriented models of thesaurus structure which use terms, and not concepts, as primitive elements (thesauri, classification schemes, etc.). However, to ease the presentation of the various KRs, we denote any abstract view of a set of common things through the generic term class or concept.
Knowledge Representations as Conceptualization
Regardless of the particularities of some domain-specific KRs, and regardless of the language considered for the modelling, all approaches used to represent knowledge share common components:
- Classes or Concepts, used to denote sets of things sharing common properties, e.g., Human.
- Instances, i.e., members of classes, e.g., alan (an instance of the class Human).
- Predicates, the types of relationships which define the semantic relationships which can be established between instances or classes, e.g., taxonomic relationships.
- Relationships, concrete links between classes and instances which carry a specific semantics, e.g., (alan, isA, Human), (alan, worksAt, Bletchley Park).
- Attributes, properties of instances or classes, e.g., (alan, hasName, "Turing").
- Axioms, for instance defined through properties of the predicates, e.g., "taxonomical relationships are transitive", or constraints on properties and attributes, e.g., "any Human has exactly 2 legs".
A simple knowledge representation (KR) can therefore be formally defined by a tuple K = (C, P, R, A), with:
- C, the set of classes/concepts, and P, the set of predicates which can be used to link two classes or two predicates, e.g., {subClassOf, partOf, subPredicateOf}. A predicate is also named a type of relationship. The sets of classes and predicates are expected to be disjoint, i.e., C ∩ P = ∅.
- R, the set of oriented relationships of a specific type which link a pair of classes or a pair of predicates. Any relationship is therefore characterized by a triplet (s, p, o) with s, o ∈ C or s, o ∈ P, and p ∈ P. Note that a triplet is also called a statement.
- A, a set of axioms defining the interpretations of each class and predicate.
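As a concrete, purely illustrative encoding of this definition, the following Python sketch builds a tiny K = (C, P, R, A); the class name, field names, and the string-based axioms are our own choices, not part of any standard:

```python
from dataclasses import dataclass, field

@dataclass
class KR:
    """A toy encoding of K = (C, P, R, A); names are illustrative."""
    classes: set = field(default_factory=set)      # C
    predicates: set = field(default_factory=set)   # P
    statements: set = field(default_factory=set)   # R: triplets (subject, predicate, object)
    axioms: set = field(default_factory=set)       # A: plain strings, for readability only

kr = KR()
kr.classes |= {"Animal", "Mammal"}
kr.predicates |= {"subClassOf", "subPredicateOf", "isParentOf", "isFatherOf"}
kr.statements |= {("Mammal", "subClassOf", "Animal"),
                  ("isFatherOf", "subPredicateOf", "isParentOf")}
kr.axioms.add("subClassOf is transitive")

assert kr.classes.isdisjoint(kr.predicates)  # C and P must be disjoint
```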
Only considering C, P and R leads to a labelled graph structuring classes and relationships through labelled oriented edges. A vocabulary can be associated to any class and predicate. In addition, a lexical reference (didactical device) is generally used to refer, in an unambiguous manner, to a specific class/predicate, e.g., the string Mammal refers to a specific clade of animals. In practice, the unique identifier is generally an Internationalized Resource Identifier (IRI), a generalization of the Uniform Resource Identifier (URI) in which specific characters (from ISO 10646) can be used.
The semantics of a class/predicate is so far implicitly defined by the definition of unambiguous (lexical) references. As we will see, the formal semantics of the KR expressed in the graph is specified by the set of axioms A.
The set of axioms A defines that K is not a simple graph data structure. The axioms will be used to define the interpretations of the classes and predicates; they for instance define the properties associated to the predicates. Among the numerous properties which can be used to characterize predicates, we can distinguish transitivity, reflexivity and anti-symmetry. These specific properties characterize taxonomical relationships used to define a partial ordering among classes and predicates. We name subClassOf the taxonomical relationship which specifies that a class is subsumed by another, and subPredicateOf the taxonomical relationship defining that a predicate inherits from another predicate. (In some contributions, the isA relationship commonly refers to the taxonomical relationship. However, the relationships characterized by the predicate subClassOf define taxonomies of concepts/classes, while those corresponding to the isA relationship are used to type instances, i.e., to define that an instance is of a specific type; see rdf:type in the RDF specification.) As an example, it can be defined that:
Mammal subClassOf Animal
isFatherOf subPredicateOf isParentOf
The semantics associated to the predicates through the axiomatic characterization of their properties leads to the definition of both taxonomies of classes and of predicates. These taxonomies are expressed by the relationships defined in R. We distinguish:
- The taxonomy of classes, which defines a partial ordering of classes. We note u ⊑ v if u is subsumed by v (or v subsumes u). As an example, the statement Mammal ⊑ Animal formally means that all mammals are animals. The taxonomical relationship between classes is the type of relationship most represented in ontologies; therefore, for convenience, we use the term taxonomical relationship to refer to the taxonomical relationship between classes. The taxonomy of classes forms a Directed, Acyclic, and connected Graph (DAG). Indeed, a unique class, denoted the root, is generally considered to subsume all the classes; a fictive root is considered if none is explicitly specified (e.g., Thing).
- The taxonomic structuration of the predicates, e.g., isFatherOf ⊑ isParentOf. As for the taxonomy of classes, a root predicate is expected to be defined, and the graph defined by the taxonomy of predicates is also directed, acyclic, and connected.
A can further describe constraints on the interpretations associated to predicates by defining a domain and a range (co-domain) for any predicate of P. These constraints can be used to define a specific interpretation of a type of relationship; e.g., if the domain and the range of the predicate isParentOf are defined to be the class Person, then for any statement (x, isParentOf, y) we can infer that x and y are members of the class Person.
The axiomatic definition of classes and predicates can be based on a large variety of logical constructs (e.g., negation, conjunction, and disjunction). They are used to further constrain the interpretation of classes and predicates, and enable more complex expressions of knowledge. The presentation of the various logical constructors which can be used is out of the scope of this paper; please refer to (Baader 2003) for an introduction to logic-based KR. Hereafter, we only briefly present an example of a knowledge expression which can be based on logical constructs: the classes Man and Woman can be defined as disjoint, as the sets of instances of the two classes are expected to be disjoint; that is to say, with I(c) denoting the set of instances of the class c, we have I(Man) ∩ I(Woman) = ∅. The literature generally distinguishes lightweight ontologies from highly formal ontologies depending on the degree of complexity of A (e.g., predicate properties, logical constructs).
More Refined Knowledge Representations
In knowledge modelling, the general abstract KR (i.e., class and predicate definitions, class relationships and axioms) is generally distinguished from the knowledge relative to the instances of the considered domain. The conceptualization of the general abstract knowledge is named the TBox (Terminological Box). The statement 'All Mammals are Animals', i.e., (Mammal, subClassOf, Animal), is an example of a statement found in the TBox. However, KR is not only about conceptualization: in some cases, you may also want to express knowledge about specific instances of your domain, i.e., specific realizations of the classes defined in C.
Knowledge relative to the instances of a domain is expressed in the form of statements, e.g., (bob, isA, Man). These statements must be compliant with the TBox. As an example, if it is defined that Man and Woman are two disjoint classes, the conceptualization is violated by the definition of both statements (bob, isA, Man) and (bob, isA, Woman). The set of statements related to instances is denoted by the term ABox (Assertional Box).
To accommodate the introduction of information relative to instances, the formal definition of a KR introduced above must be revisited. To this end, we introduce a set of instances I, and we authorize the definition of statements both between instances and between instances and classes. We denote I(c) the set of instances of the class c.
Note that in some cases, data values of specific data types can also be used to further characterize classes or instances, e.g., to specify the age of a person: (bob, hasAge, 52). Examples of models introducing attributes of specific data types can be found in (Ehrig et al. 2004). In this case, a set of data types and their structuration can be defined, and a set of attributes of a specific type can be associated to a data type. An attribute can be represented as a specific predicate (type of relationship). A data value can therefore be considered as an instance of a specific data type, and a specific semantic relationship between a concept and a data value can be used to represent the value of an attribute which is associated to a class.
Formally, to enable data values to be used in the model introduced so far, it is needed to: (i) distinguish a set of data values associated to specific data types (which can also be structured in some cases), and (ii) further extend R such that statements involving data values are possible.
However, to facilitate the reading we will not introduce further notations.
Knowledge Representations as Semantic Graphs
In this paper, we consider a semantic graph as any declarative KR in which unambiguous resources are represented through nodes interconnected through semantic relationships associated to a controlled semantics. We therefore consider that any synsets/concepts/classes structured through a semantic graph can be treated in an equivalent manner; in particular, we consider hyponymy (hyperonymy) as a taxonomical relationship of specialization (generalization), equivalent to 'subclass of' for terms. A semantic graph is therefore a specific type of KR in which the axiomatic definitions do not rely on complex logical constructs such as negation, conjunction, disjunction and so forth (e.g., lightweight ontologies).
A semantic graph is composed of a set of statements, each of them composed of a triplet subject-predicate-object, e.g., (Human, subClassOf, Mammal). The name of a class will be used to refer to both the conceptualization and the corresponding node in the graph. Any node which refers to a realization of a class will be named an instance. Notice that we always use the term relationship to refer to a binary relationship between a subject and an object, even if we consider that all relationships are associated to a predicate.
Figure 3 presents a basic example of a simple KR represented as a semantic graph. The graph structures a few classes through taxonomical relationships (plain black relationships) and relationships carrying a specific meaning (e.g., hunts). As we have seen, some semantic graphs represent a KR which not only contains TBox statements but also knowledge relative to the instances of the domain; as an example, in WordNet, instances are distinguished from the classes. Figure 4 presents a more complex representation of a semantic graph which corresponds to a KR involving classes, predicates, instances and data values. Various classes defined in the graph are taxonomically structured in the layer C. Several types of instances are also defined in layer I, e.g., music bands, music genres. These instances can be characterized according to specific classes, e.g., (rollingStones, isA, MusicBand), and can be interconnected through specific predicates, e.g., (rollingStones, hasGenre, rock). In addition, specific data values (layer D) can be used to specify information relative to both classes and instances, e.g., (rollingStones, haveBeenFormedIn, 1962). All relationships linking the various nodes of the graph are directed and semantically characterized, i.e., they carry an unambiguous and controlled semantics. Notice that extra information, such as the taxonomy of predicates or axiomatic definitions of predicate properties, is not represented in this figure.
We consider that a semantic graph does not rely on complex logic-based semantics, i.e., logical constructors (such as disjunction) are not required to understand the semantics associated to the KR. Specific mapping techniques can be used to reduce any KR to a semantic graph. In addition, the knowledge defined in a highly formal KR might not be explicit, and a reasoner can be required to deduce implicit knowledge, e.g., applying entailment rules or inference mechanisms, prior to applying mapping techniques. We will not broach this technical subject in this section; detailed information relative to the construction of a semantic graph from several types of KR is presented in section 5.2.
A semantic graph is therefore considered as any declarative KR which can be expressed through a graph and which carries a specific semantics, e.g., a semantic network/net, a conceptual graph, a lexical database (WordNet), an RDF(S) graph, a lightweight ontology, to mention a few (Harispe, Ranwez, et al. 2013a). Note that RDF graphs are in some cases expected to be entailed, i.e., their semantics has to be taken into account in order to materialize specific relationships implicitly defined in the graph, e.g., using RDFS. A discussion related to this subject is provided in appendix 4.
Conceptual annotations as a Semantic Graph
In some cases, a collection of instances is annotated by concepts structured in a KR. In those cases, the knowledge base is expected to be composed of a collection of annotations and a KR. To ease the reading, we consider that an instance which is characterized through a set of conceptual annotations can also be represented in a semantic graph, i.e., the instance can be represented by a node which establishes semantic relationships to the concepts associated to its annotations.
As an example, if a document is annotated by specific concepts/classes, say the document docA is annotated by a set of concepts {Physics, Gravitation} structured in a KR, we can create the relationships (docA, isAnnotatedBy, Physics) and (docA, isAnnotatedBy, Gravitation) to model this knowledge through a unique KR.
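A minimal sketch of this modelling step, reusing the docA example above (the isAnnotatedBy predicate is taken from the text; the dictionary layout is our own):

```python
# Fold conceptual annotations into the semantic graph as isAnnotatedBy statements.
annotations = {"docA": {"Physics", "Gravitation"}}

statements = set()
for instance, concepts in annotations.items():
    for concept in concepts:
        statements.add((instance, "isAnnotatedBy", concept))

print(sorted(statements))
# [('docA', 'isAnnotatedBy', 'Gravitation'), ('docA', 'isAnnotatedBy', 'Physics')]
```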
Examples of Knowledge Representations Commonly Processed as Semantic Graphs
Several KRs, considered as semantic graphs, have been used to design and evaluate SMs. Among the most used, we distinguish:
- WordNet (Miller 1998; Fellbaum 2010), widely used in natural language processing and computational linguistics. It models the lexical knowledge of native English speakers in a lexical database structured through a semantic network composed of synsets/concepts linked by semantic and lexical relations. In WordNet, concepts are associated to a specific meaning defined in a gloss. They are also characterized by a set of cognitive synonyms (synset) which can be composed of nouns, verbs, adjectives, and adverbs. According to the official documentation, WordNet 3.0 is composed of 117 000 concepts (synsets), which are linked together by different types of semantic relationships, e.g., hyperonymy, hyponymy. SENSUS (Swartout et al. 1996) is another semantic graph derived from WordNet.
- Cyc (OpenCyc), an ontology defining concepts (called constants), instances, and relationships between the concepts and instances. Other constructs are also provided.
- The Gene Ontology and gene product annotations. The Gene Ontology (GO) defines a structured vocabulary related to molecular biology. It can be used to characterize various aspects of gene products (molecular functions, biological processes, and cellular components).
- MeSH (Medical Subject Headings), a structured controlled vocabulary defining a hierarchy of biological and medical terms. The MeSH is provided by the U.S. National Library of Medicine and is used to index PubMed articles.
Classification of Semantic Measures
We have seen that various mathematical properties can be used to characterize technical aspects of SMs. This section distinguishes other general aspects which can be useful for classifying SMs. They will be used to introduce the large diversity of approaches proposed in the literature. We first present some of the general aspects of SMs which can be relevant for their classification; we next introduce two general classes of measures.
How to Classify Semantic Measures
The classification of SMs can be made according to several aspects; we propose to discuss four of them:
- The type of elements the measures aim to compare.
- The semantic proxies used to extract the semantics required by the measure.
- The semantic evidences and assumptions considered during the comparison.
- The canonical form adopted to represent an element and to handle it.
Types of Elements to Compare: Words, Concepts, Sentences…
SMs can be used to compare various types of elements:
- Units of language: words, sentences, paragraphs, documents.
- Concepts/classes, groups of concepts.
- Semantically characterized instances.
We consider the notion of concept in a broad sense, i.e., a class of instances which can be of any kind (abstract/concrete, elementary/composite, real/fictive) (Smith 2004). We also consider that a concept can be represented through a synset, i.e., a group of data elements considered semantically equivalent. A concept can therefore be represented by any set of words referring to the same notion, e.g., the terms dog and Canis lupus familiaris refer to the concept Dog. Note that we use the notions of concept and class interchangeably. However, we will, as much as possible, favour the use of the term class, as the specifications used to express KRs generally refer to it, e.g., RDF(S).
The notion of a semantically characterized instance encompasses several situations in which an object is described through information from which semantic analyses can be performed. A semantic characterization could be the RDF description of the corresponding instance, a set of conceptual annotations associated to it, a set of tags, or even a subgraph of an ontology, to mention a few.
SMs can therefore be classified according to the type of elements they aim to compare. Note that in OWL, a class is denoted a concept. The introduction of RDF (Resource Description Framework), RDFS (RDF-Schema) and OWL (Web Ontology Language) as languages for the expression of knowledge representations is considered out of the scope of this paper; please refer to the official documentation proposed by the W3C. The book of (Hitzler et al. 2011) proposes an easy-to-read, comprehensive introduction to semantic web technologies.
Semantic Proxies from which Semantics is Distilled
A semantic proxy is considered as any source of information from which the semantics of the compared elements, which will be used by a SM, can be extracted. Two broad types of semantic proxies can be distinguished:
- Unstructured or semi-structured texts: text corpora, controlled vocabularies, dictionaries.
- Structured sources: thesauri, structured vocabularies, and ontologies.
Semantic Evidences and the Assumptions Considered
Depending on the semantic proxy used to support the comparison of elements, various types of semantic evidences can be considered. The nature of these evidences conditions the assumptions associated to the measure.
Canonical Forms Used to Represent Compared Elements
The canonical form, i.e., the representation chosen to process a specific element, can also be used to distinguish the measures defined for the comparison of a given type of elements. Since the representation adopted corresponds to a specific reduction of the element, the degree of granularity with which the element is represented may vary considerably depending on it. The selected canonical form is of major importance since it influences the semantics associated to the score produced by a measure, that is to say, how a score must be understood. This particular aspect is essential when inferences must be drawn from the scores produced by SMs.
A SM is defined to process a given type of element represented through a specific canonical form.

Figure 5: Partial overview of the landscape of the types of semantic measures which can be used to compare various types of elements (words, concepts, instances, etc.).
Distributional measures enable the comparison of units of language through the analysis of unstructured texts. They are mainly used to compare words, sentences or even documents by studying the repartition of words in texts (number of occurrences, location in texts). In the literature, distributional measures are sometimes defined as a specific type of a more general family denoted corpus-based measures (Panchenko & Morozova 2012); in this survey, we consider as distributional any measure which relies on the location and number of occurrences of words in texts, and there is therefore no need to distinguish them from corpus-based measures. An introduction to this type of measures for the comparison of pairs of words can be found in (Curran 2004; S. M. Mohammad & Hirst 2012).
Several contributions have been proposed to tackle the comparison of pairs of sentences or documents (text-to-text measures). Some of these measures derive from word-to-word SMs; others rely on specific strategies based on lexical/n-gram overlap analysis, Latent Semantic Analysis extensions, or even topic models using Latent Dirichlet Allocation (Blei et al. 2003). Text-to-text SMs are not presented in this section; we here focus on distributional measures which can be used to compare the semantics of two words regarding a collection of texts. Additional information and pointers regarding text-to-text measures are provided in appendix 2.
Distributional measures rely on the distributional hypothesis, which considers that words occurring in similar contexts tend to be semantically close (Harris 1981). This hypothesis is one of the main tenets of statistical semantics. It was made popular through the idea of (Firth 1957): "a word is characterized by the company it keeps" (an idea also implicitly discussed in (Weaver 1955), originally written in 1949; source: wiki of the Association for Computational Linguistics, http://aclweb.org/aclwiki, accessed 09/13). Considering that the context associated to a word can be characterized by the words surrounding it, the distributional hypothesis states that words occurring in similar contexts, i.e., often surrounded by the same words, are likely to be semantically similar, as "similar things are being said about both of them" (S. M. Mohammad & Hirst 2012). It is therefore possible to build a distributional profile of a word according to the contexts in which it occurs. A word is classically represented through the vector space model: a geometric metaphor of meaning in which a word is represented as a point in a multidimensional space modelling the diversity of the vocabulary in use (Sahlgren 2006). This model is used to characterize words through their distributional properties in a specific corpus of texts; according to (Sahlgren 2006), numerous limitations in the design of semantic measures to compare words are a consequence of the distributional methodology adopted as a discovery procedure. To this end, words are generally represented through a co-occurrence matrix: it can either be a word-word matrix or, more generally, a word-context matrix in which the context is any lexical unit (surrounding words, sentences, paragraphs or even documents). Such a characterization of a word regarding a specific corpus, sometimes denoted a word space model (Sahlgren 2006), is analogous to the vector space model widely known in Information Retrieval (Salton & McGill 1986).
Generally, the design of a SM for the comparison of words corresponds to the definition of a function which will assess the similarity of two context vectors. The various distributional measures are therefore mainly distinguished by the:
- Type of context used to build the co-occurrence matrix.
- Frequency weighting (optional): the function used to transform the raw counts associated to each context in order to incorporate the frequency and informativeness of the context (Curran 2004).
- Dimension reduction technique (optional) used to reduce the co-occurrence matrix. This aspect defines the type of co-occurrences which is taken into account (e.g., first order, second order).
- Vector measure used to assess the similarity/distance of the vectors which represent the words in the co-occurrence matrix. In some cases, the vectors will be regarded as (fuzzy) sets.
Several distributional measures have been proposed. Here, we only briefly introduce the three main types of approaches: the spatial/geometric approach, which evaluates the relative positions of the two words in the semantic space defined by the context vectors, and the set-based and probabilistic approaches, which are based on the analysis of the overlap of the contexts in which the words occur, e.g., (Ruge 1992).
Geometric Approach
The geometric approach is based on the assumption that compared elements are defined in a semantic space corresponding to the intuitive spatial model of similarity proposed by cognitive sciences (see section 2.2). A word is for instance considered as a point in a multi-dimensional space representing the diversity of the vocabulary in use. Two words are therefore compared regarding their locations in this multidimensional space. The dimensions considered to represent the semantic space are defined by the context used to build the co-occurrence matrix. Words are represented through their corresponding vectors in the matrix and are therefore compared through measures used to compare vectors. Among the most used, we can distinguish (see the sketch below):
- The scalar product, or measures from the Lp Minkowski family: the L1 Manhattan distance, the L2 Euclidean distance.
- The cosine similarity, i.e., the cosine of the angle between the vectors of the compared words (the smaller the angle, the stronger the likeness).
- Measures of correlation, which can also be used in some cases (Ganesan et al. 2003).
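The following sketch illustrates the L1/L2 distances and the cosine similarity on toy co-occurrence vectors (the vectors, counts, and function names are ours):

```python
import numpy as np

def manhattan(u, v):
    """L1 (Manhattan) distance between two context vectors."""
    return np.abs(u - v).sum()

def euclidean(u, v):
    """L2 (Euclidean) distance between two context vectors."""
    return np.sqrt(((u - v) ** 2).sum())

def cosine_sim(u, v):
    """Cosine of the angle between two context vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy co-occurrence counts of two words over three contexts.
u = np.array([3.0, 0.0, 1.0])
v = np.array([2.0, 1.0, 1.0])
print(manhattan(u, v), euclidean(u, v), round(cosine_sim(u, v), 3))
```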
(Fuzzy) Set-based Approach
Words are compared regarding the contexts in which they occur, considering both the shared and the distinct contexts (Curran 2004). The comparison can be made using classical set-based measures (e.g., the Dice index or the Jaccard coefficient); a sketch is given below. Several set-based operators have for instance been used to compare words (Terra & Clarke 2003; Bollegala 2007b). Extensions have also been proposed in order to take a weighting scheme into account through fuzzy sets, e.g., (Grefenstette 1994). Set-based measures relying on information-theoretic metrics have also been proposed; they are introduced in the following subsection, which presents the measures based on probabilistic approaches.
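A minimal sketch of the Jaccard and Dice measures applied to toy context sets (the sets are illustrative):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard coefficient: shared contexts over all observed contexts."""
    return len(a & b) / len(a | b) if a | b else 0.0

def dice(a: set, b: set) -> float:
    """Dice index over the two context sets."""
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

ctx_road   = {"car", "traffic", "asphalt"}
ctx_street = {"car", "traffic", "shop"}
print(jaccard(ctx_road, ctx_street))  # 0.5
print(dice(ctx_road, ctx_street))     # ~0.667
```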
Probabilistic Approach
The distributional hypothesis makes it possible to express the semantic relatedness of words in terms of probabilities of co-occurrence, i.e., regarding both the contexts in which the compared words appear and the contexts in which the two words co-occur. These two evidences can intuitively be used to estimate the strength of association between two words. This strength of association can also be seen as the mutual information of two words, which can be expressed regarding the probability that each word occurs in the corpus, as well as the probability that the two words co-occur in the same context. Once again, a large diversity of measures has been proposed; only those which are frequently used are considered here (Dagan et al. 1999; S. Mohammad & Hirst 2012), such as the Pointwise Mutual Information (PMI) (Fano 1961), first adapted for the comparison of words by (Church & Hanks 1990). The vectors obtained from the co-occurrence matrix can also be seen as distribution functions corresponding to distributional profiles. Notice that these vectors can also correspond to vectors of strength of association if one of the metrics presented above has been used to convert the initial co-occurrence matrix; as an example, co-occurrence vectors can be transformed to mutual information vectors by modifying the co-occurrence matrix using the PMI function. In both cases, the comparison relies on the comparison of two distribution functions/vectors. Therefore, despite their conceptual differences, the probabilistic approaches generally rely on the mathematical tools used by the geometric approaches. The functions commonly used are:
- The Kullback-Leibler divergence (information gain or relative entropy), a classic measure used to compare two probability distributions, often characterized as the loss of information when a probability distribution is approximated by another (Cover & Thomas 2012).
- The Jensen-Shannon divergence, which also measures the similarity between two probability distributions. It is based on the Kullback-Leibler divergence, with the interesting properties of being symmetric and of always producing a finite value.
- The Skew Divergence (Lee n.d.), an asymmetrical, smoothed variant of the Kullback-Leibler divergence.
- Measures presented for the geometric approaches can also be used: L-norms, cosine similarity.
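The sketch below illustrates the PMI and the Jensen-Shannon divergence on toy probabilities (all numbers are illustrative, and the smoothing issues mentioned above are ignored):

```python
import math

def pmi(p_xy: float, p_x: float, p_y: float) -> float:
    """Pointwise mutual information: log of observed vs. expected co-occurrence."""
    return math.log2(p_xy / (p_x * p_y))

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q); assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon(p, q):
    """Jensen-Shannon divergence: symmetric and always finite."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy distributional profiles of two words over four contexts.
p = [0.5, 0.3, 0.2, 0.0]
q = [0.4, 0.3, 0.2, 0.1]
print(pmi(p_xy=0.02, p_x=0.1, p_y=0.1))  # 1.0: co-occur twice as often as chance
print(jensen_shannon(p, q))
```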
A comprehensive survey presenting a large collection of the similarity functions which can be used to compare probability distributions is proposed in (Cha 2007); an interesting correlation analysis between measures is also provided therein. Several combinations can therefore be used, mixing the strength of association (weighting scheme, e.g., PMI) and the measure used to compare the probability functions/vectors. Fuzzy metrics can also be used to compare words according to their strength of association; refer to (S. Mohammad & Hirst 2012) for detailed examples.
Capturing Deeper Co-occurrences
The probabilistic approaches presented so far can be used to estimate the similarity of words regarding their first-order co-occurrences, i.e., the similarity is assessed regarding whether the two words occur in the same contexts. However, a strong limitation of first-order co-occurrence studies is that similar words may not co-occur. As an example, some studies of large corpora have observed that the words road and street almost never co-occur, although they can be considered as synonyms in most cases (Lemaire & Denhière 2008). Specific techniques have therefore been proposed to highlight deeper relationships between words, e.g., second-order co-occurrences. These techniques transform the co-occurrence matrix to enable evidence of deeper co-occurrences to be captured. The measures presented to compare words through the vector space model (see above) are used after the matrix transformation.
Statistical analyses can be used to distinguish valuable patterns in order to highlight deeper co-occurrences between words. These patterns, which represent the relationships between words, can be identified using several techniques; among them we distinguish:
- Latent Semantic Analysis (LSA).
- Hyperspace Analogue to Language (HAL).
- Syntax- or dependency-based models.
- Random indexing.
Latent Semantic Analysis/Indexing (LSA) uses singular value decomposition (SVD) (Landauer et al. 1998) to capture the relationships between two words occurring with a third one (but not necessarily occurring together). To this end, the co-occurrence matrix is reduced by the SVD algorithm. SVD is a linear algebra operation which can be used to emphasize correlations among rows and columns: it reduces the number of dimensions, with the interesting property of highlighting second-order co-occurrences (see the sketch below). The comparison of words is finally made by comparing their corresponding rows in the matrix through vector similarity measures. LSA is often presented as an answer to the drawbacks of the standard vector space model, such as sparseness and high dimensionality.
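A minimal LSA-style sketch, assuming a toy word-context matrix: SVD projects the words into a low-dimensional space in which road and street become comparable even though they never co-occur (matrix values and the dimension k are arbitrary):

```python
import numpy as np

def lsa_word_vectors(cooc: np.ndarray, k: int) -> np.ndarray:
    """Reduce a word-context co-occurrence matrix to k latent dimensions via SVD."""
    u, s, _ = np.linalg.svd(cooc, full_matrices=False)
    return u[:, :k] * s[:k]  # each row is a word vector in the latent space

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy matrix: 'road' and 'street' never share a context (columns),
# but both co-occur with 'car', a second-order evidence SVD can exploit.
#                ctxA ctxB ctxC
cooc = np.array([[2.0, 0.0, 1.0],   # road
                 [0.0, 2.0, 1.0],   # street
                 [2.0, 2.0, 0.0]])  # car
vecs = lsa_word_vectors(cooc, k=2)
print(round(cosine(vecs[0], vecs[1]), 3))  # road vs. street in the latent space
```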
Hyperspace Analogue to Language (Lund & Burgess 1996) uses a word-to-word co-occurrence matrix according to a specific word window. The weight of a co-occurrence is defined according to the position of the two words (before/after) in the context window. An asymmetric co-occurrence matrix is therefore obtained (directional word-to-word matrix). Low entropy columns can therefore be dropped from the matrix prior to the comparison. Words are generally compared based on the concatenation of their respective row and column.
Advantages and Limits of Distributional Measures
We list some of the advantages and limits of distributional SMs.
Advantages of Distributional Measures
They are unsupervised: they can be used to compare the relatedness of words expressed in corpora without prior knowledge regarding their meaning or usage.
Limits of Distributional Measures
- The words to compare must occur at least a few times.
- They highly depend on the corpus used. This specific point can also be considered as an advantage, as the measure is context-dependent.
- Sense-tagged corpora are most of the time not available (Resnik 1999; Sánchez & Batet 2011).
- The construction of a representative corpus of texts can be challenging in some usage contexts, e.g., biomedical studies.
- Difficulties in estimating the relatedness between concepts or instances, due to the disambiguation process required prior to the comparison. Distributional measures are mainly designed for the comparison of words; some pre-processing and disambiguation techniques can be used to enable the comparison of concepts or instances from text analysis, but their computational complexity is most of the time a drawback, making such approaches impracticable for large corpora.
- Difficulty in estimating semantic similarity (as opposed to relatedness). Different observations are provided in the literature: it is commonly said that distributional measures can only be used to compare words regarding relatedness, as co-occurrence can only be seen as an evidence of relatedness, e.g., (Batet 2011a); however, (S. Mohammad & Hirst 2012) specify that similarity can be captured by performing specific pre-processing. Nevertheless, capturing the similarity between words from text analysis requires elaborate techniques which are not tractable for large corpora.
- Difficulty in explaining and tracing the semantics of the relatedness. The interpretation of the score is mostly driven by the distributional hypothesis; it is however difficult to deeply understand the semantics associated to the co-occurrences.
Knowledge-based measures rely on any form of knowledge representation (KR), e.g., structured vocabularies, semantic graphs or ontologies, from which the semantics associated to the compared elements will be extracted. These measures are therefore based on the analysis of semantic graphs or expressive KRs defined using logic-based semantics.
A large diversity of measures has been defined to compare both concepts and instances defined in a KR. Two main types of measures can be distinguished, considering the type of KR which is taken into account:
- Measures based on graph analysis, or framed in the relational setting. They consider the KR as a semantic graph and rely on the analysis of the structural properties of this graph. Elements are compared by studying their interconnections, in some cases by explicitly taking advantage of the semantics carried by the relationships.
- Measures relying on logic-based semantics, such as description logics. These measures use a higher degree of semantic expressivity; they can take logical constructors into account and can be used to compare richer descriptions of knowledge.
Most SMs have been defined to compare elements defined in a single KR; some SMs have also been proposed to compare elements defined in different KRs. In this section, we mainly consider the measures defined for a single KR. Measures taking advantage of multiple KRs are briefly presented in section 3.4.3.
Semantic Measures Based on Graph Analysis
SMs based on graph analysis do not take into account logical constructors which can sometimes be used to define the semantics of a KR. These measures only consider the semantics carried by the semantic relationships (relational setting), e.g., specific treatments can be performed regarding the type of relationship processed. Some properties associated to the relationships defined in the graph can be considered by the measures. The transitivity of the taxonomic relationship will for instance be implicitly or explicitly used in the design of the measures. In other cases, the taxonomy of predicates (the types of semantic relationships) can also be taken into account.
A large number of measures have been proposed following this approach. Section 5 is dedicated to them, and we invite readers willing to obtain more technical information to refer to it; here, we only present a non-technical overview of these measures, focusing on those used to compare a pair of classes.
SMs based on graph analysis are commonly classified into four approaches: (i) structure-based, (ii) feature-based, (iii) information-theoretic and (iv) hybrid.
The Structural Approach
SMs based on the structural approach compare the elements defined in the graph through the study of the structure of the graph induced by its relationships. The measures are generally expressed as a function of the strength of the interconnections of the compared elements in the semantic graph. The structural approach corresponds, in some sense, to the design of SMs according to the structural model defined in cognitive sciences (refer to section 2.2). The graph corresponds to a structured space in which the compared elements are described.
The first measures based on the structural approach proposed to compare two classes regarding the shortest path linking them in the graph: the shorter the path, the stronger their semantic relatedness. The types of relationships considered define the semantics of the measures, e.g., only the taxonomical relationships will be considered to estimate the semantic similarity. The similarity/relatedness of the compared elements is therefore estimated according to their distance in the graph.
As an example, considering Figure 6, the length of the shortest path between the classes Computer and Tablet is 2. Only considering taxonomical relationships, the length of the shortest path between the classes Computer and Mouse is 5. As expected, the class Computer will therefore be considered more similar to the class Tablet than to the class Mouse (a sketch of this technique is given below). A large diversity of structural measures has been proposed to compare elements structured in a graph as a function of the strength of the interconnections linking them (e.g., random-walk approaches). More refined measures take advantage of intrinsic factor analyses to better estimate the similarity, e.g., by considering non-uniform weights of relationships.
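As a sketch of the shortest-path technique, the following code runs a breadth-first search over a small hypothetical taxonomy (this toy fragment does not reproduce Figure 6, so the path lengths differ from the 2 and 5 mentioned above):

```python
from collections import deque

# Undirected taxonomical edges of a hypothetical fragment (illustrative only).
edges = [("Laptop", "Computer"), ("Tablet", "Computer"),
         ("Computer", "ElectronicDevice"), ("Mouse", "Peripheral"),
         ("Peripheral", "ElectronicDevice")]

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def shortest_path_length(graph, src, dst):
    """Breadth-first search: number of edges on the shortest path."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == dst:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None  # no path

print(shortest_path_length(graph, "Computer", "Tablet"))  # 1 in this toy fragment
print(shortest_path_length(graph, "Computer", "Mouse"))   # 3
```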
The Feature-based Approach
This approach can be associated to the feature model defined by Tversky. Measures estimate the semantic similarity or relatedness considering specific properties of the elements during the comparison.
A central element of the measures based on this approach is the function which characterizes the features of the elements on which their comparison will be based. Various strategies have been proposed. As an example, the features used to characterize a concept can be the senses it encompasses, or its ancestors in the graph, i.e., all the concepts which subsume it according to the partial ordering defined by the taxonomy. Adopting the latter strategy, the concepts Computer, Tablet and Mouse are each represented by their sets of ancestors. The comparison of two elements can therefore be made by evaluating the number of features they share according to a feature-matching function (see the sketch below). In this case, the pair Computer/Tablet will be estimated as more similar than the pair Computer/Mouse, as the former pair shares more features than the latter (respectively 2 and 1).
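A minimal sketch of such a feature-matching function, using hypothetical ancestor sets as features (the sets, and hence the counts, are illustrative and do not reproduce the original example):

```python
# Hypothetical ancestor sets used as features.
features = {
    "Computer": {"Computer", "ElectronicDevice", "Artifact"},
    "Tablet":   {"Tablet", "Computer", "ElectronicDevice", "Artifact"},
    "Mouse":    {"Mouse", "Peripheral", "ElectronicDevice", "Artifact"},
}

def feature_match(a: str, b: str) -> int:
    """A crude feature-matching function: count of shared features."""
    return len(features[a] & features[b])

print(feature_match("Computer", "Tablet"))  # 3 shared features here
print(feature_match("Computer", "Mouse"))   # 2 shared features here
```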
The Information Theoretical Approach
This approach is based on Shannon's Information Theory (Shannon 1948). The elements, generally classes, are compared according to the amount of information they share and the amount of information which distinguishes them. These measures extensively rely on estimators of the information content of classes; a sketch is given below.
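As an illustration, the sketch below combines a toy corpus-based information content estimator with a Resnik-style comparison (IC of the most informative common ancestor); all class names and counts are hypothetical:

```python
import math

# Toy corpus frequencies, assumed already propagated up the taxonomy so that a
# class counts the occurrences of all its descendants (hypothetical numbers).
counts = {"Entity": 100, "Animal": 60, "Mammal": 30, "Dog": 10, "Cat": 8}
total = counts["Entity"]

def ic(c: str) -> float:
    """Information content: -log of the (estimated) probability of the class."""
    return -math.log2(counts[c] / total)

def resnik(ancestors_a: set, ancestors_b: set) -> float:
    """Resnik-style similarity: IC of the most informative common ancestor."""
    common = ancestors_a & ancestors_b
    return max(ic(c) for c in common) if common else 0.0

# Dog and Cat share {Mammal, Animal, Entity}; Mammal is the most informative.
print(resnik({"Dog", "Mammal", "Animal", "Entity"},
             {"Cat", "Mammal", "Animal", "Entity"}))  # ~1.74
```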
The Hybrid Approach
Hybrid measures are defined taking into account some of the specificities of the approaches briefly introduced above.
Measures Relying on Logic-based Semantics
SMs based on the relational setting cannot be used to compare complex descriptions of classes or instances relying on logic-based semantics, e.g., description logics (DLs). To this end, SMs capable of taking logic-based semantics into account have been proposed. These measures are for example used to compare knowledge models expressed in OWL.
Among the diversity of proposals, measures based on simple description logics (e.g., logics only allowing concept conjunction) were initially proposed (Borgida et al. 2005). More refined SMs have also been designed to exploit the high expressiveness of DLs, e.g., the ALC, ALN, or SHI description logics.
Semantic Measures based on Multiple Knowledge Representations
Several approaches have been designed to estimate the relatedness of classes or instances using multiple KRs. These approaches are generally named cross-ontology semantic similarity/relatedness measures in the literature. Their aim is twofold: (i) to enable the comparison of elements which have not been defined in the same KR, given that the KRs in which they have been defined model a subset of equivalent elements; (ii) to refine the comparison of elements by incorporating a larger amount of information during the process.
These measures are in some sense related to those commonly used for the tasks of ontology alignment/mapping and instance matching. Therefore, prior to their introduction, we first highlight the relation between these measures and those designed for the aforementioned processes.
Comparison with Ontology Alignment/Mapping and Instance Matching
The task of ontology mapping aims at finding correspondences between the classes and predicates defined in a collection of KRs (e.g., ontologies). These mappings are further used to build an alignment between KRs. Instance matching focuses on finding similar instances defined in a collection of KRs. These approaches generally rely on multiple matchers whose results will be aggregated to evaluate the similarity of the compared elements (Shvaiko & Euzenat 2013). The matchers commonly distinguished are:
- Terminological: based on string comparison of the labels or definitions of the elements.
- Structural: mainly based on the structuration of classes and predicates.
- Extensional: based on instance analysis.
- Logic-based: relying on the logical constructs used to define the elements of the KRs.
The scores produced by these matchers are generally aggregated. A threshold is next used to estimate whether two (groups of) elements are similar enough to define a mapping between them. In some cases, the mapping will be defined between an element and a set of elements, e.g., depending on the difference of granularity of the compared KRs, a class can be mapped to a set of classes.
Ontology alignment and instance matching constitute a field of study in itself. The techniques used for this purpose involve semantic similarity measures for the design of specific matchers (structural, extensional and logic-based; terminological matchers are not semantic). However, since their aim is to find exact matches, they are generally not suited for the comparison of non-equivalent elements defined in different KRs. Indeed, techniques used for ontology alignment are for instance not suited to answering questions such as "To which degree are the two concepts Coffee and Cup related?".
Technically speaking, nothing prevents the use of matching techniques to estimate the similarity or relatedness of elements defined in different KRs, since they have been designed for this specific purpose. Nevertheless, in applications, compared to the approaches used for ontology alignment and instance matching, semantic measures based on multiple KRs:
- Can be used to estimate semantic relatedness and not only similarity.
- Sometimes rely on strong assumptions and approximations which cannot be considered to derive alignments, e.g., measures based on shortest-path techniques.
- Focus on the design of techniques for the comparison of elements defined in different KRs, which generally consider a set of existing mappings between the KRs.
In short, ontology alignment and instance matching are complex processes which use specific types of (semantic) similarity measures and which can be used to support the design of semantic measures involving multiple KRs.
Main approaches for the definition of semantic measures using multiple KRs
The design of semantic measures for the comparison of elements defined in different KRs has gained less attention than classical semantic measures designed for single KRs. Such measures have been successfully used to support data integration (Rodríguez & Egenhofer 2003), clustering, or information retrieval tasks (Xiao & Cruz 2005), to cite a few. In this context, several contributions have focused on the design of cross-KR semantic measures without focusing on a specific application context (Rodríguez & Egenhofer 2003; Xiao & Cruz 2005; Petrakis et al. 2006; M.C. Coates et al. 2010; Sánchez, Solé-Ribalta, et al. 2012; Batet et al. 2013).
Advantages of Knowledge-based Measures
- They can be used to compare all types of elements defined in a KR, i.e., terms/classes and instances. Thus, these measures can be used to compare elements which cannot be compared using text analysis, e.g., the comparison of gene products according to conceptual annotations corresponding to their molecular functions.
- Fine control on the semantic relationships taken into account to compare the elements. This aspect is important to understand the semantics associated to a score produced by a SM, e.g., semantic similarity/relatedness.
- Generally easier and less complex to compute than distributional measures.
Limits of Knowledge-based Measures
- They require a knowledge representation describing the elements to compare.
- The use of logic-based measures can be challenging for the comparison of elements defined in large knowledge bases (high computational complexity).
- Measures based on graph analysis most of the time require the knowledge to be modelled in a specific manner in the graph, and are not designed to take non-binary relationships into account. Such relationships are used in specific KRs and play an important role in associating specific properties to relationships; e.g., a simple triplet cannot be used to model that a user has sent an email to another user at a specific date. Reification techniques are used to express such knowledge by defining a ternary relationship: the (binary) relationship is expressed by a node of the graph, and specific triplets are used to represent the sender (subject), the receiver (object), the type of relationship (predicate) and additional information associated to the binary relationship, such as the date in the given example. Despite the fact that some supervised approaches can be used to take advantage of such a type of knowledge expression, e.g., (Harispe, Ranwez, et al. 2013a), most measures based on graph analysis are not adapted to this case. This aspect is relative to the mapping of a KR to a semantic graph; a more detailed discussion of this specific aspect is proposed in appendix 4.
Mixing Knowledge-based and Distributional Approaches
Hybrid measures have been proposed to take advantage of both texts and KRs to estimate the similarity or relatedness of elements: units of language, concepts/classes and instances. They most of the time combine several single SMs (Panchenko & Morozova 2012).
Among the various mixing strategies, we can distinguish two broad types:
- Measures which take advantage of both corpus and KR analysis. This strategy has been used to estimate the specificity of concepts or terms structured in a (taxonomic) graph. As an example, (Resnik 1995) proposed to estimate the amount of information carried by a concept as the inverse of the probability that the concept occurs in texts.
- Measures which mix text analysis and structure-based measures. The extended gloss overlap measure introduced by (Banerjee & Pedersen 2002) and the two measures based on context vectors proposed by (Patwardhan 2003) are good examples; interested readers may also consider (Patwardhan & Pedersen 2006).
Several studies have demonstrated the gain of performance obtained by mixing knowledge-based and distributional approaches (Panchenko & Morozova 2012). Note also that several aggregation strategies have been proposed to compare groups of concepts; please refer to section 5.6.2.2 for more information on aggregation functions.
Computation and Evaluation of Semantic Measures
This section introduces information relative to the comparison and the evaluation of SMs. Software solutions which can be used for the computation and analysis of SMs are first presented. In a second step, protocols, methodologies and benchmarks commonly used to assess and compare performances and accuracies of measures are presented.
Software Solutions for the Computation of Semantic Measures
This subsection presents the main software solutions dedicated to SM computation which are available to date (2013); new software solutions or more recent versions of those presented herein may be available. Please also notice a potential conflict of interest, as the authors are involved in the Semantic Measures Library project (Harispe, Ranwez, et al. 2013b), which is related to the development of a software solution dedicated to SMs. Tools are presented in alphabetical order.
Software Solutions Dedicated to Distributional Measures
List of existing software solutions for the computation of distributional measures:
- Disco: Java library dedicated to the computation of the semantic similarity between arbitrary words. A command-line tool is also available. Reference: (Kolb 2008). License: Apache 2.0. Last version: 2013. Website: http://www.linguatools.de/disco/
- Semilar: a toolkit and software environment dedicated to distributional SMs. It can be used to estimate the similarity between texts. It implements a simple lexical overlap method for the comparison of texts, as well as word-to-word based measures. More sophisticated methods based on LSA/LDA are also provided. Semilar is available as an API and as a Java application with a graphical user interface. It also provides a framework for the systematic comparison of various SMs. License: unknown. Last version: 2013. Website: http://www.cs.memphis.edu/~vrus/SEMILAR/
- SemSim: Java library dedicated to the computation of semantic similarity between words. Last version: 2013. Website: http://www.marekrei.com/projects/semsim/
- Other tools: semantic similarity of sentences: http://sourceforge.net/projects/semantics/
Software Solutions Dedicated to Knowledge-based Measures
This subsection presents a list of existing software solutions for the computation of knowledge-based measures.
Generic software solutions
Generic software solutions can be used to compare concepts/classes, groups of concepts or instances according to their descriptions in KRs. These software solutions can be used with a large variety of KRs (i.e., thesauri, ontologies, and semantic graphs). The proposed solutions are listed alphabetically.
OWL Sim: Java library dedicated to the comparison of classes defined in OWL. Reference: (Washington et al. 2009)
Gene Ontology
List of the software solutions which can be used to compute semantic similarity/relatedness between Gene Ontology (GO) terms and gene products annotated by GO terms.
Generic software solutions: The Semantic Measures Library and the Similarity Library support SM computation using the GO. They can be used to compare GO terms and gene product annotations. In addition, the SML-Toolkit, a command-line toolkit associated to the Semantic Measures Library, can also be used by non-developers to compute SMs using the GO. See above for more information.
MeSH
List of software solutions which can be used to compute semantic similarity/relatedness between MeSH descriptors.
Generic software solutions: The Semantic Measures Library and the Similarity Library support SM computation using the MeSH. The Semantic Measures Library also supports the comparison of groups of MeSH descriptors and can be used by non-developers through a command-line interface. See above for more information.
Disease Ontology
Generic software solutions: The Semantic Measures Library can be used to compare pairs of (group of) terms defined in the Disease Ontology. See above for more information.
Evaluation of Semantic Measures
Evaluation protocols and benchmarks are essential to discuss the benefits and drawbacks of existing or newly proposed SMs. They are of major importance to objectively evaluate new contributions and to guide SMs' users in the selection of the measures best suited to their needs. Nevertheless, despite the large literature related to the field, only a few contributions focus on this specific topic.
Generally, any evaluation aims to distinguish the benefits and drawbacks of the compared alternatives according to specific criteria. Such comparisons are most of the time used to rank the goodness of measures regarding the selected criteria. Therefore, before measures can be compared, three important questions deserve to be answered: (1) Which criteria can be used to compare SMs? (2) How can the goodness of a measure be evaluated regarding a specific set of criteria? (3) Which criteria must be considered to evaluate measures for a specific application context?
Criteria for the Evaluation of Measures
Several criteria can be used to evaluate measures. Among them, we distinguish:
- The accuracy and precision of a measure.
- The computational complexity, i.e., algorithmic complexity.
- The mathematical properties of the measure.
- The semantics carried by the measure.
As we will discuss, these criteria can be used to discuss several aspects of measures.
Accuracy and Precision
The accuracy of a measure can only be discussed according to predefined expectations regarding the results it produces. Indeed, as defined in metrology, the science of measurement, the accuracy of a measurement must be understood as the closeness of the measured value of a quantity to the true value of that quantity (BIPM et al. 2008).
The precision of a measure (system of measurement) corresponds to the degree of reproducibility or repeatability of the score produced by the measure under unchanged conditions. Since most SMs are based on deterministic algorithms, i.e., they produce the same result given a specific input, here we focus on the notion of accuracy. We will further discuss the precision of a measure as a mathematical property.
The notion of accuracy of a measure is necessarily tied to a context, e.g., a semantic proxy (specific corpus, KR, etc.) and a tuning of the parameters of the measure (if any). Indeed, there is no guarantee that a measure which has been proved accurate in a specific context will be accurate in all contexts. As we will see, SMs' accuracy is therefore evaluated according to expected results.
Computational Complexity
The computational complexity or algorithmic complexity of SMs is of major importance in most applications. Considering the growing volumes of datasets processed in semantic analysis (large corpus of texts and KRs), the algorithmic complexity of measures plays an important role for the adoption of SMs.
Given two measures of equivalent accuracy in a specific context, most SMs' users will prefer the less expensive one, and many will even make concessions on accuracy for a significant reduction of computational time. However, the literature relative to SMs is very limited on this subject. It is therefore difficult to discuss the algorithmic implications of current proposals, which hampers the non-empirical evaluation of measures and burdens the selection of measures.
It is however difficult to blame SMs' designers for not providing detailed algorithmic analyses of their proposals. Indeed, computational complexity analyses of measures are both technical and difficult to make. In addition, most of the time, they depend on the specific type of data structure used to represent the semantic proxy taken into account by the measures, which sometimes creates a gap between theoretical possibilities and practical implementations.

Despite its major importance, the evaluation of SMs regarding their computational complexity therefore remains difficult today.
Mathematical Properties
Several properties of interest of measures have been distinguished in section 2.3, e.g., symmetry, identity of the indiscernibles, precision (for non-deterministic measures), and normalization. These mathematical properties are of particular importance for the selection of SMs. They are for instance essential to apply specific optimization techniques. They also play an important role to better understand the semantics carried by the measures, i.e., the meaning carried by the results produced by the measures.
Mathematical properties are central for the comparison of measures since they are, most of the time, required to ensure the coherency of treatments relying on SMs; this is for instance the case when inferences have to be made based on the scores produced by the measures. As an example, the implications of violating the identity of the indiscernibles can be strong: it can be conceptually disturbing that comparing a class to itself produces non-maximal or even low similarity scores; it is however the case using some measures in specific contexts i .
As we will see, mathematical properties analyses are required to deeply understand measures and therefore evaluate their relevance for domain-specific application.
i As an example using Resnik's measure based on the notion of information content of concepts (introduced in section 5.5.3), the similarity of a general concept (near to the root) to itself will be low.
Semantics
The meaning (semantics) of SMs' results deserves to be understood by end-users of measures. This semantics is defined by the assumptions on which the algorithmic design of the measures relies. Some of these assumptions can be understood through the mathematical properties of the measures.

The semantics of a measure is also defined by the cognitive model on which the measure relies, the semantic proxy in use and the semantic evidences analysed. As we have seen in section 2.1.3, the semantic evidences taken into account by the measure define its type and its general semantics (e.g., the measure evaluates semantic similarity, relatedness...).

It is difficult to compare measures regarding the semantics they carry. It is however essential for SMs users to understand that measure selection may, in some cases, strongly impact the conclusions which can be supported by the measurement (e.g., semantic similarity, relatedness, etc.).
Existing Protocols and Benchmarks for Accuracy Evaluation
Accuracy of SMs is considered the de-facto criterion to evaluate the performance of measures. An SM's accuracy can be evaluated using a direct or an indirect approach. In most cases, measures are evaluated i using a direct approach, i.e., based on expected scores of measurement (e.g., similarity, relatedness) for pairs of elements. In all cases, evaluation of SMs' accuracy is performed regarding specific expectations/assumptions: Direct evaluation: based on the correlation of SMs with expected scores or other metrics.
Measures are for instance evaluated regarding their capacity to mimic human ratings of semantic similarity/relatedness. In this case, the accuracy of measures is discussed based on their correlations with gold standard benchmarks composed of pairs of terms/concepts associated to expected ratings. For domain-specific studies, a set of experts is used to assess the expected scores which will compose the benchmark (e.g., physicians in biomedical studies). Measures can also be evaluated regarding their capacity to produce scores highly correlated with metrics which summarize our knowledge of the compared elements. As an example, in bioinformatics, the evaluation of measures designed to compare gene products according to their conceptual annotations is sometimes supported by studying their correlation with other measures aiming to compare genes (e.g., sequence similarity).
Indirect evaluation: the evaluation of the measures is based on the analysis of the performance of applications or algorithms which rely on SMs. The treatment considered is domain dependent, e.g., the accuracy of term disambiguation techniques, or the performance of a classifier or clustering relying on an SM, to mention a few.
Thereafter, we present the benchmarks which can be used to compare SMs according to human ratings of similarity/relatedness. We next introduce other approaches which have been used to evaluate measures in specific domains.
i Please, in this subsection, understand evaluation as evaluation of accuracy.
Evaluation of measures based on human ratings
Benchmarks based on human ratings are composed of pairs of elements for which humans have been asked to assign scores of similarity or relatedness. Existing benchmarks are mainly composed of pairs of terms. They have been built using a set of human subjects (experts). The distinction between the notions of similarity and relatedness has generally been introduced to the subjects, who were trained prior to the experiment.
The measures are most of the time evaluated regarding their correlation (Pearson or Spearman) with the averaged scores. In some cases, cleaning techniques are used to exclude abnormal ratings. In some cases, word-based benchmarks are conceptualized in order to evaluate knowledge-based approaches i , i.e., the terms are manually mapped to concepts/synsets.
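To make the protocol concrete, the following minimal Python sketch evaluates a placeholder measure against a toy benchmark; the pairs, ratings and the sim function are illustrative assumptions of the example, not an actual published benchmark or measure.

    from scipy.stats import pearsonr, spearmanr

    # Toy benchmark: (term_1, term_2, averaged human rating on a 0-4 scale).
    # Illustrative values only.
    benchmark = [
        ("car", "automobile", 3.9),
        ("bird", "crane", 3.0),
        ("coast", "shore", 3.6),
        ("glass", "magician", 0.4),
    ]

    def sim(a, b):
        # Placeholder measure (character overlap); replace with the SM
        # under evaluation.
        return len(set(a) & set(b)) / len(set(a) | set(b))

    human = [rating for _, _, rating in benchmark]
    computed = [sim(a, b) for a, b, _ in benchmark]

    # Accuracy is typically reported as the correlation with human ratings.
    print("Pearson :", pearsonr(human, computed)[0])
    print("Spearman:", spearmanr(human, computed)[0])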
We distinguish the general benchmarks, dealing with common words, and domain-specific benchmarks involving a specific and technical terminology.
(Rubenstein & Goodenough 1965) - semantic similarity - n=65
This benchmark is composed of 65 pairs of nouns (ordinary English words), e.g. glass/magician. Subjects were paid college undergraduates (n=51). They were asked to evaluate the (semantic) similarity using a 0-4 scale. The notion of semantic similarity was defined as the 'amount of similarity of meaning' in the experiment.
The study focused on synonymy evaluation. The intra-subject reliability (n=15) on 36 pairs was r=0.85 using Pearson's correlation. The inter-subject correlation is not communicated, but the correlation between the mean judgments of two different groups was impressively high (r=0.99).
Domain specific Benchmarks
Biomedical domain:
- ( ) - semantic relatedness: a set of 101 medical concept pairs rated for semantic relatedness. A subset of this set, composed of 29 pairs with higher inter-agreement, is generally considered.
- ( ) - semantic similarity and relatedness: two sets of UMLS concept pairs. The first set contains 566 pairs of concepts and is dedicated to semantic similarity. The second set is composed of 587 pairs of concepts rated for semantic relatedness.
Other benchmarks
The Semantic Textual Similarity campaign also proposes benchmarks for the comparison of texts (see http://ixa2.si.ehu.es/sts). Other datasets can be adapted to evaluate semantic similarity or relatedness, e.g., benchmarks used to evaluate distributional semantic models (Baroni & Lenci 2011).
Semantic Measures Based on Graph Analysis
This section is dedicated to a detailed and technical introduction to knowledge-based SMs which rely on graph analysis. These measures have been briefly presented in section 3.4.1; they are also denoted graph-based SMs, or measures framed in the relational setting, in the literature (D'Amato 2007). They differ from knowledge-based measures taking advantage of logic-based semantic analysis in that they are designed to take advantage of KRs which do not rely on semantics based on logical constructors and formal grammars.
For convenience, SMs based on graph analysis will be denoted SMs in this section. They can be used to compare classes and instances and by extension groups of classes and instances. This section is structured as follows.
The first part discusses the importance of graph-based SMs and explains why they have gained a lot of attention in the last decades.
The second part extends the introduction to KRs presented in section 2.4 and provides a formal definition of a semantic graph. The notations which will be considered to discuss technical aspects of the measures are also given (e.g., graph notations). In this part, we also discuss the construction of a semantic graph from other KRs. Specific treatments of semantic graphs which are required to ensure the coherency of (some) measures are also introduced.

The third part discusses the important notion of semantic evidence and presents several metrics which can be used to extract semantics from a semantic graph. The pivotal notions of class specificity and strength of connotation, as well as the strategies which have been proposed for their estimation, are presented.

Part 4 presents the close link between the semantics associated to a measure (e.g., relatedness, similarity) and the information of the graph which is taken into account by the measure. Several approaches for the design of SMs are next introduced.

Parts 5 to 8 are dedicated to the presentation of graph-based SMs which have been designed for the comparison of pairs of classes:
- Part 5: estimation of the semantic similarity of a pair of concepts.
- Part 6: estimation of the semantic similarity of groups of concepts.
- Part 7: unification of the measures defined for the estimation of the semantic similarity of (sets of) classes.
- Part 8: estimation of the semantic relatedness of two classes.
Finally, part 9 is dedicated to SMs which have been proposed for the estimation of the semantic relatedness of two instances.
Importance of Graph-based Semantic Measures
As we have seen, two main families of SMs can be distinguished: distributional measures, which take advantage of unstructured or semi-structured texts, and knowledge-based measures which rely on KRs.
Distributional measures are essential to compare units of language such as words, or even concepts, when no formal expression of knowledge is available to drive the comparison. As we stressed, these measures rely on algorithms governed by assumptions to capture the implicit semantics of the elements they compare (i.e., mainly the distributional hypothesis). On the contrary, knowledge-based SMs rely on formal expressions of knowledge explicitly defining how the compared elements must be understood. Thus, they are not constrained to the comparison of units of language and can be used to drive the comparison of any piece of knowledge formally describing a large diversity of elements, e.g., concepts, genes, persons, music bands, etc.
We have underlined the main limitation of knowledge-based measures: their strong dependence on the availability of a KR, an expression of knowledge which can be difficult to obtain and may therefore not be available for all domains of study. However, in the last decades, we have observed, both in numerous scientific communities and industrial fields, the growing adoption of knowledge-enhanced approaches based on KRs. As an example, the Open Biological and Biomedical Ontology (OBO) foundry gives access to hundreds of KRs related to biology and biomedicine. Therefore, thanks to the large efforts made to standardize the technology stack which can be used to define and take advantage of KRs (e.g., RDF(S), OWL, SPARQL, triple store implementations) and thanks to the increasing adoption of the Linked Data and Semantic Web paradigms, a large number of experts and initiatives give access to KRs in numerous domains (e.g., biology, geography, cooking, sports).
Even large corporations adopt KRs to support their large-scale worldwide systems. The most significant example of the recent years is the adoption of the Knowledge Graph by Google, a graph built from a large collection of billions of non-ambiguous subject-predicate-object statements used to formally describe general or domain-specific pieces of knowledge. This KR is used to enhance their search engine capabilities and millions of users benefit from it daily. Several examples of such large KRs are today available, some of them for free: DBpedia, Freebase i , Wikidata, Yago.
Another significant sign of the increasing adoption of KRs is the joint effort made by the major search engine companies, e.g., Bing (Microsoft), Google, Yahoo! and Yandex ii , to design Schema.org, a set of structured schemas defining a vocabulary which can be used to characterize the content of web pages in an unambiguous manner.
An interesting aspect of the last years is also the growing adoption of graph databases (e.g., Neo4J, OrientDB, Titan). These databases rely on a graph structure to describe information in a NoSQL fashion. They actively contribute to the growing adoption of the graph property model to describe information (Robinson et al. 2013).
In this context, a lot of attention has been given to KRs which, in numerous cases, merely correspond to semantic graphs: characterized elements, concepts, classes, instances and relationships are defined in an unambiguous manner, and the KR relies only on simple semantic expressions which do not take complex logical constructs into account. Such semantic graphs have the interesting property of being easily expressed and maintained while ensuring a good ratio between semantic expressivity and effectiveness, for instance in terms of the computational complexity of the treatments which rely on them. This justifies the large number of contributions related to the design of SMs dedicated to semantic graphs, the diversity of measures to which this section is dedicated.

i Patent by the company MetaWeb, which was next acquired by Google (prior to the release of the Knowledge Graph).
ii Popular in Russia.
From Knowledge Representations to Semantic Graphs
For the sake of clarity, let us recall that we refer to any machine-understandable expression of knowledge through the generic term knowledge representation (KR); the language used to express this representation is called a KR language (e.g., RDF, OWL, OBO). Graph-based SMs can be used to take advantage of any KR which corresponds to, or can be embedded into, a semantic graph.

In the literature, most knowledge-based SMs have been defined for a specific KR (language), e.g., WordNet, the Gene Ontology, Linked Data represented as RDF graphs, or ontologies taking advantage of specific description logics. Since the aim of this section is to introduce the diversity of graph-based SMs, we must consider a generic formalism which can be used to refer to the expression of several KRs (e.g., RDFS graphs, OWL, simple taxonomies). Some of these KRs are not fully representable through a semantic graph i ; we will therefore detail the general process explaining how complex models are generally reduced to semantic graphs.
Formal Definitions
We first extend the formal model of a KR introduced in section 2.4 and we introduce several notations relative to semantic graphs.
A Formal Model of a Knowledge Representation
A KR can be formally defined by $K = \langle C, P, I, V, D \rangle$, with:
- $C$ the set of classes,
- $P$ the set of predicates,
- $I$ the set of instances,
- $V$ the set of data values,
- $D$ the set of data types.

The sets of classes ($C$), predicates ($P$), instances ($I$), values ($V$) and data types ($D$) are expected to be mutually disjoint, e.g., $C \cap I = \emptyset$. Models enabling the use of data types to characterize specific attributes (values) of classes and instances can also be defined. These attributes can also be represented as semantic relationships (e.g., Figure 4 page 28).

We consider that each instance is a member of at least one class and that all classes are expected to be connected, e.g., subsumed by a general class denoted the root. The graph containing both the instances and the classes is therefore connected.

The membership of an instance i to a class X is asserted by a triplet (i, isA, X). Depending on the language used to express the KR, the isA relationship may change, e.g., RDF(S) uses rdf:type.

We denote $I(X)$ the set of instances which are members of the class $X$ considering the transitivity of subClassOf, with:

$I(X) = \{ i \in I \mid (i, isA, Y) \wedge Y \preceq X \}$

We also define $I^{-}(X)$ as the set of instances which are directly associated to a class, i.e., it evaluates class membership without considering the transitivity of subClassOf. $I^{-}(X)$ corresponds to the set of instances which are directly linked to a class by an isA relationship (considering that redundant relationships have been removed):

$I^{-}(X) = \{ i \in I \mid (i, isA, X) \}$

As an example, given a set of membership statements, $I(X)$ extends $I^{-}(X)$ with the instances of all the subclasses of $X$. Notice that, in some cases, annotated instances are (indirectly) characterized by classes without being members of them. For example, books can be annotated with specific topics corresponding to classes:

    book_1 hasTitle "On the Origin of Species" .
    book_1 hasAuthor charles_Darwin .
    book_1 hasTopic EvolutionaryBiologyTopic .
    EvolutionaryBiologyTopic subClassOf BiologyTopic .

In this example, it cannot be considered that the instance book_1 is an instance of the class EvolutionaryBiologyTopic. However, in numerous cases, artificial instances of classes will be considered by SMs, e.g., the set of instances characterizing EvolutionaryBiologyTopic will contain book_1. The relevance of such a characterization function $I^{*}$ is to be in accordance with the partial ordering of the classes, i.e., $X \preceq Y \Rightarrow I^{*}(X) \subseteq I^{*}(Y)$.

Notice also that, in other specific cases, such a function can be used to characterize instances. Indeed, under specific conditions, SMs dedicated to the comparison of classes will be used to compare instances structured in a partially ordered set. As an example, let us consider that the topics used to annotate the books have been defined as instances of a class Topic and have next been structured in a poset through relationships of predicate subTopicOf. Formally, we can define the mapping by:

$I^{*}(t) = \{ i \in I \mid \{ i \;\; hasTopic.subTopicOf^{*} \; t \} \neq \emptyset \}$
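As a small illustration of these definitions, the following Python sketch computes $I^{-}(X)$ and $I(X)$ over a hypothetical toy KR; single inheritance is assumed for brevity, and all names are assumptions of the example.

    # Toy KR (hypothetical data): explicit isA and subClassOf statements.
    is_a = {"rex": "Dog", "felix": "Cat"}
    sub_class_of = {"Dog": "Mammal", "Cat": "Mammal", "Mammal": "Animal"}

    def ancestors_or_self(c):
        # Classes subsuming c (reflexive transitive closure of subClassOf).
        out = {c}
        while c in sub_class_of:
            c = sub_class_of[c]
            out.add(c)
        return out

    def I_direct(X):
        # I-(X): instances directly linked to X by an isA relationship.
        return {i for i, c in is_a.items() if c == X}

    def I(X):
        # I(X): instances of X considering the transitivity of subClassOf.
        return {i for i, c in is_a.items() if X in ancestors_or_self(c)}

    print(I_direct("Mammal"))  # set() -- no direct members
    print(I("Mammal"))         # {'rex', 'felix'}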
A KR with no axiomatic definition is simply a graph with specific types of nodes. Therefore, KRs which rely on simple axiomatic definitions of the properties of the predicates can be mapped to semantic graphs without loss of semantics. Some of these KRs are sometimes denoted lightweight ontologies in the literature. On the contrary, KRs based on complex axioms and constraints (generally called heavyweight ontologies) can only be partially represented by semantic graphs; the mapping to a semantic graph implies a reduction and a loss of the knowledge initially expressed in the KR (Corcho 2003).
Semantic Graphs, Relationships and Paths
We further introduce the notations used to refer to particular constitutive elements of a semantic graph.
Relationships/ Statements /Triplets
The relationships of a semantic graph are distinguished according to their predicate and to the pair of elements they link. The triplet $(u, t, v)$ corresponds to the unique relationship of type $t$ which links the elements $u$ and $v$. In the triplet $(u, t, v)$, $u$ is named the subject, $t$ the predicate and $v$ the object. Relationships are central elements of semantic graphs and will be used to define algorithms and to characterize paths in the graph.

Since the relationships are oriented, we denote $t^{-}$ the type of relationship carrying the inverse semantics of $t$. We therefore consider that any relationship $(u, t, v)$ implicitly implies $(v, t^{-}, u)$, even if the type of relationship $t^{-}$ and the relationship $(v, t^{-}, u)$ are not explicitly defined in the graph. As an example, the relationship (Human, subClassOf, Mammal) implies the inverse relationship (Mammal, superClassOf, Human), considering that $superClassOf = subClassOf^{-}$. The notion of inverse type will be considered to discuss detailed paths. In some KR languages, inverse types are explicitly defined by specific constructs, e.g., owl:inverseOf.
Graph traversals
Graph traversals are often represented through paths in a graph, i.e., sequences of relationships linking two nodes. To express such graph paths, we adopt the following notations i .

A path linking the elements $u$ and $v$ is denoted $\{u \; [t_1, t_2, \ldots, t_n] \; v\}$, with $t_1, \ldots, t_n$ the predicates of the successive relationships composing the path. To lighten the formalism, if a single predicate $t$ is used, the path is denoted $\{u \; [t] \; v\}$.

Path Pattern: We denote $\pi$ a path pattern, which corresponds to a list of predicates ii . Therefore, any path whose sequence of relationships matches $\pi$ is an instance of the specific path pattern $\pi$.

We extend the use of the path pattern notation to express concise expressions of paths: $t^{*}$ corresponds to the set of paths of any length composed only of relationships having $t$ for predicate; $\{t_1 | t_2\}^{*}$ corresponds to the set of paths of any length composed of relationships associated to the predicate $t_1$ or $t_2$. As an example, $\{u \; subClassOf^{*} \; v\}$ refers to all paths linking the concepts $u$ and $v$ which are only composed of subClassOf relationships (and do not contain relationships of any other type).

We also mix the notations to characterize sets of paths between specific elements. As an example, $\{u \; t.subClassOf^{*} \; v\}$ represents the set of paths linking the elements u and v which start with a relationship of predicate t and finish with a (possibly empty) path of subClassOf relationships. As an example, the class membership function $I(X)$, which has been introduced above to characterize the instances of a specific class, can formally be redefined by:

$I(X) = \{ i \in I \mid \{ i \; isA.subClassOf^{*} \; X \} \neq \emptyset \}$

Since the set of paths $\{u \; [t] \; v\}$ corresponds to a singleton, the relationship $(u, t, v)$, or to an empty set if the relationship $(u, t, v)$ doesn't exist in the graph, we consider that $\{u \; [t] \; v\}$ can be shortened to $\{u \; t \; v\}$.
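These path expressions can be checked mechanically. The sketch below, with hypothetical toy triplets, tests whether some path matching the pattern t.subClassOf* links two elements, using a breadth-first traversal.

    from collections import deque

    # Toy semantic graph as a set of (subject, predicate, object) triplets.
    triplets = {
        ("book_1", "hasTopic", "EvolutionaryBiologyTopic"),
        ("EvolutionaryBiologyTopic", "subClassOf", "BiologyTopic"),
        ("BiologyTopic", "subClassOf", "ScienceTopic"),
    }

    def out_edges(u, predicate):
        return [o for s, p, o in triplets if s == u and p == predicate]

    def matches(u, t, v):
        # True iff a path of pattern t.subClassOf* links u to v: the first
        # hop must use predicate t; the remainder (possibly empty) must
        # only use subClassOf relationships.
        queue = deque(out_edges(u, t))
        seen = set()
        while queue:
            node = queue.popleft()
            if node == v:
                return True
            if node in seen:
                continue
            seen.add(node)
            queue.extend(out_edges(node, "subClassOf"))
        return False

    print(matches("book_1", "hasTopic", "ScienceTopic"))  # True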
Notations for Ordered Sets and Taxonomies
A strict partial order is a binary relation over a set which is irreflexive, asymmetric and transitive. A non-strict partial order is a binary relation over a set which is reflexive, antisymmetric and transitive. A set with a partial order is named a partially ordered set (poset).

The taxonomy is the non-strict partial order $\preceq$ defined by the taxonomical relationship over the set of classes $C$. Below, we introduce the notations used to characterize a taxonomy $G_T$, as well as its classes. Some of the notations have already been introduced and are repeated for clarity.

$C(G_T)$, shortened $C$, refers to the set of classes defined in $G_T$.

$E_T(G_T)$, shortened $E_T$, refers to the set of relationships defined in $G_T$ which link two classes, i.e., the set of taxonomical relationships named in the general introduction of a semantic graph.

A class $X$ subsumes another class $Y$ if $Y \preceq X$. We also say that the class $Y$ is subsumed by $X$.

$A(X)$ is the set of classes which subsume $X$, also named the ancestors of $X$ or its superclasses, i.e., $A(X) = \{ Y \in C \mid X \preceq Y \}$. We also denote $A^{+}(X) = A(X) \setminus \{X\}$, the exclusive set of ancestors of $X$.

$D(X)$ is the set of classes which are subsumed by $X$, also named the descendants of $X$, or its subclasses, i.e., $D(X) = \{ Y \in C \mid Y \preceq X \}$. We also denote $D^{+}(X) = D(X) \setminus \{X\}$, the exclusive set of descendants of $X$.

$leaves(G_T)$, shortened $L$, is the set of classes $\{ X \in C \mid D^{+}(X) = \emptyset \}$. We call the root the unique class, if any, which subsumes all classes, i.e., $\forall X \in C, X \preceq root$.

$G_T(X)$ is the graph composed of $A(X)$ and the set of relationships linking two classes in $A(X)$. Note that, despite the fact that we here attach the taxonomy of classes and a specific semantics to these notations, they can be used to characterize any partially ordered set.

A taxonomical tree is a special case of $G_T$ in which every class, except the root, has a single direct superclass, i.e., $\forall X \in C \setminus \{root\}: |\{ Y \mid (X, subClassOf, Y) \in E_T \}| = 1$.
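A minimal sketch of the $A(X)$ and $D(X)$ notations over a hypothetical toy taxonomy with multi-inheritance; subClassOf edges are stored as a map from a class to its direct superclasses, and all class names are assumptions of the example.

    # Toy taxonomy with multi-inheritance: class -> set of direct superclasses.
    supers = {
        "SiberianTiger": {"Tiger"},
        "Tiger": {"Felid", "Predator"},
        "Felid": {"Animal"},
        "Predator": {"Animal"},
        "Animal": set(),
    }

    def A(X):
        # Ancestors of X (inclusive): all classes Y such that X <= Y.
        out = {X}
        for parent in supers.get(X, set()):
            out |= A(parent)
        return out

    def D(X):
        # Descendants of X (inclusive): all classes Y such that Y <= X.
        return {Y for Y in supers if X in A(Y)}

    print(sorted(A("Tiger")))                # ['Animal', 'Felid', 'Predator', 'Tiger']
    print(sorted(D("Animal") - {"Animal"}))  # exclusive descendants D+(Animal)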
Building a Semantic Graph from a Knowledge Representation
This section discusses the reduction of a KR to a semantic graph.
The main steps
A generic process can be used to model the main steps which are applied to obtain a semantic graph from any KR.

Knowledge modelling: steps 1 and 2 represent the modelling of a piece of knowledge into a machine-understandable and computational representation. Step 2 defines the expression of the KR in a specific language, e.g., OWL, RDF(S). The language which is used conditions the expressivity of the available constructs.
Knowledge inference:
Step 3 represents the optional use of a reasoner to infer knowledge implicitly defined in the KR. As an example, in a KR expressed in RDF(S), this step corresponds to the entailment of the RDF graph according to the semantics defined by RDFS, i.e., the use of a reasoner to infer triplets according to RDFS entailment rules.
Mapping to a graph representation:
Step 4 is of major importance. It corresponds to the mapping of the knowledge base to the corresponding graph representation which can be processed by certain measures. In some cases, this step is implicit since the knowledge is already expressed through a graph, e.g., taxonomies, WordNet lexical database. Depending on the language used to express the KR, this phase may imply a loss of knowledge and must therefore be carefully considered.
Graph reduction / cleaning: Step 5 corresponds to the removal of some relationships or classes defined in the graph. It may be required to ensure the coherency of SMs.
The phase of knowledge modelling (steps 1 and 2) will not be further discussed in this section, and step 3, knowledge inference, is only briefly addressed below. We mainly focus on the mapping phase used to create a semantic graph from a KR.
Knowledge Inferences
As we have seen, implicit knowledge can be associated to a KR. Such knowledge may, in some cases, be included in the semantic graph built from the reduction of the KR. As an example, the definition of the domain and range of a specific predicate is important information which can be used to classify instances. To ensure that implicit knowledge is inferred according to the semantics associated to the model, inference engines (reasoners) are expected to be used prior to the construction of the resulting semantic graph, e.g., to apply the RDFS entailment rules on an RDF graph.
Mapping To a Graph Representation
In some cases, the mapping of a KR into a semantic graph is straightforward, considering that all the statements are materialized in the representation. As we have seen, in other cases, inferences must be used to generate implicit statements. Nevertheless, KRs defined using specific and expressive languages imply certain considerations to be taken into account. The aim of this section is obviously not to define all the rules which must be used to build a semantic graph from every language enabling the definition of KRs, e.g., RDF, OWL. We refer the reader to appendix 4, which discusses some mapping techniques which can be used to extract a semantic graph from language-specific expressions of KRs, e.g., RDF, OWL.
It is therefore important to distinguish between the expression of a semantic graph in a particular language, which can be built on a graph-based formalism such as RDF, and the semantic graph which will be processed by SMs. As an example, an OWL KR can be expressed (serialized) using an RDF graph as an exchange format. However, since graph syntaxes based on triplets are limiting, many constructs used in OWL are encoded into sets of triplets (Horrocks & Patel-Schneider 2003). It is also important to understand that KRs defined using expressive languages, such as description logics, may therefore only partially be modelled in the graph structure expected by most SMs.
Semantic Graph Reductions
We denote $G(K)$, shortened $G$ if there is no ambiguity, the reduction of the KR $K$ to a semantic graph. In addition, we denote $G_t(K)$, also shortened $G_t$ if there is no ambiguity, the reduction of $K$ as a semantic graph only considering the relationships having $t$ as predicate. A common reduction of a KR as a graph is $G_{subClassOf}$, shortened $G_T$ and named the taxonomical reduction.

$G_T$ corresponds to the taxonomy and therefore only contains classes. As we will see, this reduction is widely used to compute the similarity between classes.

Graph reductions can naturally be more complex. The graph $G_{\{subClassOf, isA\}}(K)$ refers to the reduction which is composed of the relationships having subClassOf or isA as predicate. We denote such a graph $G_{TI}$ (T stands for the Taxonomical and I for the isA relationship).
Studies relying on semantic graphs can be conducted taking the full semantic graph into account or focusing on a particular subgraph. Depending on the amount of information considered, some properties of the graph may change (e.g., acyclicity), along with the strategies and algorithmic treatments used for their processing. Since most SMs require the graph to fulfil specific properties, we briefly discuss the link between graph topologies and SMs.
Considering all types of semantic relationships, a semantic graph generally forms a connected directed graph which can contain cycles, i.e., paths from a node to itself. The taxonomical reduction $G_T$ also leads to a graph (and not necessarily a tree) given that a class can inherit from multiple classes. Nevertheless, due to the partial ordering defined by the taxonomical relationship, $G_T$ is expected to be acyclic. Taxonomic reductions composed of a unique class which subsumes the others form a Rooted Directed Acyclic Graph (RDAG).
RDAG properties enable efficient graph treatments to be performed, and numerous SMs take advantage of them. The graph $G_{TI}$ is also a RDAG. Figure 8 presents some of the reductions of a semantic graph which are usually performed prior to applying SM treatments. This example is based on the reduction of the Gene Ontology (GO) in order to extract the taxonomical knowledge related to a specific aspect of the GO. Such a reduction is generally performed before comparing pairs of classes. The figure shows the GO, which is composed of three subparts (sub-graphs): Molecular Function (MF), Biological Processes (BP), and Cellular Component (CC). The GO originally forms a cyclic graph composed of classes linked by various semantic relationships. The first reduction shows the isolation of the MF subgraph: only the classes composing the MF subpart and the relationships involving a pair of MF classes are considered. The resulting graph can be cyclic. The final reduction only contains MF classes linked by taxonomical relationships, which corresponds to a RDAG.
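A sketch of such a reduction: filtering a mixed-predicate set of triplets down to the taxonomical reduction $G_T$, or to $G_{TI}$ when isA relationships are kept (hypothetical toy data).

    # Toy semantic graph: (subject, predicate, object) triplets mixing
    # taxonomical and non-taxonomical relationships (illustrative data).
    triplets = [
        ("Dog", "subClassOf", "Mammal"),
        ("Mammal", "subClassOf", "Animal"),
        ("rex", "isA", "Dog"),
        ("rex", "friendOf", "felix"),
    ]

    def reduction(triplets, predicates):
        # G_t(K): keep only the relationships whose predicate is allowed.
        return [t for t in triplets if t[1] in predicates]

    G_T = reduction(triplets, {"subClassOf"})           # taxonomical reduction
    G_TI = reduction(triplets, {"subClassOf", "isA"})   # adds class membership
    print(G_T)
    print(len(G_TI))  # 3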
Semantic Graph Cleaning
The accuracy of treatments relying on SMs, and the semantics of their results, highly depend on the semantic graph which is processed: the better the semantic graph, the more accurate the SMs' results will be. In this context, the quality of a semantic graph relies both on the choices made to model the domain and on the way this knowledge is defined. During the definition of a semantic graph, relationship redundancies can be introduced. Such redundancies can impact SMs' results and thus have to be removed.
Relationship redundancies appear when a direct semantic relationship between two elements can be inferred (explained) by an indirect one i . Redundancies involve transitive relationships. As an example, since the taxonomic relationship is transitive, if the semantic graph defines (Human, subClassOf, Mammal) and (Mammal, subClassOf, Animal), a semantic reasoner can infer (Human, subClassOf, Animal). In this case, a redundancy occurs when an explicit (non-inferred) relationship (Human, subClassOf, Animal) is also defined. Figure 9 shows examples of such redundancies found in the GO.
Formally, a simple case of redundancy can be formalized by: the relationship $(X, t, Z)$ is redundant if $t$ is transitive and relationships $(X, t, Y)$ and $(Y, t, Z)$ are also defined in the graph. Relationship redundancies can negatively impact results produced by SMs. As an example, in Figure 9, because of these redundancies, a naive SM relying on the edge counting strategy will underestimate the distance between two classes. In this case, the semantic distance between two classes is defined as a function of the length of the shortest path linking them; e.g., the distance between GO:2000731 and GO:0031327 will be set to 1 instead of 4.
i For those familiar with RDF(S), notice that the domain and the range (co-domain) of a predicate, if represented as a relationship, cannot induce redundancies; e.g., the triplet (is a parent of, rdfs:domain, Human) doesn't mean that the triplet (Jean, isA, Human) is redundant considering that (Jean, is a parent of, Louise) is specified in the knowledge representation.

A transitive reduction of the graph has to be performed to remove redundant relationships. An adaptation of the algorithm proposed by (Aho et al. 1972) can be used. Moreover, if the whole KR is considered, the transitive reduction has to (i) consider all transitive relationships and (ii) take into account redundant relationships explained by the transitivity of some predicates over other predicates, e.g., isA, hasPart and partOf are transitive over subClassOf.
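To illustrate the cleaning step, here is a naive quadratic sketch of a transitive reduction over toy subClassOf edges; it is not the algorithm of (Aho et al. 1972), only an assumption-laden illustration of the same idea on a small acyclic graph.

    # Toy taxonomy edges (child, parent), including one redundant edge.
    edges = {("Human", "Mammal"), ("Mammal", "Animal"), ("Human", "Animal")}

    def reachable(a, b, edge_set):
        # True iff b can be reached from a using the given edges.
        stack, seen = [a], set()
        while stack:
            n = stack.pop()
            if n == b:
                return True
            if n in seen:
                continue
            seen.add(n)
            stack.extend(p for c, p in edge_set if c == n)
        return False

    def transitive_reduction(edge_set):
        # Remove every edge already explained by an indirect path.
        reduced = set(edge_set)
        for e in sorted(edge_set):
            if reachable(e[0], e[1], reduced - {e}):
                reduced.discard(e)
        return reduced

    print(transitive_reduction(edges))
    # {('Human', 'Mammal'), ('Mammal', 'Animal')} -- redundant edge removed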
A specific type of such redundancy is redundancy of class membership. As an example, knowing that (Mammal, subClassOf, Animal), defining (jean, isA, Mammal) also implies (jean, isA, Animal). The latter statement will therefore be considered redundant if both are defined in the KR.
In some annotation repositories, instances can be annotated by multiple classes/concepts defined in a KR i . However, the process of annotation may vary depending on the application context considered, i.e., conceptual annotations of gene products can be defined manually by curators from multiple evidences (e.g., literature, lab experiments) or inferred by algorithms (Hill et al. 2008).
i Some communities will better understand "Some instances are defined as members of multiple classes".

Nevertheless, usually, SMs expect instances to be characterized by a minimal set of statements. As an example, the transitivity of the taxonomical relationship must be taken into account: it must be considered that an instance is indirectly annotated by (i.e., belongs to) the classes subsuming its assigned classes. This inference rule is called the true path rule in some communities, e.g., Bioinformatics (Camon et al. 2004). Therefore, if an instance is characterized by the class OxygenBinding, it is also characterized by the class Binding, as the latter subsumes the former.
Due to concurrent and automatic annotation processes, redundant annotations are sometimes largely found in annotation files and large KRs. However, since some SMs can be affected by such redundancies, all inferable annotations are generally expected to be removed. This is for example the case when an instance is regarded as a set of concepts/classes. Figure 10 shows an example of redundant annotations which have been found in UniprotKB human gene product annotations i . Coloured classes ii represent the GO annotations of a particular gene product. However, according to the true path rule, the red classes are redundant as they can be inferred from those coloured in blue (bold frame).
The question of statement redundancy generalizes the detection of taxonomical redundancies and can also be solved efficiently using an adaptation of a transitive reduction algorithm. In Figure 10, red classes (normal frame) are redundant as they can be inferred from blue ones (bold frame).

i We found that 45% of the 45014 UniprotKB annotated human genes contain undesired GO annotations (representing 13% of all human GO annotations) (date 06/2012).
ii Here class refers to GO term.
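The removal of inferable annotations can be sketched as follows: an annotation is redundant if it is a strict ancestor of another annotation of the same instance (toy, GO-like classes; all names are assumptions of the example).

    # Toy taxonomy: class -> set of direct superclasses.
    supers = {
        "OxygenBinding": {"Binding"},
        "Binding": {"MolecularFunction"},
        "MolecularFunction": set(),
    }

    def strict_ancestors(c):
        out = set()
        for p in supers.get(c, set()):
            out |= {p} | strict_ancestors(p)
        return out

    def remove_redundant(annotations):
        # Keep only the most specific classes annotating an instance.
        inferable = set()
        for c in annotations:
            inferable |= strict_ancestors(c)
        return annotations - inferable

    gene_annotations = {"OxygenBinding", "Binding", "MolecularFunction"}
    print(remove_redundant(gene_annotations))  # {'OxygenBinding'}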
Evidences of Semantics and their Interpretations
A semantic graph carries explicit semantics, e.g., through the taxonomy defining the partial ordering of the classes. It also contains implicit semantic evidences. We here consider semantic evidences as any information on which interpretations regarding the meaning carried by the KR, or by the elements it defines (classes, instances, relationships), can be based.
Semantic evidences derive from the study of specific factors which can be used to discuss particular properties of the semantic graph or of its elements. Therefore, a semantic evidence, either based on strong assumptions or theoretically justified by the core semantics defined in the representation, relies on a particular interpretation of a specific property of the KR; e.g., the number of classes described in a taxonomy gives a clue on the degree of coverage of the ontology. Figure 11 schematizes the acquisition of semantic evidences which can be obtained when mining a semantic graph. Based on the analysis of specific factors, using particular metrics, some properties of both the semantic graph and the elements it defines can be obtained, e.g., the depth of classes and the depth of the taxonomy. Based on these properties, and sometimes considering particular assumptions, semantic evidences can be obtained. As an example, one can consider that the deeper a class is regarding the depth of the taxonomy, the more expressive the class is expected to be.
As we will see, several properties are used to derive extra semantics from semantic graphs; they are especially important for designing SMs. Knowing (i) which properties can be used, (ii) how they are computed and (iii) the assumptions on which their interpretation relies is essential for both SMs designers and users. Indeed, semantic evidences are the core elements of measures; they have for instance been used to: (i) normalize measures, (ii) estimate the specificity of classes and (iii) weight the relationships defined in the graph, that is to say, estimate the strength of connotation between classes/instances. Most of the properties which are used to obtain semantic evidences correspond to well-known graph properties defined by graph theory. In this section, we only introduce the main properties which are based on the study of the taxonomy $G_T$. We next introduce two applications of these properties: the estimation of the specificity of classes and the estimation of the strength of connotation between classes.
[Figure 11: from the analysis of specific factors of the knowledge representation, using particular metrics, properties are derived; their interpretation, which can be based on multiple properties and several assumptions, yields semantic evidences.]
Semantic Evidences
In this section we mainly focus on taxonomical evidences. Two kinds of semantic evidences can be distinguished: -Intentional evidences also called intrinsic evidences. They are based on the analysis of properties associated to the topology of the graph, and mainly rely on the analysis of the topology of the taxonomy.
-Extensional evidences. They are based on the analysis of both the topology of the graph and distribution of the usage of classes (instance memberships) i , i.e., the number of instances associated to classes. In short, interpretations regarding the semantics of the classes can be driven by the study of their usage.
Among these evidences, we can distinguish those which are related to the properties of the full KR, i.e., based on global properties, from those which are related to specific elements (classes, instances, relationships) and rely on local properties analysis.
Global Properties
Depth of the taxonomy / maximal number of superclasses. The depth of the taxonomy corresponds to the maximal depth of a class in $G_T$. It informs on the degree of expressiveness/granularity of the taxonomy. As an example, the deeper the taxonomy, the more detailed it is expected to be.

The maximal number of superclasses a class has is also used as an estimator of the upper bound of the maximal degree of expressivity of a class. Inversely, the number of classes defined in the taxonomy (i.e., the number of subclasses of the root) can also be used as an upper bound of the maximal degree of generality of a class defined in the taxonomy.
Width of the taxonomy.
The width of the taxonomy corresponds to the length of the longest shortest path which links two classes in $G_T$. It informs on the degree of coverage of the taxonomy: the larger its width, the better the taxonomy is generally assumed to cover a domain.

i Notice that we don't consider semantic evidences based only on the usage of classes, i.e., without taking the taxonomy into account. Indeed, in most cases, to be meaningful, the distribution of the usage of classes must be evaluated considering the transitivity of the taxonomic relationship. If this is not the case, incoherent results could be obtained, e.g., a class containing more instances than one of its superclasses.
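For concreteness, a sketch computing these two global properties on a hypothetical toy taxonomy: the maximal depth, and the width taken as the longest shortest path over the undirected taxonomy.

    from collections import deque
    from itertools import combinations

    # Toy taxonomy: class -> set of direct superclasses (root: 'Thing').
    supers = {"Dog": {"Mammal"}, "Cat": {"Mammal"},
              "Mammal": {"Thing"}, "Rock": {"Thing"}, "Thing": set()}

    def depth(c):
        # Length of the longest subClassOf path from c to the root.
        parents = supers.get(c, set())
        return 0 if not parents else 1 + max(depth(p) for p in parents)

    def shortest_path_len(a, b):
        # BFS over the undirected taxonomy.
        neigh = {c: set(supers[c]) for c in supers}
        for c, ps in supers.items():
            for p in ps:
                neigh[p].add(c)
        queue, seen = deque([(a, 0)]), {a}
        while queue:
            node, d = queue.popleft()
            if node == b:
                return d
            for n in neigh[node] - seen:
                seen.add(n)
                queue.append((n, d + 1))
        return None

    max_depth = max(depth(c) for c in supers)
    width = max(shortest_path_len(a, b) for a, b in combinations(supers, 2))
    print(max_depth, width)  # 2 3  (e.g., the Dog ... Rock path)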
Local Properties
Local density.
It can be considered that relationships in dense parts of a taxonomy represent smaller taxonomical distances. Metrics such as compactness can be used to characterize local density (Botafogo et al. 1992) i . Other metrics such as the (in/out)-branching factor of a class, i.e., the number of neighbours of a given class ii , can also be used (Sussna 1993). It is generally assumed that the larger the number of subclasses of a class, the more general the class is.
Number of super classes of class / depth / number of subsumed classes / number of subsumed leaves / distance to leaves.
The number of superclasses of a class is often considered to be directly proportional to its degree of expressiveness: the more a class is subsumed, the more detailed/restrictive the class is expected to be. The number of superclasses can also be interpreted with regard to the maximal number of superclasses a class of the taxonomy can have. The depth of a class is also expected to be directly proportional to its degree of expressiveness: the greater the depth of a class (relative to the maximal depth), the more detailed/restrictive the class is regarded iii . The depth of a class can also be evaluated according to the depth of the branch in which it is defined.

In a similar fashion, in some cases, the distance of a class to the leaves it subsumes, or the number of leaves it subsumes, will be considered as an estimator of expressiveness: the greater the distance (or number), the less expressive the class is considered.
Global Properties
Distribution of the instances among the classes.
The distribution of the instances among the classes can be used to evaluate the balance of the distribution and to design local correction factors, e.g. to correct the expressiveness of a class.
Local Properties
Number of instances associated to a class.

i (Botafogo et al. 1992) also introduces interesting factors for graph-based analysis; the depth of a node is also introduced.
ii Called the (in/out) degree of a node in graph theory.
iii Note that the depth of a class as an estimator of its degree of expressiveness can be seen as an inverse function of the notion of status introduced by (Harary & Norman 1965) to analyze status phenomena in organizations.

The number of instances of a class is expected to be inversely proportional to its expressiveness: the fewer instances a class has, the more specific the class is expected to be.
These semantic evidences and their interpretations have been used to characterize notions extensively used by SMs. They are indeed used to estimate the specificity of classes as well as the strength of connotations between classes.
Estimation of Class Specificity
Not all classes have the same degree of informativeness or specificity. Indeed, most people will commonly agree that the class Dog is a more specific description of a Living Being than the class Animal.

The notion of specificity can be associated to the concept of salience defined by Tversky to characterize a stimulus according to its "intensity, frequency, familiarity, good form, and informational content". In (Bell et al. 1988) it is also specified that "salience is a joint function of intensity and what Tversky calls diagnosticity, which is related to the variability of a feature in a particular set [universe, collection of instances]". The idea is to capture the amount of information carried by a class, which is expected to be directly proportional to its degree of specificity and inversely proportional to its degree of generality.
The notion of specificity of classes is not completely artificial and can be explained by the roots of the taxonomic organization of knowledge. Indeed, the transitivity of the taxonomical relationship implies that not all classes have the same degree of specificity or detail. The ordering of two classes defines that the class which subsumes the other has to be considered as the more abstract one (less specific). In fact, the taxonomy explicitly defines that if a class $X$ subsumes another class $Y$, all the instances of $Y$ are also instances of $X$:

$Y \preceq X \Rightarrow I(Y) \subseteq I(X)$

This is illustrated by Figure 12, in which we can see that the more a class is subsumed by numerous classes: (A) the number of properties which characterize the class increases, and (B) the number of instances associated to the class decreases. Therefore, another way to compare the specificity of two ordered classes is to study their usage, analysing a collection of instances. Indeed, in a total order of classes i , the comparison of the degree of specificity of two classes can be made regarding their number of instances: the class which contains the highest number of instances will be the less specific (its universe of interpretation is larger). In this case, it is therefore possible to assess the specificity of ordered classes either by studying the topology of their ordering or the set of instances associated to these classes.

Nevertheless, in taxonomies, classes are generally only partially ordered. This means that the evidences used to compare the specificity of two classes without assumption cannot be used anymore, i.e., classes which are not ordered are in some sense not comparable. This aspect is underlined by Figure 13: it is impossible to compare, in an exact manner, the specificity of two non-ordered classes. This is due to the fact that the amount of shared and distinct properties can only be estimated regarding the properties characterizing the common class they derive from. This estimation can only be a lower bound, since extra properties shared by two instances may not be carried by such a common class. However, the appreciation of the degree of specificity of classes is of major importance in the design of SMs. Therefore, given that discrete levels of class specificity are not explicitly expressed in a taxonomy, various approaches have been explored to define a function $\theta$ which evaluates the degree of specificity of classes. The function $\theta$ relies on the intrinsic and extrinsic properties presented above.
i For any pair of classes $X, Y$: either $X \preceq Y$ or $Y \preceq X$.
The function evaluating the specificity of classes must be in agreement with the taxonomical representation, which defines that classes are always semantically broader than their specializations i . Thus, the function which estimates the specificity of classes must monotonically decrease from the leaves (classes without subclasses) to the root of the ontology, i.e.:

$Y \preceq X \Rightarrow \theta(Y) \geq \theta(X)$

We present the main approaches defined in the literature to express such a function $\theta$.
Basic intrinsic estimators of class specificity
The specificity of a class can be estimated considering the location of its corresponding node in the graph. A naive approach will define the specificity of the class $c$, $\theta(c)$, as a function of some simple properties related to $c$, e.g., $\theta(c) = f(depth(c))$, $\theta(c) = f(|A(c)|)$ or $\theta(c) = f(|D(c)|)$, with $A(c)$ and $D(c)$ the ancestors and descendants of $c$.
The main drawback of simple specificity estimators is that classes with similar depth or equal number of superclasses/subclasses will have similar specificities, which is not always true. In fact, two classes can be described with various degrees of detail independently of their depth, e.g., (Yu, Jansen & Gerstein 2007). More refined functions have been proposed to address this limitation.
Extrinsic Information Content
Another strategy explored by SMs designers has been to characterize the specificity of classes according to the well-known theoretical framework established in computer science, namely Shannon's Information Theory. The specificity of a class will further be regarded as the amount of information the class conveys, its Information Content (IC). The IC of a class can for example be estimated as a function of the size of the universe of interpretations associated to it. The IC is a common expression of the function $\theta$ and was originally defined by (Resnik 1995) to assess the informativeness of concepts.
The IC of a class $c$ is defined as inversely proportional to $p(c)$, the probability of encountering an instance of $c$ in a collection of instances (negative entropy). The original definition of the IC was proposed to estimate the informativeness of a concept as a function of its number of occurrences in a corpus of texts.
We denote eIC any IC which relies on extensional information, i.e., a corpus or collection of instances. We consider the formulation of eIC originally defined by (Resnik 1995) i :

$eIC(c) = -\log p(c)$, with $p(c) = \frac{|I(c)|}{|I|}$

with $I(c)$ the set of instances of the class $c$, e.g., the occurrences of a concept in a corpus. The suitability of the log function can be supported by the work of (Shepard 1987) ii . Notice also the link with the Inverse Document Frequency (IDF) commonly used in information retrieval (Jones 1972): $IDF(t) = \log(N / n_t)$, with $N$ the number of documents of the corpus and $n_t$ the number of documents in which the term $t$ occurs.

The main drawback of $\theta$ functions based on extrinsic information is that they highly depend on the usage of the classes and will therefore automatically reflect its biases iii . In some cases, such a strong dependence between class usage and the estimation of its specificity is desired, as all classes which are highly represented will be considered as less informative, even classes which would be considered specific regarding intrinsic factors (e.g., depth of classes). However, in other cases, biases in class usage can badly affect the estimation of the IC and may not be suited. In addition, IC computation based on text analysis can be time consuming and challenging given that, in order to be accurate, complex disambiguation techniques have to be used to detect to which concept/class an occurrence of a word refers.

i This explains that the specificity of classes cannot be estimated only considering extrinsic information such as the number of instances directly characterized by a class (without inference). Indeed, the partial ordering of classes also needs to be taken into account when the specificity of classes is estimated. If the transitivity of the taxonomic relationship is not considered to propagate class usage/instance membership, the instance distribution can be incoherent with regard to the partial order defined in the underlying taxonomy, i.e., a class can have fewer instances than one of its superclasses.
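A minimal sketch of this extrinsic computation, assuming toy usage counts: direct class usage is first propagated to the subsuming classes (true path rule), then eIC(c) = -log p(c) is derived. All classes and counts are illustrative assumptions.

    import math

    # Toy taxonomy and direct usage counts (illustrative values only).
    supers = {"Dog": {"Mammal"}, "Cat": {"Mammal"}, "Mammal": {"Animal"},
              "Animal": set()}
    direct_count = {"Dog": 6, "Cat": 3, "Mammal": 1, "Animal": 2}

    def ancestors_or_self(c):
        out = {c}
        for p in supers.get(c, set()):
            out |= ancestors_or_self(p)
        return out

    # Propagate usage along the taxonomy: a class inherits the occurrences
    # of all the classes it subsumes.
    count = {c: 0 for c in supers}
    for c, n in direct_count.items():
        for a in ancestors_or_self(c):
            count[a] += n

    total = count["Animal"]  # the root gathers all occurrences
    eIC = {c: -math.log(count[c] / total) for c in supers if count[c] > 0}
    print(eIC)  # the root has eIC 0; more specific classes score higher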
Intrinsic Information Content
In order to avoid the dependency of the calculus on statistics related to class usage, various intrinsic IC formulations (iIC) have been proposed. They can be used to define $\theta$ functions by only considering structural information extracted from the KR, e.g., the intrinsic factors presented in the previous section 5.3.1.1. These IC formulations extend the basic specificity estimators presented above.
Multiple topological characteristics can be used to express an iIC, e.g., number of descendants, ancestors, depth, etc. (Schickel-Zuber & Faltings 2007; Zhou et al. 2008). The formulation proposed by (Zhou et al. 2008) is presented below; it enables the contribution of both the depth and the number of subclasses ($D(c)$) to the class specificity to be refined:

$iIC(c) = k \left( 1 - \frac{\log |D(c)|}{\log |C|} \right) + (1 - k) \frac{\log(depth(c))}{\log(depth_{max})}$

with $|C|$ the number of classes defined in the taxonomy, $depth(c)$ the depth of the class $c$, $depth_{max}$ the maximal depth of the ontology and $k \in [0, 1]$.
ii Shepard derived his universal law of stimulus generalization based on the consideration that logarithmic functions are suited to approximate semantic distance; please refer to section 2.2.1 for more details.
iii As an example, this can be problematic for GO-based studies, as some genes are more studied and annotated than others (e.g., drug-related genes) and annotation distribution patterns among species reflect abnormal distortions, e.g., human/mouse.
iIC formulations are of particular interest as only the topology of the ontology is considered; they prevent errors related to biases in class usage. However, the relevance of an iIC relies on the assumption that the ontology expresses enough knowledge to rigorously evaluate the specificity of classes. As a counterpart, iICs are sensitive to structural biases in the taxonomy, and are therefore sensitive to unbalanced taxonomies and to the degree of completeness, homogeneity and coverage of the taxonomy.
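Assuming the reconstruction of the formula given above, the following sketch computes such an intrinsic IC on a toy taxonomy; the depth term is smoothed with +1 so that the root (depth 0) remains defined, which is an assumption of this sketch.

    import math

    # Toy taxonomy: class -> set of direct superclasses.
    supers = {"Dog": {"Mammal"}, "Cat": {"Mammal"}, "Mammal": {"Animal"},
              "Animal": set()}

    def depth(c):
        ps = supers.get(c, set())
        return 0 if not ps else 1 + max(depth(p) for p in ps)

    def A(c):
        out = {c}
        for p in supers.get(c, set()):
            out |= A(p)
        return out

    def D(c):
        # Descendants of c (inclusive).
        return {x for x in supers if c in A(x)}

    n_classes = len(supers)
    depth_max = max(depth(c) for c in supers)
    k = 0.5  # weighting between the two contributions

    def iIC(c):
        # Mixes the number of descendants with the (smoothed) depth.
        hypo = 1 - math.log(len(D(c))) / math.log(n_classes)
        dep = math.log(depth(c) + 1) / math.log(depth_max + 1)
        return k * hypo + (1 - k) * dep

    for c in supers:
        print(c, round(iIC(c), 3))  # leaves score highest, the root 0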
Non-taxonomical Information Content
Both the eIC and the iIC introduced above only take taxonomical relationships into account. (Pirró & Euzenat 2010a) proposed the extended IC (EIC) in order to take advantage of all predicates and semantic relationships.
With $C_t(c)$ the set of classes linked to the class $c$ by a relationship of type $t$, the EIC aggregates, for each predicate, the averaged contribution of the linked classes:

$EIC(c) = \sum_{t \in P} \frac{1}{|C_t(c)|} \sum_{c' \in C_t(c)} IC(c')$

In this formula, the contribution of the various relationships of the same predicate is averaged. However, the more a class establishes relationships of different predicates, the higher its EIC will be. We thus propose to average the EIC (by $|P|$) or to weight the contribution of the various types of relationships.

The main estimators of class specificity can be summarized as follows:
- Depth: not extrinsic. Normalized depth or max depth can be used. In a graph, considering the minimal depth of a class doesn't ensure that the specificity increases according to the partial ordering (due to multi-inheritance).
- Depth non-linear (Seco 2005): not extrinsic, range [0, 1]. Uses a log to introduce a non-linear estimation.
- IDF (Chabalier et al. 2007): extrinsic.
- IC (Resnik 1995): extrinsic, range [0, inf[, or [0, 1] for normalized versions. The IC depends on class usage: $IC(c) = -\log(|I(c)| / |I|)$. Normalized versions have also been proposed.

We have presented various strategies which can be used to estimate the specificity of classes defined in a partially ordered set. It is important to understand that these estimators are based on assumptions regarding the representation of the knowledge.
Estimation of Strength of Connotation between Classes
A notion strongly linked to the specificity of classes is the estimation of the strength of connotation between instances or classes, i.e., the strength of the relationship linking two instances or two classes.
Considering taxonomical relationships, it is generally considered that the strength of connotation between classes is stronger the deeper the two classes are in the taxonomy. As an example, the strength of the taxonomical relationship linking SiberianTiger to Tiger will generally be considered more important than the one linking Animal to LivingBeing. Such a notion is quite intuitive and was for instance studied by Quillian and Collins in the early studies of semantic networks (Collins & Quillian 1969): their Hierarchical-Network model was built according to mental activations evaluated based on the time people took to correctly respond to sentences linking two classes, e.g., a Canary is an Animal, a Canary is a Bird, a Canary is a Canary. By showing the variation of the response time to sentences involving two ordered classes (Canary / Animal), the authors highlighted the variation of the strength of connotation and its link with the notion of specificity of classes.
It is worth noting that the estimation of the strength of connotation of two linked classes is, in a sense, a measure of the semantic similarity or taxonomical distance between the two ordered classes. The models proposed to define the strengths of connotation between classes are generally based on the assumption that the taxonomical distance conveyed by a taxonomical relationship shrinks with the depth of the two linked classes (Richardson et al. 1994). Given that the strength of connotation between classes is not explicitly expressed in a taxonomy, it has been proposed to consider several intrinsic factors to refine its estimation, e.g., (Young Whan & Kim 1990; Sussna 1993; Richardson et al. 1994).
A taxonomy only explicitly defines the partial ordering of its classes, which means that if a class $u$ subsumes another class $v$, all the instances of $v$ are also instances of $u$, i.e., $I(v) \subseteq I(u)$. Nevertheless, non-uniform strengths of connotation aim to capture the fact that all taxonomic relationships do not convey the same semantics.
Strictly speaking, taxonomic relationships only define class ordering and class inclusion. Therefore, according to the extensional interpretation of a class ordering, the universe of interpretation of a class, i.e., the set of possible instances of the class regarding the whole set of instances, must reduce the more a class is specialized i. This reduction of the universe of eligible interpretations corresponds to a specific understanding of the semantics of non-uniform strengths of connotation. Alternative explanations which convey the same semantics can also be expressed according to the insights of the various cognitive models introduced in section 2.2:
- Spatial/Geometric model: the distance between classes is a non-linear function which must take class saliency into account.
i We here consider a finite universe.
- Feature model (which states that a class is represented by a set of properties): specialization can be seen as the difficulty of further distinguishing a class in a way that is meaningful to characterize the set of instances of a domain.
- Alignment and Transformational models: the effort of specialization which must be made to extend a class increases the more the class has already been specialized.
All these interpretations express the same central notion: the strength of connotation linking two classes is a function of two factors: (i) the specificities of the linked classes and (ii) the variation of specificity between the two compared classes. The various semantic evidences introduced in the previous section, as well as the notion of information content of classes, can be used to assess the strength of connotation between two classes.
As an example, a simple model for the strength of connotation associated to the taxonomical relationship linking a class $c$ to one of its parents $p$ can be defined as a function of their information contents (Jiang & Conrath 1997):

$weight(c, p) = IC(c) - IC(p)$
It is important to stress that supporting the estimation of the strengths of connotation according to the density of classes, the branching factor, the maximal depth or the width of the taxonomy is based on assumptions regarding the definition of the KR (once again, refer to section 5.3.1.1, which presents the semantic evidences and the assumptions on which they rely).
Types of Semantic Measures and Graph Properties
Depending on the properties of the semantic graph the SMs evaluate, two main groups of measures can be distinguished:
- Measures adapted to semantic graphs composed of (multiple) predicate(s) potentially inducing cycles.
- Measures adapted to acyclic semantic graphs composed of a unique predicate inducing transitivity.
Semantic Measures on Cyclic Semantic Graphs
As we have seen, considering all the predicates defined in a KR potentially leads to a cyclic graph. Only few SMs framed in the relational setting are designed to deal with cycles. Since these measures take advantage of all predicates, they are generally used to evaluate semantic relatedness and not semantic similarity. Notice that they can be used to compare both concepts and instances. Two types of measures can be further distinguished:
- Graph-traversal measures: pure graph-based measures. These measures were initially proposed to study node interactions in a graph and essentially derive from contributions of graph theory. They can be used to estimate the relatedness of nodes considering that the more two nodes interact, directly or indirectly, the more related they are. These measures are not SMs per se but graph measures used to compare nodes. However, they can be used on semantic graphs and can also be adapted in order to take into account the evidence of semantics defined in the graph.
- Graph property model measures: measures designed to compare elements described using the graph property model. These measures consider classes or instances as sets of properties expressed in the graph.
Graph-Traversal Measures
Measures based on graph traversals can be used to compare any pair of classes or instances, each represented as a node. These measures rely on algorithms designed for graph analysis, which are generally used in a straightforward manner. Nevertheless, some adaptations have also been proposed in order to take into account the semantics defined in the graph. Among the large diversity of measures and metrics which can be used to estimate the relatedness of two nodes in a graph, we distinguish:
- Shortest path approaches.
- Random walk techniques.
- Other interconnection measures.
The main advantage of these measures is their unsupervised nature. Their main drawback is the absence of extensive control over the semantics taken into account, which makes it difficult to justify and explain the resulting scores. However, in some cases, these measures enable fine-grained control over the predicates considered: approaches have naturally been proposed to tune the contribution of each relationship or predicate in the estimation of the relatedness.
Shortest path Approaches
The shortest path problem is one of the most ancient problems of graph theory. The intuitive edge-counting strategy can be used on any graph. It can be applied to compare both pairs of instances and pairs of classes, considering their relatedness as a function of the distance between the nodes corresponding to the two compared elements. More generally, the relatedness is estimated as a function of the weight of the shortest path linking them. Classical algorithms proposed by graph theory can be used; the algorithm to use depends on specific properties of the graph (e.g., does it contain cycles? Are there non-negative weights associated to relationships? Is the graph oriented?). The shortest path technique was among the first used to compare two classes defined in a semantic graph. This approach is sometimes denoted the edge-counting strategy in the literature; edge here refers to relationship. Because the shortest path can contain relationships of any predicate, we call it the unconstrained shortest path (usp).
One of the drawbacks of usp-based techniques is that the meaning of the relationships from which the relatedness derives is not taken into account. In fact, complex semantic paths involving multiple predicates and paths only composed of taxonomic relationships are, for instance, considered equally. Therefore, propositions to penalize paths reflecting complex semantic relationships have been made (Hirst & St-Onge 1998; Bulskov et al. 2002). Approaches have also been proposed to consider particular predicates in a specific manner. To this end, a weighting scheme can also be applied in order to tune the contribution of each relationship or predicate in the computation of the final score.
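The following minimal sketch illustrates a weighted usp computation with Dijkstra's algorithm over an undirected view of the graph; the toy graph and the per-predicate weights are assumptions chosen for illustration.

```python
import heapq

# Hypothetical semantic graph: node -> list of (neighbor, predicate).
edges = {
    "cat": [("mammal", "isA"), ("mouse", "eats")],
    "mouse": [("mammal", "isA")],
    "mammal": [("animal", "isA")],
    "animal": [],
}

# Per-predicate weights: taxonomic links are cheap, other predicates cost more.
weights = {"isA": 1.0, "eats": 2.0}

def usp(source, target):
    """Weighted unconstrained shortest path (Dijkstra, undirected view)."""
    undirected = {n: [] for n in edges}
    for u, lst in edges.items():
        for v, pred in lst:
            w = weights[pred]
            undirected[u].append((v, w))
            undirected[v].append((u, w))
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in undirected[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

print(usp("cat", "mouse"))  # 2.0: the direct 'eats' edge ties the isA path
```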
Random Walk Techniques
These techniques are based on a Markov chain model of a random walk. The random walk is defined through a transition probability associated to each relationship; the walker can therefore walk from node to node, i.e., each node represents a state of the Markov chain. Based on random walk techniques, several measures can be designed to compare two nodes $u$ and $v$. A selection of measures introduced in the literature is listed:
- The average first-passage time, i.e. the average number of steps needed by the walker starting from $u$ to reach $v$.
- The average commute time and the Euclidean Commute Time Distance.
- The average first-passage cost.
- The pseudo-inverse of the Laplacian matrix.
These approaches are closely related to spectral-clustering and spectral-embedding techniques (Saerens et al. 2004). Examples of measures based on random walk techniques are defined in (Hughes & Ramage 2007; Fouss et al. 2007). Approaches based on graph kernels can also be used to estimate the similarity of two nodes of a graph (Kondor & Lafferty 2002) and have been used to design SMs (Guo et al. 2006).
As an example, the hitting time $h(u,v)$ of two nodes is defined as the expected number of steps a random walker starting from $u$ will take before $v$ is reached. The hitting time can be defined recursively by:

$h(u,v) = 0$ if $u = v$, and $h(u,v) = 1 + \sum_{w} p_{uw}\, h(w,v)$ otherwise,

with $p_{uw}$ the transition probability from $u$ to $w$. The commute distance corresponds to $c(u,v) = h(u,v) + h(v,u)$, the expected time a random walker travels from $u$ to $v$ and back to $u$. Therefore, the more paths connect $u$ and $v$, the smaller their commute distance becomes. Critiques of the classical approaches to evaluating hitting and commute times, as well as extensions, have been formulated in the literature; please refer to (Sarkar et al. 2008; von Luxburg et al. 2010).
These measures take advantage of second-order information which is generally hard to interpret in semantic terms.
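A minimal sketch of the hitting-time computation follows: it solves the linear system implied by the recursive definition above. The 4-node transition matrix is a made-up example.

```python
import numpy as np

# Hypothetical 4-node graph given by its transition probability matrix P
# (rows sum to 1); entry P[i, j] is the probability of stepping from i to j.
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.5, 0.0, 0.25, 0.25],
    [0.5, 0.25, 0.0, 0.25],
    [0.0, 0.5, 0.5, 0.0],
])

def hitting_times(P, target):
    """Expected number of steps to reach `target` from every node.

    Solves h = 1 + Q h on the nodes != target, where Q is P restricted
    to those nodes (the recursive definition given in the text).
    """
    n = P.shape[0]
    others = [i for i in range(n) if i != target]
    Q = P[np.ix_(others, others)]  # transitions among non-target nodes
    h_sub = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    h = np.zeros(n)
    h[others] = h_sub
    return h

h_to_3 = hitting_times(P, target=3)
commute_03 = h_to_3[0] + hitting_times(P, target=0)[3]
print(h_to_3, commute_03)  # more interconnection -> smaller commute distance
```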
Other Measures based on Interaction Analysis
Several approaches exploiting graph structure analysis can be used to estimate the similarity of two nodes of a graph through their interconnections. Such approaches estimate the proximity between two elements without explicitly taking into account the semantics carried by the graph. Consequently, the more the elements are interconnected, either directly or indirectly, the more related they will be assumed to be. (Chebotarev & Shamis 2006a; Chebotarev & Shamis 2006b) proposed to take into account the indirect paths linking two nodes using the matrix-forest theorem. SimRank, proposed by (Jeh & Widom 2002), is an example of such a measure. Considering $V$ as the set of nodes of the graph, $I(u)$ the in-neighbors of a node $u$ (the nodes linked to $u$ by a single relationship ending at $u$) and $O(u)$ its out-neighbors (the nodes linked to $u$ by a single relationship starting from $u$), SimRank similarity is defined by:

$s(u,v) = 1$ if $u = v$, and otherwise $s(u,v) = \frac{C}{|I(u)|\,|I(v)|} \sum_{a \in I(u)} \sum_{b \in I(v)} s(a,b)$

with $C \in\, ]0,1[$ a decay factor, and $s(u,v) = 0$ when $I(u)$ or $I(v)$ is empty. SimRank is a normalized function. An adaptation of this measure has been proposed for semantic graphs built from linked data (Olsson et al. 2011).
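A minimal sketch of the SimRank fixed-point iteration follows; the toy graph and the decay factor are illustrative assumptions.

```python
# Hypothetical directed graph: node -> set of in-neighbors I(node).
in_neighbors = {
    "u": set(), "v": set(),
    "a": {"u"}, "b": {"u", "v"}, "c": {"v"},
}

def simrank(in_neighbors, C=0.8, iterations=10):
    """Fixed-point iteration of the SimRank recursion (Jeh & Widom 2002)."""
    nodes = list(in_neighbors)
    sim = {(x, y): 1.0 if x == y else 0.0 for x in nodes for y in nodes}
    for _ in range(iterations):
        new = {}
        for x in nodes:
            for y in nodes:
                if x == y:
                    new[(x, y)] = 1.0
                    continue
                Ix, Iy = in_neighbors[x], in_neighbors[y]
                if not Ix or not Iy:
                    new[(x, y)] = 0.0  # no in-neighbors: similarity is 0
                    continue
                total = sum(sim[(a, b)] for a in Ix for b in Iy)
                new[(x, y)] = C / (len(Ix) * len(Iy)) * total
        sim = new
    return sim

scores = simrank(in_neighbors)
print(scores[("a", "b")], scores[("a", "c")])  # a and b share an in-neighbor
```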
Semantic Measures for the Graph Property Model
The second family of approaches which can be used to compare pairs of instances and concepts defined in a (potentially) cyclic semantic graph corresponds to the measures associated to the graph property model. These measures take advantage of semantic graphs encompassing expressive definitions of classes and instances through properties. The properties may sometimes refer to specific data types, e.g. strings. Therefore, the nodes composing the semantic graph can be data values, classes or instances. The semantic graphs generally correspond to RDF graphs or labelled graphs. Note that in RDF, a property corresponds to a specific type of relationship, i.e. a predicate. Therefore, a direct property $p$ of an element $e$ corresponds to the set of values associated to $e$ through the relationships of predicate $p$.
SMs based on property analysis can be used to consider specific properties of compared elements. These measures are particularly useful to compare objects defined in (relational) databases, to design ontology mapping algorithms and to perform instance matching treatments in heterogeneous knowledge bases. We present three approaches which can be used to compare elements through this notion of property.
Elements Represented as a List of Direct Properties
An element can be evaluated by studying its direct properties, i.e., the set of values i associated to the element according to the study of a particular predicate. As an example, two types of predicate can be associated to an instance:
- Taxonomic relationships, i.e., the relationship links the instance to a class.
- Non-taxonomical relationships:
  - The property links the instance to another instance ii.
  - The property links the instance to a specific data value iii.
Two elements will be compared with regard to the values associated to each property considered. Each property is therefore associated to a specific measure which is used to compare the values taken by this property.
Properties which link an instance to other instances are in most cases compared using set-based measures, which will for example evaluate the quantity of instances in shared sets (e.g., the number of music genres two groups have in common). Taxonomical properties are evaluated using SMs adapted to class comparison. Properties associated to data values can be compared using measures adapted to the type of data considered, e.g., using a measure dedicated to dates if the corresponding property is a date.
The scores produced by the various measures associated to the various properties are aggregated in order to obtain a global score of relatedness for two instances (Euzenat & Shvaiko 2007). Such a representation has been formalized in the framework proposed by (Ehrig et al. 2004). This is a strategy commonly adopted in ontology alignment, instance matching or link discovery between instances; SemMF, SERIMI (Araujo et al. 2011) and SILK (Volz et al. 2009) are all based on this approach.
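A minimal sketch of this strategy follows: two instances are compared property by property, each property having its own measure, and the scores are combined by a weighted aggregation. The instances, measures and weights are illustrative assumptions, not prescribed by the frameworks cited above.

```python
# Two hypothetical music bands described by direct properties.
band1 = {"genres": {"rock", "blues"}, "founded": 1968, "country": "UK"}
band2 = {"genres": {"rock", "metal"}, "founded": 1970, "country": "UK"}

def jaccard(a, b):
    """Set-valued properties compared with the Jaccard index."""
    return len(a & b) / len(a | b) if a | b else 1.0

def year_sim(a, b, scale=50.0):
    """Numeric property: similarity decays linearly with the difference."""
    return max(0.0, 1.0 - abs(a - b) / scale)

def exact(a, b):
    """Symbolic property compared by strict equality."""
    return 1.0 if a == b else 0.0

# One measure and one weight per property (weights are arbitrary here).
property_measures = {"genres": jaccard, "founded": year_sim, "country": exact}
property_weights = {"genres": 0.5, "founded": 0.2, "country": 0.3}

def relatedness(x, y):
    """Weighted aggregation of the per-property similarity scores."""
    score = sum(property_weights[p] * m(x[p], y[p])
                for p, m in property_measures.items())
    return score / sum(property_weights.values())

print(round(relatedness(band1, band2), 3))
```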
Elements Represented as an Extended List of Properties
Several contributions underscore the relevance of indirect properties in comparing entities represented through graphs, especially in object models (Bisson 1995). Referring to the KR proposed in Figure 14, such a representation might be used to consider the characteristics (properties) of the music genres for the purpose of comparing two music bands (Harispe, Ranwez, et al. 2013a).
This approach relies on a representation of the compared elements which extends the canonical representation of an element as a list of properties. It can be implemented to take into account indirect properties of the compared elements, e.g., properties induced by the elements associated to the element we want to characterize.
A formal framework, enhancing the one proposed in (Ehrig et al. 2004), has thus been proposed to capture some of the indirect properties (Albertoni & De Martino 2006). This framework is dedicated to the comparison of instances. It formally defines an indirect property of an instance along a path in the graph. The indirect properties to be taken into account are defined for a class and depend on a specific context, e.g. application context.
From a different perspective, Andrejko and Bieliková (Andrejko & Bieliková 2013) suggested an unsupervised approach for comparing a pair of instances by considering their indirect properties. Each direct property shared between the compared instances plays a role in computing the global relatedness. When the property links one instance to another, the approach combines a taxonomical measure with a recursive process to take into account the properties of instances associated with the instance being processed.
Lastly, in estimating the similarity between two instances, the measure aggregates the scores obtained during the recursive process. The authors have also proposed to weight the contributions of the various properties so as to define a personalized information retrieval approach.
Elements Represented Through Projections
The framework proposed by (Harispe, Ranwez, et al. 2013a) enables elements to be compared through different projections characterizing the properties of interest.
The approach was initially defined to compare two instances, but it can also be used to compare classes. We here present the perspectives it opens for the comparison of instances. In addition to the formal characterization of an instance through an extended list of properties, this framework also considers complex properties of instances, i.e., properties that rely on combining various other properties.
Indeed, using the other approaches, and considering that the weight and the height of several persons have been specified in the graph, it is impossible to compare two persons by taking into account their body mass index, a metric which can be computed from the weight and the height. Consider the following statements:

luc asWeightInKG 70 . luc asHeightInM 1.75 .
marc asWeightInKG 85 . marc asHeightInM 1.80 .
steve asWeightInKG 120 . steve asHeightInM 1.70 .

Luc and Marc will be regarded as more similar than Luc/Steve and Marc/Steve according to their body mass index. The main idea is thus to enable the definition of complex properties reflecting characteristics of the compared elements which are not materialized in the graph. The framework proposed by (Harispe, Ranwez, et al. 2013a) can be used to consider such properties.
Finally, the comparison of two instances is made through the aggregation of the similarity values associated to the compared projections. As an example, the authors used a simple weighted sum in their experiments.
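The following sketch illustrates the projection idea on the body mass index example above; the ratio-based similarity function and the weights are assumptions, as the framework does not prescribe particular choices.

```python
# Data from the statements above (weight in kg, height in m).
people = {
    "luc":   {"weight": 70,  "height": 1.75},
    "marc":  {"weight": 85,  "height": 1.80},
    "steve": {"weight": 120, "height": 1.70},
}

# A projection maps an instance to a derived value not materialized in the graph.
def bmi(person):
    return person["weight"] / person["height"] ** 2

# One similarity function per projection; here a simple ratio-based choice.
def ratio_sim(a, b):
    return min(a, b) / max(a, b)

projections = [(bmi, ratio_sim, 1.0)]  # (projection, measure, weight)

def sim(x, y):
    """Weighted aggregation over the compared projections."""
    total_w = sum(w for _, _, w in projections)
    return sum(w * m(p(x), p(y)) for p, m, w in projections) / total_w

print(round(sim(people["luc"], people["marc"]), 3))   # close BMIs
print(round(sim(people["luc"], people["steve"]), 3))  # distant BMIs
```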
Semantic Measures on Acyclic Graphs
Note that all the measures which can be used on the whole semantic graph can also be used on any of its reductions. Nevertheless, numerous SMs have been defined to work on a reduction of the graph. Depending on the topological properties of the reduction, two cases can be distinguished:
1. The reduction leads to a cyclic graph. Adapted SMs are therefore those previously presented for cyclic graphs in section 5.4.1.
2. The reduction is acyclic; particular techniques and algorithms can then be used. Most SMs defined for acyclic graphs focus on the taxonomical relationships and consider the reduction to be the taxonomy of classes. However, some measures consider a specific subset of the predicates, e.g., partOf relationships, which in some cases also produces an acyclic graph. The measures which can be used in this case are usually designed as generalizations of semantic similarity measures, i.e. measures only considering the taxonomy.
SMs applied to graph-based KRs were originally designed for taxonomies. Since most KRs are usually mainly composed of taxonomic relationships or poset structures, a substantial literature is dedicated to semantic similarity measures. Thus, most SMs designed for semantic graphs focus on the taxonomy and have been defined for the comparison of pairs of classes.
Semantic Similarity between Pairs of Classes
The majority of SMs framed in the relational setting have been proposed to assess the semantic similarity or taxonomical distance of a pair of classes defined in a taxonomy. Given that they are designed to compare two classes, these measures are denoted pairwise measures in some communities, e.g., in bioinformatics. As we will see, an extensive literature is dedicated to these measures as they can be used to compare any pair of nodes expressed in a graph which defines a (partial) ordering, that is to say, any graph structured by relationships which are transitive, reflexive and antisymmetric (e.g., isA, partOf).
In section 3.4.1, we already distinguished the main approaches used for the definition of knowledge-based SMs framed in the relational setting. Considering the measures which can be applied to acyclic graphs, we distinguished:
- Measures based on graph structure analysis. They estimate the similarity as a function of the degree of interconnection between classes. They are generally regarded as measures framed in the spatial model: the similarity of two classes is estimated as a function of their distance in the graph, e.g., based on the analysis of the lengths of the paths linking the classes. These measures can also be considered to be framed in the transformational model, regarding them as functions which estimate the similarity of two classes according to the difficulty of transforming one class into the other.
- Measures based on class feature analysis. This approach uses the graph to extract the features of classes. These features are then analysed to estimate the similarity as a function of the shared and distinct features of the compared classes. This approach is conceptually framed in the feature model. The diversity of feature-based measures stems from the various strategies proposed to characterize class features and to take advantage of them to assess the similarity.
- Measures based on Information Theory. Based on a function used to estimate the amount of information carried by a class, i.e. its information content (IC), these measures assess the similarity according to the evaluation of the amount of information which is shared and distinct between the compared classes. This approach is framed in information theory; it can however be seen as a derivative of the feature-based approach in which features are not appreciated using a binary feature-matching evaluation (shared/not shared) but also incorporate their saliency, i.e. their degree of informativeness.
- Hybrid measures: measures which are based on multiple paradigms.
The broad classification of measures we propose is useful to introduce the basic approaches defined to assess the similarity of two classes, and to put them in perspective with the models of similarity proposed by cognitive sciences. It is, however, challenging to constrain the diversity of measures to this broad classification. It is important to understand that these three main approaches are highly interlinked and cannot be seen as disjoint categories. As an example, all measures rely on the analysis of the structure of the ontology, as they all take advantage of the partial ordering defined by the (structure of the) taxonomy. These categories must be seen as devices used by SM designers to introduce approaches and highlight relationships between several proposals. Indeed, as we will see, numerous approaches can be regarded as hybrid measures which take advantage of techniques and paradigms used to characterize measures of a specific approach. Therefore, the affiliation of a specific measure to a particular category is often subject to debate, e.g., (Batet 2011a). This can be partially explained by the fact that several measures can be redefined or approximated using reformulations, in a way that further challenges the classification. Indeed, the more one analyses SMs, the harder it is to constrain them to specific boxes; an analogy can be made with the relationships between the cognitive models of similarity i.
Several classifications of measures have been proposed. The most common one is to distinguish measures according to the elements of the graph they take into account (Pesquita, Faria, et al. 2009). This classification distinguishes three approaches: (i) edge-based, measures focusing on relationship analysis, (ii) node-based, measures based on node analysis and (iii) hybrid measures, measures which mix both approaches. In the literature, edge-based measures often refer to structural measures, node-based measures refer to measures framed in the feature-model and those based on information theory. Hybrid measures are those which implicitly or explicitly mix several paradigms.
Another interesting way to classify measures is to study whether they are (i) intensional, i.e. based on the explicit definition of the classes expressed by the taxonomy, (ii) extensional, i.e. based on the analysis of the realizations of the classes (i.e., instances), or (iii) hybrid, measures which mix both intensional and extensional information about classes ii.
In some cases, authors will mix several types of classifications to present measures. In this section we will introduce the measures according to the four approaches presented above: (i) structural, (ii) feature-based, (iii) framed in information theory and (iv) hybrid. We will also specify the extensional, intensional, or hybrid nature of the vision adopted in the design of the measures.
Numerous class-to-class measures have been defined for trees, i.e. special graphs without multiple inheritance. In the literature, these measures are generally considered to be applicable out-of-the-box on graphs. However, in the context of graphs, some adaptations deserve to be made, and several components of the measures generally need to be redefined in order to avoid ambiguity, e.g., to be implemented in computer software. For the sake of clarity, we first highlight the diversity of proposals by introducing the most representative measures defined according to the different approaches. Measures will most of the time be presented according to their original definitions. When the measures have been defined for trees, we will not stress the modifications which must be taken into account for them to be used on graphs; these modifications will be discussed after the introduction of the diversity of measures.
Structural Approach
Structural measures rely on the graph-traversal approaches presented in subsection 5.4.1.1 (e.g., shortest path techniques, random walk approaches). They focus on the analysis of the interconnections between classes to capture their similarity. However, they generally include specific tuning in order to take into account the particular properties and interpretations induced by the transitivity of the taxonomical relationships. In this context, some authors, e.g., (Hliaoutakis 2005), have linked this approach to spreading activation theory (Collins & Loftus 1975): the similarity is in this case seen as a function of propagation between classes through the graph.
Back in the eighties, the taxonomical distance of two classes defined in a taxonomic tree was first expressed as a function of the shortest path linking them i. We denote $sp(u,v)$ the shortest path between two classes $u$ and $v$, i.e., the path of minimal length linking them in the taxonomy. Recall that the length of a path has been defined as the sum of the weights associated to the edges which compose it. When the edges are not weighted, we refer to the edge-counting strategy, and the length of the shortest path is the number of edges it contains. The taxonomical distance is therefore defined by ii:

$dist_{sp}(u,v) = |sp(u,v)|$

the paths being computed on the graph induced by the isA predicate, with isA the name given to the predicate subClassOf.
In a tree, the shortest path $sp(u,v)$ contains a unique common ancestor of $u$ and $v$. This common ancestor is the Least Common Ancestor (LCA) iii of the two classes according to any specificity function (since such a function is monotonically decreasing along the ordering) iv.
Note that distance-to-similarity conversions can also be applied to express a similarity from a distance (see Appendix 5). A semantic similarity can therefore be defined in a straightforward manner, e.g., $sim(u,v) = \frac{1}{1 + dist_{sp}(u,v)}$. Notice the importance of considering the transitive reduction of the tree/graph to obtain coherent results using shortest-path-based measures; in the following presentation, we consider that the taxonomy does not contain redundant relationships.
Several criticisms of the shortest path techniques have been formulated. The edge-counting strategy, or more generally any shortest path approach with uniform edge weights, has been criticized for the fact that the distance represented by an edge linking two classes does not take class specificity/salience into account v. Several modifications have therefore been proposed to break this constraining uniform appreciation of edges induced by the edge-counting strategy. Implicit or explicit models defining non-uniform strengths of association between classes have therefore been introduced, e.g., (Young Whan & Kim 1990; Sussna 1993; Richardson et al. 1994); they have been presented in section 5.3.3.

i It is worth noting that they did not invent the notion of shortest path in a graph. In addition, in (Foo et al. 1992), the authors refer to a measure proposed by Gardner and Tsui (1987) to compare concepts defined in a conceptual graph using the shortest path technique. ii In this section, equations named dist refer to taxonomical distances. iii The Least Common Ancestor is also denoted the Last Common Ancestor (LCA), the Least Common Subsumer/Superconcept (LCS) or Lowest SUPER-ordinate (LSuper). iv Here lies the importance of applying the transitive reduction of the taxonomical graph/tree: redundant taxonomic relationships can challenge this statement and therefore heavily impact the semantics of the results. v As an example, (Foo et al. 1992) quotes remarks made in Sowa's personal communication.
One of the main challenges for SM designers over the years has therefore been to implicitly or explicitly take advantage of semantic evidence regarding the expressiveness of classes and the strength of connotation between classes in the design of measures. The different strategies and factors used to appreciate class specificity, as well as strengths of connotation, have already been introduced in section 5.3. Another use of the various semantic evidences which can be extracted from the taxonomy has been to normalize the measures. As an example, (Resnik 1995) proposed to consider the maximal depth of the taxonomy to bound the edge-counting strategy:

$sim(u,v) = 2\,depth_{max} - |sp(u,v)|$

To simulate non-uniform edge weighting, (Leacock & Chodorow 1998) i introduced a log-transformed version of the edge-counting strategy:

$sim_{LC}(u,v) = -\log\frac{|sp(u,v)|}{2\,depth_{max}}$

with the length here measured as the cardinality of the union of the sets of nodes involved in the paths $sp(u, LCA(u,v))$ and $sp(LCA(u,v), v)$.
Authors have also proposed to take into account the specificity of compared classes, e.g., (Mao & Chu 2002), sometimes as a function of the depth of their LCA, e.g., (Wu & Palmer 1994;Pekar & Staab 2002;J. Wang et al. 2012).
As an example, the strategy proposed by (Wu & Palmer 1994) was to express the similarity of two classes as a ratio taking into account the shortest path linking the classes as well as the depth of their LCA.
This function is of the form:

$sim_{WP}(u,v) = \frac{2\,depth(LCA(u,v))}{|sp(u,v)| + 2\,depth(LCA(u,v))}$

with $depth(LCA(u,v))$ the depth of the LCA of the two classes and $|sp(u,v)|$ the length of the shortest path linking $u$ and $v$. It is easy to see that, for any given non-null length of the shortest path, this function increases with the depth of the LCA; otherwise stated, for a given shortest path length, the similarity of $u$ and $v$ increases with the depth of their LCA. In addition, as expected, for a given depth of the LCA, the longer the shortest path linking $u$ and $v$, the less similar they will be considered. Based on a specific expression of the notion of depth, a parameterized expression of this measure has also been proposed. A variation has also been proposed by (Pekar & Staab 2002), and (Zhong et al. 2002) also proposed to compare classes taking the notion of depth into account, with a factor defining the contribution of the depth.
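A minimal sketch of the Wu and Palmer measure follows; the toy taxonomy is an assumption, the maximal depth is used (as discussed later for graphs), and the shortest path is approximated by the path passing through the retained (deepest) common ancestor.

```python
# Hypothetical taxonomy: child -> parents (illustrative only).
parents = {
    "root": [], "animal": ["root"], "domesticAnimal": ["animal"],
    "dog": ["domesticAnimal"], "cat": ["domesticAnimal"], "wolf": ["animal"],
}

def ancestors(c):
    """The class c together with all its ancestors."""
    seen, stack = set(), [c]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(parents[n])
    return seen

def depth(c):
    """Maximal depth: length of the longest path from c up to the root."""
    return 0 if not parents[c] else 1 + max(depth(p) for p in parents[c])

def sim_wu_palmer(u, v):
    """2*depth(LCA) / (sp + 2*depth(LCA)); the deepest common ancestor
    stands in for the LCA and sp is approximated by the path through it."""
    lca = max(ancestors(u) & ancestors(v), key=depth)
    sp = (depth(u) - depth(lca)) + (depth(v) - depth(lca))
    denom = sp + 2 * depth(lca)
    return 2 * depth(lca) / denom if denom else 1.0

print(sim_wu_palmer("dog", "cat"))   # deep LCA (domesticAnimal) -> 0.67
print(sim_wu_palmer("dog", "wolf"))  # shallower LCA (animal) -> 0.4
```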
In a similar fashion, (Li et al. 2006) defined a parametric function in which both the depth of the LCA and the length of the shortest path are taken into account:

$sim_{Li}(u,v) = e^{-\alpha\,|sp(u,v)|}\cdot \frac{e^{\beta h} - e^{-\beta h}}{e^{\beta h} + e^{-\beta h}}$

The parameter $h$ corresponds to the depth of the LCA of the compared classes, i.e. $depth(LCA(u,v))$. The parameter $\beta$ is used to tune the depth factor (df) and to set the importance given to the degree of specificity. The function used to express df corresponds to the hyperbolic tangent, which is normalized between 0 and 1; it defines the degree of non-linearity associated to the depth of the LCA. In addition, $\alpha$ controls the importance of the taxonomic distance expressed as a function of the length of the shortest path linking the two classes.
Approaches have also been proposed to modify specific measures in order to obtain particular properties. As an example, (Slimani et al. 2006) proposed an adaptation of the Wu & Palmer measure to avoid the fact that, in some cases, neighbour classes can be estimated as more similar than classes defined in the same hierarchy. To this end, the authors introduced a factor used to penalize classes defined in the neighbourhood. In the same vein, (Ganesan et al. 2012; Shenoy et al. 2012) recently proposed alternative measures answering the same problem. The approach proposed by (Shenoy et al. 2012) relies on a weighting of the shortest path i.

i Note that we assume that the paper contains an error in the equation defining the measure. The formula is considered to be X / (Y+Z), not X/Y + Z as written in the paper.
The weight of the shortest path is computed by penalizing paths with multiple changes of relationship type. Note that the penalization of paths inducing complex semantics, e.g. paths involving multiple types of relationships, was already introduced in (Hirst & St-Onge 1998; Bulskov et al. 2002).
Several approaches have also been proposed to consider the density of classes, e.g., through the analysis of clusters of classes. Other adaptations have been proposed to take into account the distance of classes to the leaves, as well as variable strengths of connotation relying on particular strategies (Zhong et al. 2002), e.g. using the IC variability between two linked classes or multiple topological criteria (Jiang & Conrath 1997; Couto et al. 2003; M. Alvarez et al. 2011).
In the vein of spreading activation theory, measures have also been defined as a function of a transfer between the compared classes (Schickel-Zuber & Faltings 2007); similar approaches rely on a specific definition of the strength of connotation. Finally, pure graph-based approaches defined for the comparison of nodes can also be used to compare classes defined in a taxonomy (refer to section 3.4.1.1). As an example, (Yang et al. 2012) used random walk techniques such as the personalized PageRank approach to define semantic similarity measures.
As we have seen, most structural semantic similarity measures are extensions or refinements of the intuitive shortest path distance, using intrinsic factors to account for both the specificity of classes and variable strengths of connotation. Nevertheless, the algorithmic complexity of shortest path algorithms hampers the suitability of these measures for large semantic graphs. To remedy this problem, we have seen that the shortest path computation is often replaced by approximations based on the depth of the LCA of the compared classes i, and that several measures proposed by graph theory can be used instead.
Towards Other Estimators of Semantic Similarity
Most of the criticisms of the initial edge-counting approach were related to the uniform consideration of edge weights. As we have seen, several authors proposed to consider various semantic evidences to differentiate the strengths of connotation between classes.
One of the central findings conveyed by the early developments in structure-based measures is that the similarity function can be decomposed into several components, in particular those distinguished by the feature model: commonality and difference. Indeed, the shortest path between two classes can be seen as the difference between the two classes, considering that every specialization adds properties to a class. More particularly, in trees, or under specific constraints in graphs, we have seen that the shortest path linking two classes contains their LCA. The shortest path can therefore be broken down into two parts corresponding to the shortest paths linking the compared classes to their LCA. In this case, the LCA can be seen as a proxy which partially ii summarizes the commonality of the compared classes. The distance between the compared classes and their LCA can therefore be used to estimate the differences between the classes.

i The algorithmic complexity of the LCA computation is significantly lower than that of the shortest path computation. ii The LCA can indeed only be an upper bound of the commonality, since highly similar classes (Man, Woman) may have for LCA a general class which only encompasses a limited amount of their commonalities (e.g., LivingBeing). Please refer to section 5.3.
The fact that measures can be broken down into specific components evaluating commonalities and differences is central in the design of the approaches we will further introduce: the feature-based strategy and the information theoretical strategy.
These approaches will focus on characterizing the compared classes in order to express measures as a function of their commonalities and differences.
Feature-based Approach
The Feature-based approach generally refers to measures relying on a taxonomical interpretation of the feature model proposed by Tversky (Tversky 1977). However, as we will see, contrary to the original definition of the feature model, this approach is not necessarily framed in set theory i . The main idea is to represent classes as collections of features, i.e., characteristics describing the classes, to further express measures based on the analysis of the common and distinct features of the compared classes. The score of the measures will only be influenced by the strategy adopted to choose the features of the classes ii and the strategy adopted to compare them.
As we will see, the reduction of classes to collections of features makes it possible to set the semantic similarity back in the context of classical binary similarity or distance measures (e.g., set-based measures).
An approach commonly used to represent the features of a class is to consider its ancestors as features iii. We denote $A(c)$ the set of ancestors of the class $c$. Since the Jaccard index, which was proposed 100 years ago, numerous binary measures have been defined in various fields; a survey distinguishes 76 of them (Choi et al. 2010). Considering that the features of a class are defined by $A(c)$, an example of a semantic similarity measure derived from the Jaccard index was proposed in (Maedche & Staab 2001) iv:

$sim(u,v) = \frac{|A(u) \cap A(v)|}{|A(u) \cup A(v)|}$

i Recall that the feature matching function on which the feature model is based relies on binary evaluations of the features: "In the present theory, the assessment of similarity is described as a feature-matching process. It is formulated, therefore, in terms of the set-theoretical notion of a matching function rather than in terms of the geometric concept of distance" (Tversky & Itamar 1978).
ii As stressed in (Schickel-Zuber & Faltings 2007), there is a close link with multi-attribute utility theory (Keeney 1993), in which the utility of an item is a function of the preferences on the attributes of the item.
iii Implicit senses if we consider that compared elements are not classes but concepts. iv This is actually a component of a more refined measure.
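A minimal sketch of this ancestor-set Jaccard similarity follows; the toy taxonomy is an illustrative assumption.

```python
# Hypothetical taxonomy: child -> parents.
parents = {
    "livingBeing": [], "animal": ["livingBeing"], "plant": ["livingBeing"],
    "bird": ["animal"], "canary": ["bird"], "ostrich": ["bird"],
    "tree": ["plant"],
}

def features(c):
    """Features of a class = the class and all its ancestors, A(c)."""
    seen, stack = set(), [c]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(parents[n])
    return seen

def sim_jaccard(u, v):
    """Feature-based similarity: |A(u) & A(v)| / |A(u) | A(v)|."""
    Au, Av = features(u), features(v)
    return len(Au & Av) / len(Au | Av)

print(sim_jaccard("canary", "ostrich"))  # siblings share most features: 0.6
print(sim_jaccard("canary", "tree"))     # only livingBeing is shared: ~0.17
```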
Another example of a set-based expression of a feature-based approach is the measure defined by (Bulskov et al. 2002), which includes a parameter used to tune the symmetry of the measure. (Rodríguez & Egenhofer 2003) proposed a formulation derived from the ratio model defined by Tversky (introduced in section 2.2.2), also parameterized to tune the symmetry of the measure. Other authors define the taxonomic distance of two classes as a function of the ratio between their distinct and shared features. Various refinements of these measures have been proposed to enrich class features by taking their descendants into account, e.g., (Ranwez et al. 2006).
Feature-based measures do not have to be intensional, i.e., they are not expected to rely solely on the knowledge defined in the taxonomy. When the instances of the classes are known, the features of a class can also be seen by extension and be defined on the basis of the instances associated to the classes. As an example, the Jaccard index could also be used to compare two ordered classes according to their shared and distinct features, here characterized by extension:

$sim(u,v) = \frac{|I(u) \cap I(v)|}{|I(u) \cup I(v)|}$

with $I(c)$ the set of instances of the class $c$. Note that this approach makes no sense when the aim is to compare classes which are not ordered, since the set $I(u) \cap I(v)$ will then tend to be empty. Extensional variants following the same idea have also been defined. The measures presented above summarize the features of a class through a set representation corresponding to a set of classes or instances. However, alternative approaches can also be explored: even if, to our knowledge, such approaches have not been studied, the features of a class could be represented as a set of relationships, as a subgraph, etc.
In addition, regardless of the strategy adopted to characterize the features of a class (other classes, relationships, instances), the comparison of the features is not necessarily driven by a set-based measure. Indeed, the collections of features can also be seen as vectors. As an example, a class can be represented by a vector in a real space of dimension $|C|$, e.g., considering that each dimension associated to an ancestor of the class is set to 1. Vector-based measures will evaluate the distance of two classes by studying the coordinates of their respective projections. It has been proposed to compare two classes according to their representation in the Vector Space Model (VSM): considering a class-to-instance matrix, a weight corresponding to the inverse document frequency is associated to the cell $(c, i)$ of the matrix if the instance $i$ is associated to the class $c$, i.e., $i \in I(c)$. The vectors representing two classes are then compared using the dot product.
Information Theoretical Approach
The Information Theoretical approach relies on Shannon's information theory. As for the feature-based strategy, information theoretical measures rely on the comparison of two classes according to their commonalities and differences, here defined in terms of information. This approach formally introduces the notion of salience of classes through the definition of their informativeness, the Information Content (IC) i. (Resnik 1995) defines the similarity of a couple of classes as the IC of their common ancestor maximizing an IC function (originally an extrinsic IC), i.e., their Most Informative Common Ancestor (MICA):
$sim_{Resnik}(u,v) = IC(MICA(u,v))$

Resnik's measure does not explicitly capture the specificities of the compared classes. Indeed, couples of classes with an equivalent MICA will have the same semantic similarity, whatever their respective ICs are. To correct this limitation, several authors refined the measure proposed by Resnik to incorporate the specificities of the compared classes, e.g., (Lin 1998) ii, (Jiang & Conrath 1997), (Mazandu & Mulder 2013) and (Pirró & Euzenat 2010a). The measures of Lin and of Jiang & Conrath are central examples:

$sim_{Lin}(u,v) = \frac{2\,IC(MICA(u,v))}{IC(u) + IC(v)}$

$dist_{JC}(u,v) = IC(u) + IC(v) - 2\,IC(MICA(u,v))$

i Section 5.3.1.2 introduces the notion of information content. ii The measure presented is a commonly admitted redefinition of the original measure.

Taking into account the specificities of the compared classes can lead to high similarities (low distances) when comparing general classes; the maximal similarity will, for instance, be obtained when comparing a (general) class to itself. In fact, the identity of the indiscernibles is generally ensured (except for the root, which generally has an IC equal to 0). However, some treatments require this property not to be respected. Authors have therefore proposed to lower the similarity of two classes according to the specificity of their MICA, e.g. (Li et al. 2010); one such measure relies on $p(MICA(u,v))$, the probability of occurrence of the MICA. An alternative approach relies on the IC of the MICA and can therefore be used without extensional information about the classes, using an intrinsic expression of the IC.
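A minimal sketch of these three IC-based measures follows; the taxonomy and the IC values are illustrative assumptions (in practice the ICs would come from one of the estimators of section 5.3).

```python
# Hypothetical taxonomy and (already computed) IC values.
parents = {
    "entity": [], "animal": ["entity"], "bird": ["animal"],
    "canary": ["bird"], "ostrich": ["bird"], "fish": ["animal"],
}
ic = {"entity": 0.0, "animal": 0.8, "bird": 2.1,
      "canary": 3.5, "ostrich": 3.3, "fish": 2.4}

def ancestors(c):
    seen, stack = set(), [c]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(parents[n])
    return seen

def mica(u, v):
    """Most Informative Common Ancestor: common ancestor maximizing IC."""
    return max(ancestors(u) & ancestors(v), key=lambda c: ic[c])

def sim_resnik(u, v):
    return ic[mica(u, v)]

def sim_lin(u, v):
    return 2 * ic[mica(u, v)] / (ic[u] + ic[v])

def dist_jc(u, v):
    return ic[u] + ic[v] - 2 * ic[mica(u, v)]

print(sim_resnik("canary", "ostrich"))  # IC(bird) = 2.1
print(round(sim_lin("canary", "ostrich"), 3), dist_jc("canary", "ostrich"))
print(round(sim_lin("canary", "fish"), 3))  # lower: the MICA is animal
```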
Authors have also proposed to characterize the information carried by a class by summing the IC of its superclasses (Cross & Yu 2011). Such measures can also be considered as hybrid strategies between the feature-based and the information theory approaches: one can consider that they rely on a redefinition of the way the information conveyed by a class is characterized (by summing the IC of the ancestors), or simply that the features are weighted. Thus, following the set-based representations of features, authors have also studied these measures as fuzzy measures (Cross 2004; Cross & Sun 2007; Cross & Yu 2010; Cross & Yu 2011), e.g. defining the membership function of a feature corresponding to a class as a function of its IC.
Finally, other measures based on information theory have also been proposed, e.g., (Maguitman & Menczer 2005; Maguitman et al. 2006; Cazzanti & Gupta 2006). As an example, in (Maguitman & Menczer 2005), the similarity is estimated as a function of prior and posterior probabilities regarding instances and class membership.
Hybrid Approach
Other techniques take advantage of the various paradigms introduced in the previous sections. Among the numerous proposals, (Jiang & Conrath 1997) defined measures in which the density, the depth, the strength of connotation and the information content of classes are taken into account. We present the measure proposed by (Jiang & Conrath 1997) i, which considers the strength of association between a class $c$ and its parent $p$ defined as follows:

$w(c, p) = \left(\beta + (1-\beta)\,\frac{\bar{E}}{E(p)}\right)\left(\frac{d(p)+1}{d(p)}\right)^{\alpha}\left[IC(c) - IC(p)\right] T(c, p)$

The factor $\bar{E}$ refers to the average density of the whole taxonomy and $E(p)$ to the local density (see the publication for details). The factors $\beta \in [0,1]$ and $\alpha$ control the importance of the density factor and the depth $d(p)$, respectively. $T(c, p)$ defines the weight associated to the predicates. Finally, the distance is estimated as the weight of the path which links the compared classes through their LCA:

$dist(u,v) = \sum_{c \,\in\, path(u,v) \setminus \{LCA(u,v)\}} w(c, parent(c))$

Note that by setting $\alpha = 0$, $\beta = 1$ and $T(c, p) = 1$, we obtain the information theoretical distance defined by the same authors. (Singh et al. 2013) proposed a mixing strategy based on the IC-based measure of (Jiang & Conrath 1997) and the consideration of transition probabilities between classes relying on a depth-based estimation of the strength of connotation.
Considerations for the Comparison of Classes defined in Semantic Graphs
Several measures introduced in the previous sections were initially defined to compare classes expressed in a tree. However, several considerations must be taken into account in order to estimate the similarity of classes defined in a semantic graph. The content of this section is quite technical and may therefore not be suited for all readers; please refer to the notations introduced in section 5.2.
Shortest path
A tree is a specific type of graph in which multi-inheritance cannot be encountered, which implies that two classes which are not ordered have no common subclasses. Therefore, if there is no redundant taxonomical relationship, the shortest path linking two classes defined in a tree always contains a single common ancestor of the two classes. However, in a graph, since two non-ordered classes can have common subclasses, the shortest path linking two classes may in some cases not contain one of their common ancestors. Figure 15 illustrates the modifications induced by multi-inheritance.

Figure 15: The graph composed of the plain (blue) edges is a taxonomic tree, i.e., it does not contain classes with multi-inheritance. If the (red) dotted relationships are also considered, the graph is a directed acyclic graph (e.g., a taxonomical graph).
In Figure 15, the shortest path linking the two non-ordered classes C5 and C7 in the tree (i.e. without considering the red edges) is [C5-C3-C1-root-C2-C4-C7]. However, if we consider multiple inheritance (red edges), it is possible to link the two classes through paths which do not contain one of their common superclasses, e.g., [C5-C3-C6-C4-C7] or even [C5-C8-C7]. In practice, it is commonly admitted that the shortest path must contain a common superclass of the two compared classes. Given this constraint, the edge-counting taxonomical distance of $u$ and $v$ is generally (implicitly i) defined by:

$dist(u,v) = \min_{a \,\in\, A(u) \cap A(v)} \left(|sp(u,a)| + |sp(a,v)|\right)$

Note that when disjoint common ancestors (DCAs) are shared by the compared classes, the ancestor which maximizes the similarity is expected to be considered. Depending on the specificity function used, the shortest path does not necessarily involve the non-comparable common ancestor which maximizes it, e.g. the deepest. As an example, to distinguish the DCA to consider, (Schickel-Zuber & Faltings 2007) took into account a mix between depth and reinforcement (the number of different paths leading from one concept to another).

i The generalization of measures defined on trees to graphs is poorly documented in the literature.
The shortest path techniques can also be relaxed to consider paths which do not involve common ancestors, or which involve multiple common ancestors.
Notion of Depth
The definition of the notion of depth must also be reconsidered when the taxonomy forms a graph. Recall that, in a tree without redundancies, the depth of a class has been defined as the length of the shortest path linking the class to the root. The depth of a class is a simple example of an estimator of its specificity. In a tree, this estimator makes perfect sense since the depth of a class is directly correlated to the number of ancestors it has.
In a graph, or in a tree with redundant taxonomical relationships, we must ensure that the depth evolves monotonically with the ordering of the classes defined by the taxonomy. As an example, to apply depth-based measures to graphs, we must generally ensure that $depth(LCA(u,v))$ is lower than or equal to both $depth(u)$ and $depth(v)$. To this end, the maximal depth of a class must be used, i.e., the length of the longest path linking the class to the root. The measure proposed by (Pekar & Staab 2002) is, for instance, implicitly generalized by substituting this maximal depth for the depth.
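A minimal sketch of the maximal-depth computation on a DAG follows; the taxonomy with multi-inheritance is an illustrative assumption.

```python
from functools import lru_cache

# Hypothetical taxonomy with multi-inheritance: child -> parents.
parents = {
    "root": [], "A": ["root"], "B": ["root"],
    "C": ["A", "B"], "D": ["C", "B"],
}

@lru_cache(maxsize=None)
def max_depth(c):
    """Maximal depth: length of the longest path from the root to c.

    Unlike the minimal depth, this quantity is guaranteed to increase
    along the partial ordering, even with multi-inheritance."""
    return 0 if not parents[c] else 1 + max(max_depth(p) for p in parents[c])

for c in parents:
    print(c, max_depth(c))
# D is deeper than all of its ancestors, as required by depth-based measures.
```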
Notion of Least Common Ancestors
Most of the measures which have been presented take advantage of the notion of LCA or MICA of the compared classes. However, these measures do not consider disjoint semantic contributions, i.e., the whole set of common ancestors $A(u) \cap A(v)$ of the compared classes $u$ and $v$. To remedy this, several authors have proposed to consider the whole set of Disjoint Common Ancestors (DCAs) in the design of measures; the GraSM and DiShIn strategies were, for instance, proposed to this end. It has also been proposed to modify the information theoretical measures based on the notion of MICA by replacing the IC of the MICA with the average of the information contents of all the classes which compose the set of DCAs of the compared classes. A redefinition of the measure proposed by Lin following this idea is:

$sim(u,v) = \frac{2\,\overline{IC}(DCA(u,v))}{IC(u) + IC(v)}$, with $\overline{IC}(DCA(u,v)) = \frac{1}{|DCA(u,v)|}\sum_{a \in DCA(u,v)} IC(a)$

Other authors proposed to average the similarity between the classes according to their multiple DCAs, considering the average length of the set of paths which contain a given class and link it to the root of the taxonomy, and the set of non-comparable common ancestors (NCCAs) of the classes $u$ and $v$.
As we have underlined, numerous approaches have been defined to compare pairs of classes defined in a taxonomy; these measures can be used to compare any pair of nodes defined in a partially ordered set. Table 4 to Table 7 present some properties of a selection of measures defined to compare classes.
List of Pairwise Semantic Similarity Measures
The following tables list several SMs which can be used to compare classes defined in a taxonomy of classes, or any pair of elements defined in a poset. Measures are ordered according to their date of publication. Other contributions studying some properties of pairwise measures can be found in (X. Yu 2010; Slimani 2013). IOI: Identity of the Indiscernibles.
| Name | Type | KR const. | Range | IOI | Comment |
|---|---|---|---|---|---|
| Shortest Path | Sim / Rel | None | | Yes | Weight of the shortest path (sp) linking the compared classes. Several modifications can be considered in graphs depending on the strategy adopted, e.g., weighting of the relationships or predicates, constraints on the inclusion of a common superclass of the compared classes, etc. |
| | Dist (isA) | DAG | | Yes | Specific shortest path strategy with uniform weights; the shortest path is constrained to contain the LCA of the compared classes. |
| (Young Whan & Kim 1990), (Sussna 1993) | Dist | RDAG | | Yes | Originally defined as a parametric semantic relatedness; under specific constraints, this measure can be used as a semantic similarity. Shortest path technique taking into account non-uniform strengths of connotation, tuned according to the depth of the compared classes and specific weights associated to predicates. |
| (Richardson et al. 1994) | | | | | Proposes to integrate several intrinsic metrics (e.g., depth, density) to weight the relationships and define hybrid measures mixing the structural and information theoretical approaches; no measure explicitly defined. |
| | Sim | | | | Builds a meta-graph reducing the original ontology into clusters of related concepts; the similarity is assessed through a specific function evaluating the information content of the LCA. |
| simDIC | Sim | | | | Specific formulation of a set-based measure considering classes as their sets of ancestors. |
| simNunivers (Mazandu & Mulder 2013) | Sim | DAG | | Yes | IC of the MICA of the compared classes divided by the maximal IC of the compared classes. |
| | Sim | DAG | [0, 1] | | Feature-based expression relying on the Jaccard index. |
| | Sim | DAG | [0, 1] | | Cosine similarity on a vector-based representation of the classes; the vector representation is built according to the sets of instances of the classes. |
| (Ranwez et al. 2006) | Dist | DAG | | Yes | The distance is a function of the number of descendants of the LCA of the compared concepts; distance properties have been proved (positivity, symmetry, triangle inequality). |
| | Sim | DAG | | Yes | Comparison of the classes according to their ancestors; formulation expressed as a distance converted to a similarity using a negative log. |

Table 6: Semantic similarity measures or taxonomical distances designed using a feature-based approach. These measures can be used to compare a pair of classes defined in a taxonomy or any pair of elements defined in a partially ordered set.

| Name | Type | KR const. | Range | IOI | Comment |
|---|---|---|---|---|---|
| | Sim / Rel | | | | Originally defined as a semantic relatedness; can be used to compute a semantic similarity. Defines a non-linear approach to characterize the strength of connotation and the notion of S-value to characterize the informativeness of a class. |
| | Sim | | | | Multiple approaches are mixed. |

Table 7: Semantic similarity measures or taxonomical distances designed using a hybrid approach. These measures can be used to compare a pair of classes defined in a taxonomy or any pair of elements defined in a partially ordered set.
Semantic Similarity between Groups of Classes
Two main approaches are commonly distinguished to introduce semantic similarity measures designed for the comparison of two sets of classes, i.e., groupwise measures:
- The direct approach: measures which directly compare the sets of classes according to information characterizing the sets with regard to the information defined in the graph.
- The indirect approach: measures which assess the similarity of two sets of classes using a pairwise measure, i.e. a measure designed for the comparison of pairs of classes. They are generally simple aggregations of the similarity scores associated to the pairs of classes defined in the Cartesian product of the two compared sets.
Once again, a large diversity of measures has been proposed; we present some of them.
Direct Approach
The direct approach corresponds to a generalization of the approaches defined for the comparison of pairs of classes in order to compare two sets of classes.
It is worth noting that two sets of classes can be compared using classical set-based approaches. They can also be compared through their vector representations, e.g., using the cosine similarity measure. Nevertheless, such direct applications are in most cases not meaningful, since these measures do not take into account the similarity of the elements composing the compared sets i.
Structural approach
Considering the graph induced by the union of the ancestors of the classes which compose a set $X$, (Gentleman 2007) defined the similarity of two sets of classes (simLP) as the length of the longest path, shared by the two induced graphs, which links a class to the root.
Feature-based Approach
The feature-based measures are characterized by the approach adopted to express the features of a set of classes.
Several measures have been proposed from set-based measures. Considering Â(X) as the set of classes contained in G(X), i.e., the classes of X and all their ancestors, we introduce SimUI (Gentleman 2007) and the Normalized Term Overlap measure (NTO):

sim_UI(A, B) = |Â(A) ∩ Â(B)| / |Â(A) ∪ Â(B)|
sim_NTO(A, B) = |Â(A) ∩ Â(B)| / min(|Â(A)|, |Â(B)|)
Information theoretical measures
Among others, it has been proposed to weight such set-based expressions by the information content of the classes, e.g., replacing cardinalities by sums of ICs in the SimUI expression:

sim(A, B) = Σ_{c ∈ Â(A) ∩ Â(B)} IC(c) / Σ_{c ∈ Â(A) ∪ Â(B)} IC(c)
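A sketch of these set-based groupwise measures, assuming the induced sets of classes and an IC table are available; all helper names are ours, and the IC-weighted variant follows the expression above:

```python
def sim_ui(anc_a, anc_b):
    """SimUI: Jaccard index on the induced sets of classes (Gentleman 2007)."""
    return len(anc_a & anc_b) / len(anc_a | anc_b)

def sim_nto(anc_a, anc_b):
    """Normalized Term Overlap: intersection normalized by the smaller set."""
    return len(anc_a & anc_b) / min(len(anc_a), len(anc_b))

def sim_ui_ic(anc_a, anc_b, ic):
    """IC-weighted SimUI: sums of information contents replace cardinalities."""
    inter = sum(ic[c] for c in anc_a & anc_b)
    union = sum(ic[c] for c in anc_a | anc_b)
    return inter / union

# Toy example: induced sets (classes plus their ancestors) and IC values.
A = {"root", "a", "c"}
B = {"root", "a", "b", "d"}
IC = {"root": 0.0, "a": 1.0, "b": 1.0, "c": 2.3, "d": 2.3}

print(sim_ui(A, B), sim_nto(A, B), sim_ui_ic(A, B, IC))
```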
Indirect Approach
Section 5.5 presents numerous pairwise measures for the comparison of pairs of classes. They can be used to drive the comparison of sets of classes.
Improving the Direct Approaches for the Comparison of Sets of Classes
One of the main drawbacks of basic vector-based measures is that they consider dimensions as mutually orthogonal and do not exploit class relationships. In order to remedy this, vector-based measures have been formulated to:
- Weight dimensions considering class specificity evaluations (e.g., IC) (Huang et al. 2007; Chabalier et al. 2007; Benabderrahmane, Smail-Tabbone, et al. 2010).
- Exploit an existing pairwise measure to perform base vectors' products (Ganesan et al. 2003; Benabderrahmane, Smail-Tabbone, et al. 2010).
Therefore, pairwise measures can be used to refine the measures proposed to compare sets of classes using a direct approach.
Aggregation Strategies
A two-step indirect strategy can also be adopted in order to take advantage of pairwise measures to compare sets of concepts:
1. The similarity of the pairs of classes obtained from the Cartesian product of the two compared sets is computed.
2. Pairwise scores are next aggregated using an aggregation strategy, also called mixing strategy.
Classical aggregation strategies can be applied (e.g., max, min, average); more refined strategies have also been proposed. Among the most used are the Average of Maxima (AVGMAX), Best Match Max (BMM) and Best Match Average (BMA) strategies. The properties of the resulting groupwise measure depend on the pairwise measure used to compute the scores which are aggregated. In one direct proposal (Huang et al. 2007), groups of classes are represented using the Vector Space Model; the dimensions are not considered independent, i.e., the similarity of two dimensions is computed using an approach similar to the one proposed by Wu and Palmer, and the similarity of the vector representations of the instances is estimated using the cosine similarity.
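A sketch of the best-match mixing strategies given an arbitrary pairwise measure (the toy pairwise function and the exact formulations are ours; definitions of AVGMAX/BMM/BMA vary slightly across papers):

```python
def directed_avg_max(A, B, sim):
    """Average, over the elements of A, of the best match found in B."""
    return sum(max(sim(a, b) for b in B) for a in A) / len(A)

def bma(A, B, sim):
    """Best Match Average: mean of the two directed average-of-maxima."""
    return 0.5 * (directed_avg_max(A, B, sim) + directed_avg_max(B, A, sim))

def bmm(A, B, sim):
    """Best Match Max: the larger of the two directed average-of-maxima."""
    return max(directed_avg_max(A, B, sim), directed_avg_max(B, A, sim))

# Toy pairwise similarity: 1 if equal, 0.5 if same first letter, else 0.
def toy_sim(x, y):
    return 1.0 if x == y else (0.5 if x[0] == y[0] else 0.0)

A, B = {"cat", "cow"}, {"cat", "dog"}
print(bma(A, B, toy_sim), bmm(A, B, toy_sim))  # 0.625 and 0.75
```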
Unification of Similarity Measures for the Comparison of Classes
This section presents the works related to the unification of knowledge-based semantic measures dedicated to the comparison of classes.
Similitude between Semantic Measures
Several similitudes have been observed between SMs. As an example, in a tree, the classical edge-counting strategy can also be expressed as a function of the depths of the compared classes and the depth of their LCA:

dist(u, v) = depth(u) + depth(v) − 2 · depth(LCA(u, v))

Therefore the depth can be seen as a simple expression of the function used to estimate the specificity of a particular class. Denoting θ such a specificity estimator, the edge-counting strategy can thus be defined through the abstract expression:

dist(u, v) = θ(u) + θ(v) − 2 · θ(LCA(u, v))

We can see that this expression generalizes the information theoretical distance proposed by (Jiang & Conrath 1997):

dist_JC(u, v) = IC(u) + IC(v) − 2 · IC(MICA(u, v))

In the same manner, it has also been stressed by several authors that, in a tree i, the measure proposed by Wu and Palmer can be reformulated by:

sim_WP(u, v) = 2 · depth(LCA(u, v)) / (depth(u) + depth(v))

Therefore, once again, this expression can be generalized by an abstract similarity measure:

sim(u, v) = 2 · θ(LCA(u, v)) / (θ(u) + θ(v))

Such an abstract expression of a similarity measure highlights the relationship between the structural measure proposed by Wu and Palmer and the information theoretical measure proposed by Lin:

sim_Lin(u, v) = 2 · IC(MICA(u, v)) / (IC(u) + IC(v))

i In which a transitive reduction has been performed.
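These abstract expressions translate directly into code: a single pair of functions, parameterized by a specificity estimator theta, instantiates edge-counting and Jiang & Conrath (distance form) as well as Wu & Palmer and Lin (similarity form). This is a sketch; the toy taxonomy and names are ours:

```python
def abstract_dist(u, v, theta, mica):
    """theta(u) + theta(v) - 2 * theta(MICA): edge-counting with theta=depth,
    Jiang & Conrath with theta=IC."""
    return theta(u) + theta(v) - 2 * theta(mica(u, v))

def abstract_sim(u, v, theta, mica):
    """2 * theta(MICA) / (theta(u) + theta(v)): Wu & Palmer with theta=depth,
    Lin with theta=IC."""
    return 2 * theta(mica(u, v)) / (theta(u) + theta(v))

# Toy instantiation on a chain root -> a -> b, comparing u="a" and v="b".
DEPTH = {"root": 0, "a": 1, "b": 2}
IC = {"root": 0.0, "a": 0.7, "b": 1.6}
lca = lambda u, v: "a"  # LCA/MICA of ("a", "b") in this toy chain

print(abstract_dist("a", "b", DEPTH.get, lca))  # edge-counting: 1
print(abstract_sim("a", "b", IC.get, lca))      # Lin: 2*0.7/(0.7+1.6)
```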
A similar approach can be adopted to underline the relationship between some feature-based measures and information theoretical measures. Indeed, under specific tuning, comparing two classes using a feature-based measure, i.e., according to their shared and distinct features, can be equivalent to considering a particular expression of an information theoretical measure.
As an example, defining F(c) as the features of the class c (here, its ancestors), and using a SM based on the Dice index, we obtain the following feature-based measure:

sim(u, v) = 2 |F(u) ∩ F(v)| / (|F(u)| + |F(v)|)

Since in a tree two classes have a unique LCA, and the shared ancestors of u and v are exactly the ancestors of their LCA, this feature-based expression can be reformulated as:

sim(u, v) = 2 · depth(LCA(u, v)) / (depth(u) + depth(v))

Thus, this expression is a specific instance of the abstract formulation of the Dice formula presented above, defining θ(c) = |F(c)|.
Using a similar reformulation of the measure proposed by (Stojanovic et al. 2001), (Blanchard et al. 2008) also underlined that, in trees, feature-based expressions can be reformulated using a depth estimator (since, when features correspond to ancestors, |F(c)| behaves as depth(c)). Therefore, in a tree, we obtain:

sim(u, v) = depth(LCA(u, v)) / (depth(u) + depth(v) − depth(LCA(u, v)))
Framework for the Expression of Semantic Measures
The feature model proposed by Tversky was the first formulation of a framework from which several similarity measures can be derived through parametric formulation of measures (Tversky 1977). The feature model proposes to compare objects represented through sets of features. It therefore requires the features of the elements we want to compare to be specified.
For the comparison of classes, this model requires the definition of a function characterizing the features of a class. The similarity is next intuitively defined based on the common and distinctive features of the compared classes. This approach has long been used to compare sets according to their shared and distinct elements (e.g., Jaccard index, Dice coefficient). As we have seen in section 2.2.2, Tversky defined the contrast model and the ratio model as functions which can be used to compare objects represented as sets of features. Below we recall the formulation of the ratio model:

sim(u, v) = f(F(u) ∩ F(v)) / ( f(F(u) ∩ F(v)) + α · f(F(u) \ F(v)) + β · f(F(v) \ F(u)) ), with α, β ≥ 0

Such a general parameterized formulation of a similarity measure can be used to derive a large number of concrete measures. As an example, considering the salience of a set of features (i.e., the function f) as the cardinality of the set, and α = β = 1, the ratio model leads to the original definition of the Jaccard index. Setting α = β = 1/2 leads to the Dice coefficient. A large diversity of set-based measures can be expressed from specific instances of such parameterized functions. In other words, such general measures are abstract similarity measures which can be used to instantiate concrete similarity measures through the definition of a limited set of parameters.
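A sketch of the ratio model as a higher-order function: with the salience f set to set cardinality, (α, β) = (1, 1) yields the Jaccard index and (0.5, 0.5) the Dice coefficient (names are ours):

```python
def ratio_model(Fu, Fv, f=len, alpha=1.0, beta=1.0):
    """Tversky's ratio model over feature sets Fu, Fv with salience f."""
    common = f(Fu & Fv)
    return common / (common + alpha * f(Fu - Fv) + beta * f(Fv - Fu))

Fu, Fv = {"w", "x", "y"}, {"x", "y", "z"}
print(ratio_model(Fu, Fv))                        # Jaccard: 2/4 = 0.5
print(ratio_model(Fu, Fv, alpha=0.5, beta=0.5))   # Dice: 2/3
```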
The framework proposed by Tversky constrains the compared objects to be represented as sets of features and the similarity to be assessed as a function of the commonalities and differences of the two sets. By definition, the contrast model and the ratio model are therefore constrained to set-based formulations of measures. These models are more particularly tied to fuzzy set theory since, originally, Tversky defined the commonalities and differences of two objects as a function of the salience of their shared and distinct features.
Most set-based measures can be expressed using the Caillez and Kuntz formulation or the Gower and Legendre formulation. Since set-based measures can be used to design semantic measures, these formulations can be generalized in a straightforward manner according to the Tversky feature approach:

sim(u, v) = f(F(u) ∩ F(v)) / ( f(F(u) ∩ F(v)) + θ · ( f(F(u) \ F(v)) + f(F(v) \ F(u)) ) )

Defining the function f as the cardinality of the set of features, this abstract formulation (together with related normalizations, e.g., by the minimal cardinality or by the geometric mean of the cardinalities) can be used to derive the Simpson and Ochiai coefficients, to cite a few (Choi et al. 2010). The reformulation can also be used to express numerous other measures, e.g., the Sokal and Sneath coefficient (θ = 2), the Jaccard index (θ = 1) and the Dice coefficient (θ = 1/2) (Choi et al. 2010).
A model of semantic distance relying on a graph-based approach, which quantifies the distance between data values as a function of graph traversal, has also been proposed. However, Blanchard and collaborators were the first to take advantage, in an explicit manner, of abstract definitions of SMs for the comparison of pairs of classes defined in KRs. In their studies, the authors focused on an information theoretical expression of semantic measures to highlight relationships between several measures proposed in the literature. As we have seen, based on the intuitive notion of commonalities and differences, and based on a particular expression of the notion of specificity, the authors underlined that the expressions proposed by Wu and Palmer and by Lin can both be derived from an abstract expression of the Dice index.
Therefore, both the Wu and Palmer and the Lin measures rely on a general expression of the Dice coefficient, which corresponds to the ratio model with α = β = 1/2. Indeed, defining the commonality of the compared concepts through the least common ancestor which maximizes a specificity function θ (with θ(c) = depth(c) for the measure proposed by Wu and Palmer and θ(c) = IC(c) for the measure proposed by Lin), we can derive both measures from the general expression. Another expression derived from such a general expression of the Dice coefficient has been proposed by the authors in (Blanchard et al. 2006). Several other abstract expressions of measures can be found in the literature.
In their studies, summarized in the PhD thesis of Blanchard and in (Blanchard & Harzallah 2005; Blanchard et al. 2008), the authors stressed the suitability of decomposing SMs through abstract expressions to further characterize their properties and to study groups of measures.
Other authors have also demonstrated relationships between different similarity measures and took further advantage of abstract frameworks to design new measures or to study existing ones (Pirró & Euzenat 2010a; Cross & Yu 2010; Sánchez & Batet 2011; Cross et al. 2013). These contributions mainly focused on establishing local relationships between set-based measures and measures framed in Information Theory. (Pirró & Euzenat 2010a) present an information theoretical expression of the components distinguished by the feature model (commonalities and differences) and therefore enable the expression of numerous measures based on the ratio model or the contrast model. Table 8 presents the mapping between feature-based and information theoretical similarity models proposed by the authors.
Description | Feature-based model | Information-theoretic model
Salience of the common features of u and v | f(F(u) ∩ F(v)) | IC(MICA(u, v))
Salience of the features of u not shared with the features of v | f(F(u) \ F(v)) | IC(u) − IC(MICA(u, v))
Salience of the features of v not shared with the features of u | f(F(v) \ F(u)) | IC(v) − IC(MICA(u, v))

Table 8: Mapping proposed by (Pirró & Euzenat 2010a) between the feature model and the information theoretic approach (reproduction with some modifications in order to be in accordance with the notions and notations introduced).
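Plugging this mapping into the ratio model yields an information theoretical Tversky measure; the sketch below follows Table 8 (names and example IC values are ours). Note that α = β = 0.5 recovers Lin's expression, in line with the unification discussed above:

```python
def it_ratio_model(ic_u, ic_v, ic_mica, alpha=1.0, beta=1.0):
    """Ratio model with Table 8's information-theoretic components:
    common = IC(MICA), distinct(u) = IC(u) - IC(MICA), distinct(v) likewise."""
    common = ic_mica
    return common / (common + alpha * (ic_u - ic_mica)
                            + beta * (ic_v - ic_mica))

# alpha = beta = 1   -> IC(MICA) / (IC(u) + IC(v) - IC(MICA))
# alpha = beta = 0.5 -> 2 IC(MICA) / (IC(u) + IC(v)), i.e., Lin's measure
print(it_ratio_model(ic_u=1.6, ic_v=2.1, ic_mica=0.9))
print(it_ratio_model(1.6, 2.1, 0.9, alpha=0.5, beta=0.5))
```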
In the same vein, (Cross 2004; Cross et al. 2013) proposed a similar contribution in which feature-based approaches and measures based on Information Theory are expressed within the frame of fuzzy set theory.
In (Mazandu & Mulder 2013), the authors propose another general framework and unified description of measures relying on the notion of information content for the comparison of pairs of classes. Like the contributions above, the authors focused on an information theoretical definition of measures to underline similarities between existing measures.
Despite the suitability of these frameworks for studying some properties of SMs, only a few works rely on them to express measures (Sánchez & Batet 2011; Cross et al. 2013). Moreover, current frameworks only focus on a specific paradigm (e.g., the feature-based strategy) to express measures. In fact, most existing frameworks only encompass a limited number of measures and were not defined with the purpose of unifying measures expressed using the variety of paradigms reviewed in section 3.4.1.
The main limitation of these frameworks lies in the fact that they derive from the feature model or from an information theoretical expression of the feature model, and are therefore by definition limited to these paradigms. To overcome this limitation, a framework framed in the strategy adopted to characterize the representation of the compared elements has recently been proposed. This framework has its roots in the teaching of cognitive sciences regarding the central role played by the representation adopted to characterize compared elements. Therefore, contrary to the other frameworks, this proposal is not limited to specific approaches constrained by a particular representation of the compared elements (feature-based, structural, information theoretical). Indeed, this framework makes it possible to explicitly express the strategy adopted to characterize the representation of a class (set-based, information theoretical, graph-based, etc.). The framework further distinguishes the primitive functions commonly found in SM expressions (e.g., functions used to characterize the commonality and the differences of the compared representations, or the saliency of a representation).
Semantic Relatedness between Two Classes
The semantic measures which can be used to assess the semantic relatedness of a pair of classes generalize those defined for the estimation of the semantic similarity. These measures take advantage of all predicates defined in the semantic graph. Generally, these measures are specific expressions of the structural measures presented for the estimation of the semantic similarity of two classes. Refer to the contributions of (Sussna 1993;Wang et al. 2007).
Semantic Relatedness between Two Instances
This subsection presents the various approaches which can be used to compare a pair of instances i .
Evaluating the proximity between instances requires defining a representation (or canonical form) to characterize an instance. Four approaches can be distinguished depending on the canonical form adopted:
- Instances represented as graph nodes.
- Instances represented as sets of classes.
- Instances represented as sets of properties.
- Hybrid techniques.
Most of the measures used to compare instances have already been introduced in section 5.4.1. We briefly recall the various strategies which can be adopted according to the representation of an instance, and we particularly focus on the comparison of instances through the notion of projection.
Comparison of Instances Using Graph Structure Analysis
Two instances can be compared using their interconnections in the graph of relationships defined in the KR. In this case, structural measures introduced in section 5.4.1 can be used in a straightforward manner, e.g., shortest path techniques, random walk approaches, SimRank (Jeh & Widom 2002).
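As an illustration of this structural family, a naive iterative SimRank in the spirit of (Jeh & Widom 2002); the toy graph, parameter values and naming are ours:

```python
import itertools

def simrank(in_nbrs, C=0.8, iters=10):
    """Naive SimRank: two nodes are similar if their in-neighbours are similar."""
    nodes = list(in_nbrs)
    sim = {(a, b): 1.0 if a == b else 0.0 for a in nodes for b in nodes}
    for _ in range(iters):
        new = {}
        for a, b in itertools.product(nodes, repeat=2):
            if a == b:
                new[(a, b)] = 1.0
            elif in_nbrs[a] and in_nbrs[b]:
                s = sum(sim[(i, j)] for i in in_nbrs[a] for j in in_nbrs[b])
                new[(a, b)] = C * s / (len(in_nbrs[a]) * len(in_nbrs[b]))
            else:
                new[(a, b)] = 0.0
        sim = new
    return sim

# Toy citation-like graph: u and v are both pointed to by x and y.
IN = {"x": [], "y": [], "u": ["x", "y"], "v": ["x", "y"], "w": ["y"]}
print(round(simrank(IN)[("u", "v")], 3))  # 0.4 with these toy values
```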
Instances as Sets of Classes
The semantic relatedness of two instances can be evaluated regarding reductions of the compared instances as sets of classes. In this case, the approaches defined to estimate the semantic similarity of two sets of classes are used (refer to section 5.6). Such an approach is commonly used to compare instances characterized by classes or concepts structured in a KR, e.g. gene products annotated by Gene Ontology terms, documents annotated by MeSH descriptors, etc.
Instances as a Set of Properties
The comparison of instances is most of the time driven by the comparison of their representation through sets of properties. The SMs which can be used to compare such representations of instances are the measures introduced for the graph property model in section 5.4.1.2.
The following presentation focuses on the comparison of instances characterized through the notion of projection. This approach has been defined in (Harispe, Ranwez, et al. 2013a) and generalizes the comparison of instances represented through sets of properties.
Characterization of Properties through Projections
i This is an extended version of the state of the art presented in (Harispe, Ranwez, et al. 2013a).
A direct or indirect property of an instance i corresponds to a partial representation of i. In Figure 16 for example, the rollingStones instance can be represented by its name or music genres. A simple property of an instance is therefore expressed through the resources linked to it. Representing an instance through its labels is therefore the same as considering all the labels l for which a path links i to l through the relationship rdf:label. In other words, it corresponds to considering all the labels l for which a triplet (i, rdf:label, l) exists.
In a general manner, the path linking two resources is characterized by a path pattern p = p_1, …, p_k, with each p_j belonging to P, the set of predicates defined in the KR. A path is therefore associated in this manner with a range defined by the type of resources specified by the range of p_k, the last predicate composing the path pattern. The definition of a path pattern thus enables some of the properties of instances to be characterized through a path, with the range of the path pattern being a set of values that may be included in, or composed of, values of the type rdfs:Datatype, e.g., String.
Let's distinguish three types of path patterns depending on the range of their last predicate p_k:
- Data: the range of p_k is a set of data values, e.g., Strings, Dates (Figure 16, case 2).
- Instances: the range of p_k is a set of instances (Figure 16, case 1).
- Classes: the range of p_k is a set of classes (Figure 16, case 3).
A path pattern may be used to characterize simple (either direct or indirect) properties of an instance. Complex properties, however, require several paths in order to be expressed. As an example, comparing two music bands through the Euclidean distance between their places of origin involves defining a complex property encompassing the latitude and the longitude of a place, which requires two paths, one reaching the latitude and one reaching the longitude (Figure 16, case 4). In other words, the information characterizing a music band via a property defining its place of origin corresponds to the projection of the instance onto specific resources capable of being reached through paths in the semantic graph. In order to characterize all properties of an instance, the notion of path pattern can thus be generalized by introducing the notion of projection.
Definition of a Projection
A projection refers to projecting a mathematical structure from one space to another. In formal terms, a projection is composed of a set of paths and is associated with a projection type t, an element of the set T defining the types of projections onto which an instance can be projected.
The projection type corresponds to the range associated with the projection, i.e., the type of values potentially used to characterize the instance. When simple projections are used, i.e., when the projection is composed of a single path pattern, the projection range is defined by the path range. Yet when complex projections involving multiple paths are used, other types of projections can be defined, yielding a set of types that also indicates the complex objects available for use in representing the complex properties of an instance. Let's note that complex objects are used to represent properties not explicitly expressed in the knowledge base, e.g., a geographic location (latitude, longitude).
Four types of projections can therefore be distinguished: the three capable of being associated with a single path pattern (Data, Instances, Classes), and the Complex type used to represent an instance by means of a set of complex objects combining various (simple) properties. Let's denote π_t a projection of range t, and π_t(i) the type-t projection of an instance i.
Characterization of Instances through Projections
Any instance can be represented by a set of projections. We therefore define a set of projections as a context of projection which can be associated to any set of instances, e.g., class, SPARQL query.
Let's consider the definition of a context of projection defined in order to characterize the instances of a specific class C; we denote it P_C. This context of projection defines the approach adopted to represent an instance of class C, by distinguishing the various properties of interest when characterizing an instance of C. The proximity of two instances of C will next be computed regarding the projections defined in P_C. The SM used to assess the proximity takes into account all projections composing the context of projection which has been defined. Therefore, this SM requires a method to compare two instances considering a specific projection.
Comparison of Two Projections
Each projection π_t is associated with a measure that enables comparing a pair of instance projections of type t. Recall the types of projections which have been defined: Data, Instances, Classes, and Complex.
Classes: Two projections of the Classes type can be compared using a SM adapted to a comparison of classes.
Data: A comparison of Data type projection requires defining a measure adapted to the type of values constituting the sets of data values produced by the given projection. As an example, two strings may be compared using the Levenshtein distance (Levenshtein 1966).
Instances: for Instances type projections, the projection is associated to a set of instances; these sets can be compared using set-based measures, e.g., in order to evaluate the size of the intersection of the two sets.
Complex: complex projections require defining a measure enabling the comparison of two complex objects. Let's note that in some cases, complex objects or compared values will require some data pre-processing prior to use of the proximity function; as an example, such a pre-processing step could consist of computing the body mass index from the size and weight of an instance of a class Person.
As previously observed, a projection defines a set of resources that characterize a specific property of an instance. To estimate the similarity of two instances relative to a specific projection, a measure must be specified so as to compare two sets (sometimes singletons) of resources. Various approaches are available for evaluating these two sets, namely:
Cardinality: The measure evaluates the cardinality of both sets, e.g., by comparing two instances of a class parent with respect to the number of children they have.
Direct method: A measure adapted for a set comparison is to be used (e.g. Jaccard index); one example herein would be to compare instances relative to the number of overlapping resources, e.g. the number of common friends. Vector representations can also be used.
Indirect method: This method relies on evaluating the proximity of the pair of resources able to be built by considering the compared sets (a Cartesian product of sets), e.g. couples of strings. In this case, an aggregation strategy must be defined to aggregate the proximity scores obtained for all resource pairs built from the Cartesian product of the two compared sets. Classical operators such as Min, Max, Average or more refined approaches may be used to aggregate the scores (refer to section 5.6.2.2).
As pointed out above, when an indirect method is used to compare two projections, a measure enabling the comparison of two sets of resources needs to be defined. Several approaches are available for comparing sets of classes, strings or numerical values. Note that the relevance of a measure is once again defined by both the application context and the semantics the similarity scores are required to carry.
Two groups of instances can be compared by using a direct or an indirect approach. When an indirect approach is selected, a strategy to enable comparing a couple of instances must be determined. It is therefore possible to use the context of projection defined for the class of the two instances under comparison. This context of projection actually defines the properties that must be taken into account when comparing two instances of this specific type. Applying such a strategy potentially corresponds to a recursive treatment, for which a stopping condition is required. In all cases, computing the proximity of two projections should not imply use of the context of projection containing both projections. A proximity measure can thus be represented through an execution graph highlighting the dependencies occurring between contexts of projection. Consequently, this execution graph must be analysed to detect cycles, for the purpose of ensuring computational feasibility. If a cycle is detected, the measure will not be computable.
Comparison of Two Instances through Their Projections
Once a measure has been chosen to compare each projection, a general SM between two instances i and j can be obtained by aggregation, e.g.:

sim(i, j) = Σ_t w_t · σ_t(π_t(i), π_t(j)), where σ_t is the measure chosen to compare projections of type t, w_t is the weight associated to the projection π_t, and the sum of the weights equals 1.

This measure exploits each projection shared between the compared instances. In other words, the instances of the class are compared based on a specific characterization of all relevant properties that must be taken into account in order to rigorously conduct the comparison. A hybrid technique has also been proposed, taking into consideration the direct properties characterizing the compared instances as well as the shortest path linking the two instances.
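A sketch of this weighted aggregation over projections; every name below (the context of projection, the projection functions, the per-type measures) is illustrative, not part of the original proposal:

```python
def jaccard(s1, s2):
    """Set-based measure used here for both Classes and Instances projections."""
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 1.0

# Each entry of the context: (weight, projection function, measure).
CONTEXT = [  # hypothetical context of projection for a class "MusicBand"
    (0.6, lambda i: i["genres"], jaccard),    # Classes-type projection
    (0.4, lambda i: i["members"], jaccard),   # Instances-type projection
]

def sim_instances(i, j, context=CONTEXT):
    """Weighted sum of per-projection proximities (weights sum to 1)."""
    return sum(w * measure(proj(i), proj(j)) for w, proj, measure in context)

stones = {"genres": {"rock", "blues"}, "members": {"mick", "keith"}}
beatles = {"genres": {"rock", "pop"}, "members": {"john", "paul"}}
print(sim_instances(stones, beatles))  # 0.6 * 1/3 + 0.4 * 0 = 0.2
```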
Challenges
In light of the state of the art of the large diversity of SMs presented in this survey, this section highlights some of the challenges facing the communities involved in SM study.
Better Characterize Semantic Measures and their Semantics
All along this paper we have stressed the importance of controlling the semantics associated to SMs, i.e., the meaning of the scores produced by the measures. This particular aspect is of major importance since the semantics of measures must be explicitly understood by users of SMs, as it conditions the relevance of using a specific measure in a particular context. Indeed, the semantics of a SM is generally not discussed in proposals (except for some broad distinction between similarity and relatedness). However, for instance, measures based only on taxonomical analysis (knowledge-based semantic similarity measures) can have different meanings depending on the assumptions on which they rely. In this paper, we have underlined that the semantics associated to SMs can only be understood with regard to the semantic proxy used to support the comparison, the mathematical properties associated to the measures, and the assumptions on which the measures are based.
The semantics of the measures can therefore only be captured if a deep characterization of SMs is provided. In the last decades, researchers have mainly focused on the design of SMs and, despite the central role of the semantics of SMs, only a few contributions have focused on this specific aspect. As we have seen, this disinterest in the study of the semantics of the measures can be partially explained by the fact that numerous SMs have been designed in order to better mimic human appreciation of semantic similarity/relatedness. In this case, the semantics to be carried by the measures is expected to be implicitly constrained by the benchmarks used to evaluate measures' accuracy. Nevertheless, although evaluation protocols based on ad hoc benchmarks are meaningful to compare SMs in particular contexts of use, they do not give access to a deep understanding of measures and therefore fail to provide the information needed to take advantage of SMs in other contexts of use.
The implications of a better characterization of semantic measures are numerous. We already stressed its importance for the selection of SMs in specific contexts of use. Such a characterization could also benefit cognitive sciences. Indeed, as we have seen in section 2.2, the proposals of cognitive models aiming to explain human appreciation of similarity have been supported by the study of properties expected by the measures. As an example, recall that the spatial models have been challenged according to the fact that human appreciation of similarity has proven not to be in accordance with the axioms of distance. Therefore, characterizing: (i) which SMs best performed according to human expectations of semantic similarity/relatedness and (ii) the properties satisfied by these measures, could help cognitive scientists to improve existing models of similarity or derive more accurate ones.
In this paper, we have proposed an overview of the various SMs which have been proposed to compare units of language, classes or instances semantically characterized. In section 3.1, we distinguished various aspects of SMs which must be taken into account for their broad classification:
- The types of elements which can be compared.
- The semantic proxies used to extract the semantic evidence on which the measures will be based.
- The canonical form adopted to represent the compared elements, which enables the design of algorithms for their comparison.
In section 2.3.1, we also recalled some of the mathematical properties which can be used to further characterize SMs. In section 2.1.3, based on the several notions introduced in the literature, we proposed a characterization of the general semantics which can be associated to SMs (e.g., similarity, relatedness, distance, taxonomical distance). Finally, all along this paper, and particularly in section 5.3, we distinguished several semantic evidences on which can be based SMs and we underlined the assumptions associated to their consideration.
We encourage SM designers to provide an in-depth characterization of the measures they propose. To this end, they can use the various aspects and properties of the measures distinguished in this paper. We also encourage the communities involved in the study of SMs to better define what a good semantic measure is and what makes a measure better than another. To this aim, the study of the role of contexts seems to be of major importance. Indeed, as we have seen in section 4.2, the accuracy of measures can only be discussed with regard to specific expectations of the measures. Several other properties of measures could also be taken into account and further investigated:
- Algorithmic complexity.
-Degree of control on the semantics of the scores produced by the measures.
-The confidence which can be associated to a score.
- The robustness of a measure, i.e., the capacity of a measure to produce robust scores considering the uncertainty associated to expected scores or perturbations of the semantic proxies on which the measure relies (modifications of the KRs, corpus modifications).
- The discriminative power of the measure, i.e., the distribution of the scores produced by a measure.
Provide Tools for the Study of Semantic Measures
The communities studying and using SMs require software solutions, benchmarks, and theoretical tools to compute, compare and analyse SMs.
Develop benchmarks
As we have seen in section 4.2.2 several benchmarks exist to evaluate semantic similarity and relatedness. Most of these benchmarks aim at evaluating SMs' accuracy according to human appreciation of similarity. For the most part they are composed of a reduced number of entries, e.g., pairs of words/concepts, and have been computed using a reduced pool of subjects.
Initiatives for the development of benchmarks must be encouraged in order to obtain larger benchmarks in various domains of study. Word-to-word benchmarks must be conceptualized as much as possible in order for them to be usable to evaluate knowledge-based SMs i. It is also important to propose benchmarks which are not based on human appreciation of similarity, i.e., benchmarks relying on an indirect evaluation strategy.
Develop Generic Open-source Software Solutions for Semantic Measures
In section 4.1, we proposed an overview of the main software solutions dedicated to SMs. They are of major importance to: (i) ease the use of the theoretical contributions related to SMs, (ii) support large scale comparisons of measures and therefore better understand the measures, (iii) develop new proposals.
Software solutions dedicated to distributional measures are generally developed without being restricted to a specific corpus of texts. They can therefore be used in a large diversity of contexts of use as long as the semantic proxy considered corresponds to a corpus of texts.
Software solutions dedicated to knowledge-based SMs are most of the time developed for a specific domain (e.g., refer to the large number of solutions developed for the Gene Ontology alone). Such a diversity of software is limiting for SM designers since implementations made for a specific KR cannot be reused in applications relying on other KRs. In addition, it hampers the reproducibility of results, since some of our experiments have shown that specific implementations tend to produce different results ii. In this context, we encourage the development of generic open-source software solutions not restricted to specific KRs. This is challenging since the formalism used to express KRs is not always the same, and the specificities of particular KRs sometimes deserve to be taken into account to develop SMs. However, there are several cases in which generic software can be developed. As an example, numerous knowledge-based SMs rely on data structures corresponding to partially ordered sets or, more generally, semantic graphs. Other measures are designed to take advantage of KRs expressed in standardized languages such as RDF(S) and OWL. Generic software solutions can be developed to encompass these cases. The development of the Semantic Measures Library is an example of such an initiative (Harispe, Ranwez, et al. 2013b). Reaching such a goal could open interesting perspectives. Indeed, based on such generic and robust software supported by several communities, domain-specific tools and various programming language interfaces can next be developed to support specific use cases and KRs.
The diversity of software solutions also has benefits as it generally stimulates the development of robust solutions. Therefore, another interesting initiative, complementary to the former, could be to provide generic and domain specific tests to facilitate both the development and the evaluation of the software solutions. Such tests could for instance be the expected scores to be produced by specific SMs according to a reduced example of a corpus/KR. This specific aspect is important in order to standardize the software solutions dedicated to SMs and to ensure users of specific solutions that the score produced by the measures are in accordance with the original definitions of the measures. i It is quite common to find papers describing knowledge-based SM evaluation using word-to-word benchmarks without giving access to the concepts associated to each words and the strategy adopted when multiple concepts could be associated to a word. ii This can be explained by bugs or particular interpretations on the definitions of measures or on the way to handle KRs. Refer to https://github.com/sharispe/sm-tools-evaluation for an example in the biomedical domain.
As we have seen in section 4.2, the evaluation of SMs is mainly governed by empirical studies used to assess their accuracy according to expected scores/behaviours of the measures. Therefore, the lack of open-source software solutions implementing a large diversity of measures hampers the study of SMs. It explains, for instance, why the evaluations of measures available in the literature only involve the comparison of a subset of measures which is not representative of the diversity of SMs available today. Initiatives aiming at developing robust open-source software solutions giving access to a large catalogue of measures must therefore be encouraged. It is worth noting the importance of these solutions being open-source. Our communities also lack open-source software dedicated to SM evaluation. Indeed, despite some initiatives in specific domains i, evaluations are not made through a common framework, as is done in most communities, e.g., information retrieval, ontology alignment. i It is quite common to find papers describing knowledge-based SM evaluation using word-to-word benchmarks without giving access to the concepts associated to each word and the strategy adopted when multiple concepts could be associated to a word. ii This can be explained by bugs or particular interpretations of the definitions of measures or of the way to handle KRs. Refer to https://github.com/sharispe/sm-tools-evaluation for an example in the biomedical domain.
Develop Theoretical Tools for Semantic Measures
The large number of SMs which have been proposed is hard to study, e.g., deriving interesting properties of measures requires the analysis of each measure. However, as we have seen in section 5.7, several initiatives have proposed theoretical tools to ease the characterization of measures; they open interesting perspectives to study groups of measures. Such theoretical frameworks have proven to be essential to better understand the limitations of existing measures and the benefits of new proposals. They are also critical to distinguish the main components on which the measures rely. Characterizing such components opens interesting perspectives to improve families of SMs based on these components; e.g., the definition of the GrasM strategy to better characterize the commonality of two classes is an example of the redefinition of a component used by several measures.
Standardize Knowledge Representation Handling
In section 5.2.2, we discussed the process required to transform a KR into a data structure which can be processed by the measures. Such a process is actually too subject to interpretation and deserves to be carefully discussed and formalised. Indeed, as an example, we stressed that numerous measures consider KRs as semantic graphs despite the fact that the formalism on which KRs rely cannot be mapped to semantic graphs without some loss of knowledge. The impact of such a reduction of KRs is of major importance since it can strongly impact the results produced by the measures ii. The treatment performed to map a KR to a semantic graph is generally not documented, which explains some of the difficulties encountered in reproducing experiments.
Promote Interdisciplinarity
From cognitive sciences to biomedical informatics, the study of SMs involves numerous communities. Efforts have to be made to promote interdisciplinary studies and to federate the contributions made in the various fields. We briefly provide a non-exhaustive list of the main communities involved in the study of SMs, as well as communities which could contribute to it or fields of study relevant to solicit in order to further study SMs. The list is alphabetically ordered:
Biomedical Informatics and Bioinformatics: active in the definition and study of SMs. These communities are also active users of SMs.
i E.g., CESSM to evaluate SMs designed for the Gene Ontology. Note that this solution is not open-source; it can therefore not be used to support large-scale evaluations, and it is impossible to reproduce the experiments and the conclusions derived from them.
ii Consider for instance the simple case of a taxonomy corresponding to a semantic graph in which redundant relationships have been defined.
Cognitive Sciences: propose cognitive models of similarity and mental representations which can be used to improve the design of SMs and better understand human expectations regarding similarity/relatedness. These communities can also use empirical studies made for the evaluation of SMs to discuss the cognitive models they propose.
Complexity Theory: study of the complexity of SMs.
Geoinformatics: Definition and study of SMs. They are also active users of SMs.
Graph Theory: several major contributions relative to graph processing, essential for the optimization of measures based on graph-based KRs. This community will play an important role in the near future of knowledge-based SMs since large semantic graphs composed of billions of relationships are available today. Processing such semantic graphs requires optimization techniques to be developed.
Information Retrieval: define and study SMs taking advantage of corpus of texts or KRs.
Information Theory: plays an important role in better understanding the notion of information and in defining metrics which can be used to capture the amount of information conveyed by, shared between, and distinct to the compared elements, e.g., the notion of information content.
Knowledge Engineering: studies KRs and defines the KRs which will further be used by some SMs. This community could, for instance, play an important role in characterizing the assumptions made by the measures.
Logic: defines formal methods to express and take advantage of knowledge. This community can play an important role in characterizing the complexity of knowledge-based semantic measures, for instance.
Measure Theory: defines a mathematical framework for the study of the notion of measure. Essential to derive properties of measures, to better characterize SMs, and to take advantage of the theoretical contributions proposed by this community.
Metrology: studies both theoretical and practical aspects of measurements.
Natural Language Processing: actively involved in the definition of distributional measures. This community proposes models to characterize corpus-based semantic proxies and defines measures for the comparison of units of language.
Optimization: important contributions which can be used to optimize measures, to study their complexity and to improve their tuning.
Philosophy: plays an important role in the definition of essential concepts on which SMs rely, e.g., the notions of Meaning and Context.
Semantic Web and Linked Data: define standards (e.g., languages, protocols) and processes to take advantage of KRs. The ontology alignment and instance matching communities are actively involved in the definition of (semantic) measures based on KRs.
Statistics and Data Mining: important contributions which can be used to characterize large collections of data. Major contributions in clustering can, for instance, be used to better understand SMs.
Study the Algorithmic Complexity of Semantic Measures
As we have seen all along this survey, most contributions have focused on the definition of SMs. The study of their algorithmic complexity is however nearly nonexistent, despite the fact that this aspect is essential for practical applications. Therefore, to date, no comparative studies can be made to discuss the benefits of using computationally expensive measures. This aspect is however essential to compare SMs. Indeed, in most application contexts, users will prefer to trade some measure accuracy for a significant reduction of the computational time and resources required to use a measure. To this end, SM designers must, as much as possible, provide the algorithmic complexity of their proposals. In addition, as the theoretical complexity and the practical efficiency of an implementation are different, developers of software tools must provide metrics to discuss and compare the efficiency of measure implementations.
Support Context-Specific Selection of Semantic Measures
Both theoretical and software tools must be proposed to orient end-users of SMs in the selection of measures according to the needs defined by their application contexts. Indeed, although most people only (blindly) consider benchmark results to select a measure, efforts have to be made in order to orient end-users towards the best suited approach according to their usage context, understanding the implications (if any) of using one approach compared to another.
The several properties we have presented to characterize measures can be used to guide the selection of SMs. Nevertheless, numerous large-scale comparative studies have to be performed to better understand the benefits of selecting a specific SM in a particular context of use.
Conclusions
In this paper, we have introduced the large diversity of semantic measures (SMs) which can be used to compare various types of elements, i.e., units of language, concepts or instances, based on texts and knowledge representation analysis. These measures have proven to be essential tools to drive the comparison of such elements by taking advantage of semantic evidence which formally or implicitly supports their meaning or describes their nature. From Natural Language Processing to Biomedical Informatics, SMs are used in a broad field of applications and have become a cornerstone for designing intelligent agents which will, for instance, use semantic analysis to mimic the human ability to compare things.
SMs, through the diverse notions presented in this paper (e.g., semantic similarity/relatedness and distance), have been actively studied by several communities over the last decades. However, as we have seen, the meaning of the large terminology related to SMs was not clearly defined, and misuses are frequent in the literature. Based on the commonly admitted definitions and on new proposals, this paper presents a classification of, and clear distinctions between, the semantics carried by the numerous notions related to SMs.
The extensive survey presented in this paper offers an overview of the main contributions related to the broad subject of SMs. It also underlines interesting aspects regarding the interdisciplinary nature of this field of study. Indeed, as we have seen, the design of SMs is (implicitly or explicitly) based on models of mental representations proposed by cognitive sciences. These models are further expressed mathematically according to specific canonical forms adopted to represent elements and functions designed to compare these representations; this whole process enables computer algorithms to compare units of language, concepts or instances, taking into account their semantics.
Our analysis of existing contributions underlines the lack of an extensive characterization of measures and provides several aspects and properties of SMs which can be used to this end. We also stressed the importance for our communities of better capturing the semantics associated to SMs, i.e., of controlling the meaning which can be associated to a score produced by a SM. Our analysis helped us to distinguish three main characteristics which can be used to characterize this semantics: (i) the semantic evidence used to drive the comparison, (ii) the mathematical properties of the measure, and (iii) the assumptions on which the measure is based.
Finally, in light of the state-of-the-art analysis of the large diversity of contributions related to SMs presented in this paper, we stressed the importance of: (i) better characterizing SMs, (ii) developing both software and theoretical tools dedicated to their analysis and computation, (iii) standardizing and formalizing some treatments performed by SMs which are subject to interpretation, (iv) facilitating the selection and comparison of measures (e.g., by exploring new properties of measures and by defining new domain-specific benchmarks), and (v) promoting interdisciplinarity to federate the efforts made by the several communities involved in SM study.
Contributions
This paper summarizes the state of the art related to semantic measures established by Sébastien Harispe during his PhD thesis. Sébastien Harispe wrote the paper; the co-authors (PhD supervisors) supervised the project and provided corrections and advice.
RDF(S) Graphs and Semantic Measures
This note discusses some technical aspects relative to the computation of SMs on RDF(S) graphs. It mainly presents the considerations to be taken into account to map an RDF(S) graph to the kind of semantic graph generally expected by SMs.
The simple graph data model considered for algorithmic studies and definitions of SMs differs from the RDF graph specification in multiple ways, e.g., no blank nodes and, in some cases, no literals. These differences do not prevent the use of SMs on RDF graphs. Nevertheless, to ensure both coherency and reproducibility of results, guidelines regarding the use of these measures on those graphs have to be rigorously defined. The handling of RDF graphs which take advantage of precise formal vocabularies such as RDFS must also be clearly defined. To our knowledge, the preprocessing required to enable the use of SMs on RDF(S) graphs has not been previously discussed in the literature. We will refer to RDFS entailment rules; please consider the W3C specification for rule numbering i.
On RDF(S) graphs, instances and classes are not clearly separated, as "A class may be a member of its own class extension and may be an instance of itself" ii (e.g., using punning techniques, meta-classes). Such cases are not limiting to our work as, in general practice, instances and classes can easily be distinguished using basic rules and restrictions. Moreover, the separation of instance data and the taxonomy of classes is considered a fundamental aspect of knowledge representation modelling, which is therefore usually respected. The definition of a function enabling to distinguish the type to associate to a node, i.e., Class (C), Instance (I), Predicate (P) or Data value (D), is therefore not considered to be limiting iii.
SM algorithms heavily rely on graph traversals. In order for the measures' scores to be accurate and reproducible, the graph must first be entailed according to RDFS entailment rules. The graph then needs to be reduced for some properties required by the measures to be respected. Both the graph entailment and the reduction required prior to SM computation are detailed below.
Since most treatments associated to SMs are expressed in terms of graph traversals, an RDFS reasoner must be used to infer all implicit relationships based on the RDFS entailment rules, e.g., rdf:type inference according to the domain/range associated to a property (predicate). To reduce the complexity of the entailment, only the RDFS entailment according to rules 2, 3, 5 and 7 must be applied. Rules 2 and 3 are respectively related to rdfs:domain and rdfs:range type inference. Rules 5 and 7 are related to sub-property relationships and are therefore important in order to infer new statements according to rules 2 and 3. Other entailment rules have no direct implications on the topology of the resulting graph, as the inferred relationships will not be considered by SM algorithms.
i RDF Semantics, http://www.w3.org/TR/rdf-mt
ii RDF Schema, http://www.w3.org/TR/rdf-schema
iii Note that in OWL-DL the sets of classes, predicates and instances (concepts, roles and individuals) must be disjoint (Horrocks & Patel-Schneider 2003).
The graph must respect certain properties. Not all inferable relationships, according to the transitivity of the taxonomic relationship or chains of transitive relationships (i.e., transitivity over rdfs:subClassOf), are to be considered. This treatment can be carried out through an efficient transitive reduction algorithm (see section 5.2.2.5). Furthermore, some classes associated to the RDFS vocabulary and/or other classes not explicitly defined in the ontology must be ignored prior to the treatment, e.g., rdfs:Property, rdfs:Class. Triplets associated to these excluded classes and RDFS axiomatic triples i must also be ignored. Such pre-processing is important to ensure the coherency of both SMs and particular metrics (e.g., information content). As an example, an RDFS reasoner will infer that all classes are subclasses of rdfs:Resource and create the corresponding triplets. Thus, considering the edge-counting measure, the maximal distance between two concepts would be set to 2, which is not the expected behaviour for most usage contexts.
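As an illustration of the reduction step, a small pure-Python transitive reduction over subClassOf edges (a sketch on toy data; a real KR would call for an efficient algorithm or a dedicated library):

```python
def transitive_reduction(edges):
    """Remove every subClassOf edge (a, c) implied by a longer path a ~> c."""
    children = {}
    for a, b in edges:
        children.setdefault(a, set()).add(b)

    def reachable(src, dst):
        # Can dst be reached from src without using the direct edge (src, dst)?
        stack = [s for s in children.get(src, ()) if s != dst]
        seen = set()
        while stack:
            n = stack.pop()
            if n == dst:
                return True
            if n not in seen:
                seen.add(n)
                stack.extend(children.get(n, ()))
        return False

    return {(a, b) for a, b in edges if not reachable(a, b)}

# subClassOf edges with one redundant relationship: (c, root).
EDGES = {("c", "a"), ("a", "root"), ("c", "root")}
print(transitive_reduction(EDGES))  # {("c", "a"), ("a", "root")}
```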
In RDF graphs, blank nodes or reification techniques can be used to model specific information in the graph. We consider that any blank node is associated to a class, a predicate, an instance of a class or a specific relationship. As an example, consider a set of RDF statements in which the fact that ex:luc knows ex:louise is reified through a blank node _r1 (linked to ex:luc and ex:louise through rdf:subject and rdf:object), while ex:louise knows ex:tom directly through foaf:knows. This set of statements can be graphically represented by graph (A) in Figure 17.
Figure 17: Example of mapping between (A) an RDF graph modelling specific knowledge using a particular design pattern, and (B) a semantic graph representation of this knowledge as it is expected by most SMs.
In (A), the node _r1 corresponds to a blank node used to express properties on a specific relationship (reification). The representation expected by SMs is generally based on the classical graph property model; most SMs expect such knowledge to be expressed as the graph presented in Figure 17 (B). As an example, the path between ex:luc and ex:tom is expected to be the foaf:knows chain [ex:luc, ex:louise, ex:tom]. However, in graph (A), the shortest path between the two instances is [(ex:luc, rdf:subject⁻, _r1), (_r1, rdf:object, ex:louise), (ex:louise, foaf:knows, ex:tom)], where rdf:subject⁻ denotes a traversal of the rdf:subject relationship in the inverse direction.
From OWL to Semantic Graphs (TODO)
We describe some modifications which can be performed to map a KR expressed in OWL into a semantic graph which can be processed by most SMs framed in the relational setting.
From Distance To Similarity and vice-versa
A similarity (bounded by 1) can be transformed into a distance considering multiple approaches (Deza & Deza 2013). A distance can also be converted into a similarity. Some of the approaches used for these transformations are presented below.
Similarity to distance
If the similarity sim is normalized, i.e., bounded by [0, 1], a distance can be obtained through the complement: dist(u, v) = 1 − sim(u, v).
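A few of the classical transformations written out (a sketch; which transformation is appropriate depends on the properties required of the result, e.g., boundedness or metric axioms):

```python
import math

sim = 0.8  # a normalized similarity in [0, 1]

dist_linear = 1 - sim             # simple complement
dist_sqrt = math.sqrt(1 - sim)    # often better behaved w.r.t. metric axioms
dist_log = -math.log(sim)         # unbounded, undefined for sim = 0

sim_from_dist = 1 / (1 + dist_log)  # one common distance-to-similarity mapping
print(dist_linear, dist_sqrt, dist_log, sim_from_dist)
```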
"year": 2013,
"sha1": "31d6d18b330c618af7b2ea015888b768cb7a92ed",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c786c9a81240f96e1ceb4097e6a896b93b22287e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Predictive Value of Post-Transplant Bone Marrow Plasma Cell Percent in Multiple Myeloma Patients Undergone Autologous Transplantation
Background/Aims: Autologous stem cell transplantation (ASCT) has become the treatment of choice for patients with multiple myeloma (MM). Studies have shown that maintenance treatment with interferon-alpha is associated with improved survival rates following ASCT. However, despite these recent advances in regimens, relapses are inevitable; thus, the prediction of relapse following ASCT requires assessment.
Methods: We retrospectively analyzed 39 patients who received ASCT between 2003 and 2008. All patients received chemotherapy with vincristine, adriamycin, and dexamethasone (VAD), and ASCT was performed following high-dose melphalan conditioning therapy. We evaluated the influence of the post-transplant day +14 (D+14) bone marrow plasma cell percent (BMPCp) (≥ 2 vs. < 2%), international scoring system (ISS) stage (II vs. III), response after 3 cycles of VAD therapy (complete response [CR] vs. non-CR), deletion of chromosome 13q (del[13q]) (presence vs. absence of the abnormality), and BMPCp at diagnosis (≥ 50 vs. < 50%) on progression-free survival (PFS) and overall survival (OS).
Results: During the median follow-up of 28.0 months, the median PFS and OS were 29.1 and 42.1 months, respectively. By univariate analysis, ISS stage III at diagnosis, BMPCp ≥ 50% at diagnosis, CR after 3 cycles of VAD therapy, del(13q) by fluorescence in situ hybridization, and BMPCp ≥ 2% at post-transplant D+14 were correlated with PFS and OS. A multivariate analysis revealed that a post-transplant D+14 BMPCp ≥ 2% (PFS, hazard ratio [HR] = 4.426, p = 0.008; OS, HR = 3.545, p = 0.038) and CR after 3 cycles of VAD therapy (PFS, HR = 0.072, p = 0.014; OS, HR = 0.055, p = 0.015) were independent prognostic parameters.
Conclusions: Post-transplant D+14 BMPCp is a useful parameter for predicting the outcome for patients with MM receiving ASCT.
Key words: Multiple myeloma, autologous stem cell transplantation, bone marrow plasma cell percent.
INTRODUCTION
Multiple myeloma (MM) is an incurable disease with a median survival of three years after conventional chemotherapy. High-dose chemotherapy followed by autologous stem cell transplantation (ASCT) is currently the standard treatment for patients with MM aged below 65 years. Several studies have shown that high-dose ASCT is associated with an improved response rate and prolonged progression-free survival (PFS) and overall survival (OS) compared to conventional chemotherapy [1,2]. In a recent study, alpha-interferon maintenance treatment was shown to be associated with improved survival rates after high-dose treatment and ASCT in patients with MM [3]. Unfortunately, despite these recent advances, relapses are frequent; thus, the development of effective diagnostic approaches that can anticipate the response duration of ASCT-based strategies is required.
Several prognostic factors following ASCT, including cytogenetic abnormalities (e.g., the deletion of chromosome 13q [del(13q)]), the plasma cell labeling index (PCLI), and the international scoring system (ISS), have been reported to be useful parameters of patient outcome [4-10]. Several recent studies have demonstrated an association between pre-transplant complete response (CR) and outcome in MM patients receiving ASCT [11-13]; however, CR does not directly reflect the effectiveness of the conditioning regimen for ASCT. Furthermore, recent studies have identified post- rather than pre-transplant CR as an important prognostic factor [14].
In the present study, we therefore evaluated the prognostic influence on PFS and OS of the post-transplant day +14 (D+14) bone marrow plasma cell percent (BMPCp), as a reflection of the efficacy of the conditioning regimen, in newly diagnosed MM patients.
Patients
We retrospectively analyzed MM patients who were initially at ISS stage II or higher and chemosensitive (i.e., achieved at least a partial response) to induction therapy between February 2003 and January 2008. Thirty-nine patients (26 males and 13 females; median age, 57 years [range, 37 to 63]) with MM were treated with the same induction therapy (vincristine, adriamycin, and dexamethasone [VAD]) followed by a single ASCT. All patients received stem cell support following melphalan therapy as a conditioning regimen.
Treatment schedule
Four three-week cycles of induction therapy with VAD were administered. Peripheral blood stem cells were mobilized with high-dose cyclophosphamide and granulocyte colony-stimulating factor (G-CSF) and then collected. The patients received 200 mg/m² melphalan as a conditioning regimen. Autologous blood stem cells were infused on day 0 through a central venous catheter, preceded by an intravenous injection of 50 mg of pheniramine maleate and 125 mg of methylprednisolone. Post-transplant, the patients received 5 µg/kg G-CSF subcutaneously each day from post-transplant day +3 until engraftment. The patients received prophylactic ciprofloxacin, itraconazole, and acyclovir. Following high-dose ASCT, all patients were scheduled to receive 2 years of maintenance therapy with interferon (3 µg on 3 occasions weekly) and prednisone (50 mg on alternate days) provided that the disease did not progress.
Cytogenetic study
All patients underwent a bone marrow biopsy and conventional cytogenetic analysis at diagnosis, after 3 cycles of VAD therapy, and at post-transplant D+14. The BMPCp was determined using both aspirate smears and histological samples. The review included an examination of each bone marrow (BM) slide to estimate cellularity and the number and proportion of plasma cells (PCs) per field in histological sections. In addition, differential counts for PCs, lymphocytes, and histiocytes were performed on the smears. Del(13q) was identified by interphase fluorescence in situ hybridization (FISH) of BM samples at diagnosis, because this chromosomal abnormality was associated with short PFS and OS in previous studies [10].
Response assessment of induction therapy
A response assessment of the induction therapy was performed after 3 cycles of therapy according to the International Myeloma Working Group criteria [15]. Briefly, CR was defined by negative immunofixation in serum and urine, < 5% plasma cells in the BM, and the disappearance of any soft-tissue plasmacytomas, if present at baseline. A very good partial response (VGPR) was defined as a reduction of ≥ 90% in the serum M-protein level and a urine M-protein level of < 100 mg per 24 hours. A partial response (PR) was defined as a ≥ 50% reduction in serum M-protein, a reduction in the 24-hour urine M-protein level by ≥ 90% or to < 200 mg, and a ≥ 50% reduction in the size of any soft-tissue plasmacytomas. Patients not meeting the criteria for CR, VGPR, PR, or progressive disease (PD) were defined as having stable disease (SD). PD was defined as the presence of at least one of the following conditions: an increase of ≥ 25% from baseline in the serum or urine M-protein level, with an absolute increase of at least 0.5 g/dL and 200 mg per 24 hours, respectively.
Statistical analysis
All statistical analyses were performed using SPSS version 14.0 (SPSS Inc., Chicago, IL, USA). The Mann-Whitney U test was used to compare patients with a BMPCp ≥ 2% at post-transplant D+14 with those with a BMPCp < 2%. The relationship between the BMPCp after 3 cycles of VAD therapy and the BMPCp at post-transplant D+14 was determined by Spearman correlation analysis. PFS was measured from the start of treatment to the date of progression. OS was measured from the start of treatment to the date of death or last follow-up visit. PFS and OS were estimated using the Kaplan-Meier method and compared between the two groups using the log-rank test. Cox proportional hazards models were used for univariate and multivariate analyses to evaluate the predictive value of the BMPCp at post-transplant D+14 on PFS and OS relative to other predictive factors (e.g., BMPCp at diagnosis, CR after 3 cycles of induction therapy, del[13q] by FISH, and ISS stage III at diagnosis).
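As an illustration only, the survival analysis described above could be reproduced along the following lines in Python. This is a hedged sketch, not the authors' SPSS code; the input file, all column names, and the dichotomized covariates are hypothetical.

```python
import pandas as pd
from scipy.stats import mannwhitneyu, spearmanr
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort table: one row per patient with survival times (months),
# event indicators and dichotomized prognostic factors.
df = pd.read_csv("mm_asct_cohort.csv")
hi = df["bmpc_d14"] >= 2.0  # BMPCp >= 2% at post-transplant D+14

# Group comparison (Mann-Whitney U) and Spearman correlation between the
# BMPCp after 3 cycles of VAD and the BMPCp at post-transplant D+14
print(mannwhitneyu(df.loc[hi, "bmpc_post_vad"], df.loc[~hi, "bmpc_post_vad"]))
print(spearmanr(df["bmpc_post_vad"], df["bmpc_d14"]))

# Kaplan-Meier estimate and log-rank comparison of PFS between the two groups
kmf = KaplanMeierFitter()
kmf.fit(df.loc[hi, "pfs_months"], event_observed=df.loc[hi, "progressed"])
lr = logrank_test(df.loc[hi, "pfs_months"], df.loc[~hi, "pfs_months"],
                  event_observed_A=df.loc[hi, "progressed"],
                  event_observed_B=df.loc[~hi, "progressed"])
print(lr.p_value)

# Multivariate Cox proportional hazards model for PFS (analogous for OS)
cph = CoxPHFitter()
cph.fit(df[["pfs_months", "progressed", "bmpc_d14_ge2", "cr_after_vad",
            "iss_iii", "del13q", "bmpc_dx_ge50"]],
        duration_col="pfs_months", event_col="progressed")
cph.print_summary()
```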
RESULTS
The patients' characteristics are shown in Table 1. The median follow-up duration was 28.0 months. The median PFS and OS were 29.1 and 42.1 months, respectively. The ISS at diagnosis was stage II (n = 25) or III (n = 14). The types of M-protein were IgG (n = 23), IgA (n = 12), and others (n = 4). The median BMPCp at diagnosis was 43.0% (range, 11 to 57); chromosomal abnormalities were found by conventional cytogenetic analysis in 15 patients. Del(13q) was detected by FISH in 8 patients. Following induction therapy with VAD, 12 patients achieved a CR. The mean infused CD34+ stem cell dose was 4.1 × 10⁶/kg (range, 2.1 to 6.1), while the median BMPCp at post-transplant D+14 was 0.7% (range, 0 to 4.0). An analysis of different cut-off levels between the 25th and 75th percentiles (range, 0.2 to 2.2) using the log-rank test determined that a BMPCp of 2% was the cut-off point yielding the greatest difference in PFS and OS; thus, this value was used as the cut-off level in our statistical analyses.
Comparison of patient characteristics according to BMPCp at post-transplant D+14
The baseline characteristics (age, sex, M-protein type, chromosomal abnormality, BMPCp at diagnosis, and stage) were comparable between the BMPCp ≥ 2% and BMPCp < 2% groups at post-transplant D+14 (Table 1). The response following 3 cycles of VAD therapy also did not differ between the two BMPCp groups.
Correlation between BMPCp after 3 cycles of VAD therapy and BMPCp on post-transplant D+14
To estimate whether induction therapy with VAD influenced the post-transplant BMPCp, the correlation between the BMPCp after 3 cycles of VAD therapy and the BMPCp at post-transplant D+14 was assessed by Spearman correlation analysis. The two measurements did not correlate (r = -0.071, p = 0.669, Fig. 1).
Impact of BMPCp at post-transplant D+14 on PFS and OS
The PFS and OS according to the BMPCp at post-transplant D+14 are shown in Fig. 2A and 2B, respectively. PFS in the BMPCp ≥ 2% group at post-transplant D+14 was significantly shorter than in the BMPCp < 2% group (p = 0.001, Fig. 2A). Similarly, OS in the BMPCp ≥ 2% group was significantly shorter than in the BMPCp < 2% group (p = 0.001, Fig. 2B).
DISCUSSION
CR is defined by the European Group for Blood and Marrow Transplantation/International Myeloma Working Group uniform response criteria as the absence of serum and urine monoclonal M-proteins by immunofixation (IFE) and < 5% PCs in the BM [15]. Several recent studies have shown that pre-transplant CR is not associated with a better outcome in MM patients [11-13]. However, these studies did not interpret the importance of BMPC counts. Recent studies have demonstrated that the survival of patients who achieved a true CR (negative serum and urine IFE and < 5% BMPCs) was significantly longer than that of CR patients with ≥ 5% BMPCs [16]. Retaining the BM examination is therefore important to prevent the inclusion of a significant proportion of false-positive CR cases. These findings suggest that the BMPC count is crucial for the prediction of survival in MM patients.
The present study showed that the post-transplant BMPCp was not associated with the BMPCp or response following 3 cycles of induction therapy. A BMPCp ≥ 2% at post-transplant D+14 correlated with a poorer survival rate compared to a BMPCp < 2%. The baseline characteristics, including the response after 3 cycles of VAD therapy, did not differ between the two groups. Furthermore, the BMPCp after 3 cycles of VAD therapy did not correlate with the post-transplant BMPCp. This result suggests that the efficacy of the conditioning regimen, rather than the impact of induction therapy, influences the post-transplant BMPCp. Although in previous studies the pre-transplant CR was not equivalent to the CR after 3 cycles of VAD therapy, the present study suggests that the post-transplant BMPCp represents a novel prognostic parameter irrespective of the pre-transplant disease status. In contrast, recent studies have reported that post-transplant CR, rather than pre-transplant CR, was associated with survival in newly diagnosed MM patients [14]. These studies suggest that achieving a response to high-dose ASCT itself could be an important prognostic factor. Thus, the importance of the BMPCp can be understood in the context of post-transplant CR in these studies.
Other studies have revealed several prognostic factors in ASCT, such as cytogenetics, the plasma cell labeling index, ISS, and pre- and post-transplant CR status, which may be useful in planning individualized treatment regimens [2-14]. Similarly, the present study shows that ISS stage III, a BMPCp ≥ 50% at diagnosis, CR after 3 cycles of VAD therapy, and del(13q), as well as a BMPCp ≥ 2% at post-transplant D+14, were associated with PFS and OS in the univariate analysis. However, our multivariate analysis revealed that only CR following 3 cycles of VAD therapy and a BMPCp ≥ 2% at post-transplant D+14 were independent factors predicting PFS and OS. This early diagnostic approach at engraftment may play an important role in the prediction of outcome after ASCT. Our data demonstrate that early detection of the residual myeloma burden by BM examination post-transplantation is an important factor in predicting the outcome for MM patients receiving a single ASCT. A higher BMPCp at post-transplant D+14 may indicate the need for a more intensive strategy, including high-intensity conditioning or novel agents, tandem ASCT, or more intensive maintenance therapy. Although this study investigated only a small number of patients and was retrospective in design, we identified a novel parameter for the early detection of the residual myeloma burden after ASCT. A well-designed prospective study that includes an examination of the BM at post-transplant D+14 is needed to provide further information regarding these observations.
"year": 2011,
"sha1": "dbcceed7193ecfd2d730ff05db736fdbdd0669cc",
"oa_license": "CCBYNC",
"oa_url": "https://www.kjim.org/upload/kjim-26-76.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dbcceed7193ecfd2d730ff05db736fdbdd0669cc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Novel Optimization Method for Bipolar Chaotic Toeplitz Measurement Matrix in Compressed Sensing
Introduction
Compressed sensing (CS) [1,2] is a new framework for signal sampling. It enables sparse signals to be sampled at a rate much lower than the Nyquist rate and the original signal to be recovered accurately with high probability. Signal processing based on compressed sensing theory does not depend on the bandwidth of the signal and can break the limitation that the Nyquist sampling theorem imposes on the sampling process. CS has broad application prospects in fields such as broadband signal acquisition [3], medical imaging [4,5], and data compression [6].
The design of the measurement matrix is one of the cores of compressed sensing theory [7]. On the one hand, the properties of the measurement matrix directly determine whether the compressive sampling process can fully retain the useful information of the original signal; on the other hand, the design of the measurement matrix must take into account the implementation capability of the compressed sampling system [8,9]. Although widely used random measurement matrices such as Gaussian and Bernoulli matrices have good applicability, they contain too many free elements and are not conducive to hardware implementation. Based on these two considerations, a bipolar chaotic sequence was used in the literature [10] to construct a Toeplitz matrix as a measurement matrix for compressed sensing, called the bipolar chaotic Toeplitz measurement matrix. This measurement matrix is simple to generate and has few free elements, which greatly reduces the difficulty of hardware implementation and, at the same time, supports fast algorithms that can solve numerous problems related to convolutional operations. Although the restricted isometry property (RIP) of the bipolar Toeplitz measurement matrix has been proved [11,12], the constructed bipolar chaotic Toeplitz measurement matrix may still exhibit a large correlation with the sparse dictionary in practical applications, which affects the compressed sampling and reconstruction of the signal.
Measurement matrix optimization is an effective way to solve this problem; current optimization algorithms mainly focus on the optimization of random measurement matrices, such as the Elad algorithm [13], the Duarte-Carvajalino algorithm [14], and the Abolghasemi algorithm [15]. The strategy of alternating optimization between the matrix to be optimized and the target matrix in the Abolghasemi algorithm can effectively expand the search space and improve the optimization effect. Based on the idea of alternating optimization, a weighted measurement matrix optimization objective function was proposed in the literature [16] to improve the robustness of the compressive sampling system while considering both signal adaptation and the matrix's own characteristics. The literature [17] introduced the concept of parameter updating from K-SVD to improve the measurement matrix update and the efficiency of matrix optimization. The literature [18] proposed a new joint optimization algorithm for the measurement matrix and sparse dictionary that improves the signal reconstruction by constructing a new objective function. However, the above alternating optimization algorithms do not consider the structural constraints that a measurement matrix itself may carry, so the structural properties of the measurement matrix are destroyed after optimization. Taking the structural properties of the matrix into account, the literature [19] proposed an alternating optimization algorithm for sparse measurement matrices, and the literature [20] proposed an alternating optimization algorithm for circulant matrices, but neither algorithm is applicable to the bipolar Toeplitz measurement matrix.
From the above analysis, it can be seen that existing measurement matrix optimization methods do not meet the optimization needs of the bipolar Toeplitz measurement matrix, mainly for two reasons. On the one hand, the bipolar Toeplitz measurement matrix has a special matrix structure, and existing optimization algorithms cannot guarantee that this structure remains unchanged after optimization. On the other hand, the elements of the bipolar Toeplitz measurement matrix take only two values, and existing optimization algorithms destroy the bipolar nature of the matrix elements.
To address these problems, a new alternating optimization algorithm for the bipolar chaotic Toeplitz measurement matrix is proposed in this paper. Starting from the structural characteristics of the Toeplitz matrix, the matrix is decomposed into a weighted sum of multiple structure matrices, thus converting the matrix optimization problem into an optimization problem over the matrix generating sequence, which ensures that the original matrix structure remains unchanged during optimization. Second, a threshold function is introduced to constrain the values of the generating sequence during the iterative process, which ensures that the optimized matrix still retains the bipolar property. The experimental results show that the compressed sensing reconstruction error of the optimized bipolar chaotic Toeplitz measurement matrix is reduced and the reconstruction probability is significantly improved.
2. Description of the Problem
2.1. Bipolar Chaotic Sequence. Before constructing a bipolar chaotic Toeplitz measurement matrix, a bipolar sequence must be constructed first. Pseudorandom sequences based on chaotic systems have deterministic generating functions and good statistical independence, which makes them well suited to hardware implementation; for this reason, a bipolar sequence generation method based on a chaotic system is used in this paper. The logistic chaotic system is a commonly used method to generate chaotic sequences. Considering the requirements for generating a bipolar Toeplitz measurement matrix, the mapping function for the logistic chaotic system given in Equation (1) is used, following [21], where x_j ∈ [−1, 1] and μ ∈ [0, 1]; this function is more suitable for the modulation of digital signals. When μ ≥ 0.8371, the logistic chaotic system has a positive Lyapunov exponent and the system enters a chaotic state; when μ = 1, the mapping traverses the entire interval [−1, 1]. As can be seen from Figure 1, the time series passes through three evolutionary stages in turn: fixed point → periodic cycle → chaos. The closer μ is to 1, the more the values of x spread over the entire region from −1 to 1, i.e., the more pronounced the chaos is. Once the value of μ is fixed, the logistic map is extremely sensitive to the initial value: a small change in the initial value produces a completely different chaotic sequence.
The invariant probability density function of the logistic chaotic sequence is given by Equation (2), and the mean value of the series follows from it as Equation (3). By repeated iterations of Equation (1), a set of real-valued logistic chaotic sequences {x_j}_{j=0}^∞ can be generated.
Subsequently, a bipolar threshold function (Equation (4)) is applied to the chaotic sequence {x_j}_{j=0}^∞. The resulting sequence A_j constitutes a bipolar chaotic sequence: A_{j+1} takes the value +1 or −1 according to which of two complementary subintervals of [−1, 1] the value x_j falls into (Equation (5)). Using the invariant density of Equation (2), it can be shown that A_{j+1} takes the values +1 and −1 with equal probability, both being 0.5.
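As a hedged illustration of this construction, the following sketch generates a bipolar sequence. Because Equation (1) is not reproduced in this excerpt, the code assumes the common [−1, 1] logistic variant x_{j+1} = 1 − 2μx_j², and it thresholds at zero, which is consistent with the equal ±1 probabilities stated above but is an assumption about Equations (4) and (5).

```python
import numpy as np

def bipolar_chaotic_sequence(length: int, x0: float = 0.3, mu: float = 1.0):
    """Iterate an assumed [-1, 1] logistic variant and binarize each state."""
    x, seq = x0, np.empty(length)
    for j in range(length):
        x = 1.0 - 2.0 * mu * x * x          # assumed form of Equation (1)
        seq[j] = 1.0 if x >= 0.0 else -1.0  # bipolar threshold (Equation (4))
    return seq

A = bipolar_chaotic_sequence(128 + 256 - 1)  # M + N - 1 free elements
print(A[:8], A.mean())                       # +-1 entries with mean near zero
```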
2.2. Bipolar Chaotic Toeplitz Measurement Matrix
To ensure the applicability of the bipolar Toeplitz matrix alternating optimization algorithm proposed in this paper, two Toeplitz measurement matrices are constructed from the bipolar chaotic sequence described above [10]: the single-block matrix Φ₁ (Equation (6)) and the multiblock matrix Φ₂ (Equation (7)). The scalar 1/√M in Equation (6) normalizes the columns of Φ to ensure that the energy of the original signal x is consistent with the energy of the measured sample signal y during the dimensionality reduction ℝ^N → ℝ^M. Φ₂ is the multiblock Toeplitz measurement matrix, and when b = 1 it degenerates to the conventional single-block Toeplitz measurement matrix. Since the generating sequences of both measurement matrices are bipolar chaotic sequences, the two matrices are collectively referred to as bipolar chaotic Toeplitz measurement matrices in this paper.
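A minimal sketch of Equation (6) is given below. The mapping of the generating sequence onto the first column and first row is one of several equivalent index conventions and is an assumption, as is the use of a random ±1 stand-in for the chaotic sequence.

```python
import numpy as np
from scipy.linalg import toeplitz

M, N = 128, 256
rng = np.random.default_rng(0)
a = rng.choice([-1.0, 1.0], size=M + N - 1)        # stand-in generating sequence

first_col = a[:M]                                   # a[0] shared with the first row
first_row = np.concatenate(([a[0]], a[M:]))         # remaining N - 1 free elements
Phi1 = toeplitz(first_col, first_row) / np.sqrt(M)  # column normalization, Eq. (6)
print(Phi1.shape)                                   # (128, 256)
```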
The RIP properties of the bipolar chaotic Toeplitz measurement matrix have been demonstrated in [9] and are not repeated here; the focus of this paper is on its optimization. Although the bipolar chaotic Toeplitz measurement matrix constructed using Equation (6) satisfies the RIP with high probability, in practical applications it may still exhibit a strong correlation with the sparse dictionary and large mutual coherence coefficients, so it is necessary to optimize the constructed matrix to further improve its performance.
Currently, the core idea of measurement matrix optimization is to reduce the mutual coherence of the measurement matrix by reducing the magnitudes of the off-diagonal elements of the Gram matrix. The main methods for doing so are threshold shrinkage, gradient descent, and singular value decomposition. However, all of these methods destroy the structural properties of the measurement matrix and the bipolar nature of the matrix elements, so they are not suitable for the optimization of bipolar chaotic Toeplitz measurement matrices. Based on this, this paper presents a new alternating optimization algorithm for the bipolar chaotic Toeplitz measurement matrix.
3. Bipolar Chaotic Toeplitz Matrix Alternating Optimization Algorithm
3.1. Optimization of the Objective Function. For a one-dimensional signal x ∈ ℝ^N, the compressed sensing process can be expressed as y = ΦΨs (Equation (8)), where y ∈ ℝ^M is the measurement data, Φ is the measurement matrix, Ψ is the sparse dictionary, and s is the sparse vector; when the signal x is itself sparse, Ψ is the identity matrix.
The optimal design of the measurement matrix was first considered in [12], where it was shown that reducing the correlation between the measurement matrix and the sparse dictionary can effectively improve the compressed sensing reconstruction of the signal. We define the Gram matrix G = (ΦΨ)^T(ΦΨ) (Equation (9)), so that the measurement matrix optimization objective can be expressed as minimizing ‖G_ideal − G‖_F (Equation (10)). The bipolar chaotic Toeplitz matrix used in this paper has a pronounced structure. In order to preserve this structure throughout the optimization, the measurement matrix is decomposed as the weighted sum Φ = Σ_{j=1}^{J} A_j Φ_j (Equation (11)), where J is the number of free elements in the measurement matrix (J = M + N − 1 for Φ₁) and Φ_j is the structure matrix corresponding to element A_j. Each Φ_j consists of elements 0 and 1/√M and has the same dimensions as Φ; an element 1/√M in Φ_j indicates that the element at that position of Φ is generated by A_j. Φ₁ can thus be expressed in this form (Equations (12) and (13)), and similarly Φ₂ (Equations (14) and (15)). Although Φ₁ and Φ₂ have different structures, both can be written in the form of Equation (11), and, for a given value of b, the measurement matrix is determined by the generating sequence A_j alone. To ensure that the structural properties of the measurement matrix are not destroyed by the optimization process, the optimization objective is further expressed in terms of the generating sequence (Equation (16)). This ensures that the structure matrices remain unchanged during the optimization and only the sequence A_j is optimized.
3.2. The Proposed Method. For the multiparameter optimization problem in Equation (16), the following alternating optimization strategy is used in this paper: (i) fix the latest {A_j} and update G_ideal; (ii) fix the latest G_ideal and update {A_j}. For a fixed {A_j}, G_ideal is updated using the contraction (shrinkage) operation of Equation (17), where i is the iteration index, sign(·) is the sign function, and η = √((N − M)/(M(N − 1))) is the Welch bound on the coherence. With G_ideal fixed, the elements of the sequence {A_j} are updated one by one by gradient descent (Equation (18)), A_j^{i+1} = A_j^i − β∇f_{A_j^i}(A_j^i, G_ideal^i), where β is the gradient descent step size and ∇f_{A_j^i}(·) is the gradient operator. The expression for ∇f_{A_j^i}(A_j^i, G_ideal^i) is derived in Equations (19)-(22); the derivation converts the Frobenius norm of the matrix into the ℓ₂-norm of its vectorization through the vec(·) operator, which yields the result in Equation (22).
3.3. Constraints for Generating Sequences. The introduction of the structure matrices Φ_j ensures that the structural properties of the measurement matrix are not destroyed during the alternating optimization, but it does not guarantee that the elements of the optimized matrix retain their original bipolar property. For this reason, following the construction of the bipolar chaotic sequence, a bipolar threshold function (Equation (23)) is introduced to constrain the sequence {A_j} during the iterative process. By combining Equation (18) with Equation (23), the bipolar property of the matrix elements is effectively preserved throughout the optimization. When the iterations converge, the optimized measurement matrix is given by Equation (11). The complete alternating optimization algorithm for the bipolar chaotic Toeplitz matrix is summarized in Algorithm 1.
Experiments and Analysis
In order to verify the effectiveness of the proposed alternating optimization algorithm for the bipolar chaotic Toeplitz measurement matrix, numerical simulation experiments are conducted to analyse both the optimization effect on the measurement matrix and the compressed sensing reconstruction performance of the optimized matrix.
Optimization Performance Analysis
The bipolar chaotic sequence is constructed using the logistic map and the bipolar threshold function, and the single-block Toeplitz measurement matrix Φ₁ and the multiblock Toeplitz measurement matrix Φ₂ are then constructed according to Equations (6) and (7), respectively. The parameters of the measurement matrices are set as follows: N = 256, M = 128, b = 2. Setting the sparse dictionary to the identity matrix and the maximum number of iterations to 30, the optimization effect for the two measurement matrices is shown in Figure 2.
As can be seen from Figure 2, the optimization objective function decreases as the number of iterations increases and finally converges. Compared with other measurement matrices, the structural properties of the bipolar chaotic Toeplitz measurement matrix and the bipolar nature of its elements largely limit the feasible search space of the optimization.

Input: Measurement matrix Φ, sparse dictionary Ψ
Output: Optimized measurement matrix Φ_opt
Step 1: Using the input measurement matrix Φ, construct the sequence {A_j} and the structure matrices {Φ_j}.
Step 2: Update G_ideal by Equation (17).
Step 3: Using Equation (18), update the sequence elements A_j one by one.
Step 4: Constrain each updated element A_j using Equation (23).
Step 5: If the iteration termination condition is met, go to Step 6; otherwise, go back to Step 2.
Step 6: Output the optimized measurement matrix Φ_opt using Equation (11).
Algorithm 1: The process of the proposed optimization method.

The correlation coefficients of the measurement matrices before and after optimization are listed in Table 1. As can be seen in Table 1, the maximum and average correlation coefficients μ_max and μ_av of the optimized measurement matrices are significantly reduced.
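The coefficients in Table 1 can be computed as follows; the definitions used here, the maximum and mean absolute off-diagonal entries of the column-normalized Gram matrix, are the usual ones and are assumed rather than quoted from the paper.

```python
import numpy as np

def coherence_metrics(Phi, Psi):
    D = Phi @ Psi
    D = D / np.linalg.norm(D, axis=0, keepdims=True)   # unit-norm columns
    G = np.abs(D.T @ D)
    off = G[~np.eye(G.shape[0], dtype=bool)]           # off-diagonal entries
    return off.max(), off.mean()                       # mu_max, mu_av

rng = np.random.default_rng(2)
Phi = rng.choice([-1.0, 1.0], size=(128, 256)) / np.sqrt(128)
print(coherence_metrics(Phi, np.eye(256)))
```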
Analysis of the Effect of One-Dimensional Signal Reconstruction
The optimized measurement matrices were used in simulation experiments on the compressed sensing reconstruction of one-dimensional signals in order to analyse their effect on reconstruction quality. In the simulations, a time-domain sparse signal was used as the one-dimensional signal, in which case the sparse dictionary is the identity matrix. Setting the sparsity to K = 30, the compressed sensing reconstruction results for four different measurement matrices (the single-block bipolar chaotic Toeplitz measurement matrix, the optimized single-block bipolar chaotic Toeplitz measurement matrix, the multiblock bipolar chaotic Toeplitz measurement matrix, and the optimized multiblock bipolar chaotic Toeplitz measurement matrix) are shown in Figure 3. The reconstruction algorithm is orthogonal matching pursuit (OMP) [22], and to quantify the reconstruction quality the relative error ‖x̂ − x‖₂/‖x‖₂ is introduced (Equation (24)). From the reconstruction results it can be seen that all four measurement matrices achieve an effective reconstruction of the original signal at M = 128, but a comparison of the relative reconstruction errors shows that the optimized single/multiblock bipolar chaotic Toeplitz measurement matrices have significantly lower relative errors than their unoptimized counterparts. Different sparsity levels K were then set, and the reconstruction performance of the measurement matrices on different sparse signals was analysed. In these experiments, the positions and amplitudes of the sparse signals were generated randomly. 1000 Monte Carlo experiments were conducted under the same experimental conditions, and a reconstruction was counted as successful when its relative error was below 10⁻⁴; otherwise, it was counted as failed. The reconstruction probabilities of the four measurement matrices for different sparsity levels K and the corresponding average relative errors are shown in Figure 4, where the average relative error is the mean over the 1000 Monte Carlo experiments.
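A self-contained sketch of one such trial is shown below, using scikit-learn's OMP implementation in place of the reconstruction code of [22]; the bipolar random matrix stands in for the chaotic Toeplitz matrix, and the relative-error definition is the standard one assumed for Equation (24).

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(3)
M, N, K = 128, 256, 30

x = np.zeros(N)                                    # K-sparse time-domain signal
support = rng.choice(N, size=K, replace=False)     # random positions ...
x[support] = rng.normal(size=K)                    # ... and amplitudes

Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)
y = Phi @ x                                        # compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False)
x_hat = omp.fit(Phi, y).coef_
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))  # relative error
```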
As can be seen in Figure 4, increasing sparsity leads to a decreasing reconstruction probability and an increasing reconstruction error. Because their correlation coefficients μ_max and μ_av are significantly reduced, the optimized measurement matrices are better suited to one-dimensional signals with different sparse locations and amplitudes. Accordingly, Figure 4 shows that, for the same sparsity, the reconstruction probabilities of the optimized Φ₁ and Φ₂ are higher than those of the unoptimized Φ₁ and Φ₂, while the average relative errors are significantly lower.
In practical applications, the signal measurement process is inevitably noisy, so noise at different signal-to-noise ratios (SNRs) was added to the compressed sampling process to analyse the reconstruction performance of the optimized bipolar chaotic Toeplitz measurement matrix under noisy conditions. In these experiments the sparsity was set to 12, at which the reconstruction probability of all four measurement matrices without noise is close to 1. Under noisy conditions, a reconstruction was counted as successful when its relative error was below 10⁻²; otherwise, it was counted as failed. The reconstruction probabilities and average relative errors of the four measurement matrices at different SNRs are shown in Figure 5.
Analysis of the Effect of Two-Dimensional Image Reconstruction
The optimization of the bipolar chaotic Toeplitz measurement matrix was further validated using two-dimensional images. The two natural images ("Barbara" and "Boat") shown in Figure 6, both of dimension 256 × 256, were chosen as the experimental objects. Each image was divided into 256 blocks of 16 × 16 pixels for the compressed sensing reconstruction experiment. The sparse dictionary used in the experiments is an orthogonal cosine dictionary. Setting the sampling rate to 0.5, the images reconstructed after compressive sampling with the different measurement matrices are shown in Figures 7 and 8.
As the images are not ideally sparse under the orthogonal cosine dictionary, the reconstructed images show local blurring compared to the originals. A further comparison of the reconstruction errors shows that the optimized single/multiblock bipolar chaotic Toeplitz measurement matrices have lower relative reconstruction errors than their unoptimized counterparts. Thus, after optimization, the compressed sensing reconstruction of 2D images with the bipolar chaotic Toeplitz measurement matrix is effectively improved.
To further analyse the influence of the sampling rate on image reconstruction, the sampling rate was increased from 0.2 to 0.7 and the relative error after compressed sensing reconstruction was evaluated for the four measurement matrices at each rate; the results are shown in Figure 9. As the sampling rate increases, the number of sampling points increases and the relative reconstruction error therefore decreases. Comparing the four measurement matrices shows that, at the same sampling rate, the optimized single/multiblock bipolar chaotic Toeplitz measurement matrices have lower relative reconstruction errors than the unoptimized ones. This result shows that the proposed alternating optimization algorithm effectively improves the compressed sensing reconstruction performance of the measurement matrix at different sampling rates.
Conclusions
In this paper, an alternating optimization algorithm for the bipolar chaotic Toeplitz measurement matrix is proposed to address the problem that existing measurement matrix optimization algorithms are not applicable to bipolar chaotic Toeplitz measurement matrices. The algorithm keeps the structural properties of the optimized measurement matrix unchanged by introducing structure matrices, and keeps the bipolar properties of the optimized matrix elements unchanged by introducing a threshold constraint. The experimental results show that the optimization effectively reduces the correlation coefficients of the measurement matrix and that, when the optimized matrix is applied to the compressed sensing reconstruction of 1D signals and 2D images, the reconstruction error is effectively reduced and the reconstruction probability is significantly improved.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"year": 2021,
"sha1": "dc309087fe4fce1494c44964e8c11b7085707e5d",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/js/2021/4024737.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "36f86b19fbfefac50f236519d6e09c24e61e0869",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
New technical concept for alternating tangential flow filtration in biotechnological cell separation processes
Robust cell retention devices are key to successful cell culture perfusion. Currently, tangential flow filtration (TFF) and alternating tangential flow filtration (ATF) are most commonly used for this purpose. TFF, however, suffers from poor fouling mitigation, which leads to high filtration resistance and product retention, and ATF suffers from long residence times and cell accumulation. In this work, we propose a filtration system for alternating tangential flow filtration that takes full advantage of the fouling mitigation effects of alternating flow while reducing cell accumulation. We tested this novel setup in direct comparison with the XCell ATF® as well as with TFF, using a model feed comprising yeast cells and bovine serum albumin as the protein, under harsh permeate-to-feed flow conditions. We found that, by avoiding the dead-end design of a diaphragm pump, the proposed filtration system exhibited an approximately 20% to 30% lower filtration resistance (depending on feed rate and permeate flow rate). A further improvement of the novel setup was reached by optimizing phase durations and flow control, which resulted in a fourfold extension of process duration until hollow fiber flow channel blockage occurred. Thus, the proposed concept appears to be superior to current cell retention devices in perfusion technology.
INTRODUCTION
Perfusion processes have gained in importance in biopharmaceutical cellular fermentation processes and research alike during the last few decades1,2 due to increased productivity, improved product quality and higher batch-to-batch homogeneity in comparison to fed-batch processes.3,4 The critical point is generally seen in the perfusion or cell retention device,5,6 which is designed to separate the produced biological therapeutic substance in continuous production mode over long processing times while retaining the producing cells.
Among the membrane-based perfusion devices, conventional tangential flow filtration (TFF) and alternating tangential flow filtration (ATF), that is, crossflow with periodically changing flow direction, are commonly used. In membrane-based perfusion processes, a key process performance indicator is the transmission of the target molecule (also known as product sieving). Deposits formed by retained cells and macromolecules can add a retention effect beyond that of the clean membrane. This deposit formation often results in reduced transmission, as it acts as a secondary membrane with its own and often unpredictable retention characteristics. Published work shows the superiority of ATF over TFF in terms of higher cell viability and lower product retention.7,8 The ATF concept has become widely applied in the biopharmaceutical industry through the development of the commercially available XCell ATF® device. Besides its application as a perfusion device in the production of monoclonal antibodies,8,9 it has also been studied for the production of virus particles,10 to intensify N-1 seeding fermentation11 and for the harvesting of biopharmaceuticals.12 The superior performance of the XCell ATF® device versus conventional TFF has been attributed to three main effects. Firstly, the feed pump used in this system, a diaphragm pump, is considered a low-shear pump. The use of low-shear pumps was reported to result in low cell damage and therefore a lower release of DNA, RNA and other intracellular substances into the fermentation broth.13 This reduces the complexity of substances in the aqueous phase and thus the fouling propensity, which would otherwise increase filtration resistance and product retention. Secondly, these fouling effects were reported to be under better control with flow reversal and the associated pressure pulsations, which promote fouling mitigation and thus enhance filtration performance.14 Thirdly, the changing pressure conditions of the unsteady flow were reported to cause backflushing of filtrate back to the retentate, also described as the Starling flow phenomenon,15 which contributes to the removal of deposited material from the membrane surface. The fouling mitigation effects of alternating crossflow14,16-18 and other hydrodynamic fouling mitigation techniques have been extensively described in several works.19-21 In our eyes, the remaining critical point is that the diaphragm pump applied in the XCell ATF® device is limited in its operational flexibility, long-term processing capability and technically feasible range of processing conditions. Also, scaling up beyond the XCell ATF® 10, the largest commercially available device, is currently only possible by operating multiple units in parallel. The diaphragm pump is pneumatically actuated by supplying pressurized air and vacuum to the reverse side of the diaphragm, which results in a flow from the diaphragm pump back to the feed vessel (pressure phase) and from the feed vessel to the diaphragm pump (exhaust phase), respectively.22 The use of pressurized air and vacuum, however, leads to small achievable flow velocities, for instance a maximum of 10 L min⁻¹ in an XCell ATF® 4 device, which results in a crossflow velocity of only 0.25 m s⁻¹ in the attached hollow fiber module. The wall shear stress along the membrane or deposit surface is therefore limited to levels at which only marginal deposit removal and fouling mitigation occur.
In perfusion processes with shear-sensitive cells or mycelium-like aggregates, low crossflow velocities are well justified or even preferred, but for processes working with more robust cells such as yeasts, for instance Pichia pastoris, higher crossflow velocities could be desirable to enhance deposit layer removal. Several patents propose advancing the XCell ATF® device by employing two diaphragms instead of one, motorized actuators, pistons or combinations thereof.23-25 However, these technical developments appear to be mechanically complex to implement and are not commercially available so far.
Another important aspect is that the diaphragm pump volume limits the pump's maximum displacement volume per stroke and cycle.
Therefore, the duration of each forward and backward cycle of alternating flow depends on the targeted flow rate. This interdependence limits the options for process optimization, as both frequency and crossflow velocity have an impact on the extent of fouling and fouling mitigation.16,26 If, for instance, the crossflow velocity is reduced in order to decrease the shear stress acting on the cells, the frequency will also be reduced, which may have a negative impact on effective fouling mitigation. Additionally, the ratio of the hold-up volume of the transfer line to the bioreactor and of the filter module relative to the fixed pump displacement volume can be seen as unfavorable, because it provokes long residence times of cells in the device, which can lead to oxygen depletion, increased lactate production and reduced growth rate, impaired viability, and lower productivity.27,28 Under certain processing conditions, cells even accumulate in the diaphragm hold-up volume, which leads to increased fluid viscosities and an intensification of all of the aforementioned residence-time-related disadvantages. That is why another patent proposes replacing the diaphragm pump with a bidirectional peristaltic pump.28 The use of a peristaltic pump to transport the cell broth, however, imposes unwanted shear stress on the cells and is therefore not optimal for mammalian cell perfusion cultures.
Considering the reported advantages of alternating tangential flow filtration and the issues described above, a mechanically simple and more versatile alternating flow setup, capable of generating alternating flow over a wide range of flow rates and flow reversal frequencies, would be desirable. At the same time, only low shear stress should be applied to the cells, residence times outside the bioreactor should be reduced, and accumulation of cells in the external loop should be avoided. Therefore, a newly developed alternative alternating flow concept, based mainly on a different pump concept with rapidly reacting centrifugal pumps acting in opposite flow directions (denoted setup II in the following), was studied in this work and compared with the XCell ATF® device (setup I in the following). The detailed technical features of the pumps used are described in the methods section.
The focus of this work was on the hydrodynamic conditions, that is, the flow rates, pressure conditions, filtration resistances and cell accumulation effects of both filtration setups. A practicable model feed system was designed comprising yeast cells as a representative of producing cells and bovine serum albumin (BSA) as a substitute for produced biological substances meant to pass through the membrane. Application to a cell culture perfusion process must be the ultimate goal when developing new cell retention devices; at this stage of the work, however, the focus was on the hydrodynamic characterization of the proposed concept. The low-shear design of the pumps employed in setup II has already been reported to have no significant damaging effect on mammalian cells.13 The next step following this work is therefore the transfer of the concept to a mammalian perfusion culture and the assessment of its impact on cell viability, including cell size and cell metabolism, as well as on product sieving and product quality.
Analytical methods
The dry matter content of feed and retentate samples was determined with a microwave-assisted drying balance SMART 6 (CEM Corporation).
Filtration system
All filtration trials were conducted employing one of the two filtration systems capable of generating alternating flow (the two setups introduced above).
Flow profile characterization
As a first step in comparing setups I and II, the flow profiles generated by setup I were recorded for several flow rates and reproduced with setup II. For this purpose, the bioreactor was filled with desalted water and kept at the filtration temperature of 15 °C. The permeate line was closed throughout these preliminary tests. The feed rate was set on the ATF controller, which finds the appropriate pre-pressure and orifice size in an iterative process. When the set feed rate was reached and the pre-pressure no longer changed over time, the flow profile was recorded. It should be noted that it can take several minutes for the ATF controller to reach the flow set point; the corresponding pre-pressures and orifice sizes were therefore recorded to serve as starting points in the subsequent filtration experiments. Flow rates higher than 5 L min⁻¹ could not be sustained in setup I.
The flow profiles were analyzed in JMP® Pro 14.1 software, and the actual flow rate and duration of each phase were noted.
These values were then used to program two-phase recipes on the console to set the duration of each phase and pump speed of the corresponding pump as input parameters for setup II.
Filtration trial and data handling
Filtration experiments were conducted with both setups at permeate-to-feed ratios beyond those typically reported for concentration processes.12 Harsh conditions up to a ratio of 1:3 were chosen, as in previous studies, in order to provoke faster and more pronounced fouling.29 The chosen set-point feed and permeate flow conditions of setup I are given in Table 1. It should be noted that the measured flow rates deviate from the set flow rates owing to the indirect control strategy of the ATF controller, which measures cycle times instead of flow rates. The flow rates of setup II were set so as to match the actual flow rates of setup I, as can be seen from Figure 2. Wall shear rates were calculated according to Equation (2) as a function of the crossflow velocity v_crossflow and the hollow fiber inner diameter d_i.
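Because Equation (2) itself is not reproduced in this excerpt, the following sketch uses the standard laminar, Newtonian pipe-flow relation γ̇_w = 8·v/d_i; the module geometry (fiber count and bore) is illustrative, not taken from the paper.

```python
import math

def wall_shear_rate(feed_flow_m3_s: float, n_fibers: int, d_i_m: float) -> float:
    """Wall shear rate (1/s) in a hollow fiber bundle, assuming laminar flow."""
    fiber_area = math.pi * (d_i_m / 2.0) ** 2               # lumen cross-section
    v_crossflow = feed_flow_m3_s / (n_fibers * fiber_area)  # mean velocity, m/s
    return 8.0 * v_crossflow / d_i_m                        # gamma_w = 8 v / d_i

# Example: 4 L/min feed through an assumed 400 fibers of 1 mm inner diameter
print(wall_shear_rate(4e-3 / 60.0, n_fibers=400, d_i_m=1e-3))
```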
In addition to the experiments presented in Table 1, single experiments with forward flow only (i.e., conventional non-alternating crossflow filtration) at the same actual feed flow rate and permeate flow rate were conducted using only the inlet centrifugal pump of setup II, while the other centrifugal pump was inactive and passively flown through by the retentate.
Prior to each filtration experiment, the module was flushed with desalted water and the pure water permeability was measured at 15 °C. Afterwards, the water was removed from the vessel and the filtration setup. While setup II was fully drainable, setup I retained some water in the diaphragm pump that could not be drained without disassembly, which will be important when discussing the measured dry matter contents. Subsequently, 8 L of the pre-cooled feed suspension was added to the tank, the stirrer was set to 150 rpm, and one initial feed sample was drawn. The feed flow was then started, followed by the permeate flow induced by peristaltic pumps. The experiments listed in Table 1 were conducted in a randomized order; the optimization experiments were performed afterwards.

TABLE 1: Overview of filtration trial set points in setup I. The set points of setup II were chosen to match the flow rates of setup I (see Figure 2).

The time-resolved inline data for pressure and flow were averaged in order to obtain data representing the mean processing performance; data processing was done according to Weinberger and Kulozik.30 From the averaged flux (J) and transmembrane pressure (Δp_TM) data, the filtration resistance (R_filtration) was calculated according to Equation (3), R_filtration = Δp_TM/(η·J), taking the permeate viscosity η into account. The permeate viscosity of several permeate samples was measured using an MCR302 rheometer (Anton Paar GmbH, Graz, Austria) equipped with a double-gap geometry; it was similar to the viscosity of pure water.
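As a hedged numerical illustration of Equation (3), assuming flux reported in L m⁻² h⁻¹ and a water-like permeate viscosity at 15 °C:

```python
import numpy as np

def filtration_resistance(dp_tm_pa, flux_lmh, eta_pa_s=1.14e-3):
    """R_filtration = dp_TM / (eta * J), returned in 1/m."""
    j_si = np.asarray(flux_lmh, dtype=float) / 3.6e6  # L m^-2 h^-1 -> m s^-1
    return np.asarray(dp_tm_pa, dtype=float) / (eta_pa_s * j_si)

# Example with assumed averaged inline values: 50 mbar TMP at a flux of 30 LMH
print(filtration_resistance(dp_tm_pa=5_000, flux_lmh=30.0))  # ~5e11 1/m
```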
All data collected during the filtration trials were evaluated and plotted using JMP ® Pro 14.1.
Flow profile
In order to directly compare both setups, flow profiles from setup I were recorded and reproduced with setup II. Figure 2 shows the flow profiles for the two feed flow rates chosen for this study, 2 L min⁻¹ and 4 L min⁻¹.
Comparative assessment of filtration performance
The filtration trials were conducted at different feed flow rates and different permeate flow rates according to Table 1. Figure 3 shows the process performance indicators of filtration trials conducted at equal feed flow rate but varying permeate flow rates (and thus varying permeate-to-feed ratio), while Figure 4 shows those of filtration trials with equal permeate-to-feed ratio but varying feed flow rates. This differentiation allows a separate evaluation of the role of the permeate-to-feed ratio in cell accumulation and of the crossflow velocity in fouling mitigation.
As can be seen from Figure 3, all filtration trials with a feed flow rate of 4 L min⁻¹ could be sustained for at least 5 h. The process performance was similar for both alternating setups I and II, with only minor differences. For the filtration trial with conventional non-alternating crossflow, the process performance was worse than for the alternating crossflow conditions, as can be seen from an up to tenfold higher filtration resistance. In comparison to the major difference between conventional non-alternating and alternating crossflow filtration, the effects of varying permeate flow rates and of the filtration setup were comparably small, but still observable. Figure 3a shows that the transmembrane pressure was higher for trials with a higher permeate flow rate. This increase of transmembrane pressure was proportional to the increase in permeate flow rate, since the difference between each pair of trials vanishes when the filtration resistance is considered (see Figure 3b).
The filtration resistance, however, reveals a minor difference between the two filtration setups: the resistance using setup II was approximately 30% lower than for setup I. The pressure profiles, exemplarily shown in Figure S2 for setup I and setup II, likewise show only minor differences. The fluctuation of the absolute pressures is more strongly pronounced for setup I, and all local pressures cyclically reach negative values due to the acting diaphragm pump. The transmembrane pressure, however, is slightly positive for both setups, with only single outliers, which are probably artifacts due to the high data acquisition rate.
The lower pressure fluctuation in setup II might be beneficial, as it results in a more even transmembrane pressure distribution across the membrane module and better module usage.15 The BSA transmissions, as depicted in Figure 3c, were comparable for all alternating trials and scattered around 100%. This high transmission can be attributed to the rather large pore size of 0.5 μm and the overall low transmembrane pressure, which prevented an undesirable compaction of fouling material. The fact that BSA transmission values were partially above 100% might be due to analytical variation and possibly to the approximation of the real transmission by using the feed BSA concentration rather than the retentate BSA concentration inside the filtration device (see Equation (1)).
Lastly, the dry matter content in the retentate was determined as a measure of cell accumulation (see Figure 3d). Cell accumulation can be observed for both alternating flow setups; this is due to the hold-up volume of the external filtration loop. When comparing setups I and II, the filtration resistance in experiments using setup II was 20 to 30% lower than in experiments using setup I (as can be seen from Figure 4b), which is comparable to the observation from Figure 3b. The BSA transmission was not significantly different for most trials shown, with only a minor reduction after 3 h of filtration for the trial with 2 L min⁻¹ feed flow rate conducted with setup I (gray open circles in Figure 4c). This trial also stands out in terms of accumulated dry matter in the retentate (see Figure 4d). Whereas all other trials with a permeate-to-feed ratio of 1:10 showed similar dry matter contents in the retentate, the dry matter content was approximately doubled for the 2 L min⁻¹ trial conducted with setup I due to cell accumulation over time. Considering the difference between 2 L min⁻¹ and 4 L min⁻¹ feed flow rate, on the one hand, it seems that the lower feed flow rates resulted in drag forces too low to transport the easily sedimenting cells against gravity back into the feed tank. Considering the differences between the filtration setups, on the other hand, setup II has no dead end, as is characteristic of the diaphragm pump of setup I. Setup II thus draws fresh medium from the feed tank also during the backward flow phase, which reduces or even avoids the accumulation of cells during filtration, even at low feed flow rates. Hence, even when setup II was operated with flow profiles similar to those of setup I, which are unfavorable in terms of the insufficient exchange volume, the issue of cell accumulation was less severe and the filtration resistances were thus reduced. Figure 5 shows the process performance indicators for filtration trials conducted with 2 L min⁻¹ feed flow rate and 400 ml min⁻¹ permeate flow rate.

FIGURE 5: Process performance indicators of filtration runs with a nominal crossflow of 2 L min⁻¹ and a nominal permeate flow rate of 400 ml min⁻¹: (a) actual mean feed flow rate, (b) dry matter content in the retentate, (c) mean transmembrane pressure, (d) mean filtration resistance. Dark gray circles refer to setup I, black diamonds to setup II, and light gray squares to a run with conventional non-alternating crossflow conducted with one centrifugal pump of setup II active. Black rectangles represent filtration trials with setup II but prolonged forward and backward phases, and black asterisks represent filtration trials with setup II in which the pumps were flow controlled instead of speed controlled. The error bars indicate the range of a randomized duplicate. Lines are given as a guide to the eye.

It can be seen that, due to the deliberately chosen extreme permeate-to-feed ratio, none of the trials could be sustained for the full filtration time of 5 hours. The high permeate rate led to a concentration of the retentate (see Figure 5b), which results in an increased retentate viscosity and impaired pumpability. Note that the controller of setup I cannot satisfactorily cope with increased fluid viscosities.31 As a result of cell accumulation and increased fluid viscosity, the feed flow decreases over time (see Figure 5a). The insufficient volume exchange in setup I and in the non-optimized setup II, as discussed above, aggravates this issue.
Also, the high permeate-to-feed ratio, as intended, led to severe deposit formation, as can be concluded from the increasing transmembrane pressures and filtration resistances (see Figure 5c,d).
Optimization concepts using setup II
During conventional non-alternating crossflow filtration, cell accumulation issues due to insufficient fluid exchange did not occur.
However, the sharp increase of the fouling resistance after 15 to 20 min of conventional non-alternating crossflow filtration at a similar permeate-to-feed flow ratio (see Figure 5d) hints at severe deposit layer formation due to the high drag forces toward the membrane and insufficient fouling prevention. Evidently, alternating crossflow was able to mitigate fouling efficiently, resulting in a longer feasible filtration time for both setups I and II despite the occurrence of cell accumulation. Therefore, an alternating flow filtration process that efficiently mitigates fouling and avoids the cell accumulation, increasing retentate viscosity and feed flow reduction occurring for setup I and the non-optimized setup II is desired.
In setup I, the exchange rate can only be increased by constructional means, such as a larger diaphragm pump volume (not in the operator's hands) or a shorter transfer line to the feed tank.27 Using the two counteracting centrifugal pumps of setup II, the fluid exchange can easily be improved by increasing the volume transported in each phase, that is, by increasing the phase duration (as shown in Figure S1C) and/or the feed flow rate, or by counteracting the increasing retentate viscosity through a feedback loop (as shown in Figure S1D). Both optimization options are addressed in the following.
By reducing cell accumulation, prolonged phases (compare the phase durations in the lower panels of Figure S1) led to a longer feasible filtration time. Beyond the variants tested here, the flexible pump control of setup II allows further flow profiles to be realized, for example:
• higher crossflow velocities and higher frequencies to increase fouling mitigation;
• higher crossflow velocities and longer phases to reduce cell accumulation at high permeate to feed ratios;
• longer forward phases with shorter intermittent backwards phases to combine the advantages of conventional and alternating crossflow;
• flow profiles with less steep ramps to protect shear-sensitive cells from turbulences;
• a combination of alternating and pulsatile flow, where short flushing phases of higher flow rate might be used as an inline cleaning technique.
However, these options are yet to be systematically investigated.
CONCLUSION AND OUTLOOK
In this work, a new technical concept for alternating crossflow filtration was proposed and its hydrodynamic performance investigated in direct comparison with the state-of-the-art XCell ATF® device.

Open Access funding enabled and organized by Projekt DEAL.
"year": 2022,
"sha1": "d46267cc1f66bbe3e50ff9b426a680b0d13a9b43",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/btpr.3309",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "b694485bbd63d432c8ee489322cdd80f7dfa60e9",
"s2fieldsofstudy": [
"Engineering",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Concerning immune synapses: a spatiotemporal timeline
The term “immune synapse” was originally coined to highlight the similarities between the synaptic contacts between neurons in the central nervous system and the cognate, antigen-dependent interactions between T cells and antigen-presenting cells. Here, instead of offering a comprehensive molecular catalogue of molecules involved in the establishment, stabilization, function, and resolution of the immune synapse, we follow a spatiotemporal timeline that begins at the initiation of exploratory contacts between the T cell and the antigen-presenting cell and ends with the termination of the contact. We focus on specific aspects that distinguish synapses established by cytotoxic and T helper cells as well as unresolved issues and controversies regarding the formation of this intercellular structure.
Introduction
The immune synapse (IS) is a central event in the development of the adaptive immune response that results in the activation of the T cell. The "synapse-like" nature of the intimate contact between the T cell and the antigen-presenting cell (APC) during T cell activation was initially proposed by Norcross in the early 1980s 1 , although the term "immunological synapse" first appeared in a review by Paul and Seder in 1994 2 . The specifics of molecular segregation into activation clusters at the T cell:APC interface date back to the seminal observations of Kupfer's group in 1998 3 . At the same time, Dustin and Shaw conjoined both concepts (the IS as the physical manifestation of T cell activation, and molecular segregation as the functional reflection of the T cell:APC interaction), adding crucial early data on the composition of the activation clusters 4 . The IS can be defined as a stimulus-driven, spatiotemporal segregation of molecules that participate in T cell activation. Segregation requires the establishment of an intimate contact between a T lymphocyte and an APC. The molecular redistribution is antigen dependent, requiring the interaction of an antigen-specific T cell receptor (TCR) with an antigen-loaded major histocompatibility complex (MHC) molecule. The features and outcome of the IS depend on the type of T cell and APC. The interaction of a CD4+ T helper (TH) cell with an antigen-loaded MHC-II-bearing APC results in the specific recognition of the antigen and the activation of the T cell, i.e. proliferation, cytokine secretion, expression of effector molecules, etc. In the case of CD8+ cytotoxic T (CTL) cells interacting with cells displaying antigen-associated MHC-I, the outcome depends on the pre-exposure of the CTL to the antigen. Naïve CTLs encountering specific antigens presented by APCs (e.g. dendritic cells [DCs] expressing antigen associated with class I via cross-presentation) are primed ("armed") to kill target cells and proliferate. Primed CTLs also form transient IS with target cells (tumor cells or cells infected by a virus), resulting in specific killing.
The IS displays remarkable similarities with the neuronal synapse (NS), to which it owes its name. For spatial and functional reference, the APC is better compared to the pre-synaptic terminal, and the T cell to the post-synaptic terminal. The pre-synaptic portal provides the initiating signal, which is soluble in the NS (neurotransmitters) but membrane-bound in the IS (antigen-bearing MHC). Upon ligation of the key receptor in the post-synaptic terminal (neurotransmitter receptors in the NS; TCR and its signaling co-receptor CD3 in the IS), downstream signaling ensues, including calcium mobilization, actin remodeling, and functional activation of the post-synaptic cell 1,5 . However, a unique feature of the IS consists of specific antigenic recognition, which is absent in the central nervous system (CNS). Another difference is the duration of the contact: whereas some NS can last for days, weeks, or even months, IS between CTLs and target cells resolve in minutes, whereas those between TH cells and APCs can last from several hours to two days 6,7 . This difference implies a different meaning for the concept of plasticity. In the NS, it refers to the modifications of the post-synaptic terminal that involve its consolidation and adaptation to the flux of signal stemming from the pre-synaptic portal. In the IS, plasticity follows contact resolution and could be used to describe the functional changes to the T cell caused by the establishment of a productive synapse. These include activation (TH cells and naïve CTLs), killing (primed CTLs), and functional anergy or apoptosis, e.g. during thymic selection of naïve T cells. A major manifestation of functional plasticity is the development of immunological memory, i.e. the generation of long-lived T cells primed to respond to a specific antigen that trigger a much faster and more efficient response upon repeated exposure to the same antigen.
Overview of the spatiotemporal events of the IS
The study of the IS has focused on the establishment of hierarchical, spatiotemporally segregated events during the contact between the APC and the T cell. These events include the following:
1) Establishment of low-affinity, exploratory contacts between the T cell and the APC.
2) Initial, scattered contact of the TCR with the antigen-loaded MHC on the APC, followed by initiation of TCR-dependent signaling pathways upon specific recognition of the MHC-peptide complex. Such activation is "umbrella shaped" (simultaneous activation and amplification of multiple pathways through different sets of effectors) and induces the activation of multiple effectors, including membrane-bound molecules (e.g. integrins), signaling adaptors, cytoskeletal elements, and transcription factors.
3) Transactivation of adhesion molecules (integrins) that consolidate the interaction between the T cell and the APC. This step actually begins after initial TCR activation (step 2), but the two evolve in parallel.
4) Cytoskeleton- and signaling-dependent clustering of adhesion molecules and the TCR/CD3 complex at the contact interface between the T cell and the APC. In most cases, clustering is spatiotemporally segregated, i.e. the TCR/CD3 clusters and the integrin clusters, and their respective sets of adaptors, are separated.
5) Signaling- and motor-dependent positioning of the secretory apparatus (including microtubules and microtubule-binding proteins) to the contact interface of the T cell.
6) (Primed CTLs only, also natural killer [NK] cells) Actin clearance at the center of the contact interface, enabling a tight association of the secretory apparatus with the plasma membrane.
7-i) (TH cells) Stabilization of the contact and transcriptional activation of the T cell, including cytokine production and the expression of activation markers.
7-ii) (Naïve CTLs) Stabilization of the contact, priming and activation.
7-iii) (Primed CTLs and NK cells) Degranulation and target cell killing.
8) Termination of the contact.

From this flowchart, it becomes obvious that a major difference between the IS established by CTLs and that established by TH cells is the overall duration of the process and its immediate repeatability. CTL contacts are quick (to eliminate target cells rapidly), and CTLs can establish multiple IS with different target cells over short periods of time. Conversely, TH cells establish prolonged IS and do not form consecutive IS once activated properly.
In the following sections, we will develop emerging concepts pertaining to each of these spatiotemporal events.
Exploratory contacts
Exploratory contacts are mediated by low-affinity interactions between specific ligands and receptors. A major factor is the glycocalyx, which establishes charge-dependent repulsive interactions between the APC and the T cell (reviewed in 8). Additional contacts are mediated by glycosylation-dependent, low-affinity interactions, e.g. via galectins. For example, galectins bind TCR molecules with an affinity too low to trigger TCR activation 9 . Antigen-loaded MHC molecules successfully compete with galectins to trigger TCR/CD3 activation and subsequent cytoskeletal remodeling and transcriptional activation (see below). Chemokine receptors also participate in the formation and subsequent stabilization of the initial contacts and localize in the IS. Possible functions for chemokine receptors in this subcellular region are likely to involve co-stimulation, cell attraction, enhancement of actin polymerization, etc. 10 . Other exploratory contacts depend on specific protein-protein interactions, e.g. LFA-1 (αLβ2) (APC) with ICAM-3 (T cell) 11 , and LFA-3 (APC) with CD2 (T cell). LFA-1 interacts with ICAM-3 while in a low-affinity conformation 12 . Likewise, LFA-3 interacts with CD2 with suitably low affinity 13 , although the glycocalyces are likely to hinder their interaction sterically 14 . These contacts allow the transient interaction of the TCR with peptide-loaded MHC. If such interaction bears enough affinity, it overcomes the repulsive forces between the glycocalyces; if not, repulsion dominates and the unproductive contact between the mismatched T cell and APC is resolved.
TCR ligation and initial signaling
Successful interaction of the TCR/CD3 complex with peptide-loaded MHC initiates signaling. It is important to point out that very few TCR-MHC interactions are sufficient to trigger T cell activation 15 . Recent reviews have described the current viewpoints on TCR/CD3 signaling 16,17 . Here, we will focus on several aspects of TCR binding and initial signaling that are specific to IS formation and shape the rest of the process.
Productive TCR engagement promotes its immobilization and clustering in the contact area 18 . This is mediated in part by its interaction with the MHC on the APC, which restricts the possible lateral movement of the TCR to the interacting portion of the plasma membrane of the T cell with the APC. However, the TCR/CD3 complex appears more immobile and clustered than predicted by a model of free diffusion in a semi-planar layer 8 , suggesting additional mechanisms of immobilization and aggregation. A crucial mechanism is the association of the TCR/CD3 complex with the actin cortex 19,20 . A recent study has shown that ligated TCR/CD3 molecules modify the flow of actin underneath them, indicating binding-dependent interactions between the TCR and cortical actin 21 , which are essential for sustained TCR-dependent signaling 22 . Such interaction is not direct but relies on the recruitment of actin-binding adaptors, e.g. Nck 23 .
Another important topic is cluster size. There is evidence of small (nanosized) TCR clusters even before their interaction with the MHC. These nanoclusters are continually generated throughout the plasma membrane of the T cell 24 and migrate and coalesce at the center of the contact to form micron-scale structures, termed central Supramolecular Activation Clusters (cSMACs) (Figure 1, top) 25 , which concentrate signaling components (reviewed in 26) as well as molecules involved in co-stimulation, e.g. CD28 27 . The mechanism of coalescence is unclear, but it depends on actin and TCR ligation 28 . Possible explanations involve increases in homotypic TCR lateral affinity, actin coalescence that would "drag" the TCR nanoclusters together, or changes to the size/position of the membrane nanoclusters based on alterations to the regional composition of the plasma membrane. The principles of spatiotemporal assembly of such structures remain unclear, mainly because of differences depending on the type of T cell and APC. In general, T cells that bear a higher basal activation state (e.g. leukemic T cells or memory T cells) form large clusters more readily than resting, naïve T cells. In the latter, TCR/CD3 clusters often remain small and sparse along the contact area between the T cell and the APC 29,30 . The difference could be that pre-activated cells express additional components that promote, or facilitate, TCR/CD3 clustering and/or that signals emanating from the TCR/CD3 are more intense in pre-activated cells owing to a higher activation baseline.
Adhesive interactions

TCR-dependent inside-out signals trigger the conformational extension of integrin LFA-1, enabling its interaction with APC-expressed ICAM-1 (reviewed in 31). This process is similar to the inside-out signaling that activates integrins during extravasation 32 , and it results in stable adhesion between the APC and the T cell.
TCR signals that mediate LFA-1 trans-activation go through several adaptor circuits, including Rap1-RapL-RIAM and SLP-76/ADAP/SKAP (Figure 2). Rap1 is a small Ras-like GTPase that is activated by RasGEFs triggered by the TCR, e.g. CalDAG-1. Active Rap1 forms a complex with RapL and RIAM that targets talin to the plasma membrane 33 , where it promotes the conformational extension of LFA-1 34 . SLP-76/ADAP/SKAP-55 bind to the TCR effector LAT, triggering their association with RIAM, thereby participating in the delivery of talin to the integrin 35 .
Another important molecule for the inside-out activation of LFA-1 via the TCR is kindlin-3. Kindlin-3 mutations cause a severe form of immunodeficiency, named Leukocyte Adhesion Deficiency (LAD)-III 36,37 . LAD-III T cells do not migrate properly and activate poorly due to impaired adhesion mediated by LFA-1 38 . There are two possible mechanisms to explain the role of kindlin-3 in LFA-1 transactivation. One mechanism postulates that kindlin-3 triggers inside-out activation of LFA-1 by binding directly to the β chain cytoplasmic domain. The other mechanism suggests that kindlin-3 could facilitate the binding of talin, or its effect on the conformational extension of LFA-1 (reviewed in 39). Recruitment of kindlin-3 to LFA-1 is likely mediated by its interaction with ADAP, as in the platelet integrin αIIbβ3 40 (Figure 2). LFA-1 is the predominant integrin that mediates the interaction of TH cells with APCs. It is also important for the formation of IS between CTLs and target cells. However, it is unlikely that every target cell expresses ICAM-1; thus, additional integrins may be implicated in the formation of IS. Prior studies have described possible roles for VLA-4 (α4β1) and VLA-5 (α5β1) in the IS (reviewed in 41), but their ligands as well as their redundant/unique functions with respect to LFA-1 remain unclear. Spatially, integrins localize throughout the contact area of the T cell and the APC. In activated cells (e.g. super-antigen-triggered clonal leukemic T cells), integrins localize in the outer edge of the contact zone, defining a peripheral SMAC (pSMAC) (Figure 1, top).
Actin reorganization at the IS

Outside-in signals stemming from the TCR and integrins promote actin polymerization and clustering at the T cell:APC interface (Figure 1). As discussed above, actin accumulation is fundamental for the clustering of the TCR and the integrins, forming a positive feedback loop. TCR/CD3 and integrins trigger actin polymerization through several pathways. A major pathway of TCR-mediated actin polymerization depends on the small GTPase Rac1. The TCR activates several Rac GEFs, including Vav1 42 and Tiam1 43 . Rac promotes branched actin accumulation by activating a multimolecular complex that includes WAVE (Scar), HSPC300, ABI2, SRA1, and NAP1. This complex associates with the Arp2/3 complex, triggering actin polymerization, as reviewed elsewhere 44 . Wiskott-Aldrich syndrome protein (WASP) is a protein related to WAVE that also induces Arp2/3-dependent actin polymerization downstream of the TCR, but it is activated by the small GTPase Cdc42 45 .
The contribution of other mechanisms of actin polymerization to the congregation of actin at the contact area with the APC is less clear. During the first steps of the formation of the IS, molecular regulators of actin assembly, e.g. ADF/cofilin, are involved in the dynamic reorganization and accumulation of actin at the contact region. For example, depletion of ADF/cofilin function in T cells enhances the accumulation of actin at the IS 46 . Formins, e.g. mDia, are barbed end nucleators that bind to the uncapped actin filament through one domain and to G-actin-loaded profilin through another, thereby catalyzing G-actin transfer from profilin to the barbed end. mDia-deficient T cells activate and migrate deficiently 47 . Finally, the Arp2/3 complex, which nucleates dendritic actin polymerization at the lamellipodium of migrating cells 48 , also participates in the formation of actin lamellae at the IS, although differently shaped actin can accumulate at the IS in the absence of the Arp2/3 complex, in a formin-dependent manner 49 .
Actin accumulation is also regulated by the function of actin-binding proteins involved in its cross-linking. For example, α-actinin and filamin accumulate at the IS and are required for proper T cell activation in response to antigen-loaded MHC 50,51 . It is important to note that these two actin cross-linkers also bind directly to the cytoplasmic tail of β integrins 52,53 (Figure 2); hence, they play a dual function facilitating actin and integrin accumulation at the synapse. Other cross-linkers, e.g. non-muscle myosin II (NMII), are also involved in the formation of efficient synapses. However, the role of NMII in IS formation is controversial. Some studies have shown that NMII affects TCR clustering into the cSMAC 54,55 , likely due to impaired actin-dependent flux of the TCR towards the contact area 56 , but other studies suggest a minimal involvement of this molecule in the formation of the IS 57,58 .
The differences between these studies likely reside in the type of T cell and APC used. NMII may play an additional role by regulating the mechanics of the contact interface of the T cell and the APC. In this regard, changes to the rigidity of the APC surface (and NMII inhibition) affect T cell activation 59 , indicating that the mechanics of the interfacing surfaces also play a role in the process.
Polarization of the secretory apparatus and the centrosome

TCR and integrin signaling promotes a dramatic redistribution of cellular components in the T cell, most notably the redistribution of the secretory apparatus (centrosome and Golgi, reviewed recently in 60) and the machinery involved in the generation of extracellular vesicles 61 towards the contact area with the APC (Figure 1, both columns). A major difference with the neuronal synapse is that the secretory apparatus of the APC does not polarize towards the post-synaptic cell (the T cell). This is a crucial event during this process that is often used as a marker of IS maturation. It depends on the activation of microtubule motors, e.g. dynein, which "reel in" the centrosome and the associated secretory elements towards the signaling area. This process has been reviewed in detail elsewhere 62-64 .
In IS formed between CTLs and target cells, this polarization ensures the rapid and specific lysis of the target cell (Figure 1, bottom right column, and next two sections). A major argument to explain the polarization of the secretory apparatus in TH cells has emerged recently with the discovery of the unidirectional transmission of microRNA-containing exosomes from the TH cell to the APC 65 (Figure 1, bottom left column), which could influence the activation state of the APC, inducing functional activation or anergy of the APC depending on the microRNAs contained in the exosomes.
Formation of a secretory domain in the CTL synapse

Actin accumulation at the IS facilitates the initial activation of the T cell by immobilizing receptors involved in the contact with the APC and sustaining localized signaling. However, it also constitutes a steric hindrance for polarized secretion. In the early 2000s, Griffiths' group described the clearance of part of the central actin in maturing cytotoxic IS (Figure 1, right column). Such a zone, containing less actin than its surroundings, coincided with the localization of intracellular granzyme 66 , suggesting that the region of actin clearance acted as a gate that enabled efficient secretion towards the target cell. However, recent studies have indicated that very small openings in the cortical actin may be sufficient for efficient vesicle delivery 67,68 . The mechanism of actin clearance at the cytotoxic synapse remains unclear. A recent study indicates that coronin 1A is a key mediator of actin remodeling and clearance at the contact area to form the secretory domain 69 . The contribution of other mediators of actin depolymerization, e.g. cofilin, has been suggested but not directly demonstrated 70 . This scenario implies that the depolymerization signal stems from receptors localized at the CTL side of the IS. An intriguing possibility, as yet untested, is that secretory granules directly depolymerize actin at the IS by carrying actin remodeling factors on their surface.
Target cell killing/T cell activation
In the case of pre-primed CTLs contacting target cells bearing antigen-loaded MHC-I, the subsequent steps of this process involve the secretion of granzyme- and perforin-loaded vesicles to kill the target cell (Figure 1, bottom right column). This has been reviewed in detail elsewhere [71][72][73] . Before that, naïve CTLs undergo priming (i.e. expression of lytic enzymes and their loading into the secretory apparatus) at the secondary lymphoid organs (SLOs) when they enter into contact with mature DCs bearing suitable antigens associated with MHC-I. Direct priming occurs only when a) the pathogen infects and activates DCs directly and b) the pathogen-infected cell (or tumor cell) migrates directly into the SLO. Importantly, the establishment of IS between naïve CTLs and immature DCs leads to cross-tolerance, i.e. the inability of the CTL to activate properly 74 . This is likely an important mechanism of induction of tolerance involved in tumor evasion.
On the other hand, TH-APC contacts trigger a transcriptional program that results in the activation of the TH cell, including expression of activation markers, e.g. CD69 and CD25, and cytokine secretion, e.g. IFN-γ and IL-2 (Figure 1, bottom left column). The main function of these cytokines is to create an activating microenvironment for other immune cells in a paracrine manner. At the site of infection, these cytokines activate other effector cells, particularly macrophages involved in pathogen clearance, CTLs, and NK cells.
Additional molecules induced by the establishment of IS include mediators of cell proliferation downstream of NF-AT, AP1, and NF-kB (reviewed in 75) as well as receptors implicated in the migration of the activated cell to the inflammatory site, e.g. CCR5 76 .
IS termination
The specific signals that promote termination of the IS are unclear.
In the case of IS of CTLs with target cells, a clear candidate to promote termination of the contact is the flip-flop of the plasma membrane of the target cell due to the effect of the lytic enzymes secreted by the CTL. In such a mechanism, the CTL would recognize phosphatidylserine, annexin V, or other components of the inner leaflet of the plasma membrane of the target cell. In the case of naïve CTLs or TH cells, the mechanism is less clear but likely involves the exhaustion of the TCR recycling process over extended periods of stimulation 77 . Importantly, signaling molecules involved in the formation and function of the IS, e.g. PKCθ, are also involved in synapse breakdown, constituting a possible mechanism of early remodeling of the IS 78 .
Concluding remarks: towards the application of manipulating the IS in biomedicine
In recent years, the need for new therapies against multidrug-resistant tumors and the secondary effects of current therapies, e.g. chemotherapy, have led to the study and development of better "targeted" therapies with less deleterious side effects for patients. Therefore, enhancing the ability of the immune system to detect and remove pathological cells through recognition of tumor-specific or differential expression patterns of the target cells is a crucial step towards developing better therapies. Another important issue is to counteract the evasive mechanisms developed by pathogens and tumor cells.
One approach aimed at improving the immune response against tumor cells consists of autologous or allogeneic tumor vaccination (Figure 3, top right). These approaches are aimed at generating strong CTL responses against tumor cells based on their specific molecular makeup. The underlying mechanism consists of vaccinemediated CTL priming by vaccine-stimulated APC (mainly DCs), which would then home to the tumor and rapidly form an IS with the tumor cells, killing them. Several trials based on this approach are reviewed here 79 . Another possibility is the genetic immunization of patients (DNA vaccination) through DCs. The major limiting factor is the need for safe and specific carriers. An attractive possibility is the use of in vivo DC-targeting liposomal DNA vaccine carriers 80 .
Approaches aimed at suppressing the effects of the evasive maneuvers of tumor cells have also been tested in recent years (Figure 3, top left). For example, tumor cells are believed to promote the expression of CTLA-4, which is a molecule expressed by T cells that competes with CD28 for the co-stimulatory molecule CD80 (B7.1), thereby suppressing T cell activation. The US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) have approved the use of a humanized monoclonal antibody against CTLA-4 for the treatment of late-stage melanoma 81 . Similar approaches have been developed for PD-1, which is another inhibitory receptor that suppresses T cell responses independent of CD28 but dependent on its ligands PD-L1 and PD-L2, which are abundantly expressed by several types of tumor cells 82 . A number of antibodies against PD-1 and PD-L1/2 are being developed by big pharmaceutical companies aiming to find different anti-tumor therapies [83][84][85] . At a molecular level, CTLA-4 binding to CD80 disrupts TCR clustering, effectively destabilizing the IS 86 . Likewise, PD-1 accumulation at the IS recruits protein phosphatases, such as SHP-2, that quench the stimulating signals emanating from the synapse 87 .

Figure 3. Therapy-based enhancement of immune synapse formation between T cells and tumor antigen-presenting dendritic cells. Top left, poorly responding T cells are treated with antibodies that block inhibitory molecules such as CTLA-4 and PD-1, or inhibitory ligands of the latter, e.g. PD-L1/2. Bottom left inlay, representation of the effect of anti-CTLA-4 blockade, which blocks inhibitory signals emanating from CTLA-4 that counteract TCR/CD3-dependent signals and also releases CD80 to co-stimulate via interaction with CD28; also depicted is the effect of anti-PD-1 or anti-PD-L1/2 monoclonal antibodies (mAbs), which prevent their interaction and the generation of inhibitory signals. Top right, direct vaccination of dendritic cells with tumor DNA or autologous or allogeneic tumor extracts. Bottom right, either treatment should enhance the T cell response against tumor antigens.
Clearly, these studies and novel forms of treatment are of outstanding importance in the development of new treatments for the more aggressive and less-tractable types of cancer and are likely the beginning of a new era of molecular treatment of cancer.
Competing interests
The authors declare that they have no competing interests.
Grant information
Miguel Vicente-Manzanares is funded by the Ramon y Cajal Program (RYC2010-06094) and grants SAF2014-54705-R from MINECO and the BBVA Foundation.
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Open Peer Review

Current Referee Status:
Editorial Note on the Review Process: F1000 Faculty Reviews are commissioned from members of the prestigious F1000 Faculty and are edited as a service to readers. In order to make these reviews as comprehensive and accessible as possible, the referees provide input before publication and only the final, revised version is published. The referees who approved the final version are listed with their names and affiliations but without their reports on earlier versions (any comments will already have been addressed in the published version).
The referees who approved this article are:
Version 1
Pedro Roda-Navarro, Department of Microbiology I (Immunology), Universidad Complutense de Madrid, Madrid, Spain
No competing interests were disclosed.
"year": 2016,
"sha1": "d58e8a6bd0fb6e13d3732da3e57a97a1c85846ed",
"oa_license": "CCBY",
"oa_url": "https://f1000research.com/articles/5-418/v1/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9a7e7ea2f34b3080b3a7a7afc324b00322aeb64d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Video summarization using line segments, angles and conic parts
Video summarization is a process to extract objects and their activities from a video and represent them in a condensed form. Existing methods for video summarization fail to detect moving (dynamic) objects in low color contrast areas of a video frame because the pixel intensities of objects and non-objects are almost similar. However, edges of objects are prominent in low contrast regions. Moreover, for representing objects, geometric primitives (such as lines and arcs) are more distinguishable, higher-level shape descriptors than edges. In this paper, a novel method is proposed for video summarization using geometric primitives such as conic parts, line segments and angles. Using these features, objects are extracted from each video frame. A cost function is applied to measure the dissimilarity of locations of geometric primitives to detect the movement of objects between consecutive frames. The total distance of object movement is calculated and each video frame is assigned a probability score. Finally, a set of key frames is selected based on the probability scores as per a user-provided or system-default skimming ratio. The proposed approach is evaluated using three benchmark datasets: BL-7F, Office, and Lobby. The experimental results show that our approach outperforms the state-of-the-art method in terms of accuracy.
Introduction
Due to the advancement of technology, video surveillance has been used widely in emerging places to help ensure a safe and secure lifestyle. Government, public safety organizations, and transportation agencies mainly rely on real-time video surveillance systems for security, traffic management, and emergency operations. Surveillance video cameras are set up in offices, railway stations, bus stops and other places. These cameras are used to monitor the activities within these places by trained professionals who continuously observe video from the monitoring centre. Investigating a crime or finding a specific event in a long video recording can take many work hours [1]. Furthermore, storing long videos requires a huge memory space [1]. Therefore, it is essential to develop a method for extracting the most informative video frames from the long consecutive video stream.
Video summarization (VS) is a process to extract the most informative set of frames, known as key frames, or a set of video fragments from the original video. The main purpose of VS is to generate a short video so that an observer can get a complete idea about all the high-priority entities and events. VS must be as concise as possible and should contain all significant contents of the entire video. It should also maintain continuity of information and be free of repetition, without losing any important video data. To summarize a video stream considering all these properties, it is necessary to extract some important features of the video. These features are later applied to construct a very short version of the original video. Objects and people within a video play an important role for video summarization [1]. The reason is that events in a video are usually represented by objects/people and their activities [2]. Moreover, objects/people in a video carry high-level semantic information [3]. In addition, human beings usually pay more attention to moving (dynamic) objects in a video [4] [5]. Therefore, objects/people and their activities in a video are mainly extracted for generating video summarization. It remains a challenging problem to extract dynamic objects from a video with low contrast, illumination change, noise, and a multimodal dynamic environment [6] [7]. However, edges of objects are prominent in low contrast regions [6] and less sensitive to illumination change and the multimodal environment [8]. Moreover, methods based on edge pixels are sensitive to variation of shape and position [8]. To overcome these problems, edge-segments (groups of connected sequential edge pixels) can be applied. However, edge-segment-based methods are not robust to local shape distortion and shape matching [9] [10]. To represent objects, geometric primitives (such as lines and arcs) are higher-level and more distinguishable descriptors than edge pixels or edge-segments [9] [10]. These primitives have some special properties: they are independent of object size, efficient for comparison and matching, and invariant to scale and viewpoint changes. Therefore, these geometric primitives have the capability to represent objects with complex shapes and structures effectively. Furthermore, they often play a major role in the human cognitive system due to their discriminative power [11].
The existing methods for object detection apply complete circles or ellipses to represent curve fragments [10]. However, sometimes complete circles or ellipses may not be found for part of an object due to the actual shape of the part, low color contrast, illumination changes, or camera motion. Moreover, in the real world, part of an object can be a circular, elliptical, parabolic, or hyperbolic curve. As a result, object detection methods based on circular arcs do not fit accurately an elliptical, parabolic, or hyperbolic curve, and vice versa. Likewise, an elliptical arc does not fit perfectly a parabolic or hyperbolic curve. However, a conic part can easily be fitted to any type of curve (circular, elliptical, parabolic or hyperbolic).
In this paper, a novel approach is introduced for extracting dynamic objects applying geometric primitives such as line segments, angles and conic parts, and for generating a summary of a long video. The straight contours, corners, and curved contours of an object are represented by line segments, angles, and conic (circle, ellipse, parabola, and hyperbola) parts respectively. For this purpose, an edge image is generated from a video frame by applying the Canny edge detection method. After that, lists of connected edge-segments and the line segments fitted to each edge-segment are obtained from the edge image by applying the method developed by Kovesi [12]. A single line segment is considered as a straight line. Line segments with two lines are modelled as angles. Edge segments and line segments with more than two lines are matched with conic parts using Pascal's theorem [13] [14]. After constructing geometric primitives, their displacements between two consecutive frames are calculated by applying a new method for measuring positional dissimilarity, in order to capture the activities of objects within a video. A probability score is assigned to each frame based on the displacements of geometric primitives. The frames are sorted in descending order of probability score. Finally, a set of key frames is selected from this sorted list based on the default skimming ratio or a user-preferred skimming ratio.
There are several advantages of using Pascal's theorem for detecting curve segments. This method does not require calculating the center or the major or minor axes for detecting conic-shaped objects. Furthermore, a one-step process can detect any type of conic (circle, ellipse, parabola, and hyperbola) or conic part. Moreover, it does not require any parameter for conic part construction. In contrast, to construct a conic, the Hough Transform (HT) requires a high-dimensional parameter space. For example, an ellipse can be defined by five parameters, such as its center, the major axis, the minor axis and the orientation. Therefore, O(N^5) space is required to accumulate the parameter space for an ellipse, where N is the size of each dimension of the parameter space. Moreover, finding an optimal threshold for selecting an ellipse from the high-dimensional space is another problem [15]. A large threshold may have a poor influence on accurate ellipse detection, while a small threshold may lead to missing true ellipses. Furthermore, conic detection methods based on the algebraic equation present some problems. The main disadvantages are that such a method is numerically unstable [16] and it does not have any geometric interpretation [17]. To overcome these problems, a conic part is constructed based on Pascal's theorem in the proposed method. To construct a conic part using Pascal's theorem, two tangents at the two endpoints of a curve are necessary. The existing methods for tangent estimation construct a tangent on a digital curve based on a parameter [18] [19]. However, finding an optimal parameter value is one of the main problems of these methods. To solve this problem, a new parameter-free tangent estimation method based on Pascal's theorem is proposed. The advantage of applying Pascal's theorem is that it does not require any parameters to construct tangents on unsmoothed digital curves.
The key contributions of the proposed method for video summarization are as follows:
1. A new parameter-less tangent estimation method is proposed for conic part construction;
2. Conic parts are applied to model curve contours for object detection instead of circular or elliptical arcs;
3. A new method for dissimilarity measure of geometric primitives is proposed for recognizing the activity of objects;
4. Geometric primitives, such as line segments, angles and conic parts, are applied for extracting objects in a video with low contrast or illumination changes.
The remainder of this paper is organized as follows. Section II reviews related work on video summarization methods in recent years. A brief description of Pascal's theorem is provided in Section III. The details of the proposed method are discussed in Section IV. Extensive experimental results and an analytical discussion are provided in Section V, and concluding remarks are presented in Section VI.
Related work
Objects/people play the most significant role in a video for summarization. In [20], a set of similar objects is trained to build a model and similar objects are extracted using this model for summarization. A part-based object movement framework is proposed in [21] for video synopsis generation. Object bank and object-like windows are applied to extract objects and are then utilized to detect objects for story-driven egocentric video summarization in [22]. A complementary background model is proposed in [23] to extract moving objects and video summarization. Pixel-based motion energy and edge features are combined in [24] to detect object and to summarize video. A background subtraction method is applied in [25] to detect foreground objects for video summarization. Eye tracking data is applied in [26] for important object detection from a video. In [27], important objects from a video are detected using features and object segmentation. Aggregated Channel Features (ACF) detection and a background subtraction technique are applied for object detection in [28] for surveillance video synopsis generation. The non-parametric background model is employed to extract moving objects in [29] for producing a condensed version of a surveillance video. A background subtraction method is also used in [30] for object detection. In [2], robust motion and cluster analysis are utilized for object location detection for summarizing rushes video. For generating storyboard, important objects are detected by a min-cut method in [31].
In addition, a Bayesian foraging strategy is applied in [32] for detecting objects and their activities to summarize a video. The grid background model is applied in [33] for object detection. In [34], a key-point matching based video segmentation method is employed to locate the visual objects in a video. Spatio-temporal slices are applied in [5] to select the states of the object motion for video summarization. The J Value Segmentation (JSEG) algorithm is applied in [35] for object detection to extract key frames from a wildlife video. Latent Dirichlet Allocation (LDA) is applied in [36] for detecting objects and their activities for video summarization. The background subtraction method is applied in [37] for human object detection. Objects in a video are described by Histograms of Optical Flow Orientations (HOFO) in [38] and their activities are detected by a Support Vector Machine (SVM) classifier. In [39], moving object and motion information calculated in the spatial and frequency domains are combined for video summarization.
Moreover, image signature is applied for foreground object detection and then fused with motion information to summarize egocentric video in [40]. A modularity cut algorithm is employed in [41] to track objects and use this information for summary generation. Faces of human objects are applied for movie summarization in [42]. Moving objects are detected in [43] using the forward/backward frame differencing method. Foreground object and saliency map difference are applied in [44] for surveillance video summarization.
Recently, a surveillance video summarization method was proposed in [1]. In that approach, a single-view summarization is generated for each sensor independently. For this purpose, the MPEG-7 color layout descriptor is applied to each video frame and an online Gaussian mixture model (GMM) is used for clustering. The key frames are selected based on the parameters of the clusters. As the decision of selecting or neglecting a frame is made based on the continuous updates of these clustering parameters, a video segment is extracted instead of key frames. A video summarization technique using a single type of descriptor (i.e., a color descriptor) at frame level with an on-line learning (i.e., GMM) strategy provides very good performance if the video has a uni-modal phenomenon; however, the technique may not perform well if the video has multi-modal phenomena such as illumination change, variation of local motion, or occlusion.
To the best of our knowledge, existing approaches for video summarization did not apply geometric primitives (line segments, angles, and conic parts) although they have the capabilities to represent objects with complex shapes and structures effectively in challenging environments such as video with low contrast and illumination change. These geometric primitives have several important properties as mentioned in the introduction section. For example, they are independent of object size, efficient for comparisons and matching, and invariant to scale and viewpoint changes. Thus, in this paper a new video summarization method utilizing geometric primitives is proposed.
Pascal's theorem
In this work, the curve segments (conic parts) are extracted using Pascal's theorem [13] [14]. Therefore, a brief introduction to Pascal's theorem is provided in this section. Pascal's theorem states that when a hexagon (with no three points co-linear and no parallel sides) is inscribed in a conic, the three pairs of opposite sides meet in three points of intersection, and these three points are collinear. The line through them is called the Pascal line [13] [14]. In Fig 1, p1, p2, p3, p4, p5, and p6 are the six vertices of a hexagon inscribed in a conic (green dotted ellipse), where no three vertices are co-linear and no sides of the hexagon are parallel. Each pair of opposite sides is represented with the same color. The point q1 is the intersection between opposite sides p6p1 (black line) and p4p3 (black line). The intersection of opposite sides p1p2 (light green line) and p5p4 (light green line) is q2. The opposite sides p2p3 (magenta line) and p6p5 (magenta line) meet at point q3. According to Pascal's theorem, the intersecting points (q1, q2 and q3) are collinear, and the line connecting these points is the Pascal line (blue line).
Pascal's theorem can also be applied when five vertices of a hexagon are provided. The sixth vertex can be calculated using the provided five vertices. This sixth vertex will also be on the conic section and satisfy the property of collinearity. Interested readers are referred to [45] for more details regarding conic construction from five points by Pascal's theorem.
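As an illustration, the following sketch numerically checks Pascal's theorem for six points sampled on an ellipse, using homogeneous coordinates so that joining two points and intersecting two lines both reduce to cross products. This is an illustrative verification only; the point coordinates and helper names are our own, not part of the proposed method.

```python
import numpy as np

def join(p, q):
    """Line through two points (homogeneous coordinates)."""
    return np.cross(p, q)

def meet(l, m):
    """Intersection point of two lines (homogeneous coordinates)."""
    return np.cross(l, m)

# Six points on the ellipse x^2/4 + y^2 = 1, as homogeneous vectors [x, y, 1]
angles = np.array([0.3, 1.1, 1.9, 2.8, 4.0, 5.2])
p1, p2, p3, p4, p5, p6 = [np.array([2 * np.cos(t), np.sin(t), 1.0]) for t in angles]

# Opposite sides of the hexagon p1..p6 meet in three points q1, q2, q3
q1 = meet(join(p6, p1), join(p4, p3))
q2 = meet(join(p1, p2), join(p5, p4))
q3 = meet(join(p2, p3), join(p6, p5))

# Collinearity of q1, q2, q3 <=> determinant of the stacked (normalized) vectors ~ 0
q1, q2, q3 = (q / np.linalg.norm(q) for q in (q1, q2, q3))
det = np.linalg.det(np.vstack([q1, q2, q3]))
print(f"determinant = {det:.2e} (close to 0: q1, q2, q3 lie on the Pascal line)")
```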
The proposed approach
The introduced method has four main steps: (i) geometric primitives extraction, (ii) measuring the displacement of geometric primitives, (iii) assignment of probability scores, and (iv) key frame selection. The main steps of the proposed method are shown in Fig 2. The details of each step are presented in the subsequent sub-sections.
Geometric primitives extraction
In the proposed method, objects in a video frame are represented by geometric primitives, such as line segments, angles and conic parts. The motivation is that these primitives are independent of object size, efficient for comparison and matching, and invariant to scale and viewpoint changes. Moreover, they are effective features in a challenging environment. To extract geometric primitives, the conventional Canny edge detection method is applied to obtain a binary edge image from a video frame. In Fig 3a, a binary edge image of a frame (number 4721) from the bl-3 video of the BL-7F dataset [1] is shown. After obtaining the binary edge image, lists of connected edge points (edgelists) without any branch, and straight line segments fitted to the connected edge points, are obtained by applying the method developed by Kovesi [12]. Each line segment may contain a single line or multiple lines. The connected edge points and the corresponding fitted line segments, obtained from the binary edge image of Fig 3a, are shown in Figs 3b and 3c respectively. Different colors are applied to edgelists and line segments for better visualization. Later, sharp turn and inflection points are identified and line segments are split at these points as per the method proposed in [46]. The connected edge contours after splitting at sharp turn and inflection points are shown in Fig 3d. A single line segment is considered as a straight line (F) (red lines in Fig 3e). Line segments with two lines are modeled as corners (O) (yellow lines in Fig 3e). The connected edge segments whose corresponding line segments have more than two lines are matched with conic (circle, ellipse, parabola, and hyperbola) parts using Pascal's theorem [13] [14].
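The edge extraction and line fitting stage can be approximated with standard tools, as sketched below. The original work uses Kovesi's Matlab functions for edge linking and line fitting; here OpenCV's Canny detector and polygonal approximation stand in for them, so the function choices and the approximation tolerance are substitutions of ours, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def extract_line_segments(frame_bgr, approx_tol=2.0):
    """Approximate the edge-linking + line-fitting stage.

    Canny yields a binary edge image; contours stand in for edgelists,
    and cv2.approxPolyDP fits a polyline (line segments) to each one.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)  # thresholds are illustrative
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    segments = []
    for c in contours:
        poly = cv2.approxPolyDP(c, approx_tol, closed=False)
        if len(poly) >= 2:
            segments.append(poly.reshape(-1, 2))  # vertices of the fitted polyline
    return edges, segments
```

A segment with one fitted line then maps to a straight line (F), one with two lines to a corner (O), and longer chains become candidates for conic-part matching.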
To validate an edge segment as a conic part, tangents at the two endpoints of the edge segment and an arbitrary point on the edge segment are required. The existing methods for tangent estimation construct a tangent on a digital curve based on a parameter [18] [19]. However, finding an optimal parameter value is one of the main problems of these methods. Therefore, a new parameterless tangent estimation method is proposed based on Pascal's theorem [13] [14]. Consider p1, p2, p3, p4 and p5 to be five points of a circle (Fig 4a), an ellipse (Fig 4b), a parabola (Fig 4c) or a hyperbola (Fig 4d). To avoid an exceptional case of Pascal's theorem, these five points are selected in such a way that no three are co-linear and no parallel lines can be formed using them. The point q1 is the intersection of line p1p2 and line p5p4, and q2 is the intersection of p5p1 and p3p2. Accordingly, p4p3 and q1q2 must meet at point q3. The line (q1q2q3) is the Pascal line and is represented by a blue line. The expected tangent line (t) of the conic at point p1 is obtained by connecting p1 and q3. Similarly, we can get a tangent line at any point of the conic. In this way, two tangents (t1 and t2) are constructed at the two endpoints (p1 and p5) of the edge segment.
These tangents are then used to construct a conic part using Pascal's theorem. In our method, five points are obtained from an edge segment such that they divide it into four equal parts. We follow this approach as it represents the conic more accurately than random sampling.

Algorithm 1 getTangent(p1, p2, p3, p4, p5)
Begin
  Find the intersecting point q1 between p1p2 and p5p4
  Find the intersecting point q2 between p5p1 and p3p2
  Find the intersecting point q3 between p4p3 and q1q2
  Draw the tangent t on p1 by connecting p1 and q3
End

In the real world, part of an object can be a circular, elliptical, parabolic, or hyperbolic curve. As a result, object detection methods based on circular arcs do not fit accurately an elliptical, parabolic, or hyperbolic curve, and vice versa. Therefore, an innovative conic part construction method is introduced.
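A direct transcription of Algorithm 1 in homogeneous coordinates might look as follows, using the same cross-product line/point operations as in the Pascal-line sketch above; the helper names and degenerate-case behavior (no check for parallel or coincident lines) are simplifications of ours.

```python
import numpy as np

def to_h(p):
    """Lift a 2D point to homogeneous coordinates."""
    return np.array([p[0], p[1], 1.0])

def get_tangent(p1, p2, p3, p4, p5):
    """Algorithm 1: tangent at p1 of the conic through p1..p5 (Pascal's theorem).

    Returns the tangent as a homogeneous line [a, b, c] with a*x + b*y + c = 0.
    """
    h = [to_h(p) for p in (p1, p2, p3, p4, p5)]
    q1 = np.cross(np.cross(h[0], h[1]), np.cross(h[4], h[3]))  # p1p2 meets p5p4
    q2 = np.cross(np.cross(h[4], h[0]), np.cross(h[2], h[1]))  # p5p1 meets p3p2
    q3 = np.cross(np.cross(h[3], h[2]), np.cross(q1, q2))      # p4p3 meets Pascal line
    return np.cross(h[0], q3)                                   # tangent = line p1q3
```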
Using the two tangents (t1 and t2) and a selected point (p3) from the edge segment, a conic part is constructed based on Pascal's theorem. Consider tangents t1 and t2 at p1 (start point) and p5 (end point) of a circle (Fig 5a), an ellipse (Fig 5b), a parabola (Fig 5c) or a hyperbola (Fig 5d). These tangents (t1 and t2) intersect at point q1. The tangent t1 and p5p3 meet at point r. The point q2 is selected from line p5r. The intersecting point (q3) is obtained from q1q2 and p1p3. The line q1q2q3 is the Pascal line (blue line in Fig 5). Finally, a point (p6) on the conic is obtained by intersecting p1q2 and q3p5. As the point q2 is moved from r to p5, the conic part p1p3p5 is obtained. Following this process, a conic part is obtained for the corresponding edge segment. The edge segments are fitted with the corresponding conic parts using the Least Square Fitting (LSF) method with a residual of two pixels.
Algorithm 2 getConicPart(p1, p3, p5, t1, t2)
Begin
  Find the intersecting point q1 between t1 and t2
  Find the intersecting point r between t1 and p5p3
  Select a point q2 from p5r
  Find the intersecting point q3 between q1q2 and p1p3
  Find the intersecting point p6 between p1q2 and q3p5
  Move q2 from r to p5; p6 will move from p1 to p5 and the conic part p1p3p5 will be constructed connecting p1, p3 and p5
End

If the connected edge segments fit the conic parts obtained by Pascal's theorem, these conic parts represent curve segments (C) of objects in a video frame (see the green curve in Fig 3e). Otherwise, connected edge segments are represented by the corresponding line segments. The points of F, O and C are assigned values of one, two, and three respectively to distinguish them.
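Algorithm 2 can likewise be transcribed with the same homogeneous-coordinate operations; sweeping q2 along the segment from r to p5 traces out the conic points p6. The number of sweep samples is an arbitrary choice of ours, not a parameter of the method.

```python
import numpy as np

def get_conic_part(p1, p3, p5, t1, t2, n_samples=50):
    """Algorithm 2: sample points of the conic arc p1-p3-p5 (Pascal's theorem).

    p1, p3, p5: 2D points on the arc; t1, t2: homogeneous tangent lines at p1, p5.
    Returns an (n_samples, 2) array of points tracing the conic part.
    """
    h1, h3, h5 = (np.array([p[0], p[1], 1.0]) for p in (p1, p3, p5))
    q1 = np.cross(t1, t2)                        # intersection of the two tangents
    r = np.cross(t1, np.cross(h5, h3))           # t1 meets line p5p3
    r = r / r[2]
    h5n = h5 / h5[2]
    points = []
    for s in np.linspace(0.0, 1.0, n_samples):
        q2 = (1 - s) * r + s * h5n               # move q2 from r towards p5
        q3 = np.cross(np.cross(q1, q2), np.cross(h1, h3))  # q1q2 meets p1p3
        p6 = np.cross(np.cross(h1, q2), np.cross(q3, h5))  # p1q2 meets q3p5
        points.append(p6[:2] / p6[2])
    return np.array(points)
```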
Measure the displacement of geometric primitives
Object activities are indicators of events within a video [2]. Furthermore, human beings pay more attention to dynamic objects than those that are static [4]. Therefore, a new approach is proposed to measure the activities of objects.
To obtain the activities of objects, a pixel-wise comparison is performed between the geometric primitives of the current frame and those of the previous frame. Suppose the current frame (F_n) and the previous frame (F_(n−1)) with geometric primitives are denoted by G_n and G_(n−1) respectively, where n = 2, 3, ..., N (the total number of frames in a video). The pixel values of G_n and G_(n−1) are between zero and three, where zero, one, two and three represent background, F, O and C respectively. The pixel locations of each F, O and C of the current frame are compared with those of the previous frame. Consider a line segment F_n^i (where i = 1, 2, 3, ..., I, the total number of line segments in G_n, and n indicates that it belongs to the nth frame with geometric primitives, G_n) that contains an A×2 array of (row F_n^i(a, 1), column F_n^i(a, 2)) pixel coordinates, where a = 1, 2, 3, ..., A, the total number of pixels in F_n^i. The value of G_n(F_n^i(a, 1), F_n^i(a, 2)) is one, as the pixel value of a line segment is set to one. If a pixel location (F_n^i(a, 1), F_n^i(a, 2)) of F_n^i from G_n is also a pixel of a straight line in the previous frame with geometric primitives G_(n−1), the pixel is considered a stationary pixel; otherwise, it is considered a dynamic pixel. To obtain this information, the pixel value at (F_n^i(a, 1), F_n^i(a, 2)) in G_(n−1) is examined. If the value of G_(n−1)(F_n^i(a, 1), F_n^i(a, 2)) is also one (as pixel value one denotes a straight line), the pixel is regarded as a similar pixel and is assigned the value zero; otherwise, it is considered a dissimilar pixel and assigned the value one. Therefore, the positional dissimilarity D of F_n^i in G_n with respect to G_(n−1) is calculated as

D(a) = 0, if G_(n−1)(F_n^i(a, 1), F_n^i(a, 2)) = 1; D(a) = 1, otherwise,

where a = 1, 2, 3, ..., A, the total number of pixels in F_n^i, and D is an A×1 array as it contains either 0 or 1.
The dissimilarity score E of F_n^i in G_n is measured as the fraction of dissimilar pixels,

E = (1/A) Σ_{a=1}^{A} D(a).

If the dissimilarity score E is greater than a threshold τ, F_n^i is considered a part of a dynamic object. Otherwise, F_n^i is regarded as part of a stationary object and is neglected. Similarly, the dissimilarity score E for all geometric primitives (F, O and C) in G_n with respect to G_(n−1) is measured, and each primitive is categorized as part of a stationary or a dynamic object based on the threshold τ. The line segments (F), angles (O) and conic parts (C) in G_n that belong to dynamic objects are denoted by dF_n, dO_n, and dC_n respectively, where d stands for dynamic objects. The stationary geometric primitives are neglected.
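A compact sketch of this dissimilarity test on label maps is given below; the label encoding (0 background, 1 line, 2 angle, 3 conic) follows the text, while the array layout and function name are our own.

```python
import numpy as np

def is_dynamic(primitive_pixels, g_prev, label, tau=0.85):
    """Classify one geometric primitive as dynamic or stationary.

    primitive_pixels: (A, 2) array of (row, col) pixel coordinates of the
        primitive in the current frame.
    g_prev: label map of the previous frame (0 background, 1 line,
        2 angle, 3 conic part).
    label: label value of this primitive type (1, 2, or 3).
    Returns True if the fraction of dissimilar pixels E exceeds tau.
    """
    rows, cols = primitive_pixels[:, 0], primitive_pixels[:, 1]
    d = (g_prev[rows, cols] != label).astype(float)  # 1 = dissimilar pixel
    e = d.mean()                                      # dissimilarity score E
    return e > tau
```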
The geometric primitives of frames 4720 and 4721 from the bl-3 video of the BL-7F dataset [1] are shown in Figs 7a and 7b. The dissimilar geometric primitives of frame 4721 with respect to frame 4720 are shown in Fig 7c.
Assignment of probability score
In the proposed method, each frame is assigned a probability score to become a key frame. The total lengths of dF_n, dO_n, and dC_n of G_n obtained in the previous step are measured. The probability score W_n of the current frame F_n is assigned as

W_n = L(dF_n) + L(dO_n) + L(dC_n),

where L(·) denotes the total pixel length of the dynamic primitives of each type. The probability scores (W) for all frames of a video are smoothed by applying Savitzky-Golay filtering [47] with window size ω. The main advantage of this filtering is that it enhances local maxima [47]. In Fig 8, the probability scores (W), smoothed probability scores (SW) and ground truth key frames of the office-1 video are shown in light blue, red, and black respectively. Ground truth key frames are multiplied by the maximum of W for better visualization.
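The smoothing step maps directly onto SciPy's Savitzky-Golay filter, as sketched below; the polynomial order is not specified in the text, so the value used here is an assumption.

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_scores(w, window=300, polyorder=3):
    """Smooth per-frame probability scores with a Savitzky-Golay filter.

    The window must be odd and no longer than the signal, so it is adjusted
    if needed; polyorder=3 is an assumed value (the paper only fixes the
    window size omega = 300).
    """
    w = np.asarray(w, dtype=float)
    if window % 2 == 0:
        window += 1
    window = min(window, len(w) if len(w) % 2 == 1 else len(w) - 1)
    return savgol_filter(w, window, polyorder)
```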
Keyframe selection and summary generation
In the final step, SW is sorted in descending order, so that the frame with the most dynamic objects appears at the top of the list and frames with few or no dynamic objects remain at the bottom. As a result, a list of sorted smoothed probability scores (SSW) is obtained.
The introduced approach generates the summarized video based on the skimming ratio (λ). The proposed method enables the user to select the value of λ; otherwise, a default value is used. The method then selects video frames from the top of the SSW list according to λ. From these selected frames, frames containing no dynamic object are removed. Finally, the summarized video is produced from the remaining frames, keeping their sequential order from the original video.
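Putting the selection step together, a short sketch (argument names are illustrative):

```python
import numpy as np

def summarize(SW, skim_ratio=0.20):
    """Return the indices of summary frames in their original temporal order.

    SW         : smoothed probability scores, one per frame.
    skim_ratio : lambda, the fraction of frames to keep (default 20%).
    """
    k = int(round(skim_ratio * len(SW)))
    top = np.argsort(SW)[::-1][:k]  # frames with the most dynamic objects first
    top = top[SW[top] > 0]          # drop frames with no dynamic object
    return np.sort(top)             # restore the sequential order
```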
Results and discussion
The proposed method is evaluated on the publicly available BL-7F [1], Office [48] and Office Lobby [48] datasets, which are considered benchmark datasets for evaluating the performance of video summarization techniques. In the BL-7F dataset, 19 surveillance videos are taken from fixed surveillance cameras located on the seventh floor of the Barry Lam Building at National Taiwan University. Each video lasts 7 minutes 10 seconds and contains 12,900 frames. This dataset also provides a complete list of selected key frames as ground truth for each video. In the Office dataset [48], four videos are collected with stably held but non-fixed cameras; the main difficulties are camera vibration and varying lighting conditions. Similarly, three videos are collected in the Office Lobby dataset [48], also with stably held but non-fixed cameras; however, they contain more crowded scenes with richer activities than the videos in the Office dataset. The ground-truth key frames for both the Office and Office Lobby datasets are also publicly available. No ethics approval is required for this work, as no human subject is involved in any step of it.
In this experiment, the value of the dissimilarity threshold τ was set to 0.85; experimental results revealed that this value successfully identified dynamic geometric primitives. The window size ω for Savitzky-Golay filtering [47] was set to 300; this value effectively highlighted the key frames and suppressed the unnecessary frames (see Fig 8). We apply the Canny edge detector with the default parameters provided in Matlab (https://au.mathworks.com/products/matlab.html), similar to [49], to extract edges from all video frames. Matlab sets the high value of the sensitivity threshold from the highest value of the gradient magnitude of the image and the low value to 0.4× the high value. The skimming ratio λ (user preferred) was set to the skimming ratio of the ground-truth key frames for each video in the datasets [1] [48] plus ten per cent of λ; this value ensured more accurate summarization results. The default value of λ is set to 20% of the total number of frames of a video; this skimming ratio is also consistent with some other existing methods [50] [51]. In Fig 9, the skimming ratio of the ground-truth key frames relative to the total number of frames, and the default skimming ratio (20% of the total video frames), are shown for the BL-7F, Lobby and Office datasets. It is clear from the graph that the default skimming ratio is almost consistent with the ground-truth skimming ratio provided in [1] [48].
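For readers replicating the edge-extraction step outside Matlab, the threshold relationship can be mimicked roughly as below; choosing the high threshold as a fixed fraction of the maximum gradient magnitude (high_frac) is our stand-in for Matlab's automatic selection, not a documented equivalence.

```python
import cv2

def frame_edges(gray, high_frac=0.3):
    """Canny edges with low threshold = 0.4 x high threshold, mirroring the
    Matlab default described above; gray is an 8-bit grayscale frame."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    high = high_frac * cv2.magnitude(gx, gy).max()
    return cv2.Canny(gray, 0.4 * high, high)
```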
An objective evaluation was performed to justify the effectiveness of the proposed method. In this regard, a set of objective evaluation metrics, namely precision, recall and F1-measure, was computed. Precision and recall are defined as

\text{precision} = \frac{t_p}{t_p + f_p}, \qquad \text{recall} = \frac{t_p}{t_p + f_n}

where t_p is the number of key frames selected by the proposed method, f_p is the number of frames selected by the proposed method that are not key frames, and f_n is the number of key frames not selected by the proposed method. However, these metrics alone cannot provide an unbiased measurement of performance: a method with high precision and poor recall, or vice versa, cannot be considered excellent, whereas a method with both high precision and high recall can. To capture this, the F1-measure combines precision and recall as

F1 = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}}

The number of conic parts in a video frame is very low compared to the other geometric primitives, i.e., line segments and angles (corners). Among the conic parts, elliptical segments occur more often than circular, parabolic or hyperbolic segments. Nevertheless, conic parts still play a significant role in detecting dynamic objects and summarizing a video. To identify the role of each geometric feature (line segments, angles and conic parts) in detecting dynamic objects and generating a summary, we compare the results obtained by each individual geometric primitive on the bl-0 video of the BL-7F dataset; the results of this comparison are shown in Fig 10.

The proposed approach is compared with the single-view summarization of the GMM-based method [1], as the proposed method is designed for single-view summarization. The GMM-based method is a recently proposed, state-of-the-art method for surveillance videos, outperforming relevant recent methods [1]; since the proposed method targets surveillance video, the GMM-based method is the natural comparator. The precision, recall and F1-measure of the proposed method and the GMM-based method on the Office dataset are shown in Table 1. In Table 2, the precision, recall and F1-measure for the Lobby dataset [48] obtained by both the GMM-based method (intra-view) and the proposed method are presented. The F1-measures obtained by the proposed method for lobby-0, lobby-1 and lobby-2 are 81.6, 83.8 and 86.0 respectively, whereas the GMM-based method (intra-view) obtains 75.8, 79.0 and 84.0. Therefore, the proposed method outperforms the GMM-based method on the Lobby dataset. In Fig 12, the F1-measures for the three videos of the Lobby dataset [48] obtained by the GMM-based method (intra-view) (F1-GMM) and by the proposed method with the user-provided skimming ratio (F1-Geometric) and the default skimming ratio (F1-DefaultSkimming) are shown.
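The three metrics defined above can be computed directly from the sets of selected and ground-truth key frames, as in this small sketch:

```python
def prf1(selected, ground_truth):
    """Precision, recall and F1-measure from two sets of frame indices."""
    selected, ground_truth = set(selected), set(ground_truth)
    tp = len(selected & ground_truth)  # key frames correctly selected
    fp = len(selected - ground_truth)  # selected frames that are not key frames
    fn = len(ground_truth - selected)  # key frames that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```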
The precision, recall and F1-measure obtained by both the GMM-based method (intra-view) and the proposed method on the BL-7F dataset are provided in Table 3. The F1-measures obtained by the proposed method are higher than those of the GMM-based method for 18 out of 19 videos. In Fig 13, the F1-measures obtained by the GMM-based method (F1-GMM) and by the proposed method with the user-provided skimming ratio (F1-Geometric) and the default skimming ratio (F1-DefaultSkimming) for the 19 videos of the BL-7F dataset are presented. From this graph, it is apparent that the proposed method obtains slightly better results than the GMM-based method for four videos (bl-2, bl-14, bl-15 and bl-17), noticeably higher F1-measures for six videos (bl-0, bl-4, bl-7, bl-8, bl-9 and bl-11), and superior performance for the remaining eight videos.

The reasons for the failure of the proposed method to perform better on the bl-12 video of the BL-7F dataset have been examined. In bl-12, some ground-truth key frames do not contain any dynamic objects or object activities, while some frames are not selected as ground-truth key frames although they contain significant dynamic objects or object activities, as provided in [1]. For example, frames 4083, 4120 and 4563 show a person working near the door, yet these frames are not selected as ground-truth key frames. On the other hand, frames 12615, 12675 and 12750 do not contain any object activities, yet they are selected as ground-truth key frames. No explanation for this inconsistency is given in [1].
After evaluating the proposed method both quantitatively and qualitatively, it is clear that the proposed method based on geometric primitives performs better than the GMM-based method (intra-view) [1]. The main reason for this success is that the proposed method uses geometric primitives (line segments, angles and conic parts) for object detection and applies a dissimilarity measure to capture the degree of object activity. In contrast, the GMM-based method ranks video frames based on the size of the foreground objects in the intra-view stage [1] and does not account for multi-modal phenomena such as illumination change, variation of local motion, or occlusion. Therefore, the proposed method performs better than the GMM-based method.
Conclusion
In this paper, an innovative approach is proposed to summarize video using geometric primitives, namely line segments, angles and conic parts. Existing video summarization methods fail to detect dynamic objects in low-contrast regions, whereas edges remain prominent there, and geometric primitives (such as lines and arcs) are higher-level and more distinguishable descriptors of objects than raw edges. Existing object detection methods fit circular or elliptical arcs, or entire circles or ellipses, to object segments; however, elliptical arcs do not fit circular curves accurately, or vice versa. Therefore, a general conic part is used for fitting curve segments. To measure the activities of objects, a new cost function is proposed that computes the displacement of geometric primitives between two successive frames. Experimental results have shown that the proposed summarization method using geometric primitives outperforms the recent state-of-the-art method. The proposed method performs very well in the case of a stationary camera; in the future, we will consider moving cameras for video summarization. | 2017-11-28T19:48:21.265Z | 2017-11-09T00:00:00.000 | {
"year": 2017,
"sha1": "892b2769573ab84b0d2fcdc221b9e0039438c5ae",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0181636&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ecde7403bb09ac5ed67c3977eb8e7537d7285737",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
272444227 | pes2o/s2orc | v3-fos-license | Clinical decision making around commercial use of gene and genetic therapies for spinal muscular atrophy
Spinal muscular atrophy is no longer a leading cause of inherited infant death in the United States. Since 2016, three genetic therapies have been approved for the treatment of spinal muscular atrophy. Each therapy has been well studied, with robust data for both safety and efficacy. However, there are no head-to-head comparator studies to inform clinical decision making; thus, treatment selection, timing, and combination therapy are largely up to clinician preference and insurance policies. As the natural history of spinal muscular atrophy continues to change, more data are needed to assist in evidence-based and cost-effective clinical decision making.
Genetics of 5q SMA
Although the initial clinical descriptions of spinal muscular atrophy were in the late 1890s, the location of the gene responsible for spinal muscular atrophy was not narrowed to the long arm of chromosome 5q until the 1990s [1][2][3][4]. Quickly thereafter, in 1995, the survival motor neuron gene was identified [5]. There is an inverted duplication with 2 nearly identical copies of the gene; one produces full-length functional transcripts (SMN1) and the other produces significantly reduced numbers of functional transcripts (SMN2). The homozygous loss-of-function nature of the disease, along with the presence of a "backup" gene producing some full-length transcript, was promising for therapeutic development. The first therapeutic trials attacking the genetic mechanism of SMA began in the fall of 2011 (NCT01494701), and many additional trials quickly followed.
Historical natural history of SMA
The incidence of SMA is ~1:14,000 in the United States but varies around the world from 1:3,900 to 1:16,000 [6][7][8][9][10]. Historically, individuals with SMA were classified based on clinical features and motor function skills achieved (Table 1). Although there is some variability between individuals with the same number of SMN2 copies, the generalization that an increasing number of SMN2 copies results in a milder phenotype classification largely holds true [11,12]. There are two clear modifiers that result in a milder disease phenotype, and the c.859C > G variant in SMN2 has been an exclusion criterion in some trials [13,14].
In anticipation of the development of disease-modifying therapies, several natural history studies were completed in all subtypes of SMA beginning in the early 1990s [15][16][17][18][19]. Two natural history studies of children with SMA type 1 were critical to initial drug approvals [20,21]. Without treatment, these children never achieve the ability to sit, develop feeding and respiratory failure in infancy, and have a significantly shortened life expectancy [21]. A retrospective study confirmed historical reports, with a median age of death or of requiring 16 h per day of ventilatory support of 13.5 months [20]. The second study was a prospective study that found the median age of death or permanent ventilation to be 8 months [21].
An additional natural history study of individuals with SMA type 2 or 3 was critical for subsequent drug approvals [22]. Individuals with type 2 SMA have symptom onset between 6 and 18 months of age. These children achieve the ability to sit but never walk. They experience a slow progressive decline, with some periods of plateau in function [19]. Lifespan for children with SMA type 2 is shortened due to weakness of the respiratory muscles and progressive scoliosis. Wijngaarde et al. reported endpoint-free survival probabilities of 74.2% at 40 years and 61.5% at 60 years [11]. In SMA type 3, symptoms begin after 18 months of age, with all individuals achieving the ability to walk and quite a range in the age at which loss of ambulation occurs [23][24][25]. As is seen in SMA type 2, there can be periods of stability. Lifespan is typically normal due to limited respiratory muscle weakness. The mildest form of SMA is type 4; these individuals do not experience any symptoms until adulthood, when proximal muscle weakness develops, and have a normal lifespan [26][27][28].
The most severe type of SMA is also the least common. SMA type 0 has significant symptom onset in utero, and these infants are severely symptomatic upon delivery. They require immediate respiratory support, and death often occurs within the first few months of life without significant interventions [29].
Nusinersen
Nusinersen, an antisense oligonucleotide delivered intrathecally (Table 2), was the first genetic therapy in clinical trial, beginning in the fall of 2011. Nusinersen works by promoting inclusion of exon 7 in the SMN2 transcript to increase full-length SMN2 mRNA and, subsequently, functional SMN protein [30]. The initial phase 1 trial to establish dose and safety was conducted in individuals 2-14 years of age with SMA types 2 and 3 [31]. Several studies have since been completed in both symptomatic and presymptomatic individuals with SMA types 1, 2 and 3, clearly showing efficacy and continued safety (Table 3) [32][33][34][35]. Nusinersen was approved in December 2016 in the US for all individuals with SMA. Approvals in Europe and Canada followed shortly in 2017, and it is now approved in over 60 countries [36,37]. Nusinersen is the most thoroughly studied therapy, with approximately 12,000 individuals dosed to date, and side effects are largely related to complications from the lumbar punctures. After approval, there were reports of increased intracranial pressure, with some individuals requiring shunts, but these complications are no longer being reported [38]. Additional rare complications include proteinuria, thrombocytopenia, and coagulation abnormalities. Studies in adults remain observational [39][40][41][42]. Currently, the dose is 12 mg in all individuals, and dose escalation studies are underway [43].
Onasemnogene abeparvovec
The first clinical trial for onasemnogene abeparvovec (OA) was initiated in 2014, with 15 infants enrolled [44]. OA uses an adeno-associated virus (AAV9) delivery system to introduce non-integrating SMN1 cDNA and is referred to as gene replacement therapy. The initial study was a dose-finding and safety study, but clear functional benefit was seen in all 12 infants in the high-dose cohort: one achieved stability in motor function, and the other 11 had significant improvements in function. Additional completed trials in early symptomatic and presymptomatic children confirmed significant and sustained improvements in function [45][46][47][48]. OA was approved in May 2019 in the US for individuals with SMA under 2 years of age and without end-stage disease. Approval in Europe was obtained in June 2020 for individuals with a clinical diagnosis of SMA type 1 or with up to 3 copies of SMN2 [49]. OA is now approved in 51 countries, with approval definitions varying by country [50]. The most common complications seen with administration of OA are the temporary effects of daily prednisone use, elevations in AST, ALT and occasionally GGT as a result of the AAV9 effect on the liver, nausea, vomiting, thrombocytopenia, and asymptomatic troponin I elevations [51]. Rare complications include liver failure and death, which seem to be related to abnormal liver function prior to treatment, development of severe illness around the time of treatment, atypical hemolytic uremic syndrome, or thrombotic microangiopathy [52,53]. There have also been two reports of cancer in children many months post dosing [54]. Clinical trials assessing intrathecal administration, which would allow dosing of larger and older individuals, were briefly halted after concerns arose over dorsal root ganglia toxicity [55,56], but are again underway (NCT05089656, NCT05386680) [57].
Risdiplam
Clinical trials for risdiplam began in 2016. Risdiplam, with a mechanism of action similar to nusinersen, is a small molecule that can be taken orally. It is also an SMN2 splicing modifier that works by increasing inclusion of exon 7 in SMN2 [58]. Risdiplam has been studied in infants and adults, with a good safety profile and significant improvements in function [59][60][61][62]. Risdiplam was initially approved in the US for any individual with SMA over 2 months of age in 2020, but this was revised in 2022 to include individuals with SMA of all ages. Initial approval in Europe was in 2021, and approval was expanded to all ages in 2023 [63]. Risdiplam is now approved in over 100 countries [64]. Risdiplam dosing is age- and weight-based until 20 kg, after which all individuals receive the same dose. Risdiplam has not had any significant safety concerns in humans [65,66]. Drug-drug interactions with MATE transporters exist, so all new medication additions need to be reviewed for potential interactions.
Newborn screening
SMA was added to the Recommended Uniform Screening Panel in the US in July 2018. Missouri, Ohio, Utah and New York were among the first states to implement screening [67]. As of January 2024, all 50 states and Washington DC are screening for SMA. Each state chooses how to implement the program, so some states screen only for SMN1 homozygous deletions, while others also screen for SMN2 copy number at the time of birth. Newborn screening programs are also underway in several countries in Europe, as well as Australia, Canada, Japan, Qatar, and Taiwan. With the approval of three disease-transformative therapies between 2016 and 2020 and the implementation of newborn screening beginning in 2018, the natural history of spinal muscular atrophy has dramatically changed. Earlier treatment has led to improved functional outcomes for all and remarkable outcomes in most children with SMA. However, a subset of children with 2 copies of SMN2 still experience significant symptoms despite functional gains [68][69][70].
Clinical decision making in the SMA treatment era
A new era in spinal muscular atrophy has arrived, and clinical decision making can be quite complex. Unfortunately, there are no head-to-head trials comparing the three available treatments, and each clinical trial population studied was slightly different from the others, making even indirect comparisons difficult (Table 3) [71]. In an ideal scenario, the clinician would be able to evaluate the evidence and determine which treatment option is likely to be most efficacious for each individual. However, in the setting of limited evidence, clinicians often rely on their prior clinical experiences and comfort level with these options. Further complicating the decision making is the significant role that insurance coverage can play in treatment availability for individuals.
Safety screening
Two of the three approved medications have significant safety concerns. To mitigate potential complications and determine eligibility, screening laboratories are required and are an important consideration in clinical decision making. For nusinersen, an antisense oligonucleotide, there is a risk of thrombocytopenia and kidney toxicity, so normal platelets and kidney function are required. Additionally, due to intrathecal administration, a normal coagulation profile may also be required [72]. Onasemnogene abeparvovec is an AAV9-mediated gene replacement therapy that delivers a large viral load, which may trigger both complement-mediated and T cell-mediated immune responses [73]. If AAV9 antibodies are present, it is not safe to proceed with therapy. If the individual is a newborn, the AAV9 antibodies may be maternal, and retesting in 4-6 weeks may yield a negative result and eligibility for therapy [74]. Retesting may also be considered in older individuals. There are also significant risks of thrombocytopenia, kidney injury and liver injury post dosing, so normal platelets and normal kidney and liver function are required prior to administration. If any of these abnormalities are present, risdiplam may be the only safe treatment option.
Infants identified via newborn screening
When an infant arrives at the neuromuscular clinic, typically within the first week of life, the visit may vary depending on parent/caregiver comfort and desired information. In all scenarios, a thorough history and exam are completed, and then a lengthy discussion of spinal muscular atrophy begins in reader-friendly plain language. Most parents/caregivers wish to discuss all available treatment options, including the known risks and benefits of each. However, some parents/caregivers have difficulty accepting the diagnosis and wish to await confirmatory testing before fully exploring treatment scenarios; in these cases, a follow-up visit is scheduled on the day confirmatory results return. Typically, parents/caregivers are open to completing the needed screening laboratory testing at the initial visit to avoid additional blood draws and to minimize delays in treatment initiation if the SMA diagnosis is confirmed. In some scenarios when copy number is known, clinicians and parents/caregivers choose to initiate risdiplam, utilizing a free drug program, for all infants with 2 copies of SMN2. If copy number is not available at the initial visit, then, due to ease of administration and minimal side effects, the risdiplam free drug program is being increasingly utilized to ensure that the best outcomes can be obtained. If the child's confirmatory testing returns negative, the drug is discontinued. Regardless of the SMN2 copy number results when the diagnosis is confirmed, many parents will continue risdiplam until the free supply is depleted. The ultimate goal is to begin treatment as soon as possible, on the day of the first visit if possible.
Treatment selection is variable and is driven first by the available products, considering the infant's SMN2 copy number and country, followed by any safety concerns arising during the screening process. Finally, treatment selection may be impacted by insurance coverage. With the goal of immediate treatment initiation, risdiplam is typically the first treatment, and the infant is then switched over to onasemnogene abeparvovec once insurance approval is obtained. Infants are followed monthly to every three months in the first year of life, and most do well. All infants with 3 or more copies of SMN2 and ~50% of infants with 2 copies of SMN2 meet all motor milestones [68,70]. For those infants who have 2 copies and are suboptimal responders, the resumption or addition of risdiplam may provide benefit. Based on the knowledge that SMN protein levels are highest in utero and decrease significantly in the first few months of life [75], early dual or combination therapy may be most beneficial. This is currently under active investigation (NCT05861986, NCT05861999). In the clinic setting, if gross motor development becomes delayed, risdiplam may be resumed.
Treatment initiation for infants identified with 4 copies of SMN2 remains controversial. A consensus statement developed before the approval of onasemnogene abeparvovec or risdiplam advocated for treatment of infants with 2 or 3 copies of SMN2, but consensus could not be achieved for infants with 4 copies of SMN2 [76]. However, this was updated in 2020 to recommend immediate treatment of infants with 4 copies [77]. This recommendation was based on data clearly showing that early treatment results in better outcomes and that significant motor neuron loss occurs before functional changes are seen. The recommendation remains to monitor those with 5 copies of SMN2. In the United States, infants with 4 copies of SMN2 are eligible for all approved therapies based on FDA approval definitions, although some insurance policies do exclude individuals with 4 copies. In Europe and other countries, drug approvals vary, and 4 copies may be excluded, most often for onasemnogene abeparvovec. Given that infants with 4 copies are much less common, data gathered from registries may be the only way to support this recommendation in the current absence of strong biomarkers.
Treatment of an infant with 1 copy of SMN2 has been undertaken, with some improvement but significant residual weakness and a continued need for nutritional and respiratory support [78]. Due to the severity of weakness after treatment with two therapies, review with an ethics panel is recommended before making any treatment decisions in these infants. Additionally, it is unlikely that treatment will be covered by insurance companies, as most have stipulations regarding end-stage disease or severe weakness.
Symptomatic infants and children
Symptomatic infants and children include those born prior to newborn screening implementation, but also the ~5% of infants who will be missed by newborn screening and those born in areas where newborn screening has not yet been implemented. Treatment initiation as soon as possible is essential. At the initial visit, all available treatment options based on the age and/or weight of the child should be discussed, and screening laboratories sent for onasemnogene abeparvovec and nusinersen, if applicable. Because both risdiplam and nusinersen offer free or starter drug programs, these are typically the quickest to initiate. Nusinersen is becoming less popular in the pediatric population due to the need for repeated lumbar punctures and the potential need for sedation. If the child is eligible for onasemnogene abeparvovec and the provider anticipates insurance approval within a week, it may be appropriate to forego starting the child on risdiplam or nusinersen. However, if the provider anticipates any delays, risdiplam or nusinersen should be initiated to preserve as much motor neuron function as possible.
After initial treatment has begun, clinical practices further vary regarding whether and when to initiate an additional agent. This is an area with limited evidence, although there are clinical trials attempting to answer these questions (NCT05861986, NCT05861999, NCT05115110, NCT05156320, NCT04488133). Insurance also has a large impact on the availability of dual or add-on therapy outside of clinical trials. If the child has a decline in function after a period of improvement or stability on a treatment regimen, the addition or change of treatment is reasonable.
Symptomatic adults
This group encompasses adults who were diagnosed as children or as adults but did not have access to therapies until symptomatic in adulthood. For these individuals, risdiplam may be the only option due to prior spinal fusion surgery. Nusinersen has been given via reservoirs or cervical access, but this approach has not been widely used, perhaps due to the availability and ease of use of risdiplam [79][80][81][82]. Additional consideration is given to those with long-standing disease prior to treatment initiation, since earlier treatment is known to result in the best outcomes. However, even in later-stage disease, treatment with nusinersen or risdiplam may provide benefit and thus should be offered, as preservation of residual function can impact quality of life [66,83,84]. For those diagnosed in adulthood with minimal disease progression prior to treatment initiation, risdiplam and nusinersen should be considered. The potential reproductive impact of risdiplam may lead to a preference for nusinersen in some individuals [85].
In each of the above categories, many questions remain unanswered (Table 4). In every group, highly reliable biomarkers would provide data to assist in decision making. At the moment, neurofilament and compound muscle action potentials are under investigation, but neither is used routinely in clinical practice, and it is still not clear whether these markers are sufficiently robust to inform clinical decisions [68,70,[86][87][88][89]. Patient-reported outcomes may be another avenue to monitor disease progression, treatment response, and treatment persistence.
Clinical decision making regarding treatment selection in spinal muscular atrophy is currently a tiered process based on limited comparative evidence. Physicians and parents/caregivers should have discussions regarding the risks and benefits of the available options. Screening laboratories may reduce the available options, and finally, insurance approval or denial may dictate which treatment(s) the individual will receive. Further work to harmonize clinical trial outcome assessments, long-term outcomes and biomarker development will be important to provide more evidence-based and fiscally responsible care.
Table 1
Historical Phenotypic classification of spinal muscular atrophy.
a Without disease modifying therapies or mechanical ventilation.
Table 2
FDA Approved therapies for spinal muscular atrophy.
Table 3
Representative summary of clinical trials in SMA. | 2024-09-08T15:30:18.458Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "5f4180214a1ebac23440dcc75f2f095291537720",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.neurot.2024.e00437",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "56f6d26c5a56615f08dea6a9ac8459493de7b1e1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6999778 | pes2o/s2orc | v3-fos-license | Urinary leakage during sexual intercourse among women with incontinence: Incidence and risk factors
Background Coital incontinence is an under-reported disorder among women with urinary incontinence. Women seldom voluntarily report this condition, and as such, related data remain limited and at times conflicting. Aims and objectives To investigate the incidence of coital incontinence and the quality of life of affected women, and to determine associated predictors. Methods This observational study involved 505 sexually active women attending the urogynecologic clinic of a tertiary medical center for symptomatic urinary incontinence. All of the patients were consulted about the experience of coital incontinence and completed evaluations including urodynamics and valid questionnaires, namely the short form of the Pelvic Organ Prolapse/Urinary Incontinence Sexual Questionnaire, the Urogenital Distress Inventory and the Incontinence Impact Questionnaire. Results Of these women, 281 (56%) had coital incontinence, while 224 (44%) did not. Among women with coital incontinence, 181 (64%) had urodynamic-proven stress incontinence, 29 (10%) had mixed incontinence, and 15 (5%) had detrusor overactivity. Only 25 (9%) sought consultation for this disorder before direct questioning. Fifty percent (141/281) of the women rarely or sometimes had incontinence during coitus, while 33% (92/281) often had incontinence, and 17% (48/281) always had incontinence. The frequency of coital incontinence did not differ by type of incontinence (p = 0.153). Women with mixed incontinence had the worst sexual quality of life and incontinence-related symptom distress. Based on univariate analysis, higher body mass index (OR 2.47, p = 0.027) and lower maximal urethral closure pressure (≤30 cmH2O) (OR 4.56, p = 0.007) were possible predictors of coital incontinence. Multivariate analysis showed that lower MUCP was an independently significant predictor (OR 3.93, p = 0.042). Conclusions The prevalence of coital incontinence among women with urinary incontinence was high. Coital incontinence in these women was associated with abnormal urodynamic diagnosis and urethral dysfunction.
Introduction
Coital incontinence is defined as the "complaint of involuntary loss of urine during coitus" according to the International Urogynecological Association and the International Continence Society in 2010 [1]. Coital incontinence is a common but under-reported symptom that adversely affects sexually active women. A literature review by Serati et al., covering articles from 1970 to 2008, reported that the incidence of coital incontinence ranged between 10 and 27% [2]. However, two recent studies reported a higher prevalence of coital incontinence, of up to 60% [3] and 67% [4]. Although urinary incontinence during coitus may be an embarrassing problem that can lead to reduced sexual desire and reduced ability to achieve orgasm, and may even be harmful to a relationship, this issue is difficult to study [5]. One reason may be that women very seldom voluntarily consult on the issue of coital incontinence unless they are asked directly by physicians or asked to complete related questionnaires [6].
Due to the limited data on coital incontinence, its pathophysiology is not yet well understood. Aside from an unknown pathogenesis, its frequency and impact on quality of life are also unclear. According to the clinical findings of Hilton et al. in 1988, coital incontinence during penetration is more prevalent in women with stress incontinence, while incontinence at orgasm is more common in women with detrusor overactivity [7]. Thus, coital incontinence is generally divided into two forms: incontinence during penetration and incontinence at orgasm. However, Moran et al. in 1999 investigated 228 women with coital incontinence either during penetration or at orgasm. They reported that coital incontinence was more prevalent in patients with urodynamic stress incontinence and not common in detrusor overactivity [6]. Among their patients, 80% had incontinence during penetration, 93% had incontinence at orgasm, and 92% had incontinence for both, indicating that coital incontinence is a common symptom during sexual activity in women with stress incontinence. Thus, Moran et al. proposed urethral dysfunction as the possible cause of coital incontinence [6].
The pathophysiology of coital incontinence has not yet been fully understood [4]. As a result, further investigation into this issue is warranted [8]. The present study aimed to evaluate the incidence, frequency, and risk factors of coital incontinence among women with urinary incontinence. The urethral function and sexual quality of life of those with coital incontinence were also investigated.
Materials and methods
All sexually active women with urinary incontinence attending the out-patient Urologic Clinic of a tertiary medical center were recruited using convenience sampling and consecutively interviewed about their experience of coital incontinence from April 2014 to March 2015. The clinical evaluation included medical history, physical examination, and urine analysis. Face-to-face interviews were conducted by a research nurse in a quiet conference room beside the clinic. The women were asked questions regarding their experiences of urinary incontinence during intercourse (either urine leakage during penetration or at orgasm). The frequency of coital incontinence was evaluated on a 5-point Likert scale (Never, Rarely, Sometimes, Often, and Always). Women who never had coital incontinence were recruited as a comparison group during the interview. Patients underwent urodynamic measurements, pelvic examination for staging of prolapse according to the pelvic organ prolapse quantification (POP-Q) system [9], and valid questionnaires to evaluate their quality of life, including the short form of the Pelvic Organ Prolapse/Urinary Incontinence Sexual Questionnaire (PISQ-12) [10], the Urogenital Distress Inventory (UDI-6) and the Incontinence Impact Questionnaire (IIQ-7) [11]. Patients were excluded if they did not complete all of the evaluations or if they had urinary tract infection, any major medical condition (any chronic illness, such as poorly controlled diabetes, cardiovascular disease, cancer, end-stage renal failure, or neurological disease), stage 2 or greater prolapse, or psychiatric disease that might influence the urodynamic measurements or questionnaire scoring. All participants provided verbal informed consent to participate in this study. The verbal consent covered all elements of this study, and each participant verbally agreed to participate. The Institutional Research Board of Mackay Memorial Hospital approved this study (approval number 15MMHIS080e).
The PISQ-12, which was used to assess sexual function in women with pelvic organ prolapse and/or incontinence, includes 3 domains: behavioral-emotive (items 1-4), physical (items 5-9) and partner-related (items 10-12). The response to each item was scored from 0 to 4, with a total score of 0-48; a higher score indicated better sexual function [10]. The UDI-6 and IIQ-7 were designed to assess symptoms and quality of life related to urinary incontinence. The UDI-6 is composed of 6 items covering 3 subscales: irritative, discomfort/obstructive, and stress symptoms; each item was scored from 0 to 4. The IIQ-7, which was designed to evaluate incontinence-related quality-of-life impairment, is composed of 7 items covering 4 domains: relationships, travel, emotional health, and physical activity; each item was also scored from 0 to 4. For the UDI-6 and IIQ-7, a higher score indicated worse symptoms and quality of life [11]. Urodynamic studies (UD 2000, Medical Measurement System, Enschede, Netherlands) included spontaneous uroflowmetry, filling and voiding cystometry, and urethral pressure profile study. All urodynamic assessments were performed using standard procedures as described previously [12,13]. The presence of urodynamic stress incontinence (USI) and detrusor overactivity (DO) was recorded. The terminology used in this paper conforms to the standardization of terminology for female pelvic floor disorders from the International Urogynecological Association/International Continence Society joint report [1].
Statistical analysis was performed using one-way analysis of variance (ANOVA), the Mann-Whitney U test, or the independent t-test for continuous variables, and the chi-square or Fisher's exact test for categorical variables, as appropriate. Univariate analysis was used to assess the association of potential predictive factors with coital incontinence, and significant variables were entered into a multivariate analysis performed using logistic regression. Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS) 17.0 for Windows (SPSS, Chicago, IL, USA). Differences were considered significant at p<0.05.
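Purely as an illustration of the two-stage analysis described above (a univariate screen followed by multivariate logistic regression), a Python sketch with statsmodels/SciPy standing in for SPSS is given below; the chi-square screen and the column names are our assumptions, not the study's actual variables or test choices.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def screen_then_model(df, outcome, candidates, alpha=0.05):
    """Univariate chi-square screen, then multivariate logistic regression
    on the variables that pass; returns odds ratios and p-values."""
    keep = []
    for var in candidates:
        _, p, _, _ = stats.chi2_contingency(pd.crosstab(df[var], df[outcome]))
        if p < alpha:
            keep.append(var)
    X = sm.add_constant(df[keep].astype(float))
    fit = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
    return np.exp(fit.params), fit.pvalues  # exponentiated coefficients = ORs
```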
Results

Of the women willing to join the interview, 505 completed the quality of life assessments and urodynamic measurements. A total of 281 (56%) women answered affirmatively having experienced coital incontinence, and 224 (44%) did not. Only 25 (9%) patients voluntarily reported this condition. Based on the demographic characteristics of patients with and without coital incontinence (Table 1), women with coital incontinence tended to be multiparous (p = 0.042) and to have a higher body mass index (p = 0.027), fewer normal urodynamic findings (p = 0.041) and a lower maximal urethral closure pressure (≤30 cmH2O) (p = 0.001). Of the patients with coital incontinence, 64% (181/281) had urodynamic stress incontinence, 10% (29/281) had mixed incontinence, and 5% (15/281) had detrusor overactivity, showing that coital incontinence was prevalent in women with stress incontinence (Table 1). There was no significant difference regarding the types of incontinence between women with and without coital incontinence.
Women with mixed incontinence had the worst sexual quality of life (p = 0.001) and incontinence-related symptom distress (p = 0.014) (Table 2).
Regarding the frequency of coital incontinence, 50% (141/281) of the women reported rarely or sometimes having incontinence during coitus, 33% (92/281) often, and 17% (48/281) always, indicating that half of the women had coital incontinence more frequently than sometimes. The frequency did not differ significantly by type of incontinence (p = 0.153) (Table 3). According to univariate analysis, higher body mass index (OR 2.47, p = 0.027) and lower maximal urethral closure pressure (≤30 cmH2O) (OR 4.56, p = 0.007) were possible indicators of coital incontinence. After multivariate logistic regression analysis, maximal urethral closure pressure ≤30 cmH2O was identified as an independent risk factor for coital incontinence (OR 3.93, p = 0.042) (Table 4).
Discussion
The present study revealed that very few patients voluntarily consulted on coital incontinence, although it was not an uncommon symptom in women with urinary incontinence. Consistent with the clinical observation of El-Azab et al. that coital incontinence was negatively correlated with abdominal leak point pressure (urethral competence) [3], we also noted that a maximal urethral closure pressure ≤30 cmH2O was associated with coital incontinence, indicating that urethral function plays an important role in maintaining continence during coitus. El-Azab et al. tried to determine the indicators of coital incontinence by assessing urodynamic measurements and anatomic anomalies using magnetic resonance imaging [3]. Similarly, they noted that the majority of their patients (89%) had stress incontinence. Coital incontinence was correlated with the severity of stress incontinence and urethral incompetence, and no specific anatomical anomaly was discovered by magnetic resonance imaging. They thus concluded that coital incontinence is almost invariably a symptom of stress incontinence with urethral sphincter incompetence [3]. Similar results were also reported in the studies of Madhu et al. [5] and Pastor [14]. Madhu et al. conducted a retrospective study analyzing 1391 patients who had coital incontinence and underwent urodynamic examination from 1991 to 2009; they noted that urodynamic-proven stress incontinence was significantly associated with coital incontinence [5]. Pastor conducted a review of women who expelled fluids during sexual arousal and at orgasm, and also proposed that coital incontinence is a pathological sign caused by urethral disorder [14]. Moreover, El-Azab et al. reported no difference in the amplitude of detrusor contraction pressure between detrusor overactivity women with and without coital incontinence. They speculated that coital incontinence at orgasm not responding well to anticholinergics is due to urethral incompetence rather than a severe refractory form of detrusor overactivity [3]. According to our data, the amplitude of detrusor contraction pressure was likewise not different between women with and without coital incontinence. However, due to the limited number of detrusor overactivity subjects, and because the timing of coital incontinence (at penetration or at orgasm) was unknown, there is insufficient evidence to explain the pathophysiology of coital incontinence in detrusor overactivity subjects. Coital incontinence at orgasm in women with detrusor overactivity may be associated with a more complex pathophysiologic mechanism that combines neural transduction, urethral function, and detrusor activity.
This study showed that the incidence of coital incontinence was up to 56% and that it was prevalent in women with stress incontinence. Possible reasons for the high incidence here, compared with the 10-27% range in the review by Serati et al. [2], include differences in methodology and in the frequency of coital incontinence. Patients who attend a clinic theoretically have more severe symptoms, are willing to consult on this embarrassing symptom, and are willing to undergo urodynamic measurements. Moreover, stress incontinence is the most common clinical symptom and indication for urodynamics [4]. Given the nature of this condition, it is no wonder that the majority of these patients had stress incontinence. This study evaluated the frequency of coital incontinence among women with incontinence. Based on our data, half of the patients reported coital incontinence rarely or sometimes, 33% (92/281) often, and 17% (48/281) always. Patients who only rarely or sometimes experience such symptoms may not be willing to discuss them with physicians because they are not a "frequent" problem. If these patients are not counted, the prevalence of coital incontinence was 28% (140/505), which is similar to the data of previously published studies.
Another interesting finding was that women with coital incontinence were more prone to have an abnormal urodynamic diagnosis, which echoes the findings of Jha et al., who reported that normal urodynamics were less likely in women with coital incontinence [4]. This may indicate that coital incontinence is a specific symptom suggesting abnormal urodynamic findings and deteriorated urethral function. Some studies have reviewed the influence of different types of incontinence on female sexual function and reported conflicting data [15,16]. Using a valid questionnaire, Urwitz-Lane et al. reported that sexual function was not altered by the different types of incontinence [15]. In contrast, Coksuer et al. reported that stress incontinence affected sexual function more than detrusor overactivity [16]. In the present study, the frequency of coital incontinence was not significantly different between the types of incontinence; however, patients with mixed incontinence had the worst sexual function and quality of life. Theoretically, stress incontinence and detrusor overactivity arise from different pathophysiologic processes: stress incontinence is associated with bladder neck hypermobility and urethral incompetence, while detrusor overactivity is associated with detrusor muscle instability [4]. Since mixed incontinence is a combination of both, this may explain why such patients have the worst sexual function.
This study has a number of limitations that should be noted. Data on when coital incontinence occurred, either during penetration or at orgasm, were not obtained. Not performing a semi-structured interview was also a limitation. The predominance of stress incontinence patients may also have introduced some bias into the analysis of risk factors. The treatment outcomes of coital incontinence, by either medication or surgery, were not followed up; this was because this was an observational study and some of the women recruited from the clinic were unwilling to undergo further treatment. As a result, the choice of treatment remains unclear. The merit of this study is the evaluation of the frequency of coital incontinence and quality of life in a large sample using valid questionnaires and urodynamic measurements.
Conclusions
The prevalence of coital incontinence among women with urinary incontinence was high. Coital incontinence in these women was associated with abnormal urodynamic diagnosis and urethral dysfunction. | 2018-04-03T04:11:23.374Z | 2017-05-24T00:00:00.000 | {
"year": 2017,
"sha1": "5c2bee0835cccd82df9b0f5a5eba54b4365e3914",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0177075&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5c2bee0835cccd82df9b0f5a5eba54b4365e3914",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17770638 | pes2o/s2orc | v3-fos-license | Pneumocystis jirovecii dihydropteroate synthase gene mutations in a group of HIV-negative immunocompromised patients with Pneumocystis pneumonia
The purpose of this study was to investigate dihydropteroate synthase (DHPS) mutations and their clinical context in non-HIV-infected patients with Pneumocystis pneumonia (PCP). DHPS genes in respiratory samples collected from HIV-negative patients with PCP presenting between January 2008 and April 2011 were amplified by polymerase chain reaction (PCR) and sequenced. Basic clinical data from the medical records of the patients were also reviewed. The most common point mutations, which result in Thr55Ala and Pro57Ser amino acid substitutions, were not detected in the Pneumocystis jirovecii sampled from the HIV-negative patients. Two other point mutations, resulting in the nonsynonymous substitutions Asp90Asn and Glu98Lys, were identified in P. jirovecii from two patients. Among the patients, the levels of lactate dehydrogenase (LDH), C-reactive protein (CRP) and plasma (1–3) β-D-glucan were elevated in 75, 92.31 and 42.86% of patients, respectively. The percentage of circulating lymphocytes was significantly lower in non-survivors than in survivors [4.2%, interquartile range (IQR) 2.4-5.85 versus 10.1%, IQR 5.65-23.4; P=0.019]. The neutrophil proportion in bronchoalveolar lavage fluid (BALF) was significantly higher in non-survivors than in survivors (49.78±27.67 versus 21.33±15.03%; P=0.047). Thirteen patients had received adjunctive corticosteroids (1 mg/kg/day prednisone equivalent), and nine (69.23%) of them eventually experienced treatment failure. No common DHPS gene mutations of P. jirovecii were detected in the HIV-negative PCP patients. However, other mutations did exist, the significance of which remains to be further identified. The elevation of neutrophil counts in BALF and the reduction in the number of lymphocytes in peripheral blood may be associated with poor outcome. The efficacy of adjunctive steroid therapy in HIV-negative patients with P. jirovecii infection requires further investigation.
were also reviewed to determine the effect of mutation of the Pneumocystis DHPS gene on clinical outcome in HIV-negative patients.
Methods
Patients. In this retrospective study, 22 HIV-negative patients with PCP, confirmed by Gomori methenamine silver (GMS) staining of respiratory samples, were included. The patients attended Peking University First Hospital, a 1,368-bed teaching hospital in Beijing, China, between January 2008 and April 2011. The present study was approved by the Institutional Review Board of Peking University and was performed in accordance with the recommendations of the Helsinki Declaration of 1975. Written informed consent was obtained from all patients.
Materials.
A total of 24 respiratory samples, comprising 22 bronchoalveolar lavage fluid (BALF) samples and two sputum samples, were obtained from the 22 HIV-negative patients with confirmed PCP. The patients' medical records were also reviewed and the outcomes were followed up.
Sample processing and DNA extraction. All samples were first liquefied with dithiothreitol (DTT), then filtered through a nylon mesh and processed by centrifugation. Part of each sample was stained and demonstrated to be GMS positive, and the remainder was stored at -20˚C. DNA extraction was performed using the E.Z.N.A. Blood DNA kit (Omega Bio-Tek Inc., Norcross, GA, USA).
Polymerase chain reaction (PCR). PCR was performed to amplify the DHPS gene of Pneumocystis, using the previously reported primers DHPS-3 and DHPS-4 (8). The reaction mixture contained 2.5 µl DNA template, 12.5 µl Taq PCR Master Mix (Qiagen, Valencia, CA, USA), 1.5 µl (10 µM) forward primer, 1.5 µl (10 µM) reverse primer and sterile water, in a total volume of 25 µl. The PCR yielded a 370-bp fragment. A hot-start step at 94˚C for 5 min was followed by 45 cycles of DNA denaturation at 94˚C for 30 sec, annealing at 61˚C for 30 sec and extension at 72˚C for 30 sec, followed by a final extension step at 72˚C for 5 min. The amplification products were analyzed by electrophoresis on a 1.5% agarose gel containing ethidium bromide, and the bands were visualized with ultraviolet light. To prevent contamination, all PCR procedures were performed with a negative control of sterile water.
Statistical analysis. Clinical information, including laboratory results, therapy and outcomes, was collected from the medical records. SPSS software, version 17.0 (SPSS, Inc., Chicago, IL, USA) was used to analyze the data; the Student's t-test was used to assess differences in continuous data with a Gaussian distribution, while the Mann-Whitney U-test was used for non-normally distributed data. Proportions between groups were compared using the χ2 test. A P-value of <0.05 was considered to indicate a statistically significant result.
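As an illustration of the test-selection rule described above, a small SciPy sketch follows; using the Shapiro-Wilk test as the normality check is our assumption, since the paper does not name one.

```python
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Student's t-test when both samples pass a normality check,
    otherwise the Mann-Whitney U test, as in the analysis above."""
    _, pa = stats.shapiro(a)
    _, pb = stats.shapiro(b)
    if pa > alpha and pb > alpha:
        return stats.ttest_ind(a, b)  # Gaussian-distributed data
    return stats.mannwhitneyu(a, b, alternative="two-sided")
```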
Results
Demographic and clinical characteristics. Twenty-two bronchoalveolar lavage specimens and two sputum samples were obtained from the 22 HIV-negative patients with confirmed PCP between January 2008 and April 2011. DHPS gene fragments spanning the most frequently reported mutation sites were successfully amplified from 21 of the 24 samples (87.5%), corresponding to 20 patients. Among these 20 patients, two were outpatients whose medical records could not be retrieved. The demographic characteristics and underlying diseases of the other 18 patients are listed in Table I. All patients had received immunosuppressive agents, but none had ever received prophylaxis against PCP. One patient (5.6%) had experienced a prior episode of PCP. Table II shows that lactate dehydrogenase (LDH) levels were above the upper limit of normal in 9/12 patients (75%), β-D-glucan levels were elevated in 10/14 patients (71.4%), and the CD4+ lymphocyte count was <200/µl in 9/10 patients (90%).
The 28-day mortality rate was 50%. Table III shows the prognostic factors identified as associated with mortality by univariate analysis: peripheral neutrophils (P=0.003), peripheral lymphocytes (P=0.019) and neutrophils in BALF (P=0.047). The treatments and related mortality are summarized in Table IV. Caspofungin therapy was administered to eight patients (50%), and 75% of them failed to survive. Thirteen patients had received adjunctive corticosteroids (1 mg/kg/day prednisone equivalent), and nine (69.23%) of them succumbed.
DHPS gene mutations.
No mutations in the DHPS gene were detected in the sequenced amplicons at codons 55 or 57. All samples had the wild-type genotype, with the nucleotide sequence ACA CGG CCT at codons 55, 56 and 57, respectively, corresponding to threonine and proline at positions 55 and 57. However, mutations at two relatively rare positions were identified. One mutation was observed at DHPS codon 98 in two patients with PCP, with glutamate replaced by lysine. The other was at DHPS codon 90 in a sample obtained from one of these two patients, with aspartate replaced by asparagine. The patient with only the codon 98 mutation, a 51-year-old female with dermatomyositis, developed PCP during adjustment of her corticosteroid dose and ultimately succumbed despite active treatment with TMP-SMZ. The patient with both mutations was a 31-year-old male kidney transplant recipient. His manifestations were mild, with a PaO2 of 83.29 mmHg on ambient air. His mycophenolate mofetil dosage was halved immediately following the diagnosis of PCP, while the FK506 dosage remained unchanged. This patient declined TMP-SMZ treatment and self-discharged. He was followed up by telephone, and it was reported that his condition had improved.
Discussion
The present study is, to the best of our knowledge, the first to investigate DHPS gene mutations in HIV-negative PCP patients in China. The results revealed that the most common point mutations, which result in Thr55Ala and Pro57Ser amino acid substitutions, were not detected in the P. jirovecii isolates from any of the samples from 20 non-HIV patients with confirmed PCP. This result is consistent with that of previous studies in HIV-positive patients (6,7), and suggests that the prevalence of DHPS gene mutations remains low in China.

A number of studies have found that the prevalence of DHPS gene mutations in developed countries is relatively higher than that in developing countries, with the highest prevalence being 72% in the USA (6,9). It is hypothesized that the relatively lower prevalence in developing countries, for example, 6.2% in India (10) and 56% in Africa (11), may be due to the reduced use of sulfa prophylaxis. It has been reported that even short-term exposure to TMP-SMZ can be associated with the emergence of resistance (12). According to the literature, the major antipneumocystic activity of this agent for PCP is derived from sulfamethoxazole (13), and the trimethoprim component is a very poor inhibitor of P. jirovecii DHFR (dihydrofolate reductase) (14). A number of studies have documented an association between the failure of sulfa prophylaxis and the occurrence of mutations in the P. jirovecii gene coding for DHPS, especially at codons 55 and 57, either as single mutations or combined (15,16). One study reported that DHPS mutations were significantly more common in patients who had previous exposure to sulfa drugs (18/29, 62%) than in those who had no exposure (13/123, 10.5%; P<0.0001) (5). In another study, which included 158 patients from five hospitals in France, multivariate analysis of risk factors for DHPS gene mutation revealed that sulfa prophylaxis is among the independent risk factors (adjusted odds ratio, 26.04; P<0.001) (5). It has also been suggested that the geographic area of residence, which may reflect the circulating resistant strains, is an independent predictor of the harboring of DHPS mutations (17).

No common mutations of the DHPS gene were detected in the present study. This result may be associated with the characteristics of the population that was studied. Kazanjian et al (18) have also reported that the duration of prophylaxis increases the risk of mutations [relative risk (RR) for each exposure month, 1.06; P=0.02] and that there is a statistically significant increase in the presence of a DHPS mutation if the duration of sulfa exposure is >1 year (RR, 1.16; P=0.003); however, the dose of sulfa was not found to be significantly associated with the mutation. The present study focused on HIV-negative patients, and none of them received any sulfa prophylaxis. Although some of the patients in the present study (10/18) received sulfa drug treatment prior to BALF sampling, this was of little consequence due to the short exposure time (<7 days).
Patients with mutant DHPS genotypes are more likely to have severe disease, to require invasive ventilation and to have a poor outcome than patients with wild-type DHPS genotypes (19,20). In the current study, 72.2% of patients had hypoxemia and 66.7% of patients received mechanical ventilation. Although TMP-SMZ was used as first-line treatment as soon as PCP was suspected, except where contraindicated, the 28-day mortality rate remained as high as 50%. DHPS gene mutation may be just one of the mechanisms of sulfa resistance; a number of other factors may also play a role, such as pharmacokinetics/pharmacodynamics (PK/PD), underlying diseases, nutritional status and complications. The timing of the first dose of sulfa administration is also likely to be crucial to the outcome of patients. In addition, the full length of the DHPS gene was not examined to exclude the possibility that mutations at other positions are responsible for sulfa-drug resistance, because a previous study has done so and found no such mutations (5). As certain patients harbouring Pneumocystis with DHPS gene mutations respond to treatment with high doses of TMP-SMZ, a possible explanation is that high-dose therapy may compensate for reduced sensitivity (21). However, in the present study, all patients received 15 mg/kg of TMP, or a dose adjusted according to renal function. In the absence of the common mutations, two other nonsynonymous point mutations, Asp90Asn and Glu98Lys, were identified in the P. jirovecii isolates from two patients who had different underlying diseases and clinical manifestations, as well as completely different outcomes. The implications of these mutations call for larger-scale studies.
The immune backgrounds of the patients in the current study were similar to those in a previous study (9). However, none of the patients in the present study had ever received prophylaxis, despite the fact that in nine of 10 patients the CD4+ lymphocyte count was <200/µl on admission. It has been suggested that prophylactic treatment should be applied to HIV-positive patients with CD4+ lymphocyte counts <200/µl (10); however, there is no such agreement for HIV-negative immunosuppressed patients. In clinical practice, physicians have to weigh benefits against risks to make an appropriate decision.
The presenting symptoms of PCP were variable and nonspecific, including fever, cough, dyspnea and chest tightness. It has been suggested that the levels of serum β-D-glucan and S-adenosylmethionine are diagnostic for PCP within the appropriate clinical context (22), and that the level of LDH is elevated at an early stage, offering diagnostic value despite its low specificity (23). In the present study, 75% of patients had elevated LDH levels and 92.31% of them had elevated CRP levels, supporting the high sensitivity of these indicators. In addition, 71.43% of patients had elevated β-D-glucan levels, which is lower than the sensitivity of 98% and specificity of 94% reported by a previous study (24). The results of the present study showed that serum β-D-glucan levels were markedly elevated in 42.86% of patients; however, the reduction of the level with therapy did not translate into survival.

At present, the gold standard to confirm PCP remains the microscopic examination of respiratory samples. However, a low fungal load in HIV-negative patients with PCP (25) may lead to false negativity (26). Several PCR assays have been developed with higher sensitivity and specificity, but the detection of Pneumocystis DNA provides no information concerning the organism's viability or infectivity, and therefore cannot exclude the possibility of colonization in asymptomatic patients (27), particularly in patients receiving corticosteroid therapy or immunocompetent patients with lung disease (28,29). In the present study, 55.6% of the patients received sulfa pre-emptive treatment prior to bronchoalveolar lavage, which may have influenced the positive rate of microscopy.

The results showed that the lymphocyte count was significantly lower in nonsurvivors than in survivors (4.2%, IQR 2.4-5.85 versus 10.1%, IQR 5.65-23.4; P<0.05), which supports the theory that cellular immunity, particularly that involving CD4+ T cells, plays an important role in the defense against Pneumocystis. Similarly, the CD4+ T cell count in nonsurvivors was lower than that in survivors, although the difference was not statistically significant due to missing data. In univariate analysis, it was found that the neutrophil count in the BALF from nonsurvivors was significantly higher than that from survivors (49.78±27.67% versus 21.33±15.03%; P<0.05), indicating that the elevation of the neutrophil count in BALF may be associated with a poor prognosis. The limited number of samples hampered the multivariate analysis.
In this study, 16 out of 18 patients were treated with co-trimoxazole as first-line therapy; only five patients were treated with the intravenous form. According to the literature, the bioavailability via oral or intravenous administration is thought to be equivalent (30).

Caspofungin, an inhibitor of fungal 1,3-β-D-glucan synthesis (31,32), is effective for the treatment of invasive candidiasis and aspergillosis (33), and there are case reports indicating its effectiveness as a salvage or additional treatment for PCP (34,35). Caspofungin has strong activity on cyst forms and weak activity on trophic forms, whereas TMP-SMZ mainly interferes with trophic forms. Theoretically, the concomitant use of TMP-SMZ and caspofungin, by fully inhibiting the organism's life cycle, may provide synergistic activity against Pneumocystis (36). In the present study, eight patients (50%) received caspofungin therapy and 75% of them did not survive. However, caspofungin is usually recommended in severe cases, and this may have affected the mortality rate; in addition, the number of patients in the present study was limited.

Adjunctive corticosteroids, in addition to antibiotics, are of substantial benefit in HIV-infected patients with moderate to severe hypoxemia (37,38). In HIV-negative patients with PCP, there is no evidence that adjunctive steroid therapy is beneficial (39,40). A review of 31 HIV-negative patients with confirmed PCP and hypoxemia found that those who received a higher dose of steroids had a shorter duration of mechanical ventilation and oxygen use (39), but another similar study was unable to show an improvement in survival (40). A further study (42), which concluded that high-dose steroid therapy was associated with increased mortality in HIV-negative patients with PCP via a mechanism independent from an increased risk of infection, also supported this view. All of the patients analyzed in the present study were HIV-negative, and in some of them PCP was associated with corticosteroid use; 72.22% of the patients received adjunctive corticosteroid therapy. Among them, nine (69.23%) patients eventually experienced treatment failure, which might also be associated with their disease severity. Currently, whether to use corticosteroids, and at what dose, requires serious consideration in severe cases of PCP in HIV-negative patients (41).
No common DHPS gene mutations of P. jirovecii were detected in the HIV-negative PCP patients in the present study. However, other mutations were present, the significance of which remains to be further identified. The elevation of neutrophil counts in BALF and reduction of lymphocyte counts in peripheral blood may be associated with poor outcome. The efficacy of adjunctive steroid therapy in HIV-negative PCP patients requires further investigation. | 2018-04-03T00:53:27.506Z | 2014-10-03T00:00:00.000 | {
"year": 2014,
"sha1": "0b85e390f66eb3267cacb62f7d3d25b74562a4a4",
"oa_license": "CCBY",
"oa_url": "https://www.spandidos-publications.com/etm/8/6/1825/download",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0b85e390f66eb3267cacb62f7d3d25b74562a4a4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3566220 | pes2o/s2orc | v3-fos-license | Postsurgical behaviors in children with and without symptoms of sleep-disordered breathing
Background Although some children undergo formal preoperative testing for obstructive sleep apnea, it is likely that many children present for surgery with undetected sleep-related disorders. Given that these children may be at increased risk during the perioperative period, this study was designed to compare postoperative behaviors between those with and without symptoms of sleep-disordered breathing (SDB). Methods This study represents a secondary analysis of data from a study examining the effect of SDB on perioperative respiratory adverse events in children. Parents of children aged 2–14 years completed the Sleep-Related Breathing Disorder (SRBD) subscale of the Pediatric Sleep Questionnaire prior to surgery. Children were classified as having SDB if they had a positive score (≥0.33) on the SRBD subscale. Seven to ten days following surgery, the SRBD subscale was re-administered to the parents who also completed the Children’s Post Hospitalization Behavior Questionnaire. Children were classified as exhibiting increased problematic behaviors if their postoperative behaviors were considered to be “more/much more” relative to normal. Results Three hundred thirty-seven children were included in this study. Children with SDB were significantly more likely to exhibit problematic behaviors following surgery compared with children without SDB. Logistic regression identified adenotonsillectomy (OR 9.89 [3.2–30.9], P < 0.01) and posthospital daytime sleepiness (OR 2.8 [1.3–5.9], P < 0.01) as risk factors for postoperative problematic behaviors. Conclusions Children presenting for surgery with symptoms of SDB have an increased risk for problematic behaviors following surgery. These results are potentially important in questioning whether the observed increase in problematic behaviors is biologically grounded in SDB or simply a response to poor sleep habits/hygiene.
Background
Sleep disordered breathing (SDB) represents a spectrum of disorders ranging from mild snoring to obstructive sleep apnea syndrome (OSAS) and is thought to afflict approximately 10%-11% of school-aged children [1,2]. Although obesity is the most common risk factor for SDB in adults, the primary cause of SDB in children is a narrowing of the upper airway, most commonly as a result of adenotonsillar hypertrophy. Clinical manifestations of SDB include periodic episodes of hypopnea, apnea, sleep fragmentation, and arterial oxygen desaturation [3]. Not only can these symptoms affect sleep integrity and contribute to daytime sleepiness, but school-aged children with SDB often exhibit neurobehavioral problems including hyperactivity, aggressive behaviors, inattention, and social problems [4][5][6][7].
Given the association between adenotonsillar hypertrophy and SDB, many of these children will, at some time, undergo adenotonsillectomy. Indeed, this surgery is considered the treatment of choice for most children with SDB and, except in those who are overweight or obese, has been shown to reverse respiratory morbidity, restore normal sleep patterns, and improve behavioral and neurocognitive function [4,[8][9][10].
Despite the apparent ameliorating effect of adenotonsillectomy on SDB symptoms, children, particularly those with OSAS, undergoing anesthesia and surgery have an increased risk for perioperative adverse events including airway obstruction, oxygen desaturation, and breath-holding [11][12][13][14][15][16]. Undiagnosed SDB, in particular, may place children at risk for perioperative complications [17]. However, despite the evidence for increased perioperative adverse events among children with SDB, there is a paucity of information regarding how these children respond following anesthesia and surgery. This is important given that recent studies suggest that both typically behaving children and those with attention deficit/hyperactivity disorders (ADHD) may exhibit increased negative behavioral changes that can last up to several weeks following anesthesia and surgery [18][19][20][21][22]. This study was, therefore, designed to compare postoperative behaviors between children with and without symptoms of SDB and to identify risk factors for behaviors deemed problematic. The primary hypothesis tested was that children with a history of SDB would have an increased incidence of postoperative problematic behaviors compared with those without SDB.
Methods
This study was approved by the University of Michigan's Institutional Review Board with written informed consent from parents/guardians. The study represents a secondary analysis of data from a primary study evaluating the effects of SDB on perioperative respiratory adverse events in children [16]. The study sample comprised children aged 2 to 14 years presenting for an outpatient elective surgical procedure requiring general anesthesia. Children were excluded if they were of American Society of Anesthesiologists (ASA) physical status class 3, 4, or 5, required emergency surgery, were cognitively impaired, or had a history of cardiovascular disease.
Following written informed consent and prior to surgery, parents completed the Sleep-Related Breathing Disorder (SRBD) subscale of the Pediatric Sleep Questionnaire which has been validated for screening in children 2-18 years of age [23]. For the purpose of this study, we utilized the 16-item SRBD subscale which also comprises two four-item subscales for both snoring and daytime sleepiness. The process for scoring has been described elsewhere but, in essence, is based on the number of positive responses out of the number answered. Children with scores of ≥0.33 on the SRBD subscale were considered positive for SDB [23].
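A minimal sketch of this scoring rule follows; the handling of items as True/False/None (unanswered) is an implementation assumption.

```python
from typing import List, Optional

def srbd_score(responses: List[Optional[bool]]) -> Optional[float]:
    """SRBD score: positive responses divided by the number of items answered."""
    answered = [r for r in responses if r is not None]
    return sum(answered) / len(answered) if answered else None

def screens_positive(responses: List[Optional[bool]], cutoff: float = 0.33) -> bool:
    score = srbd_score(responses)
    return score is not None and score >= cutoff

# Example: 6 positive items out of 16 answered gives 0.375, a positive screen.
example = [True] * 6 + [False] * 10
print(srbd_score(example), screens_positive(example))  # 0.375 True
```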
Anesthetic study protocol
Anesthesia management was at the discretion of the anesthesia providers who were blinded to the results of the SRBD questionnaire but may have been aware of some children with previously diagnosed OSA. Demographic data (age, gender, race/ethnicity) were collected prospectively by trained research assistants together with information regarding the type and duration of anesthesia and surgery, co-morbidities (e.g., age-and gender-adjusted body mass index [BMI], asthma, ADHD), and use of perioperative opioids.
Postoperative follow-up
Parents were telephoned 7-10 days (depending on parent availability) after surgery to evaluate their child's recovery and behavior since discharge from the hospital. Information was collected by trained research assistants who were blinded to the SDB status of the child. The SRBD subscale used preoperatively was re-administered, and behavioral assessment was measured using the Post Hospitalization Behavior Questionnaire (PHBQ) [24]. The PHBQ measures changes in children's postoperative behaviors referenced to their normal prehospitalization behaviors and has been validated in children aged 2-13 years [19,20]. This questionnaire consists of 27 items and six categories of anxiety: general anxiety, separation anxiety, sleep anxiety, eating disturbances, aggression against authority, and apathy/withdrawal. Responses are scored on a five-point scale from −2 to +2 reflecting behaviors referenced to normal, i.e., "much less," "less," "same," "more," and "much more" than before surgery. For the purposes of comparison, behaviors that were scored as either "more" or "much more" than normal were considered to be problematic [19].
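The classification described above can be expressed compactly as below; the item names are hypothetical, and flagging a child when any item is rated "more" or "much more" is our reading of the criterion rather than a rule stated verbatim in the paper.

```python
# PHBQ responses map to a five-point scale referenced to prehospital behavior.
SCALE = {"much less": -2, "less": -1, "same": 0, "more": 1, "much more": 2}

def exhibits_problematic_behavior(item_ratings: dict) -> bool:
    """True if any PHBQ item is rated 'more' or 'much more' than normal."""
    return any(SCALE[rating] >= 1 for rating in item_ratings.values())

child = {"trouble sleeping": "more", "poor appetite": "same",
         "fear of being alone": "much more"}
print(exhibits_problematic_behavior(child))  # True
```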
Statistical analysis
Statistical analyses were performed using PASW statistical software (v 18.0, PASW Inc., Chicago, IL). Descriptive data were analyzed using frequency distributions. Comparisons of non-parametric data were performed using Mann-Whitney U, chi-square, and Fisher's exact tests, as appropriate. Parametric data were analyzed using unpaired t tests. The positive and negative predictive values of the SRBD tool in predicting problematic behaviors were obtained from 2 × 2 contingency tables. Factors found, by univariate analysis, to be associated with postoperative problematic behaviors, as well as those believed to be clinically relevant, were entered into a logistic regression model to identify factors predictive of the outcome. Internal consistency reliability of items in the SRBD and PHBQ tools was measured using Cronbach's α coefficient. Cronbach α values of >0.7 were considered to represent acceptable internal reliability for the entire SRBD and PHBQ tools and >0.50 for the subscales (due to the smaller number of items) [25].
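For reference, Cronbach's α is α = k/(k−1) · (1 − Σσᵢ²/σₜ²), with k items, item variances σᵢ², and total-score variance σₜ²; the sketch below computes it on an illustrative response matrix (not the study's data).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents in rows, questionnaire items in columns."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(100, 16))  # e.g., 16 binary SRBD items
print(round(cronbach_alpha(responses), 2))  # near 0 for independent random items
```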
Data from this study were part of a comprehensive study examining perioperative adverse events in children with and without SDB [16]. As such, the initial sample size for this study was based on a primary outcome of adverse events. The secondary outcome was postoperative behaviors. The sample size used in this study has, however, 90% power to detect what Cohen [26] refers to as a moderate effect size, i.e., 0.6 for postoperative problematic behaviors between children with and without SDB.
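A hedged reconstruction of that power statement, assuming a two-sided two-sample comparison at α = 0.05 (the paper does not spell out the exact calculation), is:

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per group for 90% power to detect Cohen's d = 0.6 at alpha = 0.05.
n_per_group = TTestIndPower().solve_power(effect_size=0.6, power=0.90,
                                          alpha=0.05, alternative="two-sided")
print(round(n_per_group))  # roughly 60 children per group
```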
Results
A total of 439 eligible children were approached for inclusion, of which 102 either declined or were excluded due to incomplete data. Complete data were thus available for 337 children. Items in the SRBD and PHBQ tools and their respective subscales were tested for internal consistency reliability. Cronbach α values for the SRBD and PHBQ tools in our sample were 0.74 and 0.84, respectively. Cronbach α values for the snoring and sleepiness subscales were 0.82 and 0.60, respectively. Reliability measures of the PHBQ subscales for general anxiety, separation anxiety, eating disturbances, sleep anxiety, apathy/withdrawal, and aggression yielded α values of 0.45, 0.71, 0.66, 0.67, 0.55, and 0.78, respectively.
As reported in our previous study [16], just over a quarter of children screened positive for previously undetected SDB. Table 1 describes the demographics of the children with and without a history of SDB. Twelve percent of children in the sample had a prior diagnosis of OSA per parent report. Not surprisingly, results showed that children with SDB were significantly more likely to have undergone adenotonsillectomy compared with children without SDB and were also more likely to be overweight or obese. There were, however, no differences between groups with respect to the number of children with ADHD. Table 2 compares the anesthetic and pain management of the children between groups. Despite the fact that it was not possible to standardize the anesthetic protocols of all the varied surgical procedures, there were no differences between groups with respect to anesthetic and pain management.
Responses to items in the SRBD tool showed that 76 (87.4%) parents of children who screened positive for SDB reported that their child snored "more than half the time" and 55 (65.5%) that their child "always" snored. In addition, 39 (45.3%) parents reported that their child had "trouble breathing," and 42 (46.7%) noted that they had observed their child stop breathing during the night. Forty-five (50.6%) parents reported that their child had a problem with daytime sleepiness, and 24 (26.7%) also noted that their child's teacher had reported that their child was sleepy in school.
Examination of the pre- and post-adenotonsillectomy data showed that snoring and sleepiness decreased significantly following surgery. For example, prior to surgery, 89.1% and 54.7% of children scored ≥0.33 on the snoring and sleepiness subscales, respectively, compared with 33.9% and 37.1% following adenotonsillectomy (P < 0.05). Non-airway surgery had no effect on these variables. The positive predictive value of the SRBD tool in predicting postoperative maladaptive behaviors was
0.81, and the corresponding negative predictive value was 0.45. Table 3 compares the postoperative behaviors between children with and without SDB. These data show that children with SDB were more likely to exhibit an increase in postoperative problematic behaviors over normal compared with children without SDB. In particular, children with SDB were more likely to exhibit behaviors consistent with anxiety, eating disturbances, apathy, and aggression. Further analysis also revealed that children with a parent report of existing OSA/SDB had significantly higher total PHBQ scores (indicating more problematic postoperative behaviors) than children who were positive and those who were negative for SDB based on SRBD scores (7.42 ± 8.5 vs 4.42 ± 5.4 vs 2.29 ± 3.92, respectively; P < 0.05). Exploratory univariate analysis revealed several factors associated with increased problematic behaviors following surgery (Table 4). For the purpose of this analysis, SDB and its components of snoring and daytime sleepiness were each defined as having a score of ≥0.33 on their respective scales [23]. Interestingly, problematic behaviors were associated not only with posthospital snoring and daytime sleepiness, as might be expected, but also with preoperative snoring and sleepiness. All the associated and clinically relevant factors from this initial exploratory analysis were subsequently forced into a logistic regression and included age, gender, overweight/obesity, adenotonsillectomy, anesthesia and surgery duration, use of perioperative opioids, and pre- and posthospital snoring and sleepiness. Analysis of this model revealed that adenotonsillectomy and posthospital daytime sleepiness were both independent predictors of postoperative problematic behaviors (Table 4). However, given the significant contribution of adenotonsillectomy to the model, we also performed a logistic regression on the data from children who did not have adenotonsillectomy or other airway surgeries. This second model revealed that even among children having non-airway surgery, daytime sleepiness remained an independent predictor of postoperative problematic behaviors (OR [95% CI] = 2.45 [1.28-4.67]). With respect to sleepiness, children whose parents reported that they "rarely/sometimes" went to bed at the same time each night were significantly more likely to exhibit postoperative problematic behaviors compared with children who "usually" went to bed at the same time (81.8% vs 59.4%, P = 0.015). Similarly, children who "rarely/sometimes" were asleep within 20 min of going to bed were significantly more likely to exhibit problematic behaviors after surgery than children who "usually" were asleep within this time frame (70.3% vs 58.3%, P = 0.035).
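As a sketch of how such odds ratios and 95% confidence intervals are typically derived from a fitted logistic model (predictor names hypothetical, data randomly generated, so the resulting ORs are null by construction):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
df = pd.DataFrame({  # illustrative records, not the study's data
    "problematic": rng.integers(0, 2, 337),
    "adenotonsillectomy": rng.integers(0, 2, 337),
    "posthospital_sleepiness": rng.integers(0, 2, 337),
})
X = sm.add_constant(df[["adenotonsillectomy", "posthospital_sleepiness"]])
fit = sm.Logit(df["problematic"], X).fit(disp=0)
ci = fit.conf_int()  # columns 0 and 1 hold the 2.5% and 97.5% bounds
odds_ratios = pd.DataFrame({"OR": np.exp(fit.params),
                            "2.5%": np.exp(ci[0]), "97.5%": np.exp(ci[1])})
print(odds_ratios)
```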
Discussion
The results of our previous study [16] found that a quarter of the children presenting for anesthesia and surgery had symptoms consistent with SDB that, apparently, had not been recognized. This is important given the observation that children who were positive for SDB based on SRBD scores had an increased risk of postoperative problematic behaviors compared with children without sleep-related disorders. This is consistent with other studies of nonsurgical pediatric populations showing that children with SDB have a greater prevalence of neurobehavioral problems including somatic complaints, hyperactivity, behavioral problems, anxiety, and aggression [4,5,28,29]. Children without SDB who undergo anesthesia and surgery have also been shown to have an increased risk of problematic behaviors, which can last up to several weeks postoperatively [18][19][20][21][22]. Among surgical patients, some studies have identified preoperative anxiety as a predictor of postoperative negative behaviors and have shown that these behaviors may be mitigated by premedication with midazolam [30,31]. Although we did not measure anxiety, nor were we able to standardize the anesthetic regimen due to the many different surgical procedures involved, the fact that the anesthesiologists were mostly blinded to SDB status and that the anesthetic and pain management of the two groups, including the use of midazolam, were similar likely minimized any chance of confounding.
Of particular interest in this study was the observation that daytime sleepiness rather than snoring or other features of SDB was independently predictive of postoperative problematic behaviors among both adenotonsillectomy and non-adenotonsillectomy children and, as such, begs the question of whether these behaviors are biologically grounded as a consequence of SDB or simply a response to poor sleep habits or hygiene. Indeed, problematic behaviors were associated not only with posthospital snoring and daytime sleepiness, as might be expected, but also with preoperative snoring and daytime sleepiness. This suggests that preoperative factors, whether biological or otherwise, likely contribute to the development of problematic behaviors postoperatively. These results are in concert with others suggesting an association between daytime sleepiness and a range of behavioral and cognitive disorders [6,32]. Recently, O'Brien et al. showed that sleepiness, not snoring, was predictive of conduct problems including aggressive behavior and bullying among a cohort of urban public school children [6]. While daytime sleepiness may reflect an underlying sleep pathology, it has also been associated with unsupervised
bedtime curfews, chaotic family dynamics, and the use of television and other electronic devices in the child's bedroom late at night [6,32]. Indeed, it was interesting to note that children in our study who consistently went to bed at the same time each night had fewer problematic behaviors compared with children with inconsistent bedtimes. The finding that adenotonsillectomy was predictive of increased postoperative behavioral problems was not wholly unexpected given that this type of surgery is often quite painful and can interfere with normal eating patterns for several days after surgery. In this study, analysis of the PHBQ subscales showed that children who underwent adenotonsillectomy exhibited increased anxiety, eating disturbances, and aggression against authority. Indeed, in a study of otherwise healthy children undergoing elective surgical procedures under anesthesia, Karlin et al. identified pain as an independent predictor of problematic behaviors following surgery [33]. In our study, although postoperative pain scores were not recorded, results showed no differences in the pain management of children with and without SDB.
Despite the fact that children who had undergone adenotonsillectomy in our study were observed to exhibit greater problematic behaviors in the week immediately following surgery, studies suggest that over time, these behaviors do tend to resolve [8,28]. Indeed, research shows that impaired neurocognitive function and behaviors in children with SDB and OSAS are at least partially reversible within 3 months to 1 year following adenotonsillectomy [8,28].
There are several limitations to this study that are recognized. First, this is a non-randomized study and as such may be subject to selection bias. However, since subjects were recruited consecutively as they presented for surgery, they were likely representative of the target surgical population. Another potential limitation is that the categorization of SDB was based on a questionnaire rather than a diagnosis by polysomnography. Certainly, from a pragmatic and cost perspective, one would not expect all children with SDB-related symptoms to undergo polysomnography prior to surgery. However, in instances where polysomnography is neither available nor feasible, the SRBD subscale of the Pediatric Sleep Questionnaire has been shown to be both reliable and valid in identifying SDB in children in clinical research [23]. Another potential limitation was that although daytime sleepiness was assessed using valid measures, this study did not examine all causes of sleepiness and, as such, we were unable to distinguish between organic sleepiness and that resulting from poor sleep habits. Finally, although our choice of time to follow-up was based on a previous study which used the PHBQ to evaluate postoperative behaviors in children with ADHD [22], we recognize that a longer time to follow-up may have provided additional information.
Conclusions
While the association between SDB and behavior problems in children is well known, there is a paucity of data regarding the effect of SDB on postoperative behaviors in children. Given that many children present for anesthesia and surgery with unrecognized symptoms of SDB, these results will be important in alerting parents to the potential for behavioral problems in the postoperative period. Furthermore, the results of this study are important in that they not only show an association between SDB and postoperative behavioral problems but, perhaps more importantly, show that these behaviors may well be mediated by daytime sleepiness rather than SDB per se. Although examination of the root cause of sleepiness in these children was beyond the scope of this study, an understanding of the link between sleep-disordered breathing, sleepiness, and postoperative problematic behaviors will be important in alerting surgeons, anesthesiologists, and parents to the importance of good sleep hygiene before and after surgery, and to teachers as a means to anticipate possible negative behaviors in the classroom.
Competing interests
The authors declare that they have no competing interests. | 2016-05-04T20:20:58.661Z | 2014-10-14T00:00:00.000 | {
"year": 2014,
"sha1": "ad1f55e4a6ba8d9d9cf24b2fb95a4e69d17692ec",
"oa_license": "CCBY",
"oa_url": "https://perioperativemedicinejournal.biomedcentral.com/track/pdf/10.1186/2047-0525-3-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "481e1f0f083825fdaa91acec13366290be07b471",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
86437577 | pes2o/s2orc | v3-fos-license | Recent Developments in Sonic Crystals as Barriers for Road Traffic Noise Mitigation
Noise barriers are the most widespread solution to mitigate noise produced by the continuous growth of vehicular traffic, thus reducing the large number of people exposed to it and avoiding unpleasant effects on health. However, conventional noise barriers present the well-known issues related to the diffraction at the edges which reduces the net insertion loss, to the reflection of sound energy in the opposite direction, and to the complaints of citizens due to the reduction of field of view, natural light, and air flow. In order to avoid these shortcomings and maximize noise abatement, recent research has moved toward the development of sonic crystals as noise barriers. A previous review found in the literature was focused on the theoretical aspects of the propagation of sound through crystals. The present work, on the other hand, reviews the latest studies concerning the practical application of sonic crystals as noise barriers, especially for road traffic noise mitigation. The paper explores and compares the latest developments reported in the scientific literature, focused on integrating Bragg's law properties with other mitigation effects such as hollow scatterers, wooden or recycled materials, or porous coating. These solutions could increase the insertion loss and frequency band gap, while inserting the noise mitigation action in a green and circular economy. The pros and cons of sonic crystal barriers will also be discussed, with the aim of finding the best solution that is actually viable, as well as stimulating future research on the aspects requiring improvement.
Introduction
Despite the prescriptions of noise maps and action plans [1] in 2002, the recent European Environmental Noise Directive revision [2] reported that noise pollution continues to be a major health problem in Europe. About 100 million people in the 33 European Union (EU) member states are exposed to harmful road traffic noise levels exceeding 55 dB(A) of Lden, and 32 million are exposed to a noise level higher than 65 dB(A) of Lden. Even if not considered the most annoying, road traffic is the most diffused noise source, to the point that it is used as a reference for estimating other sources' limits [3]. The continuous growth of vehicular traffic and the large number of people exposed to it have made sleep disturbance [4,5] and annoyance [6] caused by road traffic noise important issues observed both by citizens and control bodies. Studies have shown that exposure to road traffic noise can induce further adverse health effects, including cardiovascular effects [7,8], learning impairment [9,10], and hypertension and ischemic heart disease [11].
Road traffic noise exposure can be reduced by applying mitigation strategies on the sources, such as improving vehicle engine and design, reducing tire/road emission by using special surfaces [12], or by controlling vehicle flow in restricted areas. Such actions are not always affordable ψ(r) can be written in the form: ψ(r) = e ikr u(r), where u(r) is a function with the same periodicity as the atomic structure of the crystal. The solution of the Bloch wave for a periodic potential leads to the formation of bands of allowed and forbidden energy regions, called band gaps. The same principle can be applied to acoustic waves passing through periodic structures. The main difference between atomic structures and sonic crystals is the size of the scatterers, since it is known that the wavelength of the propagating wave and the size and spacing of scatterers must be of the same order of size, in order to produce the destructive interference which creates the band gap.
Considering a sound pressure represented by p(x, t) = P(x) e^{iωt}, Bloch's theorem restricts the function P(x) to the form P(x) = e^{ikx} φ(x), where ω is the angular frequency and k is the wave vector.
As the wavenumber k is varied, the solution of the wave equation yields a band structure for the different wave frequencies within the sonic crystal, with the formation of band gaps in which no propagating mode is allowed.
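Full 2-D band structures are typically obtained with plane-wave expansion or finite element solvers; as a hedged illustration of how band gaps emerge from this formalism, the sketch below evaluates the well-known 1-D analogue of a periodic two-layer fluid, whose Bloch dispersion relation cos(ka) = cos(k₁d₁)cos(k₂d₂) − ½(Z₁/Z₂ + Z₂/Z₁)sin(k₁d₁)sin(k₂d₂) admits no real wavenumber k wherever the right-hand side exceeds 1 in magnitude: those frequencies form the band gap. The layer properties used are illustrative, not taken from any cited study.

```python
import numpy as np

def bloch_rhs(f, d1, c1, z1, d2, c2, z2):
    """RHS of cos(k*a) for a periodic two-layer fluid; |RHS| > 1 means band gap."""
    k1, k2 = 2 * np.pi * f / c1, 2 * np.pi * f / c2
    return (np.cos(k1 * d1) * np.cos(k2 * d2)
            - 0.5 * (z1 / z2 + z2 / z1) * np.sin(k1 * d1) * np.sin(k2 * d2))

# Illustrative layers: 0.2 m of air alternating with a slower, denser fluid layer.
f = np.linspace(10.0, 2000.0, 4000)
vals = bloch_rhs(f, d1=0.2, c1=343.0, z1=415.0, d2=0.2, c2=600.0, z2=2000.0)
gap = np.abs(vals) > 1.0
print(f"lowest gap starts near {f[gap][0]:.0f} Hz" if gap.any() else "no gap found")
```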
The actual band structure depends on the geometric configuration of the scatterers. The works presented in [17,18], which this study refers to for insights on the theoretical aspects, offer more detailed mathematical calculations, whereas this study focuses on recent advances in the application of sonic crystal as noise barriers, especially for road traffic noise mitigation. The latest developments reported in the scientific literature will be shown and compared in order to find the best solution that is actually feasible; in addition, the difficulties that are limiting their current widespread use will be discussed.
Sonic Crystals as Acoustic Barriers
Years after the exhibition of Eusebio Sempere's sculpture in Madrid in 1995, which is recognized as the first experimental work on noise attenuation from a periodic structure [19], the scientific community discovered how sonic crystals could actually be used to reduce noise pollution and research on the application of 2-D sonic crystals has increased considerably [20][21][22].
From a physical point of view, sonic crystals are non-homogeneous structures created by the arrangement of scatterers in a periodic configuration with a square, rectangular, or triangular pattern. Three different macro-categories can be distinguished: parallelepipeds, such as steel sheets, periodically arranged in a medium such as air or water are known as 1-D sonic crystals; periodically placed cylinders are called 2-D sonic crystals; and spheres periodically arranged in a volume, like a cube, are called 3-D sonic crystals. As shown in the available literature, only 2-D crystals have had a practical application, currently making them the most widespread, and this work will therefore deal exclusively with such crystals. The scatterers must be made of a material with high acoustic impedance with respect to the medium in which they are positioned, such as acrylic cylinders in air or steel plates in water [23,24].
This particular periodic arrangement of the scatterers allows the sonic crystals to acquire sound attenuation properties in a specific range of frequencies, known as the band gap. This attenuation is achieved by the destructive interference of the sound wave due to the scatterers in the band gap and the attenuation of the propagation wave caused by the evanescent effect [25]. Indeed, when an acoustic wave interacts with a periodic structure, its spectral characteristics change and only some of the incident wave frequencies pass through the structure without being attenuated [26]. The size of the scatterers and their spacing must be of the order of the incident wavelength so that the periodic structure can interact with the incident wave. The physical mechanism governing this phenomenon is Bragg's law for destructive interference of a wave with incident glancing angle θ, in which the central frequency of the band gap f BG is determined by the lattice constant α, that is, the distance between two lattice scatterers and the speed of sound in the medium c: f BG = c 2α sin θ . The size of the band gap is influenced by the following parameters: • the density ratio M, that is, the ratio between the densities of the scatterers' material and that of the medium in which they are immersed; • the filling factor ff, expressing the ratio between the volume occupied by the scatterers and the total volume of the crystal; • the lattice designs.
The wavelength of a sound wave corresponding to the entire spectrum of audible frequencies (20 Hz-20 kHz) is of the order of 17 m-0.017 m; therefore, sonic crystals consisting of a few scatterers arranged periodically can already determine a significant attenuation of the sound in the band gap region. However, the setup requires a careful analysis of the parameters in order to match the band gap with the most disturbing frequencies.
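As a quick numeric check of the Bragg relation quoted above, the sketch below evaluates f_BG = c/(2α sin θ) at normal incidence for the lattice constants discussed later in this review; the resulting center frequencies fall inside the road traffic noise band.

```python
import math

def bragg_center_frequency(alpha_m: float, c: float = 343.0,
                           theta_deg: float = 90.0) -> float:
    """Central band-gap frequency f_BG = c / (2 * alpha * sin(theta))."""
    return c / (2.0 * alpha_m * math.sin(math.radians(theta_deg)))

for alpha in (0.3, 0.4, 0.5):
    print(f"lattice constant {alpha} m -> f_BG = "
          f"{bragg_center_frequency(alpha):.0f} Hz")
# 0.3 m -> 572 Hz, 0.4 m -> 429 Hz, 0.5 m -> 343 Hz
```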
Most of the initial works have studied how to optimize the spatial arrangement of the cylinders and the filling factor, obtaining an IL of up to 25 dB [27][28][29]. Sanchis et al. [30], on the other hand, have studied the sound pressure field reflected by the sonic crystals, finding that the standing wave ratio increases in the same frequency range in which stop-band phenomena occur. The reflection properties have been further investigated and connected to the sonic crystal band structures [31]. At present, most of the research is focused on studying different methods to widen the Bragg's-law frequency attenuation range, by adding particular absorbent materials to the sonic crystal [32] or using Helmholtz resonators as scatterers [33].
Parameters Influencing Insertion Loss and Band Gap
This chapter reports the choices made in some studies in order to optimize the construction of a barrier that is effective for road noise [34]. Even though several configurations of sonic crystals are currently available, their selection depends on the intended use. Numerous studies have decreed the limited influence of the source characteristics on the relative performance of the configurations, and the sound pressure level map in Figure 1 [39] also confirms this assumption for a steady point source.
The distance of receivers from the barrier is also excluded from the parameters analyzed because it is obvious that receivers further away from the barrier will typically be less protected from the incident wave field and will receive a stronger contribution of diffracted energy over the top of the barrier [40].
Shape of Scatterers
The first studies on scattering phenomena refer to rigid circular cylinders, but later studies emerged using different diffuser shapes: squares [41], squares rotated along the vertical plane, with a consequent negative refraction [42], rectangular [43], or triangular [44,45]. Different studies used different boundary conditions, making a direct comparison difficult. Chong [39] performed a Finite element Method (FEM) analysis in which he assessed the IL of different sonic crystal barriers formed by scatterers of various shapes and orientation.
The results in Figure 2 show that the triangular shape with alternately turned faces has the highest absorption peak, but its band gap lies at frequencies above 2 kHz, which can be suitable for the mitigation of industrial sources with a narrow noise emission spectrum. In the road traffic noise frequency band [46], the scatterer shape bringing the greatest IL is the elliptical one with the long side facing the source. However, the most used type is the circular one, which shows an IL up to 5 dB lower than the elliptical one at frequencies around 1 kHz.
Diameter of Scatterers
The size of scatterers seems to play a key role in the attenuation [47], although quantitatively it seems to depend on the other parameters, such as the number of rows or the lattice shape. For square or triangular lattices, a smaller cylinder is best for three rows, whereas attenuation seems to worsen in the case of two rows. For square-based lattices, frequency bands with a negative IL emerge, creating a mechanism of noise amplification not occurring in a triangular lattice. When source and receiver are at the same position with respect to the barrier, a square grid could cause the sound to propagate directly, as a sort of waveguide effect. The triangular lattice can avoid this issue.

Martins et al. [35] varied the dimension of scatterers while keeping the lattice constant fixed at 0.40 m. Due to Bragg's principle, the changes to the dimensions of the cylinders can be limited, thus in the order of 0.05 m. Moreover, scatterers can be made of natural resources, such as logs, meaning that cylinders can have a non-uniform diameter due to manufacturing defects. Therefore, small variations in the randomly arranged diameter of the lattice were studied to assess whether they produce a substantial difference in terms of sound attenuation provided by the entire structure. With a lattice constant of 0.40 m and random variations of the order of 10% and 20% of the reference diameter (0.20 m), negligible differences in attenuation were reported. In fact, even when a maximum variation of 20% occurs in some of the cylinders, the calculated insertion loss values are only slightly modified, with maximum variations of less than 0.5 dB in all frequency bands. The IL results obtained by Martins et al. [35] are shown in Figure 3.

Figure 3. IL for square and triangular lattices, double- or triple-row, varying the scatterer diameter, and IL for triangular lattices, double- or triple-row, for diameter variations in some random elements (modified from [35]).
Jean and Defrance [36] also confirm that better attenuations can be obtained by doubling the scatterer diameter D from 0.15 to 0.30 m, rather than doubling the filling factor ff (Figure 4).
Number of Scatterers
A sonic crystal barrier must be parallel to the source, especially if the latter is linear (like roads and railways). Rare cases of sonic crystal barriers with a radial arrangement for point sources are present in the literature, but are not applicable to linear sources [48]. The length of the barrier is site-dependent and therefore is not concerned by the present analysis, which is dedicated to the number of rows along the propagation line between source and receiver.
Using 0.2 m wooden cylindrical scatterers with a 0.4 m lattice constant, Martins et al. [35] have shown that, at road traffic frequencies, two rows of scatterers are better than three. However, it would be advisable to have a barrier of at least 40 scatterers to avoid diffraction effects at the edges (Figure 5). Morandi et al. [49] showed that increasing the number of rows leads to an increase in the sound insulation index in the frequency range between 600 and 1000 Hz for square-based lattices. The benefit of having multiple rows of scatterers, however, already saturates after the fourth row. It is reasonable to assume that four rows are the maximum number needed for a sonic crystal barrier, as also confirmed by Godinho et al. [40].
However, for cylinders of small size (0.04 m) in a square lattice, Jiang et al. [50] suggest that there exists a relationship between the number of required rows and the size of the cylinders, just as there is a relationship between cylinder size and band-gap frequency due to the Bragg effect. Their results showed an IL that increases with the number of rows, at least up to seven rows and then flattened out by eight. . IL for square and triangular lattices, double-or triple-row, varying the scatterer diameter and IL for triangular lattices, double-or triple-row, for diameter variations in some random elements, modified from [35].
Jean and Defrance [36] also confirm that better attenuations can be obtained by doubling the scatterer diameter D from 0.15 to 30 m, rather than doubling the filling factor ff ( Figure 4).
Number of Scatterers
A sonic crystal barrier must be parallel to the source, especially if the latter is linear (like roads and railways). Rare cases of sonic crystal barriers with a radial arrangement for point sources are present in the literature, but are not applicable to linear sources [48]. The length of the barrier is sitedependent and therefore is not concerned by the present analysis, which is dedicated to the number of rows along the propagation line between source and receiver.
Using 0.2 m wooden cylindrical scatterers with 0.4 m of lattice constant, Martins et al. [35] have shown that, at road traffic frequencies, two rows of scatterers are better than three. However, it would be advisable to have a barrier of at least 40 scatterers to avoid diffraction effects at the edges ( Figure 5). Morandi et al. [49] showed that increasing the number of rows leads to an increase in the sound insulation index in the frequency ranges between 600 and 1000 Hz for square-based lattices. The benefit of having multiple scatterer files, however, already saturates after the fourth row. It is reasonable to assume that four rows are the maximum number of scatterers needed for a sonic crystal barrier, as also confirmed by Godinho et al. [40].
However, for cylinders of small size (0.04 m) in a square lattice, Jiang et al. [50] suggest that there exists a relationship between the number of required rows and the size of the cylinders, just as there is a relationship between cylinder size and band-gap frequency due to the Bragg effect. Their results showed an IL that increases with the number of rows, at least up to seven rows and then flattened out by eight. . IL for square and triangular lattices, double-or triple-row, varying the scatterer diameter and IL for triangular lattices, double-or triple-row, for diameter variations in some random elements, modified from [35].
Jean and Defrance [36] also confirm that better attenuations can be obtained by doubling the scatterer diameter D from 0.15 to 30 m, rather than doubling the filling factor ff ( Figure 4).
Number of Scatterers
A sonic crystal barrier must be parallel to the source, especially if the latter is linear (like roads and railways). Rare cases of sonic crystal barriers with a radial arrangement for point sources are present in the literature, but are not applicable to linear sources [48]. The length of the barrier is sitedependent and therefore is not concerned by the present analysis, which is dedicated to the number of rows along the propagation line between source and receiver.
Using 0.2 m wooden cylindrical scatterers with 0.4 m of lattice constant, Martins et al. [35] have shown that, at road traffic frequencies, two rows of scatterers are better than three. However, it would be advisable to have a barrier of at least 40 scatterers to avoid diffraction effects at the edges ( Figure 5). Morandi et al. [49] showed that increasing the number of rows leads to an increase in the sound insulation index in the frequency ranges between 600 and 1000 Hz for square-based lattices. The benefit of having multiple scatterer files, however, already saturates after the fourth row. It is reasonable to assume that four rows are the maximum number of scatterers needed for a sonic crystal barrier, as also confirmed by Godinho et al. [40].
However, for cylinders of small size (0.04 m) in a square lattice, Jiang et al. [50] suggest that there exists a relationship between the number of required rows and the size of the cylinders, just as there is a relationship between cylinder size and band-gap frequency due to the Bragg effect. Their results showed an IL that increases with the number of rows, at least up to seven rows and then flattened out by eight.
Filling Factor
Santos et al. [51] studied the effect on IL produced by different spacings between the scatterers and by the repositioning of a central row or of some random scatterers on scale models (Figures 6 and 7). The rectangular lattice, in all cases, shows an almost constant IL, although some absorption peaks change their frequency. The triangular lattice, on the other hand, does not share this property and proves to be a very effective type of lattice only if it is entirely structured. With the rectangular lattice, conversely, configurations with a smaller number of scatterers can be chosen, reducing the economic load without a significant loss of IL, as also confirmed by Koussa et al. [52]. In addition, Martins et al. [35] studied the influence of spacing between the scatterers by varying the lattice constant between 0.5, 0.4, and 0.3 m. For a triangular lattice, no significant changes resulted when two rows were used, whereas the choice of the lattice constant becomes important for three rows with respect to barrier effectiveness. In fact, a lattice constant of 0.3 m is the best solution at 630, 1250, and 1600 Hz, but it becomes almost transparent at 800 and 1000 Hz. Thus, for sources like road traffic, a lattice constant of 0.4 or 0.5 m is suggested.
Additionally, Rubio et al. [43] observed that the attenuation peak shifts toward higher frequencies as the distance between scatterers increases, due to the destructive interference between the propagating and evanescent waves [53].
Evidently, for equal sound attenuation, a higher lattice constant is preferable due to the consequent higher visibility and air flow passage.
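As a rough cross-check with the first-order Bragg estimate sketched earlier, lattice constants of 0.3, 0.4 and 0.5 m place the band-gap centre near 572, 429 and 343 Hz, respectively (f ≈ 343/(2a)), which is consistent with larger spacings targeting the lower end of the traffic-noise spectrum. Note, however, that this simple picture ignores the evanescent-wave effects reported by Rubio et al. [43].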
Recent Applications
Despite all the acoustic attenuation properties previously shown, the use of sonic crystals as noise barriers is still not widespread. A historical issue for their application in real-case scenarios is the incidence angle of the sound waves.
Most of the studies available in the literature were performed using point sources, such as loudspeakers, which can represent real sources such as industrial sources [54], but not roads or railways, which are schematized as linear sources. A point-type source, in fact, emits acoustic waves with spherical symmetry; conversely, linear sources possess cylindrical symmetry. For point sources, the source-receiver relative position is very relevant, and moving the source from the edge to the center of the barrier can reduce its IL by 8 dB [35] due to the waveguide effect. As previously mentioned, the effect becomes less evident for triangular lattices. Indeed, a less ordered, more irregular lattice reacts better to a change in the direction of the incident acoustic waves than a lattice with a regular base [36].
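The distinction matters quantitatively because the two symmetries decay differently with distance: 6 dB per doubling of distance for spherical spreading versus 3 dB for cylindrical spreading. A minimal helper (our own, purely illustrative):

```python
import math

def spreading_loss_db(r, r_ref=1.0, source="point"):
    """Free-field geometric spreading loss relative to r_ref (m):
    20*log10(r/r_ref) for a point source (spherical waves),
    10*log10(r/r_ref) for an ideal line source (cylindrical waves)."""
    factor = 20.0 if source == "point" else 10.0
    return factor * math.log10(r / r_ref)

print(spreading_loss_db(8.0))                 # point source: ~18.1 dB
print(spreading_loss_db(8.0, source="line"))  # line source:  ~9.0 dB
```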
To the best of the authors' knowledge, no studies on this subject have yet been carried out on real-scale barriers, but studies on scaled barriers have shown that this effect is absent for linear sources [36]. This confirms that sonic crystal barriers are very effective for mitigating the noise produced by roads or railways.
Current research on sonic crystals focuses on integrating the acoustic abatement brought about by Bragg's law with other mitigation effects, not only to increase the IL intensity, but also to broaden the frequency band gap. This section summarizes these new research studies.
Hollow Scatterers
Using thin resonant cylinders with elastic shells, hollow cylinders, or a combination of both can result in a sonic crystal whose IL benefits from both Bragg's law and the individual scatterers' resonance. A series of thin elastic shells exposed to atmospheric agents is a weak structure and therefore not suitable for use as an outdoor sound barrier. However, a split-ring resonator (SRR) [55,56] can be used as an alternative that increases the IL in the low frequencies if the scatterers' resonance frequencies are correctly set so as to be below the Bragg band gap. Helmholtz resonators, a particular type of resonator, consist of a hollow container for air with a small opening that causes a coupling between the inside and outside air [57]. The size must be small compared to the wavelength. Attenuation is given by the combination of radiation loss and viscous losses due to friction.
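For orientation, the lumped-element estimate of a Helmholtz resonance is f = (c/2π)·√(S/(V·L_eff)). The sketch below is the textbook model with an assumed end correction of about 0.85 times the neck radius per end; the slit opening of an SRR deviates from this ideal neck, so treat the result only as a rough tuning aid, and the example dimensions as hypothetical:

```python
import math

def helmholtz_frequency(neck_area, cavity_volume, neck_length,
                        neck_radius, c=343.0):
    """Lumped-element resonance f = (c / (2*pi)) * sqrt(S / (V * L_eff)),
    where L_eff adds an assumed ~0.85*r end correction at each neck end."""
    l_eff = neck_length + 2.0 * 0.85 * neck_radius
    return (c / (2.0 * math.pi)) * math.sqrt(
        neck_area / (cavity_volume * l_eff))

# Hypothetical example: 1 L cavity with a 1 cm^2 neck of 1 cm length.
r = math.sqrt(1e-4 / math.pi)                            # neck radius from area
print(round(helmholtz_frequency(1e-4, 1e-3, 0.01, r)))   # -> ~123 Hz
```

Tuning the cavity and opening so that this resonance falls below the Bragg band gap is what lets the two attenuation mechanisms complement each other.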
In his studies, Chong [39] found the best setting in polyvinyl chloride (PVC) cylindrical SRRs with a 0.11 m diameter, a 3 mm thickness, and a 12 mm opening facing the sound source. As presented in Figure 8, the sonic crystal's IL has been compared with that of a similar configuration with solid scatterers, resulting in clear improvements in the frequency band near 1.2-2 kHz and a worse performance between 500 and 800 Hz. As suggested by the same author, the best-performing frequency range can be tuned by changing the size of the cylinders and using narrower openings (4 mm) in the outer cylinders.
Cavalieri et al. [58] used a locally resonant sonic crystal made of wood that, exploiting both the multiple coupled resonances and the Bragg band gaps, obtained good attenuation results, even though the source studied was railway noise.
Scatterers Coated with Porous Material
An absorbing surface can dissipate the sound energy of an incident wave. Thus, the acoustic absorption coefficient of the scatterers is a factor that can influence the IL of the entire sonic crystal: the higher the absorption coefficient, the greater the IL obtainable in the frequency bands between 600 and 1600 Hz [35]. This effect can also mitigate the waveguide effect in square lattices when the source and receiver are aligned. The choice of the material with which the cylinders are coated is therefore an important parameter.
Sánchez-Dehesa et al. [59] worked on sonic crystal noise barriers exploiting the sound absorption properties of porous materials. In their study, a rigid core was surrounded by a cylindrical shell of recycled rubber, adding sound absorption to the multiple scattering of the periodic structure [60,61]. Three 1 m high rows of 0.08 m diameter scatterers in a triangular configuration were analyzed with porous shells 0.02, 0.03, or 0.04 m thick. As presented in Figure 9, the different scenarios were also compared to a conventional noise barrier of the same dimensions (0.30 m thick and 1 m high).
The effectiveness of the porous shell can be seen at all frequencies when compared to a sonic crystal whose scatterers are not surrounded by the porous shell [62]. Moreover, at some frequencies the IL even exceeds that of the conventional noise barrier. The attenuation increase provided by the 0.02 m porous shell over a 0.04 m metal core is three times greater than the attenuation obtained by the barrier with the metal core alone. The work of Sánchez-Dehesa et al. therefore shows even better results than those obtained in a similar study by Umnova et al. [60], probably because of the materials and settings used, such as cylinder dimensions and lattice type. In fact, Umnova et al. used a very small-scale model with cylinders composed of an aluminum core with a 0.0635 m radius and a 0.00175 m wool felt shell, arranged in three rows of a square-based lattice with a lattice constant of 0.015 m.
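The gain from the rubber shell can be rationalised with a standard porous-layer model. The sketch below uses the empirical Delany-Bazley relations for the layer and a rigid-backed surface impedance to estimate normal-incidence absorption; the flow resistivity value is an assumption for illustration, not a figure from the cited papers:

```python
import numpy as np

RHO0, C0 = 1.213, 343.0  # air density (kg/m^3) and sound speed (m/s)

def delany_bazley(f, sigma):
    """Characteristic impedance Zc and wavenumber kc of a porous layer
    (Delany-Bazley empirical model, roughly valid for
    0.01 < rho0*f/sigma < 1). sigma is the flow resistivity in Pa*s/m^2."""
    X = RHO0 * f / sigma
    Zc = RHO0 * C0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
    kc = (2 * np.pi * f / C0) * (1 + 0.0978 * X**-0.700
                                 - 1j * 0.189 * X**-0.595)
    return Zc, kc

def alpha_rigid_backed(f, d, sigma):
    """Normal-incidence absorption coefficient of a porous layer of
    thickness d (m) backed by the rigid scatterer core."""
    Zc, kc = delany_bazley(f, sigma)
    Zs = -1j * Zc / np.tan(kc * d)        # rigid-backed surface impedance
    R = (Zs - RHO0 * C0) / (Zs + RHO0 * C0)
    return 1.0 - np.abs(R)**2

f = np.array([500.0, 1000.0, 2000.0])
print(alpha_rigid_backed(f, d=0.02, sigma=20000.0))  # sigma assumed
```

The thicker the shell (larger d), the lower the frequency at which absorption becomes effective, which matches the trend reported for the 0.02-0.04 m shells.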
Coupled Barriers
Koussa et al. [52] combined all the effects previously described and evaluated the IL of a sonic crystal coupled with a traditional barrier. In situations with sufficient space to insert a sonic crystal in front of a barrier, the coupled solution should add to the absorption of a normal barrier the improvements brought by Bragg scattering, the absorption of porous materials, and Helmholtz resonators. The barrier shown in Figure 10 has been configured with three different types of scatterers, and their IL has been compared to that of the single conventional barrier. The first configuration uses rigid scatterers, the second uses resonant cavities, and the third uses resonant cavities internally coated with glass wool, an absorbent material. Unlike the results reported by Sánchez-Dehesa et al. [59], the absorption of the barrier increases with the size of the opening.
The improvement of the attenuation of the composite barrier, compared to the traditional barrier, can reach 6 dB(A). Furthermore, the amplifying effects on the opposite side of the mitigation area due to reflection on the rigid wall are attenuated.
Low-Height Barriers
Recent studies on acoustic barriers have focused on reducing their visual impact by decreasing their height [63,64]. In urban environments, in fact, pedestrians and cyclists, quiet areas, and residents living on the lower floors of buildings can be protected from road, tram, or rail noise using barriers only 1 m high, if properly positioned very close to the source.
Koussa et al. [65] studied a 1 m high sonic crystal barrier made of rows of hollow cylinders of different sizes and lattice constants. The scatterers in the first row had a diameter of 0.05 m and a lattice constant of 0.085 m, while those in the second row had a diameter of 0.14 m and a lattice constant of 0.17 m. The scatterers in the last row had a diameter of 0.20 m and were rigid, with no space between them. For a standard-height barrier, such a solid last row would defeat the purpose of air flow transit and visibility; at only 1 m high, however, a row of rigid cylinders with no space between them ensures that the transmitted beam is more attenuated than diffracted over the entire frequency range, without actually hindering sunlight and air flow.
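Applying the first-order Bragg estimate from earlier to this graded design, the two periodic rows would centre their band gaps near 343/(2 × 0.085) ≈ 2.0 kHz and 343/(2 × 0.17) ≈ 1.0 kHz, i.e., staggered so as to cover adjacent portions of the traffic-noise spectrum. This is only an indicative figure, as the cited study relies on full numerical simulations.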
The authors studied the case in which the scatterers of the two periodic rows were hollow cylinders or absorbing cavities internally coated with glass wool. With the help of numerical simulations, the sonic crystal barriers were shown to significantly attenuate traffic noise in an urban environment, as shown in Figure 11.
Figure 11. Insertion loss of low-height barrier in different cylinder settings, modified from [65].
For exposure to vehicular traffic noise, the IL was 9.5 dB(A) with hollow scatterers and 11.9 dB(A) when the inside of the cavities was covered with an absorbing material. Both values are higher than those of the standard low-height barrier.
Green Barriers
A final practical application that has emerged in recent years is the use of wooden materials as scatterers in a sonic crystal. This choice would fully place sonic crystals within the concept of the green and circular economy, using recycled materials or materials directly available on site.
In their studies, Godinho et al. [66], Amado-Mendes et al. [38,67], and Jean and Defrance [36] used wooden logs as scatterers in a sonic crystal barrier. Some used scale prototypes, whereas others tested full-size sonic crystal barriers made of timber logs. All the studies agreed on the positive results obtained with these materials, and further improvement seems possible by properly optimizing the periodic arrangements and log diameters.
Other authors have even studied how whole trees can attenuate noise if properly distributed in space. Martínez-Sala et al. [68] showed that trees organized in a periodic array produce attenuation at low frequencies. This attenuation is not due to the ground effect but to the destructive interference of the waves scattered in the crystal. A periodic array of trees can thus be used as a green acoustic barrier, with an IL dependent on the filling fraction and a frequency behavior dependent on the type of lattice used. Gulia and Gupta [69] obtained a reduction in noise impact by modelling rows of Thuja trees arranged in a periodic pattern on the sides of a road. Significant sound attenuation, with a maximum of 19 dB, was obtained at frequencies up to 500 Hz, consistent with what was reported by Martínez-Sala et al. [68], showing that properly arranged trees can be used to mitigate noise pollution while helping to reduce air pollution and vibrations at the receivers [70]. Lagarrigue et al. [71], using a fast-growing plant, applied the principle of Helmholtz resonators to green barriers: with hollow bamboo rods drilled between each node, it is possible to gain an additional band gap in the low-frequency range, although real-scale applications are needed for a proper quantification.
Discussion
As previously reported, rigid scatterers made of wood, aluminum, or PVC have been used in the literature as elements of sonic crystals in order to exploit Bragg's law and create acoustic barriers. The parameters affecting this attenuation have been discussed, showing that different authors do not completely agree on all settings and that research teams are following different approaches. In particular, few studies have been conducted on real-scale barriers applicable to actual road noise mitigation; research on this aspect should therefore be encouraged over scale-model studies.
Recent developments have shown how sonic crystals made of mixed materials improve their frequency response thanks to the use of porous materials, solving the problem of the angular dependence of acoustic attenuation and increasing the absorption frequency range with the reflection and absorption properties of the materials themselves. Additional improvements can be obtained using scatterers shaped as Helmholtz resonant cavities, allowing the sound to penetrate into the periodic structure elements, with consequent additional sound absorption that leads to a new attenuation band in a lower frequency range. The depth of the sonic crystal can also be reduced this way, while making the sonic crystal relatively more efficient in some frequency bands.
Some studies reported that sonic crystal barriers can have absorption peaks even higher than conventional barriers at some frequencies, but they are generally less efficient at the remaining frequencies.
In order to optimize the absorption, some studies developed a coupled barrier made of rows of sonic crystals, some of different sizes, and a conventional noise barrier. However, this solution appears more like an improvement of a standard barrier than of a sonic crystal barrier, because it does not solve the issues for which sonic crystals were created: reducing the visual impact on receivers and allowing air flow through the barrier. Table 1 summarizes the designs and the IL obtained with the best settings of the studies analyzed, excluding those using a very small scale, such as [72][73][74][75][76][77][78][79].
Conclusions
The solution most commonly used to mitigate the noise produced by infrastructures is to install conventional noise barriers, which have a valid mitigation effect but prevent air flow and limit the view of those affected. The present paper carried out an analysis of the recent literature concerning sonic crystals as noise barriers, especially for road traffic noise reduction.
Starting from the first simple structures composed of rigid cylinders arranged in a regular lattice in order to exploit the Bragg diffraction principle, this work studied the influence that cylinder parameters (such as shape, number, diameter, and absorption coefficient) and crystal settings (such as lattice constant, possible presence of holes, and incidence angle of the sound waves) have on the sonic crystal's insertion loss (IL).
Furthermore, it has been shown how current research is focused on integrating the abatement properties of Bragg's law with the absorption properties of some materials. Thus, scatterers have been externally coated with porous materials, or produced with Helmholtz resonant cavities or with cavities filled with absorbent material. Finally, sonic crystals have also been coupled to a standard barrier to extend the insertion loss to some specific frequencies, including a low-height variant for particular urban applications. Some authors have also shown that the removal of some random scatterers from the lattice does not compromise the IL, but allows savings in construction.
The analysis showed that sonic crystals have the benefit of being effective in noise abatement while still allowing air and light to pass through them. Some authors have even proposed the application of a special window as mitigation at the source for industrial noise [81,82]. The possibility of using natural materials as scatterers, such as wood derivatives or even whole wooden logs [66], or rubber powder as an absorbent material, can also align this product with current "green" policies and the circular economy.
However, the comparative analysis carried out in order to find the best design for road traffic noise abatement showed that, in some cases, the results of one author do not correspond to those of others. Moreover, an ideal sonic crystal barrier set-up has not yet emerged, and the optimization of the barrier at each site plays a key role [76].
Despite all the acoustic attenuation properties previously shown, the use of sonic crystals as noise barriers is still struggling to become widespread. Indeed, real-case scenarios demand a toll in terms of space that is higher than for a conventional noise barrier and not always affordable along roads. Moreover, in practical cases where sonic crystals are installed next to a road, dirt and animal remains can accumulate between the cylinders under normal use. In order to preserve hygiene and effectiveness, constant cleaning is necessary, which represents an increased maintenance cost.
The most important limitation to the current widespread use of sonic crystals is, however, the effective area of mitigation behind them. In fact, all the works presented were carried out close to the barrier, whereas studies showing how sonic crystals can be effective at greater distances are needed. In order to expand the mitigation area to residential distances, the cylinder height could be increased, but this would make it difficult to secure the foundations. To overcome this issue, sonic crystals could be integrated over natural or artificial bumps, which would make their use safer, more effective and visually acceptable, though obviously increasing the space required for their installation. Another possible way to widen their range of action could be to combine a mitigation aimed at specific frequencies with a more broadband intervention, for example by changing road pavements to less noisy ones, such as rubber asphalts. In this way, the mitigation would be even more oriented toward a green and circular economy, recycling old tires both for asphalts and for the porous materials to be placed in the scatterers, perhaps made of local wood or PVC from recycled materials.
"year": 2019,
"sha1": "e9ec5d369dad0a64bcf18c3049fae4db1dfeb2a5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3298/6/2/14/pdf?version=1656404092",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1c28f11204d19b79c68099aea1f3f891decee188",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Tantalate-based Perovskite for Solar Energy Applications
To realize a sustainable society in the near future, the development of clean, renewable, cheap and sustainable resources and the remediation of environmental pollution using solar energy as the driving force will be important. During the past few decades, plenty of effort has been focused on this area to develop solar-light-active materials to meet the growing energy and environmental crisis. Owing to their unique perovskite-type structure, tantalate-based semiconductors with tunable chemical composition show high activities toward the conversion of solar radiation into chemical energy. Moreover, various engineering strategies, including crystal structure engineering, electronic structure engineering, surface/interface engineering, co-catalyst engineering and so on, have been developed in order to modulate the charge separation and transfer efficiency, optical absorption, band gap position and photochemical and photophysical stability, which opens a realm of new possibilities for exploring novel materials for solar energy applications.
Introduction
In view of the global energy crisis and environmental pollution, the search for renewable and clean energy resources and the development of eco-friendly systems for environmental remediation have received great attention. Solar energy is the prime renewable source of energy for every life on Earth. The amount of solar energy that strikes the Earth yearly in the form of sunlight is approximately ten thousand times the total energy consumed on this planet [1]. However, sunlight is diffuse and intermittent, which impedes its collection and storage, both of which play critical roles in the full exploitation of its potential. As one of the most promising solutions for storing and converting solar energy, semiconductor photocatalysis has attracted much attention, since it provides an environmentally benign strategy for splitting water into hydrogen and oxygen, reducing carbon dioxide into useful chemicals and fuels, and completely eliminating all kinds of contaminants under sunlight illumination at ambient conditions [2][3][4]. The fundamental principles of semiconductor photocatalysis have been extensively reported in previous works [5]. A photocatalytic reaction is initiated by the formation of photogenerated charges (electrons and holes) after the capture of sunlight by a semiconductor. Electrons transit from the valence band (VB) to the conduction band (CB), leaving behind holes in the VB. If the separation of the photogenerated electron-hole pairs is maintained, the carriers can move to the semiconductor surface to react with adsorbed small molecules (dioxygen and water), generating reactive species whose redox power drives water splitting and/or the destruction of organic compounds [6]. It is also noted that the photogenerated electrons in the CB can recombine with the holes in the VB, dissipating the input energy as heat or radiated light (see Figure 1). From the perspective of efficient utilization of solar energy, this recombination is undesirable, as it limits the efficiency of a semiconductor photocatalyst. For better photocatalytic performance, the photogenerated electrons and holes must be separated effectively, and the charges have to be transferred rapidly across the photocatalyst to restrict recombination. To date, several semiconductors, including TiO2, ZnO, SnO2, BiVO4 and so forth, have been extensively investigated [7][8][9][10]. Among them, tantalate-based semiconductors with perovskite-type structure have proven to be among the most effective photocatalysts for producing hydrogen from water and for the oxidative degradation of many organic contaminants [11]. For instance, NaTaO3 with a perovskite-type structure showed a quantum yield of 56% under ultraviolet light irradiation after lanthanum doping and NiO co-catalyst loading [12]. Nevertheless, because of their broad band gaps, most tantalate-based semiconductors respond only to ultraviolet or near-ultraviolet radiation, which forfeits the ~43% of the solar spectrum in the visible region. To efficiently utilize sunlight in the visible region, the design of visible-light-responsive tantalate-based catalysts is currently in demand. Up to now, numerous methodologies have been developed to prepare different visible-light-driven tantalate-based photocatalysts, including doping strategies, heterojunctions, facet control and so on [13,14].
This chapter emphasizes recent works that concentrate on tantalate-based photocatalysts for solar energy applications. The aim is to show that the rational design, fabrication and modification of tantalate-based semiconductors have tremendous effects on their final photocatalytic activity, while also providing some stimulating perspectives on future applications.
Synthetic strategies of alkali tantalates
Alkali tantalate-based perovskite semiconductors (such as LiTaO3, NaTaO3 and KTaO3) have the general formula ABO3 and have drawn a lot of attention due to their peculiar superconductivity, photocatalytic properties, electrochemical reduction and electromagnetic features. There are two distinct cationic sites in a perovskite photocatalyst: the A-site is coordinated by twelve O2- anions and is usually occupied by relatively large cations (Li, Na and K), while the B-site is occupied by smaller cations (Ta) with six-fold coordination, as illustrated in Figure 2. The Ta-O-Ta bond angles are 143° for LiTaO3, 163° for NaTaO3 and 180° for KTaO3 [15]. Wiegel and coworkers have reported the relationship between crystal structure and energy delocalization for alkali tantalates: when the Ta-O-Ta bond angle is close to 180°, the migration of excitation energy is accelerated and the band gap decreases [16]. Thereby, the delocalization of excited energy increases in the order LiTaO3 < NaTaO3 < KTaO3, which suggests that KTaO3 should show the best photocatalytic activity. Alkali tantalates with different sizes, morphologies and compositions can be prepared via the traditional solid-state method, solvothermal, sol-gel, molten-salt and other methods. The traditional solid-state method is most often used to prepare alkali tantalates, and involves the high-temperature processing of a mixture of alkali salts and tantalum pentoxide. Kudo and coworkers successfully prepared highly crystalline ATaO3 (A = Li, Na and K) materials via the solid-state method and found that all the alkali tantalates showed superior photocatalytic activity toward stoichiometric water splitting under ultraviolet irradiation [17]. The high photocatalytic activity chiefly depends on the high CB level consisting of Ta 5d orbitals [15]. Among them, KTaO3 is the most photocatalytically active, which may be ascribed to the fact that KTaO3 absorbs the most photons and possesses the least distorted perovskite structure, consistent with the above discussion. The evolution rates of H2 and O2 were determined to be 29 and 13 μmol h-1, respectively. To further improve the photocatalytic activity of ATaO3, a modified solid-state method was adopted in which an extra amount of alkali is added to compensate for the loss at high temperature [18]. When the alkali tantalates were prepared in the presence of excess alkali, the photocatalytic activities of LiTaO3, NaTaO3 and KTaO3 improved ten- to a hundredfold. LiTaO3 was then the bare alkali tantalate photocatalyst showing the highest activity, because LiTaO3 possesses a higher conduction band level than NaTaO3 and KTaO3, which predicts a higher transfer rate of excited energy and, consequently, a higher photocatalytic activity. A similar phenomenon was likewise observed for the CaTa2O6, SrTa2O6 and BaTa2O6 photocatalysts with similar crystal structures [19]. One should note that the synthetic strategy also has a great influence on the structural features as well as the photocatalytic activity. For instance, a sol-gel method was also used to prepare NaTaO3 nanoparticles. Using CH3COONa·3H2O and TaCl5 as raw materials and citric acid as the complexing agent, NaTaO3 nanoparticles were obtained in the monoclinic phase, which shows an indirect band gap, high densities of states near the band edges and a Ta-O-Ta bond angle close to 180° (Figure 3).
This result is quite different from NaTaO3 synthesized via the solid-state method, which forms the orthorhombic phase with a direct band gap and a Ta-O-Ta bond angle of 163°. Monoclinic NaTaO3 has many effective states available for the photogenerated charge pairs. Meanwhile, the larger surface area and the advantageous electronic and crystalline structures of the monoclinic phase result in a remarkably higher photocatalytic activity for the sol-gel-synthesized NaTaO3 than for the solid-state-derived orthorhombic NaTaO3 [20]. Besides the sol-gel method, the molten-salt approach has also been adopted to prepare alkali tantalate materials [21,22]. By a convenient molten-salt process, a series of efficient NaTaO3 and KTaO3 photocatalysts was successfully synthesized as highly crystallized single-crystal nanocubes (about 100 nm large). Doping tetravalent Zr4+ and Hf4+ into NaTaO3 and KTaO3 efficiently increases the activity and stability of the catalyst at the same time, although the energy levels remain unchanged. Moreover, Zr4+ and Hf4+ doping also leads to a reduction in particle size and a nearly monodisperse distribution of the NaTaO3 and KTaO3 nanoparticles. In the absence of a co-catalyst, the photocatalytic activity reached 4.65 and 2.31 mmol h-1 for H2 and O2 production, respectively [22]. A novel kind of strontium-doped NaTaO3 mesocrystal was also prepared by a common molten-salt route. The obtained three-dimensional architectures showed high crystallinity, preferred orientation growth and high surface area. The hydrogen generation rate of this photocatalyst reached 27.5 and 4.89 mmol h-1 for methanol aqueous solution and pure-water splitting, respectively, under ultraviolet light irradiation [23]. However, both the solid-state method and the molten-salt approach often lead to ultra-low surface areas of the alkali tantalates, which limits the photocatalytic activity. Hydrothermal synthesis is advantageous for the regular nucleation of nanocrystals with well-defined particles, morphologies, crystallinity and surface areas [24]. For instance, nano-sized Ta2O5 as well as NaTaO3, KTaO3 and RbTaO3 cubes have been prepared by a facile hydrothermal method [25]. It was observed that pH strongly influences the preparation of the tantalum compound nanoparticles: morphologies ranging from agglomerated particles in acidic medium, over sticks at neutral pH, to cubes in basic media can be achieved, similar to titanates [26]. A microwave-assisted hydrothermal technique was reported using Ta2O5 and NaOH as starting materials under quite mild conditions with a short reaction time. The BET surface area of the NaTaO3 nanoparticles prepared by the microwave-assisted hydrothermal method is about 1.5 times that of particles prepared by the conventional hydrothermal method [27]. After loading NiO as co-catalyst, this photocatalyst showed a photocatalytic activity for overall water splitting more than two times greater than that of samples prepared by the conventional hydrothermal process [28]. As an outstanding example, NaTaO3 nanoparticles obtained by hydrothermal treatment improved the photocatalytic water-splitting activity by a factor of 8 in comparison with photocatalysts obtained by the traditional solid-state method, which is attributed to their smaller particle size, larger surface area and higher crystallinity [29,30].
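A quick structural check that complements the bond-angle argument above is the Goldschmidt tolerance factor. The sketch below uses assumed Shannon ionic radii and is only a first-order indicator of how distorted (and hence how far from a 180° Ta-O-Ta angle) each ATaO3 lattice will be:

```python
import math

def tolerance_factor(r_a, r_b, r_o=1.40):
    """Goldschmidt tolerance factor t = (rA + rO) / (sqrt(2) * (rB + rO)).
    t near 1 -> near-cubic perovskite (Ta-O-Ta close to 180 degrees);
    t < 1 -> tilted TaO6 octahedra and a smaller bond angle."""
    return (r_a + r_o) / (math.sqrt(2.0) * (r_b + r_o))

# Assumed Shannon radii (Angstrom): K+ (CN 12) 1.64, Na+ (CN 12) 1.39,
# Ta5+ (CN 6) 0.64, O2- 1.40.
print(round(tolerance_factor(1.64, 0.64), 3))  # KTaO3  -> ~1.054, near-cubic
print(round(tolerance_factor(1.39, 0.64), 3))  # NaTaO3 -> ~0.967, tilted
```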
Doping strategies
Introducing foreign elements, either metal or non-metal ions, into a semiconductor host matrix is one of the most effective methods to modulate the electronic structure of the host semiconductor and enhance the photocatalytic performance. Owing to the large difference in radius between the A- and B-site ions in alkali tantalates, dopants can selectively enter the A or B sites, which determines the chemical composition, surface features, electronic structure and photocatalytic properties. To date, alkali tantalates derived by doping strategies have been thoroughly investigated. La-doped NaTaO3 is the most active photocatalyst in the field of photocatalytic water splitting [12]. In this case, the catalytic activity of NaTaO3 is strongly modulated by La3+ doping: the crystallinity increases and a surface step structure with nanometer-scale features is formed, which improves the separation efficiency of the photogenerated electron-hole pairs and the photocatalytic water-splitting activity. A surface step structure is also formed in alkaline-earth-metal-ion-doped NaTaO3, which shows improved photocatalytic water-splitting properties [31]. Bi3+-doped NaTaO3 nanoparticles prepared with different initial stoichiometric ratios by the traditional solid-state reaction show visible light absorption and tunable photocatalytic activity. By controlling the original molar ratio of the reactants, the incorporation of bismuth at the sodium site and the tantalum site in NaTaO3 can be well modulated and the performance easily tuned. Occupancy of Bi at the Na site of NaTaO3 does not contribute to visible light absorption, whereas occupancy of Bi at the Ta site, or at both the Na and Ta sites, induces visible light absorption and the subsequent degradation of methyl blue under visible light [32,33]. A La, Cr-codoped NaTaO3 system has also been developed by spray pyrolysis from aqueous and polymeric precursor solutions. The hydrogen evolution rate of La, Cr-codoped NaTaO3 was enhanced 5.6 times, to 1467.5 μmol g-1 h-1, and the induction period was shortened to 33%, compared to the values achieved by the Cr-doped NaTaO3 photocatalyst prepared from an aqueous precursor solution [34]. Besides metal-ion doping, several non-metal ions have also been incorporated into the host matrix of alkali tantalates for improved visible light absorption and photocatalytic performance [35,36]. A plane-wave-based density functional theory calculation was conducted to predict the effects of doping on the band structure of non-metal-doped NaTaO3, covering nitrogen, sulfur, carbon and phosphorus monodoping as well as nitrogen-nitrogen, carbon-sulfur, phosphorus-phosphorus and nitrogen-phosphorus codoping. Nitrogen and sulfur monodoping raise the valence band edge while leaving the ability to split water into H2 and O2 unchanged, as shown in Figure 4. Double-hole-mediated codoping can decrease the band gap dramatically: nitrogen-nitrogen, carbon-sulfur and nitrogen-phosphorus codoping narrow the band gap to 2.19, 1.70 and 1.34 eV, respectively, enabling visible light absorption.
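The codoped band gaps quoted above map directly onto absorption edges via λ (nm) ≈ 1239.84 / Eg (eV); a one-line check (our own, purely illustrative):

```python
def absorption_edge_nm(eg_ev):
    """Absorption edge wavelength from the band gap: lambda = hc/E."""
    return 1239.84 / eg_ev

for eg in (2.19, 1.70, 1.34):  # N-N, C-S and N-P codoping from the DFT study
    print(f"{eg} eV -> {absorption_edge_nm(eg):.0f} nm")
# 2.19 eV -> 566 nm, 1.70 eV -> 729 nm, 1.34 eV -> 925 nm: visible/near-IR
```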
Defect chemistry engineering
Defect chemistry plays an important role in modulating the electronic structure, charge carrier conductivity and photocatalytic performance [38], and often affects the photocatalytic efficiency differently for different semiconductors. Previous literature on NaTaO3 indicated that the addition of an extra quantity of Na during synthesis suppressed the formation of sodium-ion defects in the NaTaO3 crystals, leading to a marked enhancement of the photocatalytic activity [18]. Native defects, such as oxygen vacancies and sodium-ion defects, are often observed in NaTaO3. Oba and coworkers investigated the formation energies and electronic structure of lattice vacancies, antisite defects and lanthanum impurities in NaTaO3 using first-principles calculations based on density functional theory [39]. Under oxygen-poor conditions, the oxygen vacancy, a double donor, is the dominant defect. In La-doped NaTaO3, substitution of La at the Ta site forms a shallow acceptor under oxygen-rich conditions, whereas substitution of La at the Na site forms a double donor under oxygen-poor conditions. This site preference of lanthanum leads to self-compensation in heavily doped cases, which has a great impact on the carrier concentration and the photocatalytic activity [12]. Defective centers not only alter the carrier concentration but can also induce visible light absorption. In Eu3+-doped NaTaO3, a nonstoichiometric Na/Ta molar ratio led to site-selective occupation by the Eu3+ dopant ions, which resulted in a monotonic lattice expansion and local symmetry distortion [11]. The site-selective occupation of Eu3+ gave rise to certain types of defective centers due to the charge difference between Eu3+ ions and Na+ and/or Ta5+ ions, which is crucial to the modification of the absorption in the visible region and of the photocatalytic activity.
Heterojunction of nano-/microarchitectures
The construction of heterojunctions by combining a semiconductor with other semiconductors has attracted much research attention because of their effectiveness in separating the photogenerated charge carriers and boosting the photocatalytic activity. In the past few years, many significant findings have been reported on heterojunctions of nano-/microarchitectures. A nano-Cu2O/NaTaO3 composite for the degradation of organic pollutants has been successfully developed [13], exhibiting highly enhanced photocatalytic activity in comparison to its individual counterparts. Furthermore, C3N4/NaTaO3 and C3N4/KTaO3 composite photocatalysts have also been developed [40,41]. Loading C3N4 is a good strategy to achieve visible-light photocatalytic activity (Figure 5): photogenerated electrons excited from the VB to the CB of C3N4 can transfer directly into the conduction band of NaTaO3 or KTaO3, making C3N4/NaTaO3 and C3N4/KTaO3 visible-light-driven photocatalysts. Both composites showed superior photocatalytic activity toward Rhodamine B degradation under visible light irradiation, approaching that of commercial P25. Yin and coworkers reported the preparation of novel C-NaTaO3-Cl-TiO2 composites via a facile solvothermal method. When C-NaTaO3 is joined with Cl-TiO2 to construct a core-shell configuration, the degradation activity toward NOx under visible light irradiation is greatly improved, owing to the suppressed recombination of photogenerated charge carriers [42]. Zaleska et al. prepared a series of novel binary and ternary composite photocatalysts based on the combination of KTaO3, CdS and MoS2 semiconductors via a hydro/solvothermal precursor route. They found that the highest photocatalytic activity toward phenol degradation under both UV-Vis and visible light irradiation, and superior stability in toluene removal, were observed for the ternary hybrid obtained by calcination of KTaO3, CdS and MoS2 powders at a 10:5:1 molar ratio [43].
Mesoporous structures construction
As one of the most important factors, the surface area also has a large effect on the photocatalytic activity of semiconductors. The majority of photocatalytic reactions occur at semiconductor surfaces, and therefore the photocatalytic activities of semiconductor oxides are usually greatly improved by an increase in surface area [44]. To further improve the surface area, nanocrystalline NaTaO3 thin films with ordered three-dimensional mesoporous and nanostick-like constructions were successfully produced by a PIB-b-PEO polymer-based sol-gel method. NaTaO3 prepared at 650 °C exhibits a BET surface area of about 270 m2 cm-3, which is much larger than previously reported values [45]. These nanocrystalline mesoporous NaTaO3 samples show enhanced ultraviolet-light photocatalytic activity and maintain a steady performance. A confined-space synthesis process was also used to prepare a colloidal array of NaTaO3 using three-dimensional mesoporous carbon as the hard template. This method yields a colloidal array of mesoporous NaTaO3 particles (20 nm). After NiO loading, the mesoporous NaTaO3 nanoparticles showed a photocatalytic activity for overall water splitting more than three times as high as that of non-structured bulk NaTaO3 particles [46]. Carbon-modified NaTaO3 mesocrystal nanoparticles were also successfully synthesized by a one-pot solvothermal method employing TaCl5, NaOH and glucose as the starting materials and a distilled H2O/ethylene glycol mixed solution as the reaction solvent. The as-synthesized mesocrystal nanoparticles exhibited a high specific surface area of 90.8 m2 g-1 with large amounts of well-dispersed mesopores in the particles. The carbon-modified NaTaO3 mesocrystal demonstrated excellent efficiency for continuous NO gas destruction under visible light irradiation, considerably superior to that of the unmodified NaTaO3 specimen and of commercial Degussa P25, owing to the large specific surface area, high crystallinity and visible light absorption [47].
Noble metal co-catalyst engineering
As well documented in previous literature, a co-catalyst introduces two positive factors into a photocatalyst: promotion of the separation of photogenerated charge carriers and construction of active sites for the reduction and/or oxidation reactions. Several noble metals have been commonly used as co-catalysts for photocatalytic applications. For example, the water-splitting activity of NaTaO3:La was improved when Au was loaded either by photodeposition or by impregnation. Moreover, Au/NaTaO3:La prepared by impregnation exhibits much higher and more stable photocatalytic water-splitting activity, because O2 reduction on the photodeposited Au co-catalyst was more efficient than on the impregnated Au co-catalyst [48]. Besides Au nanoparticles, Pt is also frequently used as a co-catalyst for increasing the photocatalytic activity of alkali tantalates. With the deposition of Pt nanoparticle co-catalysts, rare-earth (Y, La, Ce and Yb)-doped NaTaO3 exhibits a clear improvement in hydrogen evolution, because the Pt nanoparticles act as electron scavengers, reducing the recombination rate of the photogenerated charge carriers and facilitating electron transfer from the CB of NaTaO3 to the metal sites, which serve as catalytic centers for hydrogen generation [49]. Moreover, Pd nanoparticles have also been used as a co-catalyst for H2 production from water containing electron-donor species. Su et al. prepared novel Pd/NiO core/shell nanoparticles as co-catalyst, placed on the surface of a La-doped NaTaO3 photocatalyst. Pd nanoparticles are more effective for H2 generation from water containing methanol, whereas Pd/NiO core/shell nanoparticles exhibit a higher H2 generation in splitting pure water. The presence of NiO not only provides hydrogen evolution sites and suppresses the reverse reactions on Pd-based catalysts but also improves the stability of the Pd nanoparticles on the La-doped NaTaO3 surfaces [50]. In another case, when RuO2 (1 wt.%) was introduced as co-catalyst, the H2 generation of NaTaO3 prepared by an innovative solvo-combustion reaction improved significantly, reaching around 50 mmol of H2 after 5 h, the best among the reports in the literature [51].
Earth-abundant element co-catalyst engineering
Because noble-metal co-catalysts are too scarce and expensive for wide-scale solar energy applications, the development of high-efficiency, low-cost, noble-metal-free co-catalysts is acutely necessary. Lately, co-catalysts composed of earth-abundant elements have been explored extensively to replace noble-metal co-catalysts in solar energy applications [52]. NiO is a p-type semiconductor with a band gap of 3.5-4.0 eV, which is widely used as a co-catalyst for tantalate-based semiconductors to enhance photocatalytic activity [53]. In the case of the highly reactive NiO/NaTaO3:La photocatalyst, NiO is loaded as ultrafine particles possessing characteristic absorption bands at 580 and 690 nm; these ultrafine NiO particles were as active for hydrogen evolution as Pt, an excellent co-catalyst [12]. A detailed study of the structural features of the NiO nanoparticles indicated that the interdiffusion of Na+ and Ni2+ cations created a solid-solution transition zone on the outer sphere of NaTaO3. The high photocatalytic activity resulting from a low NiO loading suggests that the interdiffusion of cations heavily doped the p-type NiO and the n-type NaTaO3, reducing the depletion widths and facilitating charge transfer through the interface barrier [54]. Besides NiO, metallic Ni nanoclusters have also been used as co-catalysts. For instance, a series of nickel-loaded LaxNa1-xTaO3 photocatalysts was synthesized by a hydrogen peroxide-water-based solvent method. Systematic investigation indicated that the hydrogen generation activity from pure water follows the sequence Ni/NiO > NiO > Ni, whereas the activity sequence in aqueous methanol is Ni > Ni/NiO > NiO. Metallic Ni nanoclusters expose the most active sites and facilitate hydrogen formation from aqueous methanol. In the Ni/NiO core/shell structure, the metallic Ni nanoclusters induce the migration of photogenerated electrons from the bulk to the catalyst surface, while NiO acts as the H2 evolution site and prevents water formation from H2 and O2 [55].
Molecular co-catalyst engineering
Molecular co-catalyst engineering has received much research attention in recent years. In a molecular/semiconductor hybrid system, the noble-metal-free molecular complex used as co-catalyst can not only facilitate charge separation but also help to elucidate the mechanisms of hydrogen evolution and carbon dioxide reduction at the molecular level [56]. Although studies on molecular co-catalysts for alkali tantalates are limited, excellent work has been done by Hong and coworkers. In that case, by using the molecular co-catalyst [Mo3S4]4+, the photocatalytic activity of NaTaO3 was significantly improved. The hydrogen production rate is about 28 times higher than that of pure NaTaO3, because the [Mo3S4]4+ clusters provide a large number of effective active sites for hydrogen evolution, and the matching between the conduction band of NaTaO3 and the reduction potential of [Mo3S4]4+ also acts as a major determinant of the enhanced photocatalytic activity [57].
Synthetic methodologies of alkaline earth and transition metal tantalates
The solid-state reaction method and the hydrothermal method are routinely used to synthesize alkaline earth and transition metal tantalates. Almost all alkaline earth and transition metal tantalates can be obtained by the high-temperature solid-state method using Ta-based precursors, and the compounds synthesized successfully by this method [65] show prospects in many applications, including photocatalytic semiconductors, solar cells and electronic devices. However, the high-temperature treatment of the traditional solid-state reaction increases the particle size and thus decreases the surface area. Sr2Ta2O7 and related tantalates [67] were reported to be obtained in a similar way with extra alkali, which compensates for losses at high temperature and suppresses defect formation; this makes the crystal structure grow well and, in most cases, yields better catalytic efficiency than samples synthesized with the theoretical stoichiometric ratio. This improved solid-state reaction method efficiently inhibits the recombination of photocarriers and thereby enhances the photocatalytic activity. The polymerizable complex technique is another preparation method for alkaline earth tantalates that operates under relatively moderate conditions: a Ta-based compound is converted into a viscous sol-gel, which is then treated at 600-700 °C. Compared with the solid-state method, tantalate-based photocatalysts synthesized by the polymerizable complex route often have greater crystallinity and better crystal size, which remarkably increases the photocatalytic efficiency [68]. Also in comparison with the solid-state method, the hydrothermal method has been widely used to synthesize perovskite tantalates at much lower reaction temperatures, and many alkaline earth tantalates prepared hydrothermally exhibit higher activity. In 2006, Zhu and coworkers synthesized monomolecular-layer Ba5Ta4O15 nanosheets by a hydrothermal method [65]; in the photodegradation of Rhodamine B solution, these nanosheets showed an activity ten times higher than that of solid-state-derived Ba5Ta4O15 particles. Perovskite Ca2Ta2O7 has also been synthesized by a hydrothermal process in aqueous KOH solution at 373 K for 120 h and shows photocatalytic water splitting activity under UV-light irradiation [69]. Moreover, the sol-gel route, a common way to prepare nanomaterials, can also be used to prepare some perovskite tantalates. One typical case is ferroelectric SrBi2Ta2O9 [70] and SrBi2Ta2O9 nanowires [71] synthesized using ethylene glycol as solvent, which showed better dielectric and ferroelectric properties than ceramics prepared by solid-state reactions, owing to a denser and more homogeneous microstructure with a better distribution of grain orientations. The sol-gel method is also used to prepare metastable phases such as Sr0.5TaO3, as reported by many groups [61,65,70,72,74,75]. These perovskite composites are promising materials combining multiple elements, a perovskite framework and layer-like structures, and they can be classified into three structural categories according to their different interlayer structures. On the other hand, it has been reported that some alkaline earth tantalates with a double-perovskite structure also show photocatalytic activity under ultraviolet light irradiation [79], and some transition metal tantalates have a simple cubic perovskite-type structure, such as AgTaO3, Ba3ZnTa2O9, Sr2GaTaO6 [80] and NaCaTiTaO6, NaCaTiNbO6, NaSrTiTaO6 and NaSrTiNbO6 [81].
Metal/non-metal doping strategies for band gap engineering
Introducing external ions into the crystal structure is generally accepted as an effective way to improve the visible-light photocatalytic activity of semiconductors with large band gaps; the nitrogen-doped layered oxide Sr5Ta4O15 is one such example [84]. It is found that, in most perovskite cases, the valence band levels are shifted upwards, with the maximum contribution to the valence band maximum coming from the p orbitals of the dopant anions. The dopant cations, on the other hand, shift the CB level downwards, because the CBM is chiefly governed by the d orbitals of the foreign cations. Liu and coworkers showed that this conclusion applies directly to perovskite-structured tantalate systems (Figure 7) [85].
Multi-component heterojunction
The multi-component semiconductor combination tactic effectively improves photocatalytic activity by separating the photogenerated charge carriers through the formation of a heterojunction structure. A heterojunction is the interface between two regions of different crystalline semiconductors. Such materials must satisfy several requirements, including similar crystal structures, similar interatomic spacings and close thermal expansion coefficients. At the same time, the two semiconductors should have different band gap values, in exact contrast to a homojunction, which benefits the regulation of the electronic energy bands. To promote the redox ability and photocatalytic activity, composite photocatalysts involving two or more components have been extensively studied. One type of such composite is usually constructed by coupling semiconductors with larger band gaps for the purpose of higher redox ability. A notable example is the Ba5Ta4O15/Ba3Ta5O15 composite reported by Roland Marschall et al., which was synthesized through the sol-gel method and showed excellent activities in OH radical generation and photocatalytic hydrogen production [86]; the outstanding activity is attributed to enhanced charge carrier separation. In 2011, Wang and coworkers presented Pt-loaded graphene-Sr2Ta2O7-xNx (Figure 8) with an enlarged visible light absorption region and enhanced photocatalytic hydrogen generation [87].
Co-catalyst surface modification
Transition metals and their oxides are commonly used as practical co-catalysts for photocatalysis. The role of the co-catalysts attached to the surface of the semiconductor material is particularly significant: they increase the overall photocatalytic activity by helping to separate charge pairs, acting on both the bulk and surface electron/hole pathways, and they promote the chemical reactions taking place at the surface. Different metals and oxides loaded on the surface of a semiconductor show different effects. In most photocatalytic water splitting systems, metals such as Au and Pt markedly accelerate the rate of hydrogen reduction [88,89], while oxides such as NiO, NiOx and RuO2 promote the rates of both hydrogen and oxygen production [90-92]. Among them, NiOx exhibited the highest activity in the photocatalytic process [78]. As a hydrogen evolution site, the co-catalyst has to extract the photogenerated electrons from the CB of the host material; thus, the conduction band level of the co-catalyst should lie below that of the photocatalyst. In addition, photocatalytic water splitting is sensitive to the deposition method of the co-catalyst: Kudo et al. reported that photocatalysts show diverse water splitting performance with different deposition methods [59]. Moreover, transition-metal sulfides such as MS (M = Ni, Co, Cu) have also been developed as co-catalysts to improve the photocatalytic activity; these sulfides play the same role as other co-catalysts in the reaction process [93].
Summary and outlook
Tantalate-based perovskite semiconductors are well known for their widespread applications in photocatalysis, ionic conductors, luminescence host materials and ferroelectric ceramics. The drawbacks of a wide band gap and low charge separation efficiency inhibit the further development of tantalate-based perovskite semiconductors as superior photocatalysts. The combination of various strategies, such as doping, heterojunctions and co-catalyst engineering, marks a promising starting point for exploring visible-light-active and highly efficient photocatalysts for solar energy applications. However, studies on tantalate-based perovskite semiconductors are currently unsystematic, and the above-mentioned strategies and the derived photocatalytic systems with high efficiency and stability still need to be developed further.
"year": 2016,
"sha1": "f60933b23d637d34b477e5f22c4072fcfca775c6",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/49380",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9da26c57f2090933bb1d48e6e6c343b267141fbb",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Rate Coefficient for the Reaction of Cl Atoms with cis-3-Hexene at 296 ± 2 K
The rate coefficient of the cis-3-hexene + Cl atoms reaction at 296 ± 2 K and 750 ± 10 Torr was determined using the relative rate technique. The reaction was investigated using an 80 L Teflon reaction bag and a gas chromatograph coupled with flame-ionization detection. Chlorine atoms were produced by the photolysis of trichloroacetyl chloride. To the best of our knowledge, no previous experimental data were available in the literature. The mean second-order rate coefficient value found was (4.13 ± 0.51) × 10⁻¹⁰ cm³ molecule⁻¹ s⁻¹. The experimental value agrees with the rate coefficient estimated by structure-reactivity analysis, 4.27 × 10⁻¹⁰ cm³ molecule⁻¹ s⁻¹. Moreover, both the addition and hydrogen abstraction channels contribute to the global kinetics, with branching ratios of 70:30. The effective lifetime with respect to Cl atoms is predicted to be 67.2 hours; however, the cis-3-hexene + Cl channel is suggested to be non-negligible under atmospheric conditions. Other atmospheric implications are discussed.
Introduction
Volatile organic compounds (VOCs) are emitted to the troposphere from different sources of biogenic and anthropogenic origin and play a fundamental role in atmospheric chemistry. Among the compounds of anthropogenic origin, cis-3-hexene, emitted in gasoline vapor, is of particular interest.1,2 In the troposphere, cis-3-hexene reacts with hydroxyl radicals, ozone and nitrate radicals. Barbosa et al.3 studied the kinetics of the reaction of cis-3-hexene with hydroxyl radicals using the relative rate method, and the experimental mean second-order rate coefficient was determined as (6.27 ± 0.66) × 10⁻¹¹ cm³ molecule⁻¹ s⁻¹. The kinetics of the reactions with ozone and nitrate radicals have been investigated at room temperature using the relative method, and the rate coefficients (in cm³ molecule⁻¹ s⁻¹) were determined as (1.44 ± 0.17) × 10⁻¹⁶ and (4.37 ± 0.49) × 10⁻¹³, respectively.4,5 Chlorine atoms are also important atmospheric oxidants and have been observed in the marine boundary layer. In the early morning, the concentration of Cl atoms reaches its highest value, and reactions of VOCs with Cl can be even more important than the reactions with OH radicals, the major daytime oxidant.6 To the best of our knowledge, the reaction of Cl atoms with cis-3-hexene, despite its importance to atmospheric chemistry, has not been studied yet.
The main goal of this work is the experimental study of the kinetics of the cis-3-hexene + Cl reaction. The rate coefficient at 296 ± 2 K and atmospheric pressure is reported for the first time, based on the relative rate method. Aspects of the reaction mechanism and atmospheric implications are also discussed.
Experimental
The experimental study was performed at the Instituto de Investigaciones en Fisico Química de Córdoba, Argentina. An 80 L collapsible Teflon bag was used, and the rate coefficients were determined by the relative rate method.7-9 Briefly, the reactant and the reference compound were introduced into the chamber using nitrogen or ultrapure air. Cl atoms were generated by trichloroacetyl chloride photolysis using three germicide lamps (Philips 30 W) with a λ maximum around 254 nm; the photolysis time varied from 20 seconds to 1 minute. A gas syringe (Hamilton gas-tight, 5 mL) was used to collect samples periodically, and the gas samples were analyzed using a gas chromatograph (Clarus 500, PerkinElmer) equipped with an Elite-1 column (PerkinElmer; length: 30 m, inner diameter: 0.32 mm, film thickness: 0.25 μm) and a flame ionization detector (FID). The column temperature was held at 33 °C for 20 min. Helium was used as the carrier gas at a flow rate of 0.8 mL min⁻¹.
cis-3-Hexene and the reference compounds, together with trichloroacetyl chloride, were introduced into the chamber and left in the dark for 2 hours. Under these conditions, no evidence of a reaction between cis-3-hexene and the reference compounds was found, and no reaction was observed between the organic species and trichloroacetyl chloride in the absence of UV light.
The reactant and the reference compounds decay through the following reactions:

Cl + cis-3-hexene → products, khex (1)

Cl + reference → products, kref (2)

where khex and kref are the rate coefficients for the reactions of cis-3-hexene and of the reference compound with Cl atoms, respectively. Assuming that cis-3-hexene and the reference compounds are lost entirely through reactions 1 and 2, the following relation can be obtained:

ln([hex]0/[hex]t) = (khex/kref) ln([ref]0/[ref]t) (3)

In equation 3, the subscripts 0 and t correspond to the time instants 0 and t, respectively.
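As an illustration of how the relative rate coefficient is extracted from such measurements, the short sketch below fits equation 3 by linear regression through the origin. The concentration arrays are hypothetical placeholders, not measured values, and the variable names are our own.

```python
import numpy as np

# Hypothetical GC peak-area time series (proportional to concentration) for
# cis-3-hexene and the reference compound; index 0 corresponds to t = 0.
hex_area = np.array([100.0, 81.0, 66.0, 53.0, 43.0])
ref_area = np.array([100.0, 83.0, 69.0, 57.0, 48.0])

y = np.log(hex_area[0] / hex_area[1:])   # ln([hex]0/[hex]t)
x = np.log(ref_area[0] / ref_area[1:])   # ln([ref]0/[ref]t)

# Equation 3 predicts y = (khex/kref) * x with zero intercept, so the
# slope is obtained by least squares through the origin.
slope = np.sum(x * y) / np.sum(x * x)

k_ref = 3.97e-10                         # n-heptane + Cl [6], cm3 molecule-1 s-1
k_hex = slope * k_ref
print(f"khex/kref = {slope:.3f}, khex = {k_hex:.2e} cm3 molecule-1 s-1")
```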
Rate coefficients for the reaction of cis-3-hexene with Cl atoms were obtained, in each experiment, at 296 ± 2 K and an atmospheric pressure of 750 ± 10 Torr, relative to the rate coefficients of the reactions of Cl atoms with n-heptane and cyclopentane, used as reference compounds.
The infrared (IR) spectrum of cis-3-hexene was recorded with a Nicolet FTIR (Fourier transform infrared) spectrometer at 1.0 cm⁻¹ resolution. The absorption cell was a Pyrex cell sealed with NaCl windows, with an optical path length of 23.0 ± 0.1 cm. Gas sample pressures were measured with a capacitance manometer (MKS Baratron, 10 Torr range). Background spectra were measured with the sample cell under vacuum. The infrared spectrum, recorded in the 500-1500 cm⁻¹ region at 298 K, was used to calculate radiative efficiencies (RE)10 and the global warming potential.
The model adopted for calculating the radiative efficiencies assumes a uniform distribution of the compound over the troposphere. The actual RE values of short-lived compounds can be significantly lower than those calculated from this model, since their concentration should decrease strongly with altitude. Taking this into account, the RE values estimated in this work should be regarded as upper limits.
The global warming potential (GWP) is calculated relative to CO2 over a specified time horizon from a model that also takes into account the RE values and the tropospheric lifetimes. In some cases, CFCl3 (CFC-11) is used as the standard, and this GWP is called the halocarbon global warming potential (HGWP). The HGWP was calculated relative to CFC-11 using the following expression:11

HGWP = (REcis-3-hexene/RECFC-11) × (MCFC-11/Mcis-3-hexene) × (τcis-3-hexene/τCFC-11) × [1 − exp(−t/τcis-3-hexene)] / [1 − exp(−t/τCFC-11)] (4)

where τcis-3-hexene and τCFC-11 (τCFCl3) are the corresponding tropospheric lifetimes; Mcis-3-hexene and MCFC-11 (MCFCl3) are the corresponding molar masses; REcis-3-hexene and RECFC-11 (RECFCl3) are the radiative efficiencies of the hexene and of CFCl3, respectively; and t is the time horizon over which the RE is integrated.
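A minimal numerical sketch of equation 4 is given below. The hexene lifetime and RE are placeholders to be taken from Tables 2 and 3; the molar masses and the approximate CFC-11 parameters are standard literature values used here only for illustration.

```python
import math

def hgwp(re_x, re_ref, m_x, m_ref, tau_x, tau_ref, t):
    """Halocarbon GWP relative to CFC-11 (equation 4); all lifetimes and
    the horizon t must share the same time unit (years here)."""
    return ((re_x / re_ref) * (m_ref / m_x) * (tau_x / tau_ref)
            * (1.0 - math.exp(-t / tau_x)) / (1.0 - math.exp(-t / tau_ref)))

# Placeholder inputs (illustrative only; take actual values from Tables 2-3):
re_hex, tau_hex, m_hex = 0.01, 3.0 / (365 * 24), 84.16    # W m-2 ppb-1, yr, g mol-1
re_cfc11, tau_cfc11, m_cfc11 = 0.26, 52.0, 137.37         # approximate CFC-11 data

for horizon, scale in ((20, 6730), (100, 4750)):          # CFC-11 GWP scaling factors
    h = hgwp(re_hex, re_cfc11, m_hex, m_cfc11, tau_hex, tau_cfc11, horizon)
    print(f"{horizon:>3} yr: HGWP = {h:.2e}, GWP = {h * scale:.2e}")
```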
The GWPs of cis-3-hexene relative to CO2 were calculated by multiplying the HGWP values by scaling factors of 6730 and 4750 for time horizons of 20 and 100 years, respectively.12 These scaling factors are the GWP values of CFC-11.
Results and Discussion
Rate coefficient for cis-3-hexene + Cl → products

The rate coefficient for the cis-3-hexene + Cl atoms reaction is the sum of the coefficients for the addition and abstraction channels and was measured at 296 ± 2 K and atmospheric pressure. The reference reactions are:

Cl + n-heptane → products, k1 (5)

Cl + cyclopentane → products, k2 (6)

where k1 and k2 are the rate coefficients (in cm³ molecule⁻¹ s⁻¹): k1 = (3.97 ± 0.27) × 10⁻¹⁰ and k2 = (3.26 ± 1.0) × 10⁻¹⁰, as reported by Ezell et al.6 and Wallington et al.,13 respectively. Four experiments were performed with each reference compound.
Different initial concentrations of the reference compounds were used in each experiment.
The linearity of the data points is observed in all experiments, with correlation coefficients greater than 0.99.Moreover, the intercepts are close to zero, suggesting that the contribution of secondary reactions with the products of the reactions studied can be neglected.
The initial concentrations of cis-3-hexene and of the reference compounds are presented in Table 1, together with the number of experiments, the khex/kref ratios and the khex rate coefficients.
Error propagation was considered to estimate the uncertainty of the rate coefficients. As previously discussed,3 these uncertainties were calculated taking into account both the standard error of the slopes of the logarithmic concentration plots and the reported errors of the reference rate coefficients.6,13 Errors due to sample handling and the chromatographic method are included in the standard error of the slope.
At least four experiments using two reference compounds were performed to determine the khex rate coefficient. Consequently, the final rate coefficient is a mean value obtained from all experiments, and the uncertainty is equal to twice the standard deviation.
Reactivity and reaction mechanism
A comparison of the room-temperature rate coefficients determined for the reactions of cis-3-hexene with Cl atoms and with OH radicals3 shows that the former is 6.6 times higher. In order to (i) evaluate the rate coefficient for the reaction of Cl atoms with cis-3-hexene determined in this work, (ii) explain the higher reactivity towards Cl atoms and (iii) draw inferences about the reaction mechanism, the reactivity of a series of alkenes was compared and the contributions of the hydrogen abstraction and electrophilic addition channels were evaluated. These issues can be assessed by comparing the rate coefficients of the reactions of a series of alkenes with Cl atoms and with OH radicals.
Concerning the differences between the rate coefficients for the reactions of the alkene with OH radicals and with Cl atoms, we first note that the electrophilic characters of the chlorine atom and of the OH radical are different, as reflected in the experimental electron affinity values of 3.61 and 1.827 eV for Cl and OH, respectively. Since both reactions are initiated by the electrophilic attack of the oxidizing agent, the higher electrophilic character of Cl atoms explains the higher rate coefficients for the reactions of hydrocarbons with this species.
In Figure 3, the rate coefficients for the OH reactions with alkenes are highly correlated with the number of hydrogen atoms replaced by methyl groups in the molecule (triangles), whereas a similar correlation is not observed for the rate coefficients of the Cl reactions (circles). However, when ethylene is excluded from this group, a much better correlation is found between the rate coefficients for the Cl reactions with alkenes and the number of replaced hydrogen atoms (black line), showing that the contribution of the replacement of a hydrogen atom by a methyl group to the reactivity towards chlorine atoms is greater than for the kinetics of the OH reactions with alkenes. In fact, the rate coefficients are expected to increase, since the replacement of a hydrogen atom by an alkyl group increases the electronic density of the π orbitals, favoring the electrophilic attack. Therefore, the higher slope observed for the kCl rate coefficients can also be attributed to the higher electrophilic character of Cl atoms.
The second group comprises the OH and Cl reactions (and the corresponding rate coefficients) with 1-alkenes (Figure 4). The rate coefficients for the reactions of OH radicals with 1-alkenes suggest that lengthening the side chain along a homologous series has a small effect on the rate coefficient when the double bond is at the terminal carbon atom.
In contrast to the trend observed for the kOH values, a significant increase is observed for the kCl rate coefficients, as evidenced in Figure 4. Note that the kOH rate coefficients (squares) fall in the range from 3.0 × 10⁻¹¹ to 4.3 × 10⁻¹¹ cm³ molecule⁻¹ s⁻¹, whereas the kCl rate coefficients (circles) fall in the range from 2.5 × 10⁻¹⁰ to 6.0 × 10⁻¹⁰ cm³ molecule⁻¹ s⁻¹. Since lengthening the side chain along a homologous series has only a minor effect on the electronic density of the π orbitals, and thus represents a minor contribution to the electrophilic addition, the different slopes observed for kOH and kCl in this group can only be attributed to the hydrogen abstraction channel. As the side chain grows along the homologous series, the number of hydrogen atoms also increases, and therefore the contribution of the hydrogen abstraction channel to the global kinetics increases.
The question now is how to predict the contribution of each possible channel to the global kinetics.
The rate coefficients for the reactions of alkanes with OH radicals and chlorine atoms, in which only the hydrogen abstraction channel operates, were also compared.
The contribution of increasing carbon chain length to the reactivity of alkanes towards OH radicals and Cl atoms is shown in Figure 5.
In Figure 5, the rate coefficients for the reactions of OH radicals with propane (1.1 × 10⁻¹² cm³ molecule⁻¹ s⁻¹), butane (2.4 × 10⁻¹²), hexane (5.2 × 10⁻¹²), heptane (6.8 × 10⁻¹²), octane (8.1 × 10⁻¹²) and nonane (9.7 × 10⁻¹² cm³ molecule⁻¹ s⁻¹) were determined by Atkinson,22 whereas the rate coefficient for the pentane + OH reaction (3.9 × 10⁻¹² cm³ molecule⁻¹ s⁻¹) was determined by Sivaramakrishnan and Michael.23 The following rate coefficients (cm³ molecule⁻¹ s⁻¹) for the reactions of Cl atoms with alkanes were used: 1.4 × 10⁻¹⁰ (propane),19 2.1 × 10⁻¹⁰ (butane),24 2.5 × 10⁻¹⁰ (pentane), 3.1 × 10⁻¹⁰ (hexane), 3.6 × 10⁻¹⁰ (heptane), 4.1 × 10⁻¹⁰ (octane),25 and 4.3 × 10⁻¹⁰ (nonane).26 The comparison of the slopes of the kCl rate coefficients (Figures 4 and 5) suggests that the contribution of the increase in carbon chain length to the reactivity is similar for alkanes and alkenes, implying that the contribution of the hydrogen abstraction channel to the kinetics of the alkene + Cl reactions is similar to that of the corresponding alkane + Cl reactions. The same comparison for kOH shows that the rate coefficients for the reactions of alkenes increase faster with carbon chain length: the slope for the kOH rate coefficients of the alkene reactions is almost twice that of the alkane reactions, suggesting that the hydrogen abstraction channel makes only a minor contribution to the alkene + OH reactions. Therefore, the hydrogen abstraction channel contributes more to the overall kinetics of the reactions of Cl with alkenes than to the reactions of OH with alkenes.
A possible estimate of the addition and hydrogen abstraction branching ratios can be made on the basis of the structure-reactivity scheme suggested by Ezell et al.,6 which expresses the rate coefficient (k) as a sum of terms corresponding to the direct abstraction of non-allylic hydrogen atoms, addition to the double bond and abstraction of the allylic hydrogen atoms:

k = kalkyl + kadd + kallyl (7)

The first term in this sum is obtained from the following expression:

kalkyl = Σ k1° F(X) + Σ k2° F(X)F(Y) + Σ k3° F(X)F(Y)F(Z) (8)

where k1°, k2° and k3° are the contributions of hydrogen abstraction from primary, secondary and tertiary groups, respectively, to the overall rate coefficient; F(X), F(Y) and F(Z) are factors that take into account neighboring-group effects; and the sums run over the corresponding CH3, CH2 and CH groups of the molecule. Values for the individual parameters were given by Atkinson.27 The second term in equation 7, kadd, represents the contribution of the addition channel to the overall rate coefficient and takes into account the possible formation of primary and secondary radicals, two equivalent secondary radicals, tertiary and primary radicals, or tertiary and secondary radicals.
The kallyl term in equation 7 is the contribution of allylic hydrogen abstraction to the overall rate coefficient and takes three possible values, for hydrogen bound to primary, secondary or tertiary carbon atoms.
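The sketch below illustrates how such a group-additivity estimate is assembled for cis-3-hexene (CH3-CH2-CH=CH-CH2-CH3). The numerical group values are placeholders, not the published parameters, which must be taken from Ezell et al.6 and Atkinson.27

```python
# Group-additivity estimate for cis-3-hexene + Cl (equations 7 and 8).
# All group values below are illustrative placeholders only; the actual
# parameters must be taken from Ezell et al. [6] and Atkinson [27].
K1 = 1.5e-11             # primary C-H abstraction term (placeholder)
F_CH2 = 1.0              # neighboring-group factor for -CH2- (placeholder)
K_ADD_SEC_SEC = 3.0e-10  # addition forming two secondary radicals (placeholder)
K_ALLYL_SEC = 5.0e-11    # allylic H abstraction, secondary carbon (placeholder)

# cis-3-hexene, CH3-CH2-CH=CH-CH2-CH3: two non-allylic CH3 groups, two
# allylic CH2 groups, and a double bond flanked by two secondary carbons.
k_alkyl = 2 * K1 * F_CH2          # equation 8, CH3 terms only in this molecule
k_add = K_ADD_SEC_SEC             # single addition term
k_allyl = 2 * K_ALLYL_SEC         # two equivalent allylic CH2 groups

k = k_alkyl + k_add + k_allyl     # equation 7
print(f"k = {k:.2e} cm3 molecule-1 s-1; "
      f"addition = {k_add / k:.0%}, abstraction = {1 - k_add / k:.0%}")
```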
Taking this scheme into account, the overall rate coefficient for the cis-3-hexene + Cl reaction is predicted to be 4.27 × 10⁻¹⁰ cm³ molecule⁻¹ s⁻¹, in agreement with our experimental result ((4.13 ± 0.51) × 10⁻¹⁰ cm³ molecule⁻¹ s⁻¹). Moreover, the branching ratios for the addition and hydrogen abstraction channels are 70 and 30%, respectively, supporting the conclusion that the contribution of the hydrogen abstraction channel is not negligible.
Atmospheric implications
In the atmosphere, cis-3-hexene can also be removed by reactions with O3, NO3 and OH radicals. The effective lifetimes of cis-3-hexene with respect to the reaction with each oxidant were calculated using the relationship

τX = 1/(kX[X]) (9)

with X = Cl atoms, OH and NO3 radicals, and O3 molecules, using the estimated 12 h average daytime global concentration of OH radicals (1 × 10⁶ radicals cm⁻³),28 the 12 h average night-time concentration of NO3 radicals (5 × 10⁸ molecule cm⁻³),29 the 24 h average O3 concentration (7 × 10¹¹ molecule cm⁻³),30 and an average global chlorine concentration of 1 × 10⁴ atoms cm⁻³.31 From the rate coefficients available in the literature3-5 and the experimental rate coefficient obtained in this work, the lifetimes were calculated and ranged from 1.27 h to 2.8 days. These calculations do not take into account local atmospheric conditions and seasonal variations, which can change these oxidant concentrations.
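For illustration, the lifetime estimate of equation 9 can be evaluated directly. The Cl rate coefficient is the one measured here, while the OH, O3 and NO3 coefficients are the literature values quoted in the Introduction;3-5 the oxidant concentrations are those cited above.

```python
# Effective lifetimes of cis-3-hexene (equation 9): tau = 1 / (k * [X]).
oxidants = {
    #          k / cm3 molecule-1 s-1    [X] / cm-3
    "OH":  (6.27e-11, 1e6),    # 12 h daytime average [28]
    "O3":  (1.44e-16, 7e11),   # 24 h average [30]
    "NO3": (4.37e-13, 5e8),    # 12 h night-time average [29]
    "Cl":  (4.13e-10, 1e4),    # this work; global average [31]
}
for name, (k, conc) in oxidants.items():
    tau_h = 1.0 / (k * conc) / 3600.0   # seconds to hours
    print(f"tau({name}) = {tau_h:8.2f} h")
# The Cl entry reproduces the 67.2 h quoted in the abstract, and the NO3
# entry the 1.27 h lower bound of the reported lifetime range.
```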
The tropospheric lifetimes shown in Table 2 indicate that cis-3-hexene is rapidly removed by OH radicals and O3 during the day and by NO3 radicals at night. The contribution of Cl atoms is small, but may be significant in areas with higher Cl concentrations. For instance, with an OH concentration of 5 × 10⁵ cm⁻³, as observed in the early morning hours,32 atomic chlorine concentrations of only 1% of that of OH could contribute significantly to the chemical removal of volatile organic compounds in the marine boundary layer.33 Moreover, maximum Cl concentrations as high as 1 × 10⁵ atoms cm⁻³ have been reported in the marine boundary layer at mid-latitudes at dawn, emphasizing the locally significant effect of Cl atoms on the concentrations and lifetimes of some atmospheric organic compounds in both the remote marine boundary layer and coastal urban regions.34 Significant Cl concentrations may also be found in mid-continental polluted areas from the photolysis of ClNO2, a pollutant formed at night by reactions of soluble chloride species (emitted by anthropogenic sources) with nitrogen oxides, as reported by Thornton et al.35 Therefore, the importance of chlorine reactions with hydrocarbons, relative to the OH reactions, may also extend beyond the marine boundary layer.
Another atmospheric concern regarding VOCs is their contribution to greenhouse warming, as expressed by the global warming potential (GWP), which is calculated relative to CO2 over a specified time horizon.36-39 The plot of the cross-sections (cm² molecule⁻¹ cm⁻¹) as a function of wavenumber (cm⁻¹) for cis-3-hexene is shown in Figure 6.
The integrated IR absorption cross-section (500-1500 cm⁻¹) for cis-3-hexene is 1.35 × 10⁻¹⁷ cm² molecule⁻¹ cm⁻¹. Uncertainties in the cross-section measurement arise from the following sources: the sample concentration (1%), sample purity (3%), path length (1%), and spectrum noise and residual baseline offset after subtraction of the background (1.5%). Considering these individual uncertainties, we quote a conservative uncertainty of ± 6%. Unfortunately, there are no literature data for the absorption cross-section of the studied hexene to compare with.
Table 3 shows the calculated RE value for cis-3-hexene together with the RE of CFC-1112 (in units of W m⁻²), and the calculated HGWP and GWP values for time horizons of 20 and 100 years.
In summary, the lifetime of the studied compound indicates that it will be removed from the troposphere within a few hours. In addition, it is clear from the GWP values that this compound will not make a significant contribution to the radiative forcing of climate change.
Figure 2. Plot of the kinetic data for the reaction of cis-3-hexene with Cl atoms using cyclopentane as the reference compound. [S]0 and [S]t are the concentrations of cis-3-hexene at times 0 and t, respectively; [R]0 and [R]t are the concentrations of the reference compound at times 0 and t, respectively.
Figure 1. Plot of the kinetic data for the reaction of cis-3-hexene with Cl atoms using n-heptane as the reference compound. [S]0 and [S]t are the concentrations of cis-3-hexene at times 0 and t, respectively; [R]0 and [R]t are the concentrations of the reference compound at times 0 and t, respectively.
Figure 3. Replacement of hydrogen atoms by methyl groups in alkenes and comparison of the reactivity of their reactions with chlorine atoms and with OH radicals.
Figure 5. Homologous series of alkanes and comparison of the reactivity of their reactions with chlorine atoms and with OH radicals.
Figure 4. Homologous series of alkenes and comparison of the reactivity of their reactions with chlorine atoms and with OH radicals.
Table 1. Initial concentrations of the reactants (hex: cis-3-hexene; ref: reference compound), rate coefficient ratios (khex/kref) and the relative rate coefficient (khex) for the reaction of Cl atoms with cis-3-hexene at 296 ± 2 K and atmospheric pressure
Table 3. Global lifetime (τglobal), estimated RE, HGWP and GWP for cis-3-hexene over specified time horizons of 20 and 100 years
"year": 2017,
"sha1": "3b6a37a069eb831af1757b0be2374487bd8bfec0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21577/0103-5053.20170078",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "3b6a37a069eb831af1757b0be2374487bd8bfec0",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
GSECnet: Ground Segmentation of Point Clouds for Edge Computing
Ground segmentation of point clouds remains challenging because of their sparse and unordered data structure. This paper proposes GSECnet - a Ground Segmentation network for Edge Computing - an efficient ground segmentation framework for point clouds specifically designed to be deployable on a low-power edge computing unit. First, raw point clouds are converted into a discretized representation by pillarization. Afterward, the features of the points within each pillar are fed into PointNet to obtain the corresponding pillars feature map. Then, a depthwise-separable U-Net with an attention module learns the classification from the pillars feature map with an enormously diminished model parameter size. Our proposed framework is evaluated on SemanticKITTI against both point-based and discretization-based state-of-the-art learning approaches and achieves an excellent balance between high accuracy and low computing complexity. Remarkably, our framework achieves an inference runtime of 135.2 Hz on a desktop platform. Moreover, experiments verify that it is deployable on a low-power edge computing unit powered by only 10 watts.
I. INTRODUCTION
Recently, along with the extensive use of LiDAR sensors in applications such as AR and autonomous driving, the increasingly impressive results of semantic segmentation on 3D point clouds have gained wide attention from the academic and industrial communities. In particular, high-efficiency ground segmentation of point clouds is crucially important. However, unlike 2D images with their dense and organized structure, point clouds captured by a LiDAR sensor are sparse, unevenly distributed, and unordered by nature. These specific properties impose enormous challenges on extracting useful information from point clouds. In the KITTI [1] dataset, a Velodyne HDL-64E LiDAR was used to collect point clouds of the surroundings. Millions of points are received every second, yet over one third of them are reflected from the ground. In contrast, persons, riders, vehicles, and bicycles account for less than 1% of points in most frames according to the SemanticKITTI [2] dataset. It is obvious that applications dealing with point clouds suffer performance deterioration because of the abundance of ground points. Therefore, ground segmentation is not only used to generate terrain models but also acts as a crucial pre-processing step, since the validity of the segmentation result determines the quality of subsequent processing. Moreover, as point cloud data becomes higher in definition and the demand for deployment on edge computing units grows, novel ground segmentation methods need to take into account both high accuracy and low computing complexity.
Heuristic methods were introduced first, in which hand-crafted geometric features are used to estimate ground planes and then segment ground points. With the rise of deep learning research, learning-based methods have developed rapidly and offer promising performance. However, 3D CNN operations suffer from high computing complexity because of unnecessary computation in sparse regions. Therefore, some pioneering works represent point clouds in an organized form to which 2D CNN operations can be applied. In our work, we use pillar representations, where points are discretized at a specific resolution along the x and y axes, and then apply 2D semantic segmentation.
In 2D semantic segmentation, U-Net [3] is one of the leading approaches and provides several benefits: it allows the use of global location and context at the same time; it works with few training samples and yields good segmentation performance; and its end-to-end pipeline processes the entire image in a forward pass and directly produces segmentation maps. These benefits ensure that U-Net preserves the full context of the input images, a primary improvement over patch-based segmentation approaches. Hence, we apply U-Net as the backbone in our work.
In this paper, our goal is to find a method that performs ground segmentation on raw point clouds on a low-power edge computing unit. After reviewing the methods mentioned above, we propose the network GSECnet, shown in Fig. 1, which uses U-Net as the backbone equipped with depthwise-separable convolutions [4] and attention modules [5]. We successfully cut the model parameters down to 1% of the original U-Net implementation. Another novel idea of our framework is the full pillar-wise prediction, which significantly reduces the computing complexity. Meanwhile, we study the normals of points, a feature ignored by previous ground segmentation works, and find that they boost accuracy while only slightly increasing computing complexity at inference. To investigate the performance of GSECnet, we train and test the model on SemanticKITTI. Compared to the top methods PointNet++ [6] and GndNet [7], we show an excellent balance between high accuracy and low computing complexity.
Our contributions are threefold:
• We present an end-to-end deep neural network framework called GSECnet for ground segmentation of point clouds, workable on a low-power edge computing unit.
• We adopt the normal features of points and distribution-controlled undersampling in the framework and observe improvements in experiments.
• We use full pillar-wise prediction to achieve lower computing complexity.
The rest of the paper is organized as follows. In Section II, we review related work; Section III describes the proposed method and implementation process; Section IV presents the evaluation on the SemanticKITTI dataset and an ablation study; finally, concluding remarks and future work follow in Section V.
II. RELATED WORK
Mainly due to the particular nature of 3D point clouds, many meaningful attempts at ground segmentation have been made. These related works can be roughly categorized into heuristic methods and learning methods. This section briefly surveys heuristic methods and discusses their limitations, such as weak adaptability in practice. We then review learning methods from the perspective of two types of data representation: point-based and discretization-based [8].
A. Heuristic ground segmentation
Traditionally, earlier works on ground estimation relied mainly on hand-crafted features derived from geometric constraints and statistical rules: a ground plane is fitted to the extracted geometric information of the points using a model-fitting algorithm. The elevation map approach [9] projects a 3D point cloud onto a 2.5D grid, with manually assigned maximum and minimum elevation thresholds; this approach cannot handle multiple horizontal surfaces such as bridges, tunnels, or treetops. Gallo [10] leverages RANSAC to fit the ground plane, which does not deal with the vertical surfaces of buildings. Liu [11] integrates Gaussian process regression and robust locally weighted regression to model the ground plane, resulting in high computational complexity. Wolf [12] develops a framework that captures geometric features of segments using a pre-trained CRF. These methods based on geometric constraints perform well in their intended scenes, but exhibit various limitations in realistic scenes. In short, heuristic ground segmentation approaches show shortcomings in adapting to all cases simultaneously.
B. Learning-based ground segmentation
More recently, learning-based approaches, which use neural networks to classify points by learning a feature representation, have attracted attention. Two groups of representative works are reviewed below.
Point-based methods directly take raw point clouds as input and output segmentation results. As the pioneering work, PointNet [13] presents a deep learning architecture that can directly handle point sets. It models each point with shared MLPs and then aggregates a global feature using a symmetric aggregation function, achieving outstanding segmentation results. PointNet++ [6] builds a hierarchical structure to extract local contextual features between points. Other works have contributed to constructing ordered feature sequences for convolution operations: PointwiseCNN [14] focuses on defining point convolution operations, and PyramidPoint [15] employs a dense pyramid structure instead of a U-Net. However, these point-based methods require computationally expensive neighbor-searching algorithms to obtain neighboring information [8], which makes them problematic to apply on edge computing units.
Discretization-based methods have emerged as a highly efficient alternative; they convert a point cloud into an ordered discrete structure such as a lattice, voxels, or pillars. SPLATNet [16] interpolates point clouds onto a permutohedral sparse lattice and then executes 3D CNNs. VoxelNet [17] discretizes point clouds into voxels and uses dense 3D convolutions. Nevertheless, it is difficult for these 3D CNN frameworks to attain both accuracy and efficiency. To reduce the computational cost of 3D CNNs, PointPillars [18] and pillar-based detection [19] use pillars instead of voxels, encode the point cloud features into a pseudo image, and then apply 2D CNNs. Paigwar [7] adopts an analogous approach but uses a CRF-based elevation map as ground truth to learn the height of each pillar, which can estimate and model the ground at 55 Hz on the KITTI dataset. Inspired by these discretization-based methods, in our preliminary work PointNet was employed to generate the pillars feature map from point clouds.
III. PROPOSED FRAMEWORK: GSECNET
In this section, we first introduce the overall structure of GSECnet, as depicted in Fig. 2. We then delineate the generation of the pillars feature map through pillarization and PointNet, the design of the depthwise-separable U-Net with attention module, and the process of undersampling raw point clouds.
A. Point cloud pillarization
Because of the sparsity of point clouds, all points must be inspected individually to determine whether they belong to the area of interest; meanwhile, copious empty pillars exist after pillarization. In this work, we choose an appropriate pillar size to guarantee both accuracy and efficiency. In the view of the PointPillars authors, discretizing the z dimension is unnecessary, because it largely increases computational complexity without contributing to the segmentation result. Following the pipeline of pillar-based approaches, we discretize the environment into a pillar grid map of size (128, 128, 1), where each pillar measures 0.8 m x 0.8 m x 8 m.
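A minimal numpy sketch of this pillarization step is given below. It assumes the 102.4 m x 102.4 m area implied by 128 pillars of 0.8 m is centered on the sensor; this range and the variable names are our own assumptions, not the paper's exact configuration.

```python
import numpy as np

def pillarize(points, grid=128, pillar=0.8):
    """Assign each point to an (ix, iy) pillar index.

    points: (N, >=3) array with x, y, z in meters. We assume the grid is
    centered on the sensor and covers [-51.2, 51.2) m on both axes;
    points outside this range are dropped.
    """
    half = grid * pillar / 2.0
    keep = (np.abs(points[:, 0]) < half) & (np.abs(points[:, 1]) < half)
    pts = points[keep]
    ix = ((pts[:, 0] + half) / pillar).astype(np.int32)
    iy = ((pts[:, 1] + half) / pillar).astype(np.int32)
    return pts, ix, iy

# Example: bucket a random cloud; the per-pillar groups would then be
# passed to the feature encoder described in the following sections.
cloud = np.random.uniform(-60, 60, size=(1000, 4)).astype(np.float32)
pts, ix, iy = pillarize(cloud)
print(f"{len(pts)} of {len(cloud)} points fall inside the 128 x 128 grid")
```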
B. Data augmentation
Similar to PointPillars and GndNet, all points in each pillar are augmented; in particular, normal features of the points are appended. As shown in Fig. 3, we observe that the normals of most ground points are well organized and orthogonal to the horizontal plane, whereas the normals of object points are randomly oriented in diverse directions. Clearly, the appended normal features will help our classification network to segment ground points correctly.
Given a point pi(xi, yi, zi), we select its k nearest points, denoted Qi{qi1, qi2, ..., qik}, using a KD-tree [20], and then apply the least squares method to fit a local plane to the selected points, from which the normal is obtained. The process proceeds as follows: starting from the plane equation (1), writing it for the k selected points yields the linear system (2); solving this system in (3) gives the normal of pi, and normalization is performed in (4), where Vi is the normal before normalization and Ni is the normal of point pi. This process is implemented similarly in Open3D [21]. Eventually, the original 4 dimensions of a point are augmented to 12 dimensions {x, y, z, i, xc, yc, zc, xp, yp, xn, yn, zn}, where x, y, z denote the coordinates; i the intensity; xc, yc, zc the distance to the mean of all points in the pillar; xp, yp the offset from the pillar center; and xn, yn, zn the normals.
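As an illustration of this normal estimation step, the sketch below fits the local plane by PCA of each k-neighborhood covariance, which is equivalent to the least squares plane fit in the total-least-squares sense and to the covariance analysis used by Open3D's normal estimation. The function names are ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """Per-point normals via PCA of each k-neighborhood: the eigenvector
    of the neighborhood covariance with the smallest eigenvalue is the
    normal of the best-fit plane."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)            # k nearest neighbors (incl. self)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        q = points[nbrs] - points[nbrs].mean(axis=0)
        cov = q.T @ q                           # 3x3 scatter matrix
        eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
        n = eigvecs[:, 0]                       # smallest-eigenvalue direction
        normals[i] = n / np.linalg.norm(n)      # normalization, cf. eq. (4)
    return normals

pts = np.random.rand(500, 3)
print(estimate_normals(pts)[:3])
```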
C. Pillars feature maps
Taking advantage of the simplicity and powerful representational capability of PointNet, we apply it to aggregate pillar features. A simplified PointNet is adopted in our work, comprising a linear layer with batch normalization and ReLU. The augmented point features are used to generate a 64-channel pseudo image of the pillars, called the pillars feature map, using the simplified PointNet with a max-pooling operation. Thus, the size of the pillars feature map is (128, 128, 64). This process follows the pillar-based approach, but we modify the size of the pillars feature map in our work.
D. Depthwise-separable U-Net with attention module
Semantic segmentation works often adopt the standard U-Net. Nevertheless, its parameter count is large and not feasible to deploy on our target platform, because the Jetson Nano has limited memory and low computing ability. Some works such as GndNet modify U-Net by removing layers, yet in our experiments the computing ability was still insufficient for deployment on a Jetson Nano. Additionally, because of the vacant pillars, the data in the pillars feature map are imbalanced. We therefore replace the standard U-Net and design a depthwise-separable U-Net with attention module. This model has significantly fewer parameters (only 1% of the standard U-Net implementation), but in our work its performance remains similar to that of the standard U-Net. A small model with performance similar to large models is crucial for autonomous vehicles running on restricted, isolated power. Furthermore, to the best of our knowledge, this is the first time ground segmentation with KITTI point cloud input has been performed on a Jetson Nano.
Our encoder-decoder network extends U-Net as the backbone with depthwise-separable convolutions (DSCs) [4] and convolutional block attention modules (CBAMs) [5]. We replace the convolution operations in U-Net with DSCs to decrease the model parameters, and apply CBAMs after the convolution operations to enhance classification ability by avoiding the influence of empty pillars. Three encoder-decoder modules are employed in our network, as shown in Fig. 4. On the encoder side, a DSC operation (yellow arrows) abstracts features with a small number of parameters. The features then pass through CBAMs (blue arrows) to learn the inherent relationships of points and wait to be concatenated via the skip connections (grey arrows), which allow the model to use multiple scales of the input to generate the output. Meanwhile, max-pooling (cyan arrows) halves the feature map. On the decoder side, bilinear upsampling (red arrows) doubles the feature map size, and the resulting feature maps are concatenated with the corresponding encoder output via the skip connections. Then, a DSC operation with a double convolution reduces the number of feature maps by a quarter or a half. Finally, instead of a fully connected layer, the last layer is a 1×1 convolution (green arrow), which yields a single feature map containing the predicted segmentation results. Recently, the authors of [22] suggested a similar model and demonstrated that it achieves performance comparable to the standard U-Net for weather prediction; however, our model is around 1/10 the size of their model and is optimized for the pillars feature map.
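A minimal PyTorch sketch of one depthwise-separable double-convolution encoder stage of the kind described above is shown below; the layer sizes are illustrative, not the exact GSECnet configuration.

```python
import torch
import torch.nn as nn

class DSConv(nn.Module):
    """Depthwise-separable convolution: a per-channel (depthwise) 3x3
    convolution followed by a 1x1 pointwise convolution, which needs far
    fewer parameters than a standard 3x3 convolution."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_in, kernel_size=3, padding=1, groups=c_in),
            nn.Conv2d(c_in, c_out, kernel_size=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# One encoder stage: DSC double convolution, then max-pooling halves the map.
stage = nn.Sequential(DSConv(64, 128), DSConv(128, 128))
pool = nn.MaxPool2d(2)
x = torch.randn(1, 64, 128, 128)      # pillars feature map (B, C, H, W)
print(pool(stage(x)).shape)           # -> torch.Size([1, 128, 64, 64])
```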
E. Distribution controlled undersampling
Before pillarization, undersampling is required to keep the number of input points below a reasonable amount (e.g., 100K in our study). Since the point cloud distribution is imbalanced, points in distant regions are much fewer than in nearby regions. We note that previous works often adopt uniform random undersampling, which leads to the loss of critical points in distant regions. To maintain a uniform distribution, we design an undersampling strategy called distribution-controlled undersampling.
In more detail, we generate sections as shown in Fig. 5. The undersampling probabilities of the sections are determined by both the distance from the LiDAR sensor and the density of points in each section. This process is summarized in Algorithm 1; a sketch of it is given below. Finally, the points P are assigned different sampling ratios according to S. Although distribution-controlled undersampling is somewhat slower than uniform random undersampling, it shows improved ground segmentation performance, as stated in the ablation study.
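Since Algorithm 1 itself is not reproduced here, the sketch below shows one plausible realization under our reading of the description: sections are radial distance bins, and each bin receives a share of the point budget so that sparse, distant sections are kept intact. The binning scheme and parameters are assumptions for illustration.

```python
import numpy as np

def dc_undersample(points, budget=100_000, n_sections=8, r_max=80.0):
    """Distribution-controlled undersampling (a plausible sketch of
    Algorithm 1): partition points into radial sections and sample each
    section with its own ratio instead of one global random ratio."""
    r = np.linalg.norm(points[:, :2], axis=1)
    sec = np.minimum((r / (r_max / n_sections)).astype(int), n_sections - 1)
    counts = np.bincount(sec, minlength=n_sections)
    # Give every non-empty section an equal share of the budget; a section
    # with fewer points than its share is kept entirely.
    share = budget // max(1, int((counts > 0).sum()))
    keep = []
    for s in range(n_sections):
        idx = np.flatnonzero(sec == s)
        if idx.size == 0:
            continue
        if idx.size > share:
            idx = np.random.choice(idx, share, replace=False)
        keep.append(idx)
    return points[np.concatenate(keep)]

cloud = np.random.uniform(-80, 80, size=(120_000, 4))
print(dc_undersample(cloud).shape)
```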
F. Loss function
We remark that a great number of vacant pillars remain in the pillars feature map. These easy negative samples can overwhelm training and lead to degenerate models. Hence, we utilize the focal loss [23] for pillar classification and give small weights to easy negative samples to prevent a wrong learning direction. The focal loss is expressed as

FL(pt) = -a (1 - pt)^b log(pt)

where a denotes a weighting factor; pt the predicted probability for the class with the ground-points label; and b a tunable focusing parameter. In our study, a and b are assigned 0.25 and 2, respectively.
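A compact PyTorch sketch of this binary focal loss, with a = 0.25 and b = 2 as above, is shown below; it is our own implementation, not the authors' code.

```python
import torch

def focal_loss(logits, targets, a=0.25, b=2.0):
    """Binary focal loss FL(p_t) = -a_t * (1 - p_t)^b * log(p_t) over a
    pillar map; logits and targets have shape (B, H, W), targets in {0, 1}."""
    p = torch.sigmoid(logits)
    p_t = torch.where(targets > 0.5, p, 1.0 - p)          # prob. of the true class
    a_t = torch.where(targets > 0.5, torch.full_like(p, a),
                      torch.full_like(p, 1.0 - a))        # class weighting
    loss = -a_t * (1.0 - p_t).pow(b) * torch.log(p_t.clamp_min(1e-8))
    return loss.mean()

logits = torch.randn(2, 128, 128)
targets = torch.randint(0, 2, (2, 128, 128)).float()
print(focal_loss(logits, targets).item())
```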
IV. EXPERIMENTS
Our experiments involve two parts. First, we evaluate our model on a desktop and on a Jetson Nano against PointNet++ and GndNet for ground segmentation on SemanticKITTI. Next, the impact of our proposed U-Net model and of the normal features is investigated in the ablation study.
A. Dataset
The KITTI dataset is one of the most popular public segmentation datasets. Building on it, SemanticKITTI contains 11 sequences of the KITTI dataset with data labeled into 28 classes. Our training set comprises sequences 01, 02, 03, 04, 05, 06, 07, 09, and 10, while sequence 08 is used as the test set. Moreover, we generated the ground truth of ground pillars from the classes road, sidewalk, parking, and other-ground.
B. Evaluation metrics
Ground segmentation performance was evaluated by accuracy, mIoU, and F1-score computed from the confusion matrix. Accuracy measures the fraction of the total points that the model classifies correctly; mIoU measures the similarity between the predicted ground points and the ground truth; and the F1-score indicates the model's performance in terms of precision and recall. The metrics are defined as follows:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

IoU = TP / (TP + FP + FN)

F1 = 2TP / (2TP + FP + FN)

where TP, TN, FP, and FN correspond to the sets of True Positive, True Negative, False Positive and False Negative matches for ground points, respectively.
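These three metrics can be computed from binary masks as in the short helper below; it is our own sketch, not the authors' evaluation code.

```python
import numpy as np

def ground_metrics(pred, gt):
    """Accuracy, IoU and F1 for binary ground masks (1 = ground)."""
    tp = np.sum((pred == 1) & (gt == 1))
    tn = np.sum((pred == 0) & (gt == 0))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    acc = (tp + tn) / (tp + tn + fp + fn)
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return acc, iou, f1

pred = np.random.randint(0, 2, 10_000)
gt = np.random.randint(0, 2, 10_000)
print("acc=%.3f iou=%.3f f1=%.3f" % ground_metrics(pred, gt))
```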
C. Implementation details
We trained all models on a GTX2080Ti GPU and an i7 6700k CPU. Raw point clouds were augmented, and training used a batch size of 16. The Adam optimizer was adopted with a weight decay of 0.0005. The initial learning rate was 0.003 and was reduced by a factor of 0.35 when the loss stopped dropping. The loss converged after roughly 20 epochs, and training took 6-8 hours on our desktop. More implementation details can be found at https://sammica.github.io/gsec
D. Quantitative and qualitative evaluation
We evaluated our model against the state-of-the-art works PointNet++ and GndNet on SemanticKITTI sequence 08 with the same settings. Table I reports the results in terms of accuracy, mIoU, F1-score, and runtime. In the table, PointNet++ attains the highest accuracy, mIoU, and F1-score, but its runtime is the slowest; as explained before, the point-based method yields low efficiency due to its high computational complexity. GndNet shows a good balance of accuracy and efficiency, while ours outperforms GndNet by a wide margin with an overall runtime of 135.2 Hz. To determine whether our model is deployable on a low-power edge computing unit, rather than only on a professional deep learning platform with abundant resources, we then tested it on a Jetson Nano, which would seem unable to perform semantic segmentation on KITTI point clouds. The experiments proved that our model works, running at 0.1 Hz on the Jetson Nano, while the other models are not workable.
E. Ablation study
To study the performance trade-off of the depthwise-separable U-Net with attention module, we evaluated it against the standard U-Net, Attention U-Net, and the SegNet used in GndNet, as reported in Table II. Attention U-Net achieves slightly higher accuracy, mIoU, and F1-score than ours. However, our implementation has only 0.27M parameters and needs 1.47 GMac for inference; the smaller model size and lower computing complexity allow ours to be deployed on the Jetson Nano. We then tested our model with and without the normal features of points on the same dataset: Table III indicates a performance improvement when the normal features are appended.
V. CONCLUSION
This paper proposed GSECnet, designed to perform ground segmentation of point clouds for edge computing. To this end, a lightweight and attentive U-Net was designed, and we showed that dataset augmentation with normal features enables better performance of the classification network. Building on these improvements, we demonstrated that our model achieves an excellent trade-off between accuracy and efficiency. Furthermore, experiments confirmed that GSECnet is the only model deployable on a Jetson Nano. However, the experiment on the Jetson Nano showed that the runtime of GSECnet is around 0.1 Hz, which is far from practical use. In future work, we will address this limitation by using a different representation of point clouds.
"year": 2021,
"sha1": "4a601055a645aecce5dee3046cab1de3cc1b8358",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d0d8964f25e1a66e49fe819cd05d4dd4e1d6275c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Ultrafine carbon particles down-regulate CYP1B1 expression in human monocytes
Background: Cytochrome P450 monoxygenases play an important role in the defence against inhaled toxic compounds and in metabolizing a wide range of xenobiotics and environmental contaminants. In ambient aerosol, the ultrafine particle fraction, which penetrates deeply into the lungs, is considered a major factor in adverse health effects. The cells mainly affected by inhaled particles are lung epithelial cells and cells of the monocyte/macrophage lineage. Results: In this study we analyzed the effect of a mixture of fine TiO2 and ultrafine carbon black Printex 90 particles (P90) on the expression of cytochrome P450 1B1 (CYP1B1) in human monocytes, macrophages, bronchial epithelial cells and epithelial cell lines. CYP1B1 expression is strongly down-regulated by P90 in monocytes, with a maximum after P90 treatment for 3 h, while fine and ultrafine TiO2 had no effect. CYP1B1 was down-regulated up to 130-fold, and in addition CYP1A1 mRNA was decreased 13-fold. In vitro generated monocyte-derived macrophages (MDM), epithelial cell lines, and primary bronchial epithelial cells also showed reduced CYP1B1 mRNA levels. Benzo[a]pyrene (BaP) induces CYP1B1, but ultrafine P90 can still down-regulate gene expression at 0.1 μM BaP. The P90-induced reduction of CYP1B1 was also demonstrated at the protein level using Western blot analysis. Conclusion: These data suggest that the P90-induced reduction of CYP gene expression may interfere with the activation and/or detoxification of inhaled toxic compounds.
Background
Several epidemiologic studies attribute increased morbidity and mortality to exposure to environmental particles [1-3]. These adverse health effects due to the inhalation of particulate matter are a topic of ongoing scientific and public concern. Particulate matter (PM) is a complex mixture of many different components, which can be characterized by origin (anthropogenic or geogenic), by physicochemical properties (such as solubility) or by particle size. Particles with a mean aerodynamic size between 10 and 2.5 μm (PM10) are classified as coarse particles, fine particles have a size between 2.5 and 0.1 μm, and particles with a diameter of less than 0.1 μm are termed ultrafine. Not only the particle size but also other particle-associated parameters, such as particle number, surface area or reactive compounds adsorbed to the surface, may be involved in the observed health effects [4]. Because of their small size, ultrafine particles contribute only modestly to total mass, but they are the predominant fraction by number in PM. Most urban particles result from combustion processes, and therefore the major fraction contains ultrafine carbonaceous particles [5]. After deposition in the lung, larger particles are phagocytized by alveolar and airway macrophages [6,7], but the fine and ultrafine carbon particles remain in the lung for a longer period of time [5]. Ultrafine particles are phagocytized to a minor extent, but they can still enter macrophages and epithelial cells and even penetrate into the circulation. Thus, ultrafine particles not only trigger local inflammatory reactions in the lung but also cause systemic extrapulmonary effects [8]. Ultrafine particles also have the capacity to inhibit phagocytosis by alveolar macrophages [9]. Macrophages and their monocyte progenitors are major elements of the inflammatory response. In addition to performing phagocytosis, they can release inflammatory mediators such as cytokines and chemokines, and they are crucially involved in the destruction of microbes and particles using various enzymatic systems [10]. Cytochromes like CYP1B1 are also expressed by macrophages, and these enzymes are part of the "digestive" and detoxifying machinery of these cells [8].
The xenobiotic metabolism can be divided into two phases: modification (phase I) and conjugation (phase II). An important group of phase I enzymes consists of the cytochrome P450 oxidases (CYP), which belong to the monoxygenases. In humans, 57 CYPs are known, and about 25% of them are considered to be involved primarily in xenobiotic metabolism [11]. Superfamily members are classified according to the similarity of their primary structure. The expression of the CYP1 subfamily can be induced by polycyclic aromatic hydrocarbons (PAH), which are ubiquitously occurring environmental carcinogens [12] and are particularly known to be present in cigarette smoke [13]. The induction of CYP1 genes is regulated by a heterodimer of the aryl hydrocarbon receptor (AhR) and the aryl hydrocarbon receptor nuclear translocator (Arnt) [14]. Two cytochromes, CYP1A1 and CYP1B1, are mainly involved in the formation of the ultimate carcinogenic diol-epoxides of PAH such as benzo[a]pyrene (BaP) [15]. The expression of these enzymes is largely extrahepatic, and both enzymes are present in many tumor tissues [16,17]. CYP1B1 has been identified as a major P450 enzyme in normal human blood monocytes [18], and CYP1B1 is also present in human lung and lung-derived cell lines [19].
Monocytes, macrophages and epithelial cells are affected by particles. CYP1B1 is involved in both detoxification and metabolic activation of xenobiotics [12]. Thus it is important to address the question of whether, and in what way, particles affect CYP1B1. This study addresses the effects of carbon particles on the expression of CYP1B1 in monocytes/macrophages and bronchial epithelial cells, and we demonstrate a pronounced down-regulation of this important extrahepatic enzyme at both the mRNA and the protein level.
Effect of particle exposure on CYP1B1 mRNA expression
In earlier experiments using gene expression arrays, we noted a decreased expression of CYP1B1 mRNA in monocyte-derived macrophages (MDM) of patients with COPD (http://www.ncbi.nlm.nih.gov/geo/; accession number GSE8608). Since exposure to particles plays a major role in the etiology of this disease, we studied the effect of particles on cells of the monocyte/macrophage lineage. To cover a wider range of particle materials (chemistry) and physical properties (size, surface structure), and for reasons of cost-effectiveness, we initially used a mixture of both particles, ultrafine P90 (mean size 90 nm) and fine TiO2 (mean size 200 nm). We analyzed the effect of this mixture on CD14++ monocytes after 3 h of exposure.
Expression levels were calculated relative to the corresponding α-enolase mRNA level in the same sample, such that negative values indicate a transcript prevalence that is less than that of α-enolase in the same cell population. As shown in Fig. 1A, incubation of CD14++ monocytes with this mixture of ultrafine P90 and fine TiO2 resulted in a pronounced decrease of CYP1B1 mRNA transcripts (-4,308 ± 2,231), reflecting a 95-fold reduction compared to untreated cells (-45 ± 37). LPS showed a minor effect with respect to CYP1B1 expression (-68 ± 41).
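As an arithmetic illustration of how the fold values quoted in this section follow from these signed scores, the short sketch below (Python; the division is our own restatement, not a formula quoted from the paper) reproduces the 95-fold figure:

```python
# Sketch: fold reduction as the ratio of absolute signed expression scores
# (more negative = fewer transcripts relative to alpha-enolase).
def fold_reduction(score_treated: float, score_untreated: float) -> float:
    return abs(score_treated) / abs(score_untreated)

print(fold_reduction(-4308, -45))  # ~95.7, quoted as a 95-fold reduction
```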
We then addressed the question which of the two types of particles in the mixture caused the down-regulation of CYP1B1 mRNA in CD14++ monocytes. As shown in Fig. 1A the decrease of CYP1B1 transcripts can be attributed to ultrafine P90 alone which reduced expression 136-fold to a level of -6,109 ± 1,759, while the fine TiO 2 showed no effect (-50 ± 21) compared to untreated cells (-45 ± 37).
Since the down-regulation of CYP1B1 was only found with P90, we next asked whether this was due to the ultrafine nature of the particle. We therefore tested ultrafine TiO2 but found no activity, with an expression level of -34 ± 20 in untreated and -27 ± 18 in treated PBMC (n = 3; data not shown). Hence small size alone is not a sufficient feature to explain the effect on CYP1B1. Other properties present in P90 but not in TiO2 appear to be responsible.
CYP1A1 is a cytochrome monooxygenase closely related to CYP1B1, so we asked whether the expression of this gene is also influenced by particles. Incubation of CD14++ monocytes with ultrafine P90 for 3 h led to a pronounced 13-fold down-regulation of CYP1A1 mRNA (-12,204 ± 5,622). No effect of LPS (-1,508 ± 781) or fine TiO2 (-1,013 ± 746) was detected when compared to untreated cells (-940 ± 823) (Fig. 1B). These data show that CYP1A1 is also affected by ultrafine P90, albeit to a lesser extent than CYP1B1.
Exclusion of LPS-contamination of particles
A frequent problem in cell biology is LPS contamination of materials, including particles such as those used in our experiments. While in our system LPS had only a minor effect on its own, we still needed to exclude a contribution of this compound when combined with particles. For this we used Polymyxin B, a compound known to inhibit pro-inflammatory signals induced by LPS [20]. Polymyxin B on its own did not alter mRNA expression of CYP1B1 (-10 ± 3) compared to untreated cells (-12 ± 4). It did, however, suppress the moderate LPS-induced decrease of CYP1B1 mRNA from -24 ± 10 to -13 ± 9 (p < 0.05). On the other hand, the P90-induced down-regulation of CYP1B1 mRNA expression (-1,292 ± 1,031) was not altered by Polymyxin B (-945 ± 935) (not significant, Fig. 2). These findings support the assumption that the particles were not contaminated by LPS.
Dose response and time course of P90-induced CYP1B1 mRNA repression
To determine the optimum dose of ultrafine carbon particles for CYP1B1 mRNA repression, we incubated CD14++ monocytes with or without different doses of P90 for 3 h, and CYP1B1 mRNA levels were subsequently determined (Fig. 3A). The effects were significant even at low doses, starting with 0.32 μg/ml (-18 ± 8), followed by 3.2 μg/ml (-69 ± 41). A more pronounced decrease of CYP1B1 transcripts was seen at 32 μg P90/ml (-1,152 ± 882, 96-fold reduction), at 320 μg P90/ml (-2,192 ± 1,173, 183-fold reduction) and at 1,000 μg/ml (-2,814 ± 1,754, 235-fold reduction). To exclude a toxic effect on cells incubated with the high particle concentrations, a trypan blue viability test was performed; it showed no decrease of cell viability at any particle concentration. Levels of α-enolase mRNA also gave no evidence of loss of viability of these cells (data not shown). For all further experiments we used a particle concentration of 32 μg/ml.
Next we analyzed the time course of CYP1B1 mRNA repression (Fig. 3B). CD14++ monocytes were incubated with P90 at 32 μg/ml for 0.5 h to 20 h. Repression of CYP1B1 was detectable after 0.5 h of incubation (-27 ± 2), was more pronounced after 1 h (-89 ± 18) and reached a plateau after 3 and 6 h (-1,856 ± 605, 116-fold reduction, and -1,766 ± 584, 110-fold reduction, respectively). After 20 h, CYP1B1 mRNA levels recovered but were still 46-fold reduced (-740 ± 275) compared to untreated cells. Based on these results, an incubation time of 3 h was used for the subsequent monocyte and macrophage experiments.

Figure 1. A) Effect of LPS, ultrafine P90, and fine TiO2 on CYP1B1 mRNA levels in CD14++ monocytes. Cells were purified from PBMC of healthy donors by MACS separation. CD14++ cells remained untreated (none) or were stimulated with LPS (10 ng/ml), with a particle mix of ultrafine P90 and fine TiO2 (each at 32 μg/ml), or with each particle separately for 3 h. Cells were lysed and mRNA levels were determined by RT-PCR. Data were normalized to levels of the housekeeping gene α-enolase (n = 3 incubations from different donors, mean ± S.D.; * p < 0.05 compared to untreated controls). B) Down-regulation of CYP1A1 mRNA in monocytes. CD14++ cells remained untreated (none) or were stimulated with LPS (10 ng/ml), with a particle mix of ultrafine P90 and fine TiO2 (each at 32 μg/ml), or with each particle separately for 3 h. Cells were lysed and mRNA levels were determined by RT-PCR (n = 3 incubations from different donors, mean ± S.D.; * p < 0.05 compared to untreated controls).
Effect of particles on CYP1B1 and CYP1A1 mRNA expression in MDM from healthy donors and COPD patients
MDMs are more mature than monocytes and might show an effect of ultrafine P90 on CYP1B1 expression different from that seen in monocytes. We therefore matured freshly isolated CD14++ monocytes with M-CSF at 100 ng/ml for 5 days and subsequently treated the cells with 32 μg P90/ml for 3 h (Fig. 4A). We used cells from healthy donors and COPD patients to address the question of whether these patients have an altered capacity to deal with exogenous particulates.
Looking at CYP1A1 expression in the same MDM, we noted a much lower level of constitutive expression compared to CYP1B1. In healthy donors, CYP1A1 mRNA expression was -19,350 ± 17,811; after LPS and particle treatment, CYP1A1 transcript levels were -84,195 ± 49,677 (4-fold) and -55,720 ± 96,911 (3-fold), respectively. In MDM of COPD patients, CYP1A1 mRNA levels were -66,840 ± 59,479 in untreated cells and -83,509 ± 66,575 in particle-treated cells (not significant; data not shown).
Figure 2. Particle-induced down-regulation of CYP1B1 is not due to contaminant LPS. CD14++ monocytes remained untreated (none) or were stimulated with LPS (10 ng/ml) or ultrafine P90 (32 μg/ml), each with and without a 15-min Polymyxin B preincubation to suppress the LPS effect. After incubation for 3 h, cells were lysed and mRNA levels were determined by RT-PCR (n = 6 incubations from different donors, mean ± S.D.; * p < 0.05 compared to untreated cells).

Figure 4. A) MDM were generated from CD14++ monocytes purified from PBMC by MACS separation, followed by 5-day incubation with M-CSF (100 ng/ml). Cells remained untreated (none) or were stimulated with LPS (10 ng/ml) or with a particle mix of ultrafine P90 and fine TiO2 (each at 32 μg/ml) for 3 h (n = 5 patients with COPD, n = 8 healthy controls, mean ± S.D.; * p < 0.05 compared to untreated controls). B) Effect of particles on CYP1B1 expression in sputum macrophages of healthy non-smokers, healthy smokers and patients with COPD. Sputum macrophages were purified using RosetteSep to deplete unwanted leukocytes. Cells remained untreated (none) or were stimulated with LPS (10 ng/ml) or with a particle mix of ultrafine P90 and fine TiO2 (each at 32 μg/ml) for 3 h (n = 5 non-smokers, n = 4 smokers, n = 7 patients with COPD, mean ± S.D.; * p < 0.05). C) Cells remained untreated (none) or were stimulated with LPS (10 ng/ml), with a particle mix of ultrafine P90 and fine TiO2 (each at 32 μg/ml) or with each particle separately for 22 h (A549 n = 3, Calu-3 n = 4 experiments from different cell passages, mean ± S.D.; * p < 0.05 compared to untreated control). D) Effect of ultrafine P90 on CYP1B1 mRNA expression in primary bronchial epithelial cells. Cells were obtained by bronchial brush biopsy and remained untreated (none) or were stimulated with ultrafine P90 (32 μg/ml) for 3 h (n = 7 incubations from different donors, mean ± S.D.; * p < 0.05 compared to untreated control).

Taken together, MDMs, when compared to blood monocytes, do show a decrease of cytochrome monooxygenases, but the effect appears to be less pronounced with respect to CYP1B1 and CYP1A1.
Effect of particles on CYP1B1 mRNA expression in sputum macrophages from non-smokers, smokers, and COPD patients
Macrophages in the airways are exposed to inhaled particles, and this may impact CYP expression. We therefore isolated macrophages from induced sputum of healthy non-smokers, healthy smokers and COPD patients (six ex-smokers and one current smoker). In healthy non-smokers, particles reduced CYP1B1 expression in sputum macrophages 2.8-fold (-530 ± 547) compared to untreated cells (-191 ± 198) (Fig. 4B). In COPD, expression of CYP1B1 was at -90 ± 97. Treatment of COPD sputum macrophages for 3 h with ultrafine P90/fine TiO2 led to a 2.7-fold decrease of CYP1B1 transcripts (-247 ± 346) compared to untreated cells. The single currently smoking patient showed the highest mRNA expression of CYP1B1 (-22), with no decrease of transcript after ultrafine P90/fine TiO2 stimulation (-29).
When looking at healthy smokers we found a very high expression level for CYP1B1 (-18 ± 15). Incubation with the particle mixture for 3 h had no effect on CYP1B1 transcripts of smokers (-16 ± 13) compared to untreated cells (Fig. 4B). These data show that in smokers sputum macrophages are refractory to the action of particles.
Effect of particles on CYP1B1 mRNA expression in human epithelial cells
Inhaled particles also deposit on alveolar epithelial cells. We thus investigated the effect of ultrafine P90 and fine TiO 2 on CYP1B1 mRNA expression in the human alveolar epithelial cell line A549 and a bronchial epithelial cell line Calu-3. In these experiments we used an incubation time of 22 h as a preliminary time point, which afterwards was optimized depending on cell type (primary cells or cell line) and read-out (transcript or protein analysis).
To confirm the results obtained with epithelial cell lines, we investigated the effects of P90 on CYP1B1 mRNA expression in primary bronchial epithelial cells. Cells were obtained by bronchial brush and treated with or without 32 μg ultrafine P90/ml for 3 h. As shown in Fig. 4D, CYP1B1 transcript levels were reduced 4-fold in ultrafine P90-treated cells (-1,163 ± 1,161) compared to untreated cells (-267 ± 373). This included two cases with a very strong response (35- and 14-fold). Hence primary epithelial cells can be very sensitive to the action of ultrafine carbon particles.
Effect of benzo[a]pyrene on CYP1B1 mRNA expression in human peripheral blood mononuclear cells (PBMC)
Cytochrome P450 enzymes are known to be involved in the metabolism of polycyclic aromatic hydrocarbons (PAH). Benzo[a]pyrene (BaP), a carcinogenic representative of this class of compounds, strongly up-regulated CYP1B1 mRNA levels in PBMC. Time course experiments showed an increase in transcripts from -88 ± 20 in untreated cells to -6 ± 2 in cells after 3 h, and -4 ± 2 after 6 h incubation with BaP (data not shown).
Effect of P90 on CYP1B1 protein levels in Calu-3
To investigate CYP1B1 protein levels, we performed Western blot analysis with isolated microsomal protein of Calu-3 cells. In the blot shown in Fig. 6, the densitometric reading for CYP1B1 protein in Calu-3 decreased from 3,036,276 AU (untreated cells, 32 h) to 373,799 AU after ultrafine P90 treatment for 32 h. Averaged over three experiments, the CYP1B1 protein level in untreated cells was 2,721,541 ± 379,059 AU, and P90 treatment reduced this to 342,109 ± 46,303 AU (Fig. 6). CYP1B1 mRNA levels analyzed at the same time point (32 h) and in the same cells used for Western blotting showed a decrease of transcript in ultrafine P90-treated cells (-3,551 ± 1,889) compared to untreated cells (-314 ± 144) (p < 0.05) (Fig. 6, right panel).
These data show that P90 will lead to a pronounced reduction of CYP not only at the mRNA but also at the protein level.
Discussion
The data of this study show that P90 causes a strong down-regulating effect on CYP1B1 expression. Cytochrome P450 monooxygenases are involved in the detoxification and toxification of xenobiotic substances. Toxification is caused by the metabolic transformation of non-toxic or less toxic precursors into reactive intermediates. Cells may thus be influenced by exposure to ultrafine P90 in two ways: they may either be protected, or they may become more susceptible to other inhaled substances.
The strongest down-regulation of CYP1B1 expression after stimulation with particle mix of ultrafine P90 and fine TiO 2 was observed in monocytes (60-fold, Fig. 1A). It is unclear at this point why CYP1B1 is so much more sensitive to P90 effects than CYP1A1 (Fig. 1B). Whether this is determined at the promoter level needs to be addressed in the future. As P90 was identified as the active component in the particle mix (Fig. 1A) we addressed two questions.
The first question was whether ultrafine TiO2 particles were capable of showing as strong a down-regulation of CYP1B1 mRNA expression as seen with ultrafine P90. To study whether the strong P90 effect is triggered by its ultrafine nature (14 nm in diameter, specific surface area of 300 m²/g), we incubated cells with ultrafine TiO2 with a diameter of 20 nm and a specific surface area of 48 m²/g [5]. Neither fine nor ultrafine TiO2 treatment altered CYP1B1 mRNA expression. Beck-Speier et al. have shown a highly significant correlation between PGE2/TXB2 formation and the specific particle surface area, but not the mass concentration [21]. The smaller surface area of ultrafine TiO2 (48 m²/g) compared to ultrafine P90 (300 m²/g) could be an explanation for its weaker effect. The chemical composition or surface structure of the particles may also contribute to their reactivity towards various cellular targets. A previous study found, for ultrafine TiO2 and ultrafine P90 purchased from the same manufacturer and with the same composition and surface area as those used herein, different effects in mice when instilled into the lungs: ultrafine P90 elicited a stronger influx of neutrophil granulocytes into the lungs, greater membrane damage (causing release of γ-glutamyl transferase), and higher levels of macrophage inflammatory protein 2 (MIP-2) in the lavage fluid 48 h after instillation than ultrafine TiO2.

Figure 6. Western blot analysis of the ultrafine P90 effect on CYP1B1 protein in the human lung epithelial cell line Calu-3. Calu-3 cells were treated for 32 h with or without P90 (32 μg/ml). Microsomal protein was isolated and the 57 kDa CYP1B1 protein was detected with a rabbit polyclonal antibody (CYP1B11-A, Alpha Diagnostic). Shown is a representative experiment out of three independent experiments. The right diagram shows the average CYP1B1 mRNA expression in Calu-3 cells after 32 h of P90 treatment (n = 3, mean ± S.D.; * p < 0.05 compared to untreated cells).
Secondly we addressed the question, whether the observed effect was caused by an LPS contamination of P90. LPS contamination can be excluded by neutralizing LPS with Polymyxin B. Our experiments with Polymyxin B showed clearly that the down-regulation of CYP1B1 is caused by ultrafine P90 and not by a potential endotoxin contamination of the carbon black particles (Fig. 2). The LPS-mediated reducing effect on CYP1B1 expression could be abolished by Polymyxin B, whereas the P90 effect was not altered by Polymyxin B treatment.
With increasing concentrations of ultrafine P90, the decrease of CYP1B1 expression becomes stronger, and each of the observed effects was significant. The P90 concentration of 32 μg/ml was therefore used for all further experiments. This concentration is in accordance with environmentally relevant particle concentrations; higher concentrations do not reflect realistic physiological conditions [23].
We confirmed the reduced CYP1B1 mRNA expression in additional cell types. In MDM (Fig. 4A) we observed a 2.7-fold decrease of CYP1B1 after particle treatment. In sputum macrophages of healthy non-smokers (Fig. 4B) we showed a 4-fold, and of COPD patients a 3-fold, reduction of CYP1B1. One of the COPD patients was a current smoker and showed a high level of CYP1B1 mRNA in untreated sputum macrophages that was not affected by ultrafine P90 treatment. Additionally, in healthy smokers no effect of particle treatment on CYP1B1 transcript levels was detected. This may be due to the high concentration of organic compounds, e.g. polycyclic aromatic hydrocarbons (PAH), in cigarette smoke, which induce CYP1B1 mRNA expression in a competing manner. Such competing behavior between the induction of CYP1B1 mRNA by benzo[a]pyrene (BaP), a carcinogenic constituent of tobacco smoke [24], and the decrease of CYP1B1 transcript by ultrafine P90 was clearly shown herein (Fig. 5). These findings may also suggest that monocytes become less sensitive to ultrafine P90 treatment during maturation, possibly because signal transduction is stronger or particle uptake more efficient in monocytes. In parallel to the lower response to particles, sputum macrophages showed no response to LPS with respect to down-regulation of CYP1B1.
Epithelial cells are also affected by particle exposure. In epithelial cell lines (Fig. 4C) we observed a significant down-regulation of CYP1B1 after ultrafine P90 stimulation. Primary epithelial cells obtained by bronchial brush (Fig. 4D) showed on average a 4-fold reduction of CYP1B1 mRNA, with strong reductions (35- and 14-fold) in two samples. To exclude leukocyte contamination, we analyzed the cells by flow cytometry and microscopic cell differentiation. The stronger effects seen in two of the seven samples may be due to inter-individual variation or to differences in clinical condition and medication. Taken together, there was no apparent clinical feature (diagnosis, medication, smoking status) to explain the higher responses of bronchial epithelial cells in these two cases.
To confirm the CYP1B1 mRNA down-regulation at the protein level, we isolated the microsomal fraction, because CYP1B1 is bound to the endoplasmic reticulum membrane. It was not possible to isolate sufficient numbers of primary cells to obtain enough protein for Western blotting. For Western blotting we therefore used the bronchial epithelial cell line Calu-3 as a model, because these cells also show a decrease in CYP1B1 transcripts after ultrafine P90 exposure. The strongest reduction of CYP1B1 mRNA was seen after 22 h; we therefore incubated Calu-3 cells with P90 for 32 h to investigate protein expression. Densitometric analysis of the Western blots confirmed a reduction of CYP1B1 protein following P90 treatment (Fig. 6). We assume that CYP1B1 protein will also be decreased substantially in monocytes and macrophages, similar to the mRNA reduction in these cells.
When alveolar macrophages, monocytes, and airway epithelial cells are exposed to particles, the particles are phagocytized by these cell types [25,26] and can subsequently interfere with gene expression or cell functions. For cells of the monocytic lineage, one possibility is a transport mechanism of particles via the macrophage receptor with collagenous structure (MARCO). In a study by Kanno et al., an association between the uptake of fine and ultrafine particles and MARCO was shown [27]. In our study we blocked the MARCO receptor with antibodies. The reduction of CYP1B1 transcripts after ultrafine P90 treatment was not altered by blocking antibodies compared to isotype controls (data not shown). Accordingly, the transport mechanism of ultrafine P90 particles appears not to be based on the MARCO receptor, and the mechanism involved remains elusive.
Transcription of the CYP1B1 and CYP1A1 genes is regulated by the interaction between PAH and the cytosolic aryl hydrocarbon receptor. Dioxin is the most potent agonist [17], but BaP can also activate the receptor complex [28]. Cigarette smoke and diesel soot contain large amounts of PAH adsorbed to particles [29,30]. Herein we showed a BaP-mediated induction of CYP1B1 in a concentration-dependent manner (Fig. 5). Simultaneous treatment with ultrafine P90 attenuates the CYP1B1 induction. The attenuation may be less pronounced at a low concentration of BaP (0.1 μM) because of adsorption of BaP onto P90 particles, which reduces the amount of bioavailable BaP. Overall these findings indicate a competing behaviour of BaP and P90. At low BaP concentrations (0.1 μM), P90 still reduces CYP1B1 expression, and this may affect its detoxifying/toxifying activity.
Besides CYP gene products, myeloperoxidase (MPO) was reported to play an important role in the metabolic activation of chemical carcinogens [31]. MPO catalyzes the conversion of H2O2 into hypochlorous acid (HOCl), which is a very strong oxidant and which in turn may be responsible for the metabolic activation of inhaled chemicals or of organic compounds on inhaled particles. In our investigations we screened CD14++ monocytes for expression of MPO at the mRNA level after exposure to particles (Printex 90 and fine TiO2). Compared to resting monocytes (-641), particle incubation (...). This dysregulation was found to be associated with an enhanced formation of DNA adducts and enhanced genotoxic effects. CYP expression is also regulated by the aryl hydrocarbon receptor (AhR) and the AhR nuclear translocator (Arnt). Wu et al. [35] discussed the reduction of AhR and Arnt expression as the underlying mechanism for the TNF-induced decrease of CYP1A2 expression after LPS treatment. In our system, no influence of particles on AhR or Arnt was detected at the transcriptional level (data not shown).
While a lot is known about the induction of CYP genes, little information is available on the suppression of these genes by particles. We show herein an unexpected and pronounced reduction of CYP expression in various cell types. The reduction of CYP protein may lead to the disruption of various cellular functions: the pathophysiological response to stress signals (toxic substances or inflammation), an adaptive homeostatic response (controlled generation of reactive oxygen species) or part of a tightly regulated physiological pathway (production of bile acids). The reduction of CYP1B1 after ultrafine P90 treatment could be a form of protective mechanism against oxidative stress. Reactive oxygen species (ROS) generated by CYP-catalyzed metabolism can cause oxidative stress in cells [39]. Reduced CYP1B1 may prevent further DNA damage caused by ROS. Furthermore, decreased availability of the CYP1B1 enzyme leads to a lesser activation of PAH to reactive metabolites, which otherwise could cause DNA damage and cancer development [12]. On the other hand, CYP1B1 and CYP1A1 are also important for detoxifying various xenobiotics. A reduced expression of these cytochromes in monocytes/macrophages and lung epithelial cells induced by inhaled carbon particles may lead to increased damage by xenobiotics because of the loss of their detoxifying capacity. Further investigations will be required to elucidate the mechanism by which the particles affect CYP1B1 expression.
Cell lines and culture medium
The human epithelial cell lines A549 and Calu-3 were maintained in supplemented RPMI 1640 medium. Before addition of 10% FCS (S0115, Biochrom, Berlin, Germany), the supplemented medium was ultrafiltered through a Gambro 2000 column (Gambro, Hechingen, Germany) in order to remove any inadvertent LPS contamination.
Donors of primary human cells
Primary human cells were obtained from healthy human volunteers (non-smokers and smokers), from patients with COPD, or from patients with other lung diseases (Table 1). Written informed consent was obtained from each individual. The protocol was approved by the Ethics Committee of the Medical School of the Ludwig-Maximilians-University (Munich, Germany).
Isolation of peripheral blood mononuclear cells (PBMC), enrichment of CD14++ monocytes, and generation of monocyte-derived macrophages (MDM)
Human PBMC were isolated from heparinized (10 U/ml) blood by density gradient centrifugation (Lymphoprep, 1114545, 1.077 g/ml, Oslo, Norway). Cells were directly used for subsequent isolation of monocytes and cultured under LPS-free conditions.
For cell enrichment, the MACS magnetic separation technique was used (all columns and reagents from Miltenyi Biotec, Bergisch-Gladbach, Germany). For the isolation of CD14++ monocytes, PBMC were first depleted of CD16-positive cells. For this, a total of 20 × 10⁶ cells were resuspended in 80 μl of PBS containing 25 μl of anti-CD16 microbeads (130-045-701). After incubation for 30 min at 4°C, cells were washed, resuspended in 1.5 ml PBS and loaded onto an LD column (120-000-497) positioned in a MidiMACS magnet (130-042-302). Non-adherent cells were recovered and used for enrichment of CD14++ cells. For this, anti-CD14 microbeads (130-050-201) were diluted 1:5 in PBS and added to the cells to a final volume of 100 μl. After incubation for 30 min at 4°C, cells were washed, resuspended in 1.5 ml PBS and loaded onto an LS column (120-000-475). The column was washed five times with 2 ml PBS each. Cells were recovered by flushing the column five times with 2 ml PBS using a plunger. CD14++ cells were washed and resuspended in supplemented RPMI 1640 medium (see above).
CD14++ monocytes with a purity of 92% or higher were used.
Sputum induction and processing
Sputum macrophages were collected from healthy human volunteers and from patients with COPD after informed consent. The subjects were instructed to rinse their mouths with water. Sputum induction was performed by stepwise inhalations, for 10 min each, of increasing concentrations of sodium chloride (0.9%, 3%, and 5% in healthy individuals; 0.9% and 3% in COPD patients), nebulized by an ultrasonic nebulizer (Multisonic LS 290, Schill, Probstzella, Germany). After inhalation, individuals were encouraged to cough deeply. Sputum was coughed into petri dishes (d = 13.5 cm) and processed immediately.
To homogenize the sputum samples by cleavage of the disulphide bonds of mucin glycoproteins, four volumes of sputolysin reagent (560000, Calbiochem, Bad Soden, Germany) containing 6.5 mM dithiothreitol and 100 mM phosphate buffer (pH 7.0) were added. After brief vortexing, the mixture was incubated at 37°C and vortexed every 10 min until the sputum was homogenized, in total no longer than 50 min. The sputum samples were diluted with one volume of PBS (Gibco, Karlsruhe, Germany) and filtered consecutively through a 100 μm and a 40 μm mesh filter (352360 and 352340, Becton Dickinson, Heidelberg, Germany). Cells were then pelleted by centrifugation for 10 min at 400 × g and 4°C and resuspended in 3 ml PBS. 10 μl of packed erythrocytes and 50 μl of RosetteSep Human Monocyte Enrichment Cocktail (15068, CellSystems, St. Katharinen, Germany) were added; the cocktail crosslinks unwanted cells to multiple red blood cells, forming immunorosettes. After 20 min of incubation at room temperature, the sample was layered on top of Lymphoprep and centrifuged for 30 min at 800 × g. The enriched macrophages were harvested from the plasma interface.
Brush biopsy of airway epithelial cells
Patients who underwent diagnostic bronchoscopy were brush-biopsied with a sterile single-sheathed nylon cytology brush (BC-202D-5010, Olympus EndoTherapy, Hamburg, Germany). Four consecutive brushings from an intrabronchial area were taken from the proximal part of one of the main bronchi. Epithelial cells were harvested from the brush by agitating it in 5 ml of RPMI 1640 medium. After centrifugation, cells were lysed in TRI reagent (T-9424, Sigma, Taufkirchen, Germany) and RT-PCR analysis was performed (see below).
Treatment of cells
For stimulation with lipopolysaccharide (LPS; L-6261, Sigma, Taufkirchen, Germany), blood cells were incubated with 10 ng/ml and sputum macrophages with 1 μg/ml for 3 h each, or remained untreated as controls. For particle stimulation with ultrafine P90 (14 nm in diameter, Degussa, Frankfurt, Germany) and fine TiO2 (220 nm in diameter, Degussa, Frankfurt, Germany), cells were incubated for 3 h with 32 μg/ml (each particle) or remained untreated as controls. For particle concentration experiments, CD14++ monocytes were incubated with ultrafine P90 for 3 h at 37°C at doses of 0.32 μg/ml to 1 mg/ml. Using the trypan blue exclusion assay, cell viability was not found to be altered by ultrafine P90 treatment at any particle concentration.

RT-PCR analysis

3 μl of cDNA were used for amplification in the SYBR Green format using the LightCycler-FastStart DNA Master SYBR Green I kit from Roche (2239264, Mannheim, Germany). For PCR analysis, the LightCycler system offers the advantage of fast, real-time measurement of fluorescent signals during amplification. The SYBR Green dye binds specifically to the minor groove of double-stranded DNA. Fluorescence intensity is measured after each amplification cycle. During PCR, a doubling of template molecules occurs in each cycle only during the log-linear phase. Melting curves were performed after amplification to ensure that primer dimers did not contribute to the fluorescence intensity of the specific PCR product. Amplificates were run on a 2% agarose gel and bands were observed at the expected molecular weight. As an internal control, the housekeeping gene α-enolase was amplified.
For analysis of the LightCycler data, all samples (tested gene and housekeeping gene) were processed in the LightCycler software with the same settings (e.g. the same thresholds). The cycle number of the housekeeping gene was subtracted from that of the tested gene, and 2 was raised to the power of the absolute value of this difference. Genes with a higher cycle number than the corresponding housekeeping gene were plotted on the negative scale.
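A minimal sketch of this scoring scheme is given below, assuming crossing-point (cycle) numbers exported from the LightCycler software; function and variable names are our own, not part of the original analysis:

```python
def expression_score(ct_gene: float, ct_housekeeping: float) -> float:
    """2 raised to the absolute cycle-number difference, plotted on the
    negative scale when the tested gene needs more cycles (i.e., is less
    abundant) than the housekeeping gene alpha-enolase."""
    delta = ct_gene - ct_housekeeping
    score = 2 ** abs(delta)
    return -score if delta > 0 else score

# Example: a transcript detected ~5.5 cycles later than alpha-enolase scores
# about -45, the level reported for CYP1B1 in untreated monocytes.
print(round(expression_score(27.5, 22.0)))  # -45
```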
Statistics
For statistical analysis of the data, we used Student's t-test. Results were considered significant if p < 0.05. | 2018-05-08T18:34:23.868Z | 2009-10-16T00:00:00.000 | {
"year": 2009,
"sha1": "ac847a41c2af26ff6600886823771c20cbe3c0de",
"oa_license": "CCBY",
"oa_url": "https://particleandfibretoxicology.biomedcentral.com/track/pdf/10.1186/1743-8977-6-27",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ac847a41c2af26ff6600886823771c20cbe3c0de",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
9221318 | pes2o/s2orc | v3-fos-license | Infective Endocarditis: Identification of Catalase-Negative, Gram-Positive Cocci from Blood Cultures by Partial 16S rRNA Gene Analysis and by Vitek 2 Examination
Streptococci, enterococci and Streptococcus-like bacteria are frequent etiologic agents of infective endocarditis, and correct species identification can be a laboratory challenge. Viridans streptococci (VS) not infrequently contaminate blood cultures. Vitek 2 and partial sequencing of the 16S rRNA gene were applied in order to compare the results of both methods. Strains originated from two groups of patients: 149 strains from patients with infective endocarditis and 181 strains assessed as blood culture contaminants. Of the 330 strains, based on partial 16S rRNA gene sequencing results, 251 (76%) were VS strains, 10 (3%) were pyogenic streptococcal strains, 54 (16%) were E. faecalis strains and 15 (5%) strains belonged to a group of miscellaneous catalase-negative, Gram-positive cocci. Among the VS strains, 220 (87.6%) and 31 (12.3%), respectively, obtained agreeing and non-agreeing identifications with the two methods with respect to allocation to the same VS group. Non-agreeing species identifications mostly occurred among strains in the contaminant group, while for endocarditis strains notably fewer disagreeing results were observed. Only 67 of the 150 mitis group strains obtained identical species identifications by the two methods. Most VS strains belonging to the salivarius, anginosus, and mutans groups obtained agreeing species identifications with the two methods, while this was only the case for 13 of the 21 bovis group strains. Pyogenic strains (n = 10), Enterococcus faecalis strains (n = 54) and a miscellaneous group of catalase-negative, Gram-positive cocci (n = 15) seemed well identified by both methods, except that disagreeing identifications occurred for 6 of the 15 strains in the miscellaneous group.
INTRODUCTION
Viridans streptococci (VS), enterococci and Streptococcus-like bacteria are frequent etiologic agents of infective endocarditis, and correct species identification can be a considerable laboratory challenge [1]. Diagnostic tools for the identification of bacteria have developed dramatically in the last decades. These techniques, especially sequencing of the genes coding for rRNA and other genes, have led to revolutionary insights into the phylogeny and taxonomy of streptococci and related taxa [2]. Accurate identification of ß-hemolytic or pyogenic streptococci is to a large extent done by phenotypic reactions, whereas newer molecular techniques, including gene sequencing, are more and more often used for their typing [2]. Accurate identification of strains belonging to the viridans streptococci is a prerequisite for understanding the pathogenesis of these opportunistic infections, especially with regard to infective endocarditis [3].
The genus Streptococcus consists of more than 65 validly published species, and the taxonomic classification of these members is not well defined, but they are divided into the pyogenic group and the VS, which are grouped into 1) the anginosus group, formerly called "S. milleri", 2) the mitis group, 3) the salivarius group, 4) the mutans group, 5) the bovis group and 6) other undifferentiated streptococci [4]. Knowledge of the identification of, and relationships among, VS strains continues to increase through the use of both phenotypic and molecular methods. The Vitek 2 system seems to represent an accurate and acceptable means of performing characterization/identification of many bacterial species [5]; however, identification of VS strains seems challenging even for this system [6]. Sequencing of the 16S rRNA gene is widely used in the identification of bacteria and has revolutionized taxonomy over the last decades, though its discriminative power may call for additional molecular examinations when trying to separate VS strains and other closely related bacteria from each other [6,7]. However, phylogenetic analysis based on the 16S rRNA gene sequences of VS type strains reveals a clustering pattern that reflects their pathogenic potential and ecological preferences [8].

*Address correspondence to this author at the Department of Clinical Microbiology, Slagelse Hospital, Ingemannsvej 18, 4200 Slagelse, Denmark; Tel: +4558559421; Fax: +45 5855 9410; E-mail: jejc@regionsjaelland.dk
We therefore found it interesting to characterize a large collection of VS strains obtained from blood cultures from a well-defined group of infective endocarditis patients, together with a second group of strains from blood cultures assumed to be contaminants. Vitek 2 examination and partial sequencing of the 16S rRNA gene (a 526-basepair stretch) with subsequent BLAST examination were applied in order to compare the two methods and describe the two groups of strains.
Bacterial Strains
All strains of streptococci, Streptococcus-like bacteria, and Enterococcus faecalis (the only Enterococcus species isolated from endocarditis patients) isolated at the Department of Clinical Microbiology at Herlev University Hospital in the period from January 1st, 2002 to December 31st, 2006 from patients with definite infective endocarditis according to the modified Duke criteria [9] were included. In addition, VS strains from the same period that were evaluated as blood culture contaminants were included. The criteria for blood culture contamination were: 1) only one or two of four blood culture bottles within a given blood culture set yielded growth, and the patient's temperature, leucocyte count(s) and C-reactive protein values were all within normal ranges; or 2) the presence of another significant infection, other than endocarditis with VS, or another bacterial infection. All data were extracted from the laboratory information system (LIS) (AdBakt, Autonik, Sweden). Streptococcal isolates judged to be contaminants were initially seldom identified to the species level. All bacterial strains, including streptococcal isolates judged to be contaminants, were kept frozen at -80°C. The strains were recovered from frozen stock, inoculated on 5% Danish blood agar and incubated at 35°C in ambient atmosphere supplemented with 5% CO2.
Phenotypic Identification
The Vitek 2 (bioMérieux) is an automated system for the species identification of bacteria [10]. After 6-8 hours of incubation, reactions were read automatically. A colorimetric GP card for the identification of Gram-positive cocci, comprising 43 reactions [6], was used for the identification of all isolates. When supplementary tests were recommended by the system, they were performed, with the exception of the N-acetyl-galactosaminidase test.
Partial 16S rRNA Gene Sequence Analysis
Partial 16S rRNA gene sequencing and subsequent BLAST examination were performed as described previously [7]. Briefly, DNA was released by heating isolated bacteria at 95°C for 5 min. PCR amplification of part of the 16S rRNA gene (a 526-basepair stretch) was performed using the primers BSF-8 and BSR-534. All edited sequences were compared to deposited sequences in GenBank with a standard nucleotide-nucleotide BLAST approach (see Table 1). BLAST files were stored electronically and later evaluated.
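A minimal sketch of the BLAST comparison step is shown below, assuming Biopython and the NCBI web service; the helper name is our own, and the study's sequence editing, primer handling and scoring thresholds are not reproduced:

```python
from Bio.Blast import NCBIWWW, NCBIXML

def best_taxon_matches(sequence: str, n: int = 2):
    """Return the top-n GenBank hits with their bit scores, so that the
    Maxscore difference between best and next-best taxon match can be read off."""
    handle = NCBIWWW.qblast("blastn", "nt", sequence)  # remote nucleotide BLAST
    record = NCBIXML.read(handle)
    hits = []
    for alignment in record.alignments[:n]:
        top_hsp = alignment.hsps[0]  # best-scoring local alignment for this hit
        hits.append((alignment.title, top_hsp.bits))
    return hits
```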
RESULTS
In total, 330 strains were included: 149 strains from patients with definite infective endocarditis, and 181 strains from the same period that had been evaluated as blood culture contaminants. Of the 330 strains, based on partial 16S rRNA gene sequencing results, 251 were VS strains, 10 were pyogenic streptococcal strains, 54 were E. faecalis strains and 15 belonged to a miscellaneous group of catalase-negative, Gram-positive cocci.
In Table 1, the results obtained by the two methods for VS strains with respect to allocation to the same VS group are given. Of the 251 VS strains, 220 (87.6%) and 31 (12.3%), respectively, obtained agreeing and non-agreeing results at the group level. Within the mitis group of VS, 150 strains were agreeingly allocated to the same group, of which 89 strains achieved confirmed-level identification by partial 16S rRNA gene sequence analysis, with Maxscore differences to the next best taxon match ranging from 10 to 138; most of these strains achieved an excellent level of identification in the Vitek 2 system. Twenty of the 31 strains with non-agreeing results at the group level belonged to the mitis group of VS, as judged by 16S rRNA gene sequence analysis. For the other VS groups, mostly agreeing results were obtained.
Among the 220 VS strains allocated to the same VS group by both methods, non-agreeing species identifications mostly occurred among strains in the contaminant group (mitis group 66/83, salivarius group 3/4, bovis group 5/8, other streptococci 2/2 strains), while for endocarditis strains notably fewer disagreeing results were noticed (mitis group 17/83, salivarius group 1/4, bovis group 3/8). In Table 2a (n = 122) and Table 2b (n = 98), data are given for strains obtaining and not obtaining, respectively, allocation to the same VS species by both methods. Only 67 of 150 mitis group strains obtained agreeing species identifications by both methods (Table 2a). Most VS strains belonging to the salivarius, anginosus, bovis, and mutans groups obtained agreeing species identifications with the two methods, although this was only the case for 13 of the 21 bovis group strains. Strains belonging to species in the mitis group had the highest number of disagreeing results, accounting for 55.3% of strains (n = 83/150) (Table 2b); taking the partial 16S rRNA gene sequence analysis results as the starting point, these were 59, 7, 4, 3, 3, 3, 2, 1 and 1 strain(s) of S. mitis, S. sanguinis, S. infantis, S. parasanguinis, S. cristatus, S. oralis, S. pneumoniae, S. gordonii and S. sinensis, respectively. Among the non-agreeing S. mitis strains, 25 of 59 strains obtained species-confirming results (score bits difference 10 to 98) by partial 16S rRNA gene sequencing/BLAST examination. All ten pyogenic streptococcal strains achieved agreeing results by the two methods. Five strains had a Maxscore difference between best and next best taxon match of 82-107 and five strains a difference of 0-25; by Vitek 2, 8, 1 and 1 strain(s), respectively, were recorded as Excellent (Exc), Very Good (VG) and Low Discrimination (LD) level identifications. The following haemolytic species were identified: S. agalactiae (n = 4), S. dysgalactiae ssp. equisimilis (n = 4), S. pyogenes (n = 1), and S. equi ssp. zooepidemicus (n = 1). All 54 E. faecalis strains had Maxscore differences between best and next best taxon match of 121-171, with excellent identifications by Vitek 2.
DISCUSSION
In this study we compared the results obtained by Vitek 2, using the colorimetric GP card for the identification of Gram-positive cocci with 43 phenotypic reactions, and by partial sequencing of the 16S rRNA gene (a 526-basepair stretch) with subsequent BLAST examination, on a large number of strains comprising VS strains (n = 251), haemolytic streptococci (n = 10), E. faecalis (n = 54) and Streptococcus-like bacteria (n = 15). The strains were grown from blood cultures from patients with verified infective endocarditis as well as from patients without infective endocarditis, in whom blood culture contamination was suspected.
Notably, among VS strains allocated to the same VS group, most disagreeing results with respect to allocation to the same VS species occurred among the assumed contaminant strains. These findings are in agreement with those of Haanperä et al. [6], who used a pyrosequencing method for the identification of streptococcal species based on two variable regions of the 16S rRNA gene. Identification of members of the S. mitis and S. sanguinis groups proved difficult for both the pyrosequencing method and the Vitek 2 system. Furthermore, the pyrosequencing analysis revealed great sequence variation, since only 43 (32.3%) of the 133 strains analyzed had sequences identical to a type strain. The variation was highest in the pharyngeal strains, slightly lower in the blood culture strains, and nonexistent among invasive pneumococcal isolates (n = 17), which all had the Streptococcus pneumoniae type strain sequence. Whether this reflects a difference between disease-provoking and contaminating VS strains seems worth exploring in more detail. VS groups may often be separated from other groups of Gram-positive cocci, and from each other, on the basis of basic phenotypic reactions [2,11,12]. They are alpha- or gamma-hemolytic, form chains, are catalase-negative and produce leucine aminopeptidase, but are without pyrrolidonyl aminopeptidase activity. For VS group identification, results obtained for arginine decarboxylase production, esculin hydrolysis, the Voges-Proskauer reaction, urease production and acid production from mannitol and sorbitol are of great help. Especially the high number of species in the mitis group can be problematic [12]. Identification to the species level has therefore always been a challenge, and many attempts to solve species identification within the VS have been made over time, including kit-based systems. Funke and Funke-Kissling [13] were the first to publish an evaluation of the new Vitek 2 colorimetric GP card and found all of the 18 alpha-hemolytic streptococcal species examined to be correctly identified. Haanperä et al. [6] used a pyrosequencing method (see above). Almost all studied streptococcal species (n = 51), represented by their type strains, were differentiated, except some closely related species of the S. bovis or S. salivarius group. Pyrosequencing results of alpha-hemolytic strains from blood (n = 99) or from the normal pharyngeal microbiota (n = 25) were compared to the results obtained by Vitek 2 with the colorimetric GP card. The results of the two methods did not completely agree, but 93 (75.0%) of the isolates were assigned to the same streptococcal group by both methods and 57 (46.0%) reached consistent results at the species level. These data are in line with our findings that, among VS strains, 220 (87.6%) and 31 (12.3%), respectively, obtained agreeing and non-agreeing identifications with the two methods with respect to allocation to the same VS group, and that of the strains allocated to the same VS group, 55.5% (122 of 220 strains) were allocated to identical species by the two methods. These data, however, cover a continuum of certainty, as illustrated by dividing the molecular data into confirmed, probable and possible identifications. Though most strains are typically given in the confirmed column, the identification challenges, especially within the mitis group, are clearly illustrated by the great number of strains given in the probable and possible columns. Likewise, the spread in identification levels obtained by Vitek 2 illustrates this uncertainty.
In the work by Haanperä et al. [6], 10 strains remained unidentified by Vitek 2, and 4 strains could not be assigned to any streptococcal group by pyrosequencing. Most of the discrepant results were found in the mitis and sanguinis groups, in agreement with our results. Inclusion of tests for acid production from inulin and for production of dextran would probably contribute significantly to discrimination within the mitis group. For the other frequently encountered VS groups, the anginosus, salivarius and bovis groups, there seem to be fewer discriminatory problems with the tests included in the system.
For correct molecular species identification of VS strains, sequencing of other genes or application of other methods therefore seems desirable. Simmons et al. [14] examined 94 VS strains from 94 patients with definite endocarditis to evaluate the phylogenetic relationships of VS with 16S rRNA, tuf, and rpoB gene targets. The 16S rRNA primer pairs used were 5F and 534R, covering a stretch very similar to that used in this study. Overall, VS isolates demonstrated a high degree of variability for all three targets, which was not a surprising observation since the transfer of genetic material among streptococci has been well described [8,14,15]. In their study, classifications within groups were not always predictable or correlated with phenotype or phylogeny; an observation also made by Hoshino et al. for isolates from patients with bacteremia and meningitis [4]. Kilian et al. [8] have looked at the evolution of S. pneumoniae and its close commensal relatives. A population genetic analysis of 118 strains tentatively assigned to S. pneumoniae, S. mitis, S. oralis, and S. infantis, aligning the sequences of four housekeeping genes (ddl, gdh, rpoB, and sodA) plus eight sets of sequences extracted from S. pneumoniae genomes, revealed a remarkable sequence polymorphism and showed that S. pneumoniae is one of several hundred evolutionary lineages forming a cluster separate from S. oralis and S. infantis. The remaining lineages of this distinct cluster are S. mitis strains (commensals).
rnpB, sodA, and/or 16S-23S rRNA spacer targets have also been used for the characterization of strains, illustrating the mentioned complexity [16][17][18]. For daily routine characterization and identification of VS, the combination of sequence analysis of the ITS region and of the partial gdh gene has proven promising [19].
For practical use, pyogenic streptococci are characterized by phenotypic and serological characteristics [2]. However, if the reactions are inconclusive, Vitek 2 or sequencing of parts of the 16S rRNA gene can be very helpful.
All 54 E. faecalis strains seemed well identified by both methods.
CONCLUSION
In conclusion, the present study shows the usefulness of the Vitek 2 GP card and partial 16S rRNA gene sequence analysis for the identification of VS strains to the group level, but also the problems encountered when attempting identification to the species level, which is most problematic within the mitis group. Interestingly, among VS strains, non-agreeing species identifications mostly occurred among strains in the contaminant group, while for endocarditis strains notably fewer disagreeing results were observed. | 2014-10-01T00:00:00.000Z | 2010-12-31T00:00:00.000 | {
"year": 2010,
"sha1": "0b741581193b0635d84885e4b6d824cd529042e8",
"oa_license": "CCBYNC",
"oa_url": "http://benthamopen.com/contents/pdf/TOMICROJ/TOMICROJ-4-116.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "0b741581193b0635d84885e4b6d824cd529042e8",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
216467089 | pes2o/s2orc | v3-fos-license | Dental caries prevalence among Type 1 diabetes mellitus (T1DM) 6- to 12-year-old children in Riyadh, Kingdom of Saudi Arabia compared to non-diabetic children
Objective The aim of this study was to compare the prevalence of dental caries among groups of 6–12-year-old children with and without Type 1 diabetes mellitus (T1DM) in Riyadh, Saudi Arabia, taking into account oral health behaviour, diet, and salivary parameters. Methods The study was designed as a comparative study of dental caries experience between T1DM and non-diabetic groups of children. The total sample of 209 participants consisted of 69 diabetic and 140 non-diabetic children. Oral hygiene, diet and socio-economic status were recorded using a pre-tested questionnaire. Caries was recorded in terms of decayed and filled permanent and primary teeth (DFT/dft). Salivary microbial counts and pH levels were recorded using a Caries Risk Test (CRT) kit. Student's t-test, the chi-squared test, linear regression and one-way analysis of variance were performed; a p-value of <0.05 was considered significant. Results The mean dft scores for the diabetic and non-diabetic groups were 3.32 ± 0.78 and 3.28 ± 0.71 (mean ± SD), respectively (p = 0.458). The mean DFT scores for the diabetic and non-diabetic groups were 1.62 ± 0.65 and 1.96 ± 0.65, respectively (p = 0.681). Diabetic children visited dentists more often than non-diabetic children did (p = 0.04), and had lower consumption of both sweets (p = 0.003) and flavoured milk (p = 0.002) than the non-diabetic group. Furthermore, the analysis showed that the diabetic children had medium oral pH levels (pH = 4.5–5.5), whereas the non-diabetic children tended to have high (pH ≥ 6.0) oral pH; this difference was statistically significant (p = 0.01). In addition, the diabetic group had higher Lactobacillus levels than the non-diabetic group (p = 0.04). Conclusion The difference in caries prevalence between the diabetic and non-diabetic children was not statistically significant. The CRT analysis revealed a higher frequency of "critical" pH values (pH = 4.5–5.5) and higher Lactobacillus counts in diabetic children than in non-diabetic children, which indicated a higher caries risk in the former group.
Introduction
The upward trend in non-communicable diseases worldwide, specifically in developing countries, is a major public health concern that calls for thorough investigation and appropriate preventive interventions. Diabetes mellitus (DM) and dental caries are widespread non-communicable diseases that impose burdensome costs on governments and individuals. These two diseases share a common risk factor (i.e., sugar consumption), and if their association were better understood, practical interventions could be developed to reduce the burden of both diseases (Sheiham & Watt, 2000). Dental caries is a prevalent chronic disease across all age groups, including children, especially in developing countries. Although the disease is associated with specific bacteria (Streptococcus mutans and Lactobacillus) that exist as part of the normal flora of the oral cavity, the increased consumption of carbohydrates may have been a significant force behind its wide-reaching global spread (Moye et al., 2014).
DM, on the other hand, is a chronic metabolic disease, characterized by blood-sugar imbalance that has become a global public health concern in recent years due to a drastic increase in its prevalence (Awuti et al., 2012). Saudi Arabia reportedly has the seventh highest prevalence of diabetes of any country worldwide (Al-Rubeaan, 2015). Additionally, a study reported that most of the diabetic Saudi children surveyed were unaware of their condition (Al-Rubeaan, 2015). Type 1 DM (T1DM), also called insulin-dependent diabetes, usually affects children and young adults. A recent systematic review has revealed that the prevalence of T1DM in Saudi Arabia is highest in the city of Riyadh and lowest in the eastern and rural regions of the country (Alotaibi et al., 2017).
Saudi Arabia also ranks highly among the nations of the world in the prevalence of dental caries (Al Agili, 2013). Many physical, biological, environmental, behavioural, and lifestyle-associated factors contribute to dental caries. More specifically, high numbers of cariogenic microbes, insufficient salivary flow, inadequate fluoride exposure, poor oral hygiene, inappropriate infant feeding practices, excessive sugar consumption, and low socio-economic status are all primary risk factors for the disease (Fejerskov & Kidd, 2009).
The common risk factor approach (CRFA) indicates that some contributing risk factors are common to many chronic diseases. For example, sugar consumption is a risk factor for both DM and dental caries (Sheiham & Watt, 2000). However, research reports have presented inconsistent findings regarding the association between diabetes and dental caries, with the results of studies differing substantially. Some studies have shown a higher prevalence of caries in subjects with diabetes than in those without (Latti et al., 2018;Lo´pez et al., 2003), while other studies found either lower (Kirk & Kinirons, 1991;Wyne et al., 2016), or similar caries prevalence in diabetic and non-diabetic subjects (Harrison & Bowen, 1987;Ismail, et al., 2017;Twetman et al., 1989).
The saliva of diabetic patients shows both quantitative and qualitative changes in several parameters (Javed et al., 2009; López et al., 2003; Moreira et al., 2009; Siudikiene et al., 2008; Zalewska et al., 2013), with diabetic adults reportedly having elevated levels of S. mutans in their plaque and saliva (Hintao et al., 2007). Inconclusive findings have been reported in the literature: some studies found no significant difference in the levels of cariogenic bacteria in the saliva of diabetic versus non-diabetic children (Harrison & Bowen, 1987; Siudikiene et al., 2008), while a recent study revealed a higher bacterial load in the saliva and dental biofilm of a diabetic study sample (Coelho et al., 2018). Studies have reported that the salivary flow rate of children with T1DM is lower than that of non-diabetic children (Javed et al., 2009; Siudikiene et al., 2008), although there was no difference between children with poorly controlled and well-controlled T1DM in terms of salivary flow (Javed et al., 2009). In addition to the decrease in quantity, the saliva of diabetic individuals is characterized by low buffering capacity and pH, higher viscosity, and increased levels of carbohydrates, glucose and total protein (Moreira et al., 2009; Siudikiene et al., 2008).
Studies in Saudi Arabia exploring the relationship between dental caries and T1DM in children have been relatively scarce. Hence, this study could serve as a good model for a baseline study of oral health status in Saudi Arabia. The aim of this study was to compare the prevalence of dental caries among 6-to 12-year-old children with and without T1DM in Riyadh, Saudi Arabia, taking into account oral hygiene and diet.
Materials and methods
This research was designed as a comparative study to investigate the prevalence of dental caries and its association with T1DM among 6- to 12-year-old children in Saudi Arabia, by comparing the risk of caries between samples of diabetic and non-diabetic children. This was achieved by evaluating the numbers of decayed and filled teeth, adjusted for demographic variables, in a sample of diabetic children compared with a sample of schoolchildren matched by age and gender.
Study sample
At α = 0.05, with estimated proportions of 0.6 and 0.4 and a power of 0.86, the total sample size was calculated to be at least 200. The diabetic children were selected from the King Khalid University Hospital Pediatric Clinic and King Abdulaziz University Hospital, located in the city of Riyadh, while the non-diabetic children were sampled from two public primary schools selected randomly from all Riyadh school districts.
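For transparency, the stated design parameters can be plugged into the standard two-proportion sample-size formula; the sketch below is only a plausible reconstruction of the calculation, since the paper does not state which formula or software was used:

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.86):
    """Two-proportion sample size (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)        # two-sided critical value
    z_b = norm.ppf(power)                # power term
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2

print(round(n_per_group(0.6, 0.4)))      # ~111 per group, >200 in total
```

With proportions 0.6 and 0.4, α = 0.05, and power 0.86, this yields roughly 111 children per group, i.e. somewhat more than 200 in total, consistent with the stated minimum.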
Subjects in both groups were excluded from participating if they met any of the following criteria:
1. Active orthodontic treatment.
2. The presence of any systemic disease (other than diabetes).
3. The use of antibiotics within the past month (which might affect the oral flora).
4. Having had a meal within an hour before examination (exclusion from the CRT analysis only).
5. Lack of written consent from the parents for their children to participate in the study.
Ethical review & approval
Letters of approval were sent to the participating schools and healthcare facilities. Approval from the College of Dentistry Research Center (CDRC No. FR0313), approval from the ethical subcommittee of the Institutional Review Board of King Khalid University Hospital (KKUH No. 16-0391-IRB), and informed consent from each subject's parent/guardian were obtained before the commencement of the study.
Study data
The data for the study were collected using a pre-tested questionnaire, oral examinations, and salivary samples. The questionnaire, filled out by a parent/guardian, collected a variety of information including sociodemographic factors, oral hygiene practices, dietary habits, dental visits, medical history, type of diabetes, and family history. For the schoolchildren, the consent forms and questionnaires were distributed by schoolteachers to be filled out by the parents/guardians of the participants and collected within a week, just before the clinical examination; the consent forms and questionnaires of the diabetic children were completed by their parents/guardians immediately before the examination.
Examiner calibration & inter-examiner reliability
The oral examinations in this study were performed by four examiners (two male and two female dentists) along with four recorders. All examiners were calibrated to a gold-standard examiner according to the WHO Basic Surveys Calibration Protocol for caries detection, coding findings, and recording, which consists of a theoretical training session followed by oral examination of five patients (not part of the study sample) at the College of Dentistry Pediatric Clinic at King Saud University. The inter-examiner kappa statistic was assessed and was found to be 0.873, with a p-value of <0.05.
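Agreement statistics of this kind can be computed directly from paired codings; the snippet below is a generic illustration of a Cohen's-kappa calculation with made-up ratings, not the study's calibration data:

```python
from sklearn.metrics import cohen_kappa_score

# Made-up paired caries codes: one examiner versus the gold standard
examiner = [1, 0, 2, 1, 0, 1, 2, 0, 1, 1]
gold     = [1, 0, 2, 1, 0, 1, 1, 0, 1, 1]
print(cohen_kappa_score(examiner, gold))  # 1.0 would be perfect agreement
```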
Dental caries exam
The selected schools and clinics were informed in advance, prior to the visit of the examiners. Then, the investigators conducted a clinical examination for dental caries utilizing the corresponding WHO diagnostic criteria (WHO, 2013). Because the ages of the children in this study ranged from 6 to 12 years, the subjects were expected to have mixed dentitions; hence, decayed and filled permanent and primary teeth (DFT/dft) instead of decayed, missing, and filled permanent and primary teeth (DMFT/dmft) were used to assess the frequency of caries for the permanent and primary dentitions, respectively. Only the permanent teeth were recorded when both the primary and permanent components of the same tooth/teeth were present.
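As a minimal illustration of the scoring rule described above, the hypothetical helper below counts decayed and filled teeth separately for each dentition; the tooth-status encoding and the upstream handling of mixed-dentition overlaps are assumptions for illustration only:

```python
def caries_counts(teeth):
    """teeth: iterable of (dentition, status) pairs, with dentition in
    {'primary', 'permanent'} and status in {'sound', 'decayed', 'filled'}.
    Overlapping primary/permanent pairs are assumed resolved upstream
    (only the permanent tooth charted), as described in the text."""
    carious = ('decayed', 'filled')
    dft = sum(1 for d, s in teeth if d == 'primary' and s in carious)
    DFT = sum(1 for d, s in teeth if d == 'permanent' and s in carious)
    return dft, DFT

print(caries_counts([('primary', 'decayed'), ('primary', 'filled'),
                     ('permanent', 'sound'), ('permanent', 'decayed')]))  # (2, 1)
```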
The examination was carried out using a light mounted on the examiner's forehead and a disposable plane mouth mirror (Aesculap AG). The teeth were dried with a cotton roll to remove any plaque or debris where necessary, and the surfaces of each tooth were then examined. The examiners checked the final form to detect any possible mistakes before dismissing the child for salivary testing.
CRT salivary testing
The collection of salivary samples was performed utilizing a CRT (Caries Risk Test) kit for testing the microorganism content and buffering capacity of saliva (Ivoclar Vivadent AG, Schaan, Liechtenstein). The test included the following components:
Microbial assessment
Each child chewed on a piece of paraffin wax to stimulate the flow of saliva. The child was asked to sit in an upright, relaxed position, with the head slightly inclined, and to drool saliva from the floor of the mouth into a sterile plastic cup.
CRT buffer test
Using a pipette, a saliva sample was taken, and one drop was placed on each of the three test pads: orange (low pH), green (medium pH), and blue (high pH). The test pads began to change colour immediately, but the final colour was assessed only after two minutes and recorded as the final saliva pH level. The results were categorized into three groups: high pH (i.e., blue), greater than or equal to 6.0, representing low caries risk; intermediate pH (i.e., green), ranging from 4.5 to 5.5, representing critical caries risk; and low pH (i.e., orange), less than or equal to 4.0, representing the highest caries risk.
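The colour-to-risk mapping can be expressed as a simple lookup; note that the CRT pads report discrete bands, so the exact numeric boundaries between bands in this hypothetical sketch are an assumption based on the categories quoted above:

```python
def crt_risk(ph):
    """Map the final buffer-pad pH reading to the risk bands quoted above.
    The pads are discrete, so the boundary values are an assumption."""
    if ph >= 6.0:
        return 'blue (high pH): low caries risk'
    if ph >= 4.5:
        return 'green (medium pH): critical caries risk'
    return 'orange (low pH): highest caries risk'

print(crt_risk(5.0))  # 'green (medium pH): critical caries risk'
```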
Statistical analysis
Once the data were collected and stored in a computer spreadsheet file, they were statistically analysed using SPSS software (IBM SPSS version 20). Descriptive statistics such as frequencies, means, and standard deviations were determined. For inferential statistics, a t-test (α = 0.05) was used to analyse the difference in mean DFT/dft values between groups. The chi-squared test, linear regression, and one-way analysis of variance (ANOVA) were also performed to evaluate the associations between diabetes, DFT/dft, and other variables. A p-value below 0.05 was considered statistically significant.
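As an illustration of the group comparison described above, the sketch below runs Welch's t-test on hypothetical per-child dft scores; the study's own analysis was done in SPSS, so this is only an equivalent reconstruction:

```python
from scipy import stats

# Hypothetical per-child dft scores for the two groups
dft_diabetic = [3, 4, 2, 5, 3, 3, 4, 2]
dft_control  = [3, 3, 4, 2, 4, 3, 3, 4]
t, p = stats.ttest_ind(dft_diabetic, dft_control, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")  # p >= 0.05 would mirror the null finding here
```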
Results
The diabetic and non-diabetic groups showed no significant difference in caries experience. Table 2 shows that the mean dft values were 3.32 (±0.78) and 3.28 (±0.71), respectively, for the diabetic and non-diabetic children (p = 0.458), while the mean DFT values were 1.62 (±0.65) and 1.96 (±0.65), respectively, for the diabetic and non-diabetic children (p = 0.681).
The data also showed that diabetes had statistically significant associations with the education level of the child's father and the monthly income of the child's household. Paternal education level (p = 0.003) and monthly income (p = 0.002) were significantly and markedly higher in the diabetic group than in the non-diabetic group. A similar association was also seen with maternal education level, where mothers of diabetic children were more likely to have a higher education than mothers of non-diabetic children in this study, although this association was not statistically significant (p = 0.076; Table 2).
The findings in Table 3 showed no significant differences between diabetic and non-diabetic children in their brushing habits, but showed that diabetic children were significantly more likely than non-diabetic children to have visited a dentist in the preceding 12 months (p = 0.04). The findings also showed associations between diabetic status and the frequency of consumption of sweets and flavoured milk: the diabetic group reported significantly lower consumption of sweets (p = 0.003) and flavoured milk (p = 0.002) than the non-diabetic group (Table 3).
The findings in Table 4 showed higher Lactobacillus CRT levels among the diabetic children compared to the non-diabetic children (p = 0.04). The CRT analysis was conducted on a smaller sample, consisting of 59 (out of 69) diabetic and 135 (out of 140) non-diabetic children, because some subjects were excluded for having eaten a meal just before attending the examination. The CRT testing of the oral flora indicated that medium oral pH levels (4.5-5.5) were found more often among diabetic children than among their non-diabetic counterparts, whereas high pH (≥6.0) was more common in the latter group (p = 0.01).
Discussion
The study aimed at comparing the prevalence of dental caries among groups of 6-to 12-year-old children with and without Type 1 diabetes mellitus in Riyadh, Saudi Arabia. No statistically significant association was found between the prevalence of dental caries and T1DM among 6-to 12-year-old children.
The extent of dental caries in the primary teeth was slightly higher in the diabetic group (3.32 ± 0.78) than in the non-diabetic group (3.28 ± 0.71), while caries in the permanent dentition was slightly lower in the diabetic children (1.62 ± 0.65) than in the non-diabetic children (1.96 ± 0.65). These findings contrast with a previous study, which reported a lower rate of caries in the primary teeth and a higher rate in the permanent teeth of diabetic children than in those of healthy control children (Wyne et al., 2016). These small differences in caries rates between the diabetic and non-diabetic children were not statistically significant. Given the multifactorial nature of dental caries, some factors may increase its risk among diabetic individuals, while others might reduce that risk (Ismail et al., 2015; Novotna et al., 2015).
Our findings are in line with several studies that examined the association between dental caries and T1DM. A systematic review summarizing this relationship found a total of 20 published studies that reported caries experience among children with T1DM, of which 15 were case-control and 5 were longitudinal studies (Ismail et al., 2015). Among the 15 studies that reported caries experience in the permanent dentition (DMFT or DMFS), 8 contrasted with our findings: 5 found significantly elevated caries experience among children with T1DM compared to healthy control children, while 3 reported the opposite. In contrast, the findings of this study were aligned with the 7 other studies that reported no difference between the two groups; similarly, among the studies that reported on caries in the primary dentition, none reported a significant difference between children with T1DM and controls.
This study showed that diabetes was more prevalent among children of parents who were more affluent or of higher socioeconomic status. A better interpretation may be that parents with higher socio-economic indicators were more likely to have their children's diabetes condition diagnosed, given that a greater percentage (90%) of diabetic children in Saudi Arabia are unaware of their medical condition (Al-Rubeaan, 2015). Socio-economic factors may also explain the increased likelihood of dentist visits during the preceding 12 months among diabetic compared to non-diabetic children.
Diabetic children were less likely than non-diabetic children to consume sweets and flavoured milk, which could be explained by the actions of the diabetic children's parents and their application of stricter dietary controls. A recently published paper supports this interpretation by indicating a difference in caries risk between children with well-controlled and poorly controlled diabetes (Lai et al., 2017).
The CRT changes associated with diabetes showed that diabetic children were more likely than non-diabetic children to have critical oral pH levels (4.5-5.5) and high Lactobacillus CRT levels. These findings differ from a previous report in which diabetics had reduced Lactobacillus levels (Twetman et al., 1989). Although the decreased pH values and increased Lactobacillus counts seen in diabetic children are indications of a heightened caries risk, it appears that the participation of diabetic children in healthy behaviours (e.g., diet) might have restored their caries risk to lower levels (Jeong et al., 2006). A limitation of this study was its comparative cross-sectional design; stronger evidence of associations could be achieved in future studies utilizing a cohort/longitudinal design.
Conclusion
No statistically significant differences were found in caries prevalence between diabetic and non-diabetic children in this study, for both the primary and permanent dentitions.
Decreased pH values and elevated Lactobacillus counts were noted in diabetic children compared to non-diabetic children, which could indicate a higher caries risk in the former group; nevertheless, the increased probability of healthy behaviours (e.g., diet) among diabetic children may have restored their caries risk to lower levels.
This study could be used as a baseline for future studies, which should attempt to overcome its limitations and explore in greater depth the different risk factors influencing caries risk among children with T1DM, including socio-economic status and metabolic control, preferably utilizing longitudinal study designs.
Statement of contributions
All authors contributed to planning and conducting this study and to revising this manuscript. Al-Badr, Halawany, Al-Jazairy, and Al-Jameel were the examiners of the study subjects and also contributed to writing the manuscript. Alhadlaq, Al-Sharif, Jacob, and Abraham were the recorders during the study, while the first two also performed the CRT laboratory testing. Mr. Al-Maflehi contributed to the data setup, statistical analysis, and presentation.
Ethical & conflict of interest statement
We confirm that this study was conducted following ethical guidelines; letters of approval were sent to the participating schools and healthcare facilities. Approval from the College of Dentistry Research Center (CDRC No. FR0313) and approval from the ethical subcommittee of the Institutional Review Board of King Khalid University Hospital (KKUH No. 16-0391-IRB) were obtained, and informed consent from each subject's parent/guardian was obtained before commencing the data collection for the study. Furthermore, subjects were de-identified to protect their privacy during data entry and analysis. The authors also confirm that there are no known conflicts of interest associated with this publication. Although the research received some direct funding from the Dental Caries Research Chair of King Saud University as well as indirect funding from the College of Dentistry of the same University, the authors believe that such funding would not influence the findings. | 2020-03-26T10:38:31.814Z | 2020-03-13T00:00:00.000 | {
"year": 2020,
"sha1": "1a3c558eb8deb984a478ecf19e232471af35aa9f",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.sdentj.2020.03.005",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fe15ba5e2f377c089fec42ba0e2cf918ed9c4d24",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55462631 | pes2o/s2orc | v3-fos-license | Application of the apparent diffusion coefficient in magnetic resonance imaging in an assessment of the early response to treatment in Hodgkin’s and non-Hodgkin’s lymphoma – pilot study
Purpose Lymphoproliferative neoplasms are the largest and most frequently diagnosed entities in the group of haematological malignancies. The aim of the study was to assess whether the apparent diffusion coefficient (ADC) measured on the first day of the second cycle of chemotherapy could be a predictor of prognosis and of the final treatment outcome. Material and methods The study included 27 patients with diagnosed Hodgkin's and non-Hodgkin's lymphoma, who had magnetic resonance (MR) imaging performed with diffusion-weighted imaging/apparent diffusion coefficient (DWI/ADC) before and on the first day of the second cycle of chemotherapy. Imaging was performed using a 1.5 T MR scanner. The ADC was measured in the lymphoma infiltration in the area of the lowest signal on the ADC map and the highest signal on b = 800 images in the post-treatment study. After that, the corresponding area was determined in the pre-treatment study and the ADC value was measured. Results The difference between the ADC values in the pre-treatment (ADC = 720 mm²/s) and post-treatment (ADC = 1059 mm²/s) studies was statistically significant (p < 0.001). Cutoff values for estimating the response to treatment were established at the level of ADC 1080 mm²/s, and of the ADC-to-muscle ratio at 0.82, in the post-treatment study. Patients with ADC > 752 mm²/s before treatment manifested a lower probability of progression than patients with ADC < 752 mm²/s. Conclusions ADC measurements before treatment and on the first day of the second cycle of chemotherapy can be used as a prognostic marker in lymphoma therapy. Post-treatment ADC values lower than 1080 mm²/s and an insufficient increase of the ADC-to-muscle ratio can be considered markers of disease progression.
Introduction
Lymphoproliferative neoplasms are the largest and most frequently diagnosed entities in the group of haematological malignancies [1]. Over the years, an increasing trend in the incidence rate of lymphomas has been noticed; currently it amounts to approximately 20-22 new cases per 100,000 persons per year, according to various registers [2,3].
Introducing new diagnostic tools to everyday practice allows more precise evaluation of disease. One of the most important aspects is the evaluation of the response to treatment. Accurate assessment of early response is crucial in lymphoma: it allows patients within the high-risk groups to be distinguished and ineffective treatment to be modified in the early stages. This is of utmost importance in the context of individualisation and optimisation of treatment, contributing to positive effects both for the patients and for the entire health care system (economic effect).
Positron emission tomography-computed tomography (PET/CT) scanning is currently considered to be the reference method for the assessment of response in the majority of lymphomas, especially in the evaluation of the early response to treatment in HL [4]. There are several studies describing the potential role of diffusion-weighted imaging (DWI) in the diagnosis and evaluation of the response to treatment of lymphomas [5]. DWI is a technique in which the image contrast reflects the in vivo changes in the motion of water molecules (Brownian motion) in tissues. A supplemental tool in DWI is the apparent diffusion coefficient (ADC) map, acquired by post-processing of the obtained DWI images [6]. ADC allows for quantitative definition of diffusion parameters (in mm²/s) (Figure 1). The applicability of DWI has been confirmed, e.g., in the detection of ischaemic stroke or in the evaluation of breast or prostate gland abnormalities. Numerous recent studies associated with DWI have focused on utilisation of its tools to evaluate the response to treatment in oncological patients. Haematological diseases seem to be a very promising area for the DWI tools, e.g. due to the high cellularity of lymphoma infiltration [7].
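Concretely, an ADC map is obtained by fitting the mono-exponential decay S(b) = S0·exp(−b·ADC) across the acquired b-values; the sketch below is a generic per-voxel reconstruction, not the vendor algorithm used by the scanner workstation (note that values such as 720 "mm²/s" quoted in this paper correspond to the common convention of ×10⁻⁶ mm²/s):

```python
import numpy as np

def adc_from_dwi(signals, b_values=(50, 400, 800)):
    """Least-squares mono-exponential fit of ln S(b) = ln S0 - ADC * b,
    for b-values in s/mm^2 (the protocol's b = 50, 400, 800)."""
    b = np.asarray(b_values, dtype=float)
    log_s = np.log(np.asarray(signals, dtype=float))
    slope, _ = np.polyfit(b, log_s, 1)
    return -slope                          # ADC in mm^2/s

# Example voxel with plausible signal decay: ADC ~ 1.0e-3 mm^2/s
print(adc_from_dwi([950.0, 670.0, 440.0]))
```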
The purpose of this study was the assessment of the DWI/ADC imaging protocol in the evaluation of the early response to treatment of Hodgkin's and non-Hodgkin's lymphomas. Additionally, we analysed whether the ADC measured on the first day of the second cycle of chemotherapy could be a predictor of prognosis and of the final outcome of the treatment.
Material and methods
The study included a final group of 27 patients with diagnosed Hodgkin's and non-Hodgkin's lymphoma (Table 1). They underwent MRI of the area in question before the treatment and on the first day of the second cycle of chemotherapy. All examinations were performed using a 1.5 T MR unit with a conventional phased-array body coil. The DWI was performed using a standard protocol, namely single-shot spin echo-planar imaging (EPI) in the axial plane, with the following parameters: TR 5200-6000 ms, TE 72 ms, voxel size 2 × 2 × 5 mm, Bw 1448 Hz/px, b-values 50, 400, and 800, 30-45 slices, duration ~6 min.
ADC maps were calculated with a dedicated workstation. ADC values were measured in the lymphoma infiltration in the area of the lowest signal on the ADC map images in the post-treatment study, paying particular attention to avoid areas that could affect the DWI signal, e.g. haematomas. The corresponding area was determined in the pre-treatment study, and the pre-treatment ADC values were measured afterwards. Only oval-shaped ROIs were used to measure the ADC values, and their size was adjusted to the size of the area with the lowest ADC signal. The ADC values were analysed as an independent value and as a ratio, with the dorsal muscles used as the reference organ (Figure 2). A Wilcoxon test was performed to verify the difference between the ADC values before and after the treatment. The ROC curve was used to determine the cutoff values, and the odds ratio was calculated.
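A minimal reconstruction of this statistical pipeline is sketched below with hypothetical paired ADC values and response labels; the cutoff is chosen here via the Youden index, which is one common way to read a cutoff off a ROC curve, though the paper does not state its exact criterion:

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import roc_curve

# Hypothetical paired ADC values before/after the first chemotherapy cycle
pre  = np.array([700, 680, 750, 720, 690, 740, 710, 730], dtype=float)
post = np.array([1050, 990, 1200, 1100, 940, 1210, 1080, 1150], dtype=float)
print(wilcoxon(pre, post))                 # paired non-parametric comparison

labels = np.array([1, 0, 1, 1, 0, 1, 1, 1])   # hypothetical: 1 = responder
fpr, tpr, thr = roc_curve(labels, post)
print(thr[np.argmax(tpr - fpr)])           # Youden-index cutoff on post-ADC
```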
Results
There was a statistically significant difference between the ADC values in the pre-treatment (ADC = 720 mm²/s) and post-treatment (ADC = 1059 mm²/s) studies (Figure 3). The ADC value increased significantly in both groups (Table 2, Figure 4): in the group of patients with diagnosed HL the ADC increased by 344 mm²/s on average, and by 206 mm²/s in the patients with non-Hodgkin's lymphoma. The cutoff values used for estimation of the response to the treatment were established at the level of ADC 1080 mm²/s and of the ADC-to-muscle ratio 0.82 in the post-treatment study (Figure 3). The patients with ADC > 752 mm²/s before treatment demonstrated a lower probability of progression than the patients with ADC < 752 mm²/s (p = 0.046). Considering the changes between the studies, an increase of the ADC by 34.5% and an increase of the ratio by 32.5% were determined as the cutoff values. The highest odds ratios were calculated for the pre-treatment values, indicating that the pre-treatment ADC or the ratio itself would serve best for an assessment of the low-response risk.
Discussion
Advanced imaging techniques play an important role in the diagnosis, evaluation, staging, and response assessment of lymphomas [1]. Despite the fact that the sensitivity and specificity of PET/CT with 18F-FDG depend on the histological lymphoma subtype, the Lugano classification of malignant lymphomas recommends the use of PET/CT with 18F-FDG as the reference imaging technique, combined with bone marrow biopsy (BM) [1,8]. There are several studies describing the role of whole-body MRI, with the DWI/ADC measurement as a diagnostic tool, in the evaluation of patients with lymphoma [9-12]. The potential role of DWI/ADC measurements in patients with non-Hodgkin's lymphoma was well described in 2012 by Chen and Zhong. The authors reported that WB-DWI can be adopted to detect morphological changes of lesions and, moreover, provides important functional information about the growth and decline process of tumour cells [13]. The effectiveness of PET/CT and MRI DWI/ADC in the initial stages of malignant lymphoma was analysed in another study, in which the authors compared the two methods in a pre-therapeutic context and their agreement for Ann Arbor staging; high agreement was reported. There were weak inverse correlations between SUVmax and ADCmin in all cases, but this was not repeated in subgroups [9]. The prognostic value of DWI/ADC was the subject of a study performed on a group of 28 patients with primary central nervous system lymphoma, which investigated which DWI/ADC rank or parameter is a better biomarker of the response to treatment; it revealed that DWI/ADC 5th percentiles are good predictors of progression-free survival. The early response to treatment is an important indicator of a patient's condition and prognosis [15]. An appropriately quick assessment enables modification of the treatment protocol and, if necessary, adjustment to the patient's needs. In this study we evaluated whether the ADC measured on the first day of the second cycle of chemotherapy could be a predictor of prognosis and of the final treatment outcome. There was a statistically significant difference between the DWI/ADC values in the groups with Hodgkin's and non-Hodgkin's lymphomas: patients with HL had higher values of DWI/ADC both before and after the treatment (p = 0.027 and p = 0.029, respectively). Similar results were obtained in other studies [1,16,17]. Our results indicated that patients with ADC > 752 mm²/s before treatment demonstrated a lower probability of progression than patients with ADC < 752 mm²/s (p = 0.046). Mosavi et al. similarly reported a significant relationship between higher mean ADC and longer overall survival (p = 0.006) [16]. An increase of the ADC by 34.5% after the second cycle of chemotherapy correlates with a better prognosis. This result has not been confirmed in other cancers: multivariate analysis in head and neck cancer revealed that a lower pre-treatment ADC was associated with a better response to treatment [16].
There were some limitations to this pilot study, such as the small number of patients with diagnosed non-Hodgkin's lymphoma and the lack of histopathological results.
Conclusions
Measurements of the ADC values before treatment and on the first day of the second cycle of chemotherapy can be used as a prognostic marker in the therapy of lymphomas. The most promising tool for assessing the response to treatment seems to be the ratio between the ADC value measured in the area of infiltration and the ADC value of the reference organ (in our case, the dorsal muscles). We calculated that an increase of the ratio lower than 32.5% could serve as a poor prognostic factor and could lead to modification of treatment. Early DWI/ADC measurements enable shortening of the diagnostic process, thus obtaining a quicker assessment of prognosis. An early response to treatment can influence further therapy and can potentially increase the chances for regression of lymphoma. The results seem to be promising, but further studies with larger groups of patients and long-term follow-up are essential to prove the usefulness of the DWI/ADC measurements in the evaluation of lymphoma. | 2018-12-11T05:51:31.602Z | 2018-05-12T00:00:00.000 | {
"year": 2018,
"sha1": "50ed4ff30af64b7428291b8444336f2d3ce70c2b",
"oa_license": "CCBYNCND",
"oa_url": "https://www.termedia.pl/Journal/-126/pdf-32875-10?filename=Application%20of%20the%20apparent.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "50ed4ff30af64b7428291b8444336f2d3ce70c2b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119293148 | pes2o/s2orc | v3-fos-license | Interaction of strangelets with ordinary nuclei
Strangelets (hypothetical stable lumps of strange quark matter) of astrophysical origin may ultimately be detected in specific cosmic ray experiments. The initial mass distribution resulting from the possible astrophysical production sites would be subject to reprocessing in the interstellar medium and in the earth's atmosphere. In order to get a better understanding of the claims for the detection of this still hypothetical state of hadronic matter, we present a study of strangelet-nucleus interactions including several physical processes of interest (abrasion, fusion, fission, excitation and de-excitation of the strangelets), to address the fate of the baryon number along the strangelet path. It is shown that, although fusion may be important for low-energy strangelets in the interstellar medium (thus increasing the initial baryon number A), in the earth's atmosphere the loss of baryon number should be the dominant process. The consequences of these findings are briefly addressed.
Introduction
The hypothesis of stability for strange quark matter (SQM) [1,2,3,4], a cold form of QCD plasma composed of u, d, and s quarks, naturally leads to the consideration of its existence in the interior of neutron stars [5,6,7,8,9]. It has been conjectured that, if this is indeed the case, then some astrophysical events could eject finite pieces of SQM, called strangelets, into the Galaxy. Thus, the latter would "contaminate" the cosmic ray flux in the interstellar medium (ISM), forming an exotic component among ordinary nuclei (however, see [10,11] for definite counterexamples from strange-star merger simulations).
Although the density of matter in the ISM is very low (on average about 1 particle/cm³), the confinement times of charged particles in the galactic magnetic fields are rather large, so that collisions between cosmic rays and particles composing the medium may become a relevant factor. In particular, the passage of strangelets through regions in which the density of the ISM is substantially higher, for example HII regions and supernova remnants, could exert a measurable influence on the ultimate detection of the flux at a fixed mass value.
On the other hand, the density in the terrestrial atmosphere is at least fifteen orders of magnitude higher than the average one found in the ISM. Cosmic rays must therefore traverse a thick air layer before hitting the earth's surface. If the reprocessing of the mass distribution in the ISM seems likely, it is unavoidable in the atmosphere, at least for deeply penetrating strangelets.
There have been attempts to explain some rare events in terms of the presence of strangelets in the cosmic ray flux [12,13,14,15,16,17,18]. Those events presented one or more exotic features, such as a low charge-to-mass ratio, high penetration of the primary in the atmosphere, absence of neutral pion production, transverse momentum of secondaries much higher than the typical values of ordinary nuclear fragmentation, and exotic secondary production.
Broadly speaking, there are two proposed explanations for the processing of strangelets (assumed to be primaries, see [19]) that travel deep into the atmosphere, particularly those arriving with ultrarelativistic energies.
It has been suggested that strangelets of high baryon number lose mass in successive interactions with ordinary nuclei from the top of the atmosphere during their propagation towards the earth's surface. When their baryon number reaches a value for which their mass is lower than the minimum one associated with stability, A_crit, they decay into ordinary hadrons. In this way, it would be possible to reconcile the small mean free path (of the same order as ordinary nuclei) with the high penetration in the atmosphere [20].
On the other hand, in contrast to ordinary nuclei, which tend to fragment in collisions, it has been suggested that strangelets could become more bound through the absorption of matter. Also, the electric charge of strangelets allows their trajectories to be affected by the geomagnetic field, causing an increase in the true length of the path taken to reach a given altitude. If this is the case, then the number of interactions strangelets would suffer with atmospheric nuclei when reaching a given altitude would increase, probably resulting in their complete evaporation before reaching the desired altitude. Therefore, it has been proposed that strangelets with baryon number slightly above A_crit when reaching the top of the atmosphere would increase their mass instead of decreasing it, due to successive fusion reactions with atoms in the atmosphere [21]. The possibility of competing fission was not considered in these models.
Since there are still controversies about these phenomena among different authors, a more complete analysis is necessary to provide a definitive answer on which process of interaction could lead to an event with characteristics similar to the Centauro and a few other anomalous events.
In this paper we will discuss by which means the initial fragmentation spectra for strangelets coming from astrophysical sources could be reprocessed through collisions with the matter that compose the ISM. We will also discuss how the same hadronic interactions to which strangelets would be subject in the ISM can be also responsible for experimental signatures, possibly detectable after their passage through the earth's atmosphere.
Hadronic Interactions
For the analysis of nuclear interactions between strangelets and other nuclei, we considered the following processes possibly affecting the initial baryon number of the strangelet: fusion, abrasion and fission. We also considered the processes of energy loss (de-excitation), which relieve the excitation energy acquired in the collision. Each one is described as follows:
Abrasion
The abrasion model proposed by Wilson et al. [22] is a simple model built to describe spallation qualitatively. It is based on geometric arguments rather than on details of particle-particle interaction. The abrasion process consists in the shearing off of the region of overlap in the target by the projectile, possibly leading to the removal of all matter affected by the collision.
In spite of the evident loss of detail, this model is very useful when there is a lack of experimental data or when other models fail to give a consistent analysis. The use of this simplified model, largely based on geometric arguments, is a natural choice in this study, since strangelets have not been detected yet and, consequently, models cannot be improved by comparison with experimental data.
The quantity of matter in the projectile that is abraded is given by

ΔA_abr = F A_p [1 − exp(−C_T/λ)],   (1)

where F is the fraction of the projectile in the interaction zone, λ is the mean free path of nucleon-nucleon interaction, A_p is the baryonic mass of the projectile, and C_T is the maximum chord in the projectile-target interface (for details, see [22] and references therein).
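A direct transcription of equation (1) is straightforward; the numbers in the example below are illustrative only, since F, C_T and λ all depend on the collision geometry and energy:

```python
import math

def abraded_baryons(A_p, F, C_T, lam):
    """Equation (1): F = projectile fraction in the interaction zone,
    C_T = maximum chord at the interface (fm), lam = nucleon-nucleon
    mean free path (fm)."""
    return F * A_p * (1.0 - math.exp(-C_T / lam))

print(abraded_baryons(A_p=100, F=0.1, C_T=3.0, lam=2.0))  # ~7.8 baryons
```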
To study the interaction of strangelets with ordinary nuclear matter, we considered parametrically the formation of "pseudo-baryons", or clusters of three quarks temporarily bound in the interior of a strangelet, in a procedure similar to that used in the literature for the study of the difference in binding energy between the ³He and ³H nuclei (see, for instance, [23]). It is assumed that quarks in the strangelet will maintain their identities as long as the distance between them is higher than a certain length scale, r_0. Whenever there is a superposition of the quarks (r < r_0), they are treated as confined to the same spherical region. We did not consider the formation probability of clusters other than those made of three quarks, since we are interested in analysing the change in the baryon number of strangelets; this would not change in the case of abrasion of mesons nor with the ejection of a single quark (due to the effect of the colour field). We also did not take into account the abrasion of clusters containing a higher number of quarks (for example, a pentaquark), for they would have an extremely small formation probability, thus being irrelevant for the present analysis.
We considered that at a given time a fraction of the quarks composing the strangelet are grouped into clusters of three temporarily bound quarks. We analyse the results taking this fraction of clusters, f, as a free parameter. The mean free path is taken according to a parametrisation used for the nucleon-nucleon cross section [24] (in the additive quark model, the cross section of a given particle is proportional to the number of valence quarks composing it; although different baryon compositions affect the value of the cross section, this first approximation provides the correct order of magnitude for the mean free path), and the abrasion model is used to estimate the change in the baryon number for each collision process.
Fusion
The fusion process is represented generically by A + B → C, where A and C are strangelets and B, the incident nucleus.
For a fusion reaction to occur, the interacting nuclei must have enough energy to overcome the repulsive Coulomb barrier between them, or penetrate it through the well-known quantum tunnelling effect.
For centre-of-mass energies lower than the Coulomb barrier of the system, we simply used the Gamow parametrisation. For energies above the Coulomb barrier, we followed the proposal of [25] and consider that fusion occurs when the energy deposited by the projectile into the strangelet (in the reference frame of the latter) is of the order of the binding energy of the projectile in the strange quark matter. We also consider that fusion will occur only for central collisions, according to the geometric parameters defined for the abrasion process. In this way, for the smallest chord traversed by the interacting nucleus inside the strangelet in a central collision (equal to 2√(2R_str r_p − r_p²), where R_str and r_p are the strangelet's and the nucleus' radius, respectively), we considered the maximum amount of energy deposited to be equivalent to the binding energy of the strangelet that fuses, E_b = A M_n − M(A) (where M_n is the mass of the neutron and M(A) the mass of the strangelet of baryon number A). A scaling is taken for longer chords (up to the strangelet's diameter). This construction allows us to associate each chord of interaction with a step function in energy for fusion, and is also consistent with the overall geometric approach adopted from the beginning.
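One plausible reading of this step-function criterion is sketched below: the maximum depositable energy scales linearly with the chord length, normalised so that the minimal central chord corresponds to the binding energy E_b = A M_n − M(A); the strangelet mass value used is a made-up placeholder:

```python
import math

def max_deposited_energy(E_bind, L, R_str, r_p):
    """Maximum energy depositable along a chord of length L, scaled so
    that the minimal central chord corresponds to E_bind (MeV, fm)."""
    chord_min = 2.0 * math.sqrt(2.0 * R_str * r_p - r_p ** 2)
    return E_bind * L / chord_min      # fusion assumed if E_kin <= this

M_n = 939.565                          # neutron mass (MeV)
E_b = 100 * M_n - 93000.0              # E_b = A*M_n - M(A); M(A) is made up
print(max_deposited_energy(E_b, L=12.0, R_str=6.0, r_p=1.2))
```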
Fission
The fission of strangelets may follow after processes transferring enough excitation energy in a collision. As in previous studies [26], and in analogy with ordinary nuclei, we considered a liquid-drop model to address this issue.
When the distance between fragments 1 and 2 coming from the possible fission of a strangelet is r = 0 (the initial state of the spherical drop), E_0 is the difference in rest energy (the amount of energy available for fission), given by

E_0 = M(A,Z) − M(A_1,Z_1) − M(A_2,Z_2),

where (A,Z) represents the strangelet which may come to fission and (A_1,Z_1), (A_2,Z_2) its fragments. The smallest energy necessary for a system to fission (the activation energy) is the one leading to the fragmentation of the system into two fragments of equal mass. This is easier for strangelets than for heavy nuclei because the Coulomb energy does not play an important role. In fact, we neglected the contribution of the Coulomb energy in our calculations, since it is much smaller than the masses of the strangelets themselves.
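The resulting energy balance for the symmetric channel can be written as a one-line check; identifying the activation energy with the mass deficit of the two-fragment configuration is a crude simplification of the liquid-drop barrier, used here only for illustration with made-up masses:

```python
def fission_allowed(E_exc, M_parent, M_frag1, M_frag2):
    """Symmetric-channel energy balance (MeV), Coulomb terms neglected:
    E0 = M(A,Z) - M(A1,Z1) - M(A2,Z2); the activation energy is crudely
    identified with -E0 when the split is endothermic."""
    E0 = M_parent - M_frag1 - M_frag2
    return E_exc >= max(0.0, -E0)

print(fission_allowed(E_exc=200.0, M_parent=93000.0,
                      M_frag1=46650.0, M_frag2=46650.0))  # False: 100 MeV short
```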
Excitation of strangelets
Even peripheral collisions not stripping baryon number from the strangelet can be significant for the excitation of the latter. The mean energy transferred to a nucleon per unit of intersected path is of the order of 13 MeV/fm. As strangelets are composed of quarks, it is reasonable to assume that their interaction with ordinary nuclear matter behaves similarly to that between two nuclei. In this way, the energy deposited per unit path must be of the same order of magnitude as that between nucleons. In this work, the excitation energy due to the transfer of kinetic energy through the surface of the interacting system was taken as the nuclear value and scaled by the longest chord in the surface of the projectile (for more details, see [22]).
In cases in which abrasion occurs, the excitation energy coming from the distortion of the nucleus must also be considered (the parametrisation used is detailed in [22]).
The SQM hypothesis suggests that the fusion of a proton with SQM leads to the additional liberation of energy for each nucleon absorbed. In the fusion process the excitation energy can be computed using the "mass excess", E_x, written as

E_x = M_str(A_str) + M_N(A_N) − M_str(A_str + A_N),

with A_str and A_N the baryon numbers of the strangelet and of the nucleus with which it interacts, and M_N(A) and M_str(A) the masses of ordinary nuclear matter and of strange matter for a given A, respectively.
De-excitation
A strangelet may suffer de-excitation through surface evaporation of nucleons. For the emission of neutrons, we followed the procedure detailed by Berger and Jaffe [27]. The emission of a neutron through the weak force leaves a strangelet with parameters changed by ΔA = −1 and ΔY = ΔZ = 0; this happens whenever it is energetically allowed, i.e. when M(A) > M(A−1) + M_n. The energy lost by the emission of pions can be calculated based on the chromoelectric flux tube model [28]. It is known that SQM is a poor emitter of thermal photons at energies below 20 MeV. When considering the bremsstrahlung process, the emission from the surface of SQM is about four orders of magnitude smaller than the equilibrium black-body emission at a given temperature T [29]. The cooling equation to be considered is then

C_v (dT/dt) = −ζ · 4πR² σ T⁴,

where ζ ∼ 10⁻⁴, C_v is the specific heat of SQM and σ is the Stefan-Boltzmann constant. Finally, cooling by neutrino emission produces a luminosity L_ν = dE_ν/dt given by

L_ν = (4π/3) R³ ε_ν,

where ε_ν is the neutrino emissivity per unit volume. The specific heat is taken from references [30,31] for SQM without pairing and from [32] for CFL SQM.
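The two luminosities above translate directly into code; units are left symbolic (they are fixed entirely by the units chosen for σ, R, T and ε_ν), and the demo numbers below are placeholders:

```python
import math

def photon_luminosity(T, R, sigma, zeta=1e-4):
    """L_gamma = zeta * 4*pi*R^2 * sigma * T^4 (suppressed black body)."""
    return zeta * 4.0 * math.pi * R ** 2 * sigma * T ** 4

def neutrino_luminosity(R, eps_nu):
    """L_nu = (4*pi/3) * R^3 * eps_nu, with eps_nu a volume emissivity."""
    return (4.0 * math.pi / 3.0) * R ** 3 * eps_nu

# Placeholder numbers only; units must be chosen consistently by the caller
print(photon_luminosity(T=1.0, R=5.0, sigma=1.0))
```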
Interactions in the ISM
If produced in astrophysical sites, strangelets would interact with the ISM matter causing a reprocessing of the mass distribution injected at the sources.
To analyse all the possible processes of hadronic interaction described above between ordinary nuclei and strangelets, together with the actual de-excitation channels, we built a computer code tracking interactions on an individual basis. Starting from a given strangelet of baryon number A and kinetic energy E, the following steps are taken:
(i) Setup of the collision: random sampling of the impact parameter, b ≤ R_N + R_str, where R_N and R_str are the nucleus' and the strangelet's radius, respectively; from this parameter, the distance of closest approach is calculated. The excitation energy of the strangelet resulting from the transfer of energy in the collision is also randomly sampled (in half of the events the target is considered to be in an excited state, and the projectile in the other half).
(ii) Hadronic interaction: if the distance of closest approach is larger than the sum of the radii of the two particles, Coulomb scattering is assumed to happen. For energies below the Coulomb barrier, there is a probability of quantum tunnelling, sampled numerically; for energies above the barrier, the criteria for energy deposition are checked, and if the conditions are met, fusion is assumed to occur and the corresponding excitation energy is obtained. If the conditions for fusion are not fulfilled, abrasion is considered according to equation (1) and the corresponding excitation energy is obtained. If no baryons are extracted in the interaction, scattering is said to have taken place.
(iii) Fission: Fission can only happen if the total excitation energy is above the activation energy.
If this is the case, then the most likely channel for fission is considered, i.e., the strangelet fragments into two daughters with the same baryon number, and the kinetic and excitation energies are equally divided between the fragments.
(iv) De-excitation: the strangelet's temperature is obtained according to the first law of thermodynamics, so that the de-excitation processes can be evaluated (emission of photons, neutrinos and pions, and neutron evaporation). A minimal sketch of this event loop is given after the following figure discussion.
Figure 1 presents the probabilities of the abrasion, fusion and scattering processes, as described in section 2, for collisions with protons in the ISM as a function of the incident energy of the strangelet (in this analysis we considered the dependence of the strangelet's mass on the baryon number according to [33]).
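A minimal, purely illustrative Monte Carlo skeleton of the event loop in steps (i)-(iv) is given below for a strangelet-proton encounter; every probability and energy scale in it (the barrier, the fusion window, the tunnelling probability, the abrasion yield) is a placeholder stand-in rather than a value used in this work:

```python
import math
import random

def track_collision(A, E, R_str, R_N, p_tunnel=0.01, f_cluster=0.5):
    """Toy per-event logic for a strangelet-proton encounter (steps i-ii);
    the barrier, fusion window, tunnelling probability and abrasion yield
    are placeholder stand-ins. Steps (iii)-(iv), fission and de-excitation,
    are omitted here."""
    b = (R_str + R_N) * math.sqrt(random.random())      # step (i): sample b
    barrier, E_fus_max = 5.0, 50.0 * A                  # placeholder scales (MeV)
    if E < barrier:                                     # step (ii): sub-barrier
        return ('fusion', A + 1) if random.random() < p_tunnel else ('scatter', A)
    if E < E_fus_max and b < 0.3 * R_str:               # central: full deposition
        return ('fusion', A + 1)
    dA = int(f_cluster * 3 * (1.0 - b / (R_str + R_N)))  # crude abrasion yield
    return ('abrasion', A - dA) if dA > 0 else ('scatter', A)

random.seed(1)
print([track_collision(100, 500.0, 6.0, 1.0) for _ in range(3)])
```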
As discussed above, the fraction of "baryons" inside a strangelet is taken as a free parameter in the calculations and is related to the probability (impossible to calculate reliably) that at a given moment a certain quantity of three-quark clusters is formed internally. This value is relevant for assessing the relative importance of the abrasion and scattering processes.
For energies below the Coulomb barrier, fusion occurs due to quantum tunnelling, and for energies above the barrier it is the ultimate result of the energy deposition of the projectile in a central collision. As the energy increases, the probability of a central collision in which the projectile deposits all its kinetic energy (in the strangelet's frame) progressively decreases, until the kinetic energy exceeds the maximum associated with fusion (the one corresponding to deposition along the strangelet's diameter), where the probability goes to zero.
Due to the decrease of the fusion probability as the energy increases, the abrasion process becomes important. Also, the mean free path decreases with increasing energy, causing the collision between the proton and the pseudo-baryons inside the strangelet to become more probable (at relativistic energies, this mimics full stopping of the projectile in the target). Nevertheless, if the fraction of clusters is too low, the abrasion process becomes dominant only at high energies, since the passage of the proton through the strangelet does not change its baryon number.
Although the dependence on the electric charge is less important for CFL strangelets (which would in turn influence the determination of the distance of closest approach), for high enough energies this effect is indeed irrelevant, and this explains why there are no significant differences between the two states (unpaired and CFL) in Fig. 1. This fact is mainly due to the assumption about the mean free path of the ordinary nucleus with respect to the pseudo-baryons formed inside the strangelet, considered to be the same for both states in this study. As the number of clusters increases, the influence of the abrasion process becomes more important, with a rise in the number of abraded baryons. For high enough energies per baryon number (above ≈ 10⁵ MeV), the ratio between the abrasion and scattering processes tends to an asymptotic value. Figure 2 shows the mean abraded baryon number in the regime where abrasion becomes relevant. There are no important differences in this parameter between non-CFL and CFL strangelets. As the strangelet energy increases, the quantity of abraded matter increases until it approaches an asymptotic value for very high energies. Obviously, this value is also affected by the fraction f of clusters temporarily bound in the strangelet: the higher this fraction, the higher the loss of strangelet mass per collision.
In a collision of a strangelet with an ordinary nucleus, the excitation energy must be higher than the activation energy for fission to happen. As mentioned above, the most likely channel for the fission of strangelets (apart from possible shell effects, not taken into account here) is the one with two fragments of the same baryon number (note the difference with respect to ordinary nuclei, which are influenced by the Coulomb terms). For strangelets interacting with protons in the ISM, the highest energies acquired in the interaction in this analysis were not enough (by a factor of at least 10) to cause the fission of strangelets of relatively low mass. Moreover, for high-mass strangelets fission is again disfavoured because, although there is a small difference in binding energy between the strangelet and its fragments, the high baryon number pushes the activation energies to even higher values than those for low-mass strangelets, as expected.
We can estimate which processes dominate the de-excitation of strangelets from their mean temperatures in the collisions (shown in Figure 3). From the latter, we conclude that the process of neutron evaporation due to the rise in temperature is not likely to be relevant unless the temperatures rise above tens of MeV. The emission of neutrons might still be possible during the pre-equilibrium configuration, while the energy released during fusion is not yet uniformly distributed inside the strangelet. Nevertheless, even if this is the case, this process would probably contribute with the emission of very few baryons, since thermal equilibrium must be reached very quickly.
In addition, cooling by neutrino emission is hardly the main mechanism for energy loss, since the temperatures associated with the collisions of protons and strangelets are always below a few MeV, i.e. below the regime in which neutrino emission dominates photon emission. Those temperatures are obviously not enough for pion emission either. Therefore, we conclude that the dominant de-excitation mode must be photon emission.
Interactions in the earth atmosphere
The interactions of strangelets with the main atmospheric component, nitrogen, have been analysed afterwards. We have not considered the possibility of partial fusion. The same procedure as in section 3 was performed to evaluate the relative importance of the abrasion, fusion, fission and scattering processes for strangelets travelling in the earth's atmosphere. The analysis shows that the fusion process only happens for strangelet energies lower than those in the ISM, due to the substantial difference between the rest masses of the proton and of nitrogen (see figure 5). The fusion probability increases with A because the increase of the strangelet radius leads to a longer path for the deposition of the kinetic energy of the interacting nucleus in central collisions (in the reference frame of the strangelet).
The probability of abrasion remains low at low energies, due to the competition with the fusion process, and increases with rising energy because, with the reduced influence of the Coulomb repulsion, scattering becomes less likely to happen. These observations are weakly dependent on A.
The abrasion probability increases with energy, and the abrasion/scattering ratio tends asymptotically to a constant value (close to 1) for relativistic strangelets, due to the dependence of the nucleon-nucleon mean free path on energy. This observation is independent of both A and the fraction of clusters temporarily formed inside the strangelet.
The mean excitation energy after a collision is up to ∼800 MeV for abrasion (depending weakly on A and independent of the state of SQM within the assumptions adopted in this work), and up to ∼1.3 GeV and ∼3 GeV for strangelets without pairing and CFL strangelets, respectively, after fusion with nitrogen. These values explain the occurrence of induced fission for CFL strangelets of low mass (see figure 6). For low energies, fission occurs due to the liberation of energy from fusion (both probability curves can be superimposed, see figures 5 and 6). At higher energies, fission of strangelets with and without pairing happens due to the excitation energy made available by the deformation of the strangelet resulting from the abrasion of a rather large baryon number A*. Although the mean excitation energy in the abrasion process is smaller than the activation energy, a fraction of those events can reach energies for which fission is allowed. In these cases, the probability is smaller for lower fractions of pseudo-baryons, since they would result in a smaller quantity of abraded matter (i.e., less distortion of the strangelet). Nevertheless, this result must be taken with caution because the abrasion model, established for the description of ordinary nuclear matter, certainly was not designed to predict the abrasion of such large numbers of baryons A*. For high-mass strangelets, the total decrease in baryon number can reach values of order 10², which in turn might lead to overestimated excitation energies.
Fission of high-mass strangelets does not occur, since the excitation energies in the collisions of these particles with nitrogen are never sufficient to overcome their activation energies.
The mean abraded matter is larger in the atmosphere than in the ISM, since the interaction overlap between target and projectile is substantially larger (see figure 7). It also increases with the fraction f of baryon clusters, but shows significant differences only for f = 0.1 at high energies, due to the difference in the mean free path. For low energies, the difference in the amount of abraded material with changing f is more pronounced. These conclusions hold for different strangelet baryon numbers and are weakly dependent on the pairing state of SQM (unpaired or CFL).
The temperatures associated with the fusion process in the atmosphere are higher than those for fusion in the ISM because, with complete fusion, a large amount of energy is liberated per nucleon of the nitrogen. This allows the fission of CFL strangelets of low baryon number. The temperatures obtained after fusion are lower than ∼6 MeV for unpaired strangelets, and reach up to ∼18 MeV for CFL strangelets when fusion is followed by fission. For unpaired and CFL strangelets with A > 1000, the temperatures from the fusion process (not followed by fission), which are strongly dependent on the baryonic content, are always lower than ∼3 MeV and ∼14 MeV, respectively.
The temperatures obtained in the present analysis indicate that the most likely de-excitation channel for strangelets interacting in the atmosphere is photon emission. At the highest possible temperatures, the emission of pions is also possible, but with a less effective contribution than photon de-excitation. Even at the highest temperatures, neutron emission is not important after thermal equilibration.
Atmospheric showers initiated by strangelets
Strangelets penetrating deeply in the atmosphere may lead to the generation of showers. Note that we have not considered the formation of "SQM" (possibly metastable due to the high temperatures) in collisions of ordinary cosmic rays with components of the atmosphere [19].
To evaluate, from the results shown in the previous section, the possible development of the interactions of strangelets penetrating the atmosphere, it is also necessary to address the influence of the geomagnetic field on those particles.
Strangelets with kinetic energies per baryon lower than hundreds of MeV to a few GeV will have their flux at the surface of the earth reduced by the local geomagnetic cutoff. Also, if the rigidity is above the local cutoff but E/A is of the order of tens to a few hundred MeV (depending on the strangelet mass), it is likely that they will become trapped in the geomagnetic field lines if their pitch angles are adequate [34]. Therefore, the possibility of showers generated by low-energy strangelets is substantially reduced. Moreover, the possibility that the fusion process is the one allowing penetration of strangelets to low altitudes seems very unlikely, because the results of the previous section indicate that fusion should be important precisely for these low-energy strangelets affected by the magnetic field.
If we assume that the column density travelled between collisions of strangelets with nuclei in the atmosphere is of the order of 30 g/cm², a value appropriate when R_strang ∼ R_air (for strangelets with high baryon number the final masses obtained after a given column density traversed must then be overestimated, since their radii are large compared to the nitrogen radius; in this sense, the mean distance between collisions in the atmosphere should be shorter than the one equivalent to 30 g/cm² of column density), and imposing that in a typical collision the energy loss is ΔE/E ∼ 3% [14], we obtain a crude evaluation of the evolution of the baryon number of strangelets as a function of the column density traversed, shown in Figure 8.
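This crude evaluation can be reproduced with a few lines; the per-collision abrasion yield below is a hypothetical constant stand-in, whereas in the full calculation it depends on A, the energy and the cluster fraction f:

```python
def mass_vs_depth(A0, E0, depth, step=30.0, dE_over_E=0.03, dA_per_coll=2):
    """Toy integration: one collision every ~30 g/cm^2 and ~3% energy loss
    per collision; dA_per_coll is a hypothetical constant stand-in for the
    abrasion yield, which in the full model depends on A and E."""
    A, E, x = A0, E0, 0.0
    while x < depth and A > 0:
        A -= dA_per_coll
        E *= 1.0 - dE_over_E
        x += step
    return A, E

print(mass_vs_depth(A0=1000, E0=1e5, depth=1000.0))  # ~34 collisions deep
```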
As expected, the steepness of the mass loss increases for higher baryon number and/or higher energies at fixed A.
It is worth remarking that fission is important for low-mass strangelets. Although less likely than abrasion, this process helps prevent these particles from penetrating deep into the atmosphere before reaching the minimum A for which SQM should be stable.
The analysis presented in [26] models the possibility of fission following the fusion of strangelets with air nuclei in the atmosphere, taking into account the contribution of rotational energy. Particles with large rotational deformation have a higher probability of fission in a collision than spherical (undeformed) particles. Nevertheless, their spallation estimate is not modelled in detail, instead assuming that the strangelet loses the same mass number as the air nucleus in each collision. Besides that, our results are not too different in absolute values (although the curves of mass evolution for strangelets do have opposite curvatures), which leads us to believe that the curves shown in Figure 8 should actually be considered as upper limits.
When considering the celebrated Centauro events, our results indicate that these events might be explained by a high-mass, high-energy strangelet penetrating the upper atmosphere and suffering successive baryon-number losses until it reaches ground experiments. Nevertheless, further analysis of the kinematics and multiplicities of the secondary production is necessary for a firmer conclusion.
Conclusions
The most important feature of this work is the proposal of a consistent approach that accounts simultaneously for the different processes of interaction between strangelets and ordinary nuclei. The abrasion method allows the abraded matter to be evaluated for strangelets, rather than assuming a fixed baryon number extracted by the interacting nuclei or some other oversimplified prescription (as in, for example, [20,21,26,35]).
We have shown that the reprocessing in mass number of strangelets in the ISM is a process that must be effectively operating over long periods, due to the long mean free path for interactions caused by the low density of nuclei in the Galaxy.
For the analysis of spallation we used the abrasion model, strongly rooted in geometrical arguments. In spite of the generality of the assumptions, this has the advantage of making the results quite independent of experimental data, which are non-existent in the present case. This adaptation of the existing models obviously does not describe the details found in nuclear-collision experiments, but it is qualitatively acceptable, and thus we expect the results presented here to point towards a general trend for strangelet-nucleus interactions.
We have shown that important differences in the results arise depending on the assumed quark clustering fraction, between f = 0.1 and f = 1. Obviously, f = 1 is not realistic at all, since in this case one could consider the strangelet as a kind of gas of Λ particles, something inconsistent with the strange quark matter stability hypothesis from the very beginning [36]. However, and in spite of this uncertainty, the analysis suggests that adopting spallation with nuclear parameters as the mechanism for reprocessing strangelets in the ISM should overestimate the change in baryon number of the primaries. In addition, the treatment of spallation presented here can overcome the problems of assuming that this specific interaction process simply destroys the strangelet (as assumed, for example, by Madsen [35]). Also, we contend that fusion should be considered an important process for interactions with protons at low energies. The estimates of the reprocessing of the initial mass distribution of strangelets in the ISM should be reanalysed for a better prediction of the most likely channels for their ultimate detection and for the evaluation of existing upper limits to their flux. The consistent framework of relevant interactions, in the face of the unknown physics of strangelets, presented here can provide elements for such a reanalysis.
On the other hand, we believe that in the terrestrial atmosphere the dominant mechanism to which strangelets are subject is the loss of baryon number, mostly due to abrasion, but also with a contribution from the fission process.
If strangelets are part of the cosmic ray flux, it would be possible to detect them, especially in experiments located at the top of mountains [37], since the mass loss due to interactions with atmospheric particles tends to be catastrophic for high column densities crossed. It is not ruled out that Centauro events may have their origin in strangelets, although these suggestive results remain to be analysed in further detail. | 2010-03-25T14:12:56.000Z | 2009-08-05T00:00:00.000 | {
"year": 2010,
"sha1": "6dcd690689d2cd06e5cd1cccbbe1244646adf934",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1003.4901",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6dcd690689d2cd06e5cd1cccbbe1244646adf934",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
253418768 | pes2o/s2orc | v3-fos-license | Associations between breastfeeding intention, breastfeeding practices and post‐natal depression during the COVID‐19 pandemic: A multi‐country cross‐sectional study
Abstract Associations between breastfeeding intention, duration and post‐natal depression (PND) have been shown in pre‐COVID‐19 studies. However, studies during COVID‐19 have not examined the associations between breastfeeding intention, breastfeeding practices, and PND in an international sample of post‐natal women, taking into consideration COVID‐19 related factors. This is the first study to address this gap as both PND and breastfeeding may be affected by COVID‐19, and have important long‐term effects on women's and infant's health. A cross‐sectional internet‐based survey was conducted with 3253 post‐natal women from five countries: Brazil, South Korea, Taiwan, Thailand, and the United Kingdom from July to November 2021. The results showed that women who intended to breastfeed during pregnancy had lower odds of having PND than women who did not intend to. Women who had no breastfeeding intention but actually breastfed had greater odds (AOR 1.75) of having PND than women who intended to breastfeed and actually breastfed. While there was no statistical significance in expressed breast milk feeding in multivariable logistic regression models, women who had shorter duration of breastfeeding directly on breast than they planned had greater odds (AOR 1.58) of having PND than those who breastfed longer than they planned even after adjusting for covariates including COVID‐19‐related variables. These findings suggested the importance of working with women on their breastfeeding intention. Tailored support is required to ensure women's breastfeeding needs are met and at the same time care for maternal mental health during and beyond the pandemic.
With the advent of the COVID-19 pandemic, infection preventative measures such as self-isolation, social distancing, facemask wearing and restricted hospital visits from partners or relatives were put in place in hospitals and 'hotspot' areas to control the transmission of the virus in many countries. Studies undertaken during the first year of the COVID-19 pandemic have reported high rates of PND.
For example, a survey conducted with 614 post-natal women between April 2020 and May 2020 in the UK found that 43% of women had an Edinburgh post-natal Depression Scale (EPDS) score ≥13, indicating a major post-natal depressive disorder (Fallon et al., 2021). About 35% of 162 post-natal women in London between May 2020 and June 2020 were assessed to have EPDS ≥ 13 (Myers & Emmott, 2021). A survey of 184 post-natal women from two hospitals in Brazil between 8th June 2020 and 23rd December 2020 reported a 38.8% PND rate (EPDS ≥ 12; Galletta et al., 2022). An Italian study presented that 23.03% of 152 post-natal women in a hospital who filled in the EPDS questionnaire on the second post-natal day at hospital discharge between 22nd February 2020 and 18th May 2020 had an EPDS score ≥ 12 compared to 11.56% of 147 women from the nonconcurrent control group in 2019 (p < 0.001; Zanardo et al., 2021).
These findings are concerning due to the negative and long-term impact of PND on the woman, her infants and her family (Myers & Johns, 2018;Slomian et al., 2019;Tammentie et al., 2004).
The short-and long-term benefits of breastfeeding to the health of women and infants have been well documented (Victora et al., 2016).
Breastfeeding has also been found to provide protection against infectious respiratory diseases such as that caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus responsible for COVID-19 (Didikoglu et al., 2021; Verd et al., 2021). For women diagnosed with or suspected of having COVID-19, breastfeeding remains the recommended infant feeding practice, with precautions such as wearing a facemask during feeding and good hand hygiene (WHO, 2020). A narrative review that included 12 studies between January 2020 and January 2021 reported that COVID-19 had both positive and negative impacts on women's breastfeeding plans (Pacheco et al., 2021). The positive impacts included that some women were able to spend more time at home with their infants and increased their breastfeeding duration. The negative impacts included reduced breastfeeding duration and frequency, and earlier breastfeeding cessation, due to increased childcare responsibilities at home and a perceived lack of support from family and professionals (Pacheco et al., 2021).
Breastfeeding could be a protective factor for PND and reduce post-natal depressive symptoms (Figueiredo et al., 2014). However, unmet breastfeeding expectations may increase the risk of PND in some women (Gregory et al., 2015). Borra et al. (2015) reported that among women who were not depressed before childbirth, those who had intended to breastfeed and actually breastfed had the lowest risk of PND, while those who had intended to breastfeed but did not breastfeed had the highest risk of PND. On the other hand, PND has been indicative of early breastfeeding cessation (Brown et al., 2016;Dias & Figueiredo, 2015).
Limited studies conducted during the COVID-19 pandemic have investigated the relationships between PND and breastfeeding. The study by Zanardo et al. (2021) during the COVID-19 lockdown in an Italian hospital showed that women who breastfed their infants exclusively at hospital discharge on the second post-natal day had significantly lower EPDS scores than women who practised formula feeding or complementary feeding. In a study among Malaysian post-natal women with premature infants at the beginning of the pandemic, Yahya et al. (2021) found that women at high risk of PND had less positive attitudes towards breastfeeding than those at low risk of PND.
However, none of the studies conducted during COVID-19 explored the relationships between (un)changed breastfeeding plans and PND, nor did so at an international level. Our study is the first to address this gap.
Key messages
• This study identified independent associations of breastfeeding intention and actual breastfeeding practices with PND after adjustment for COVID-19-related covariates.
• Although women who intended to breastfeed during pregnancy were less likely to have PND, those who did not intend to breastfeed but did breastfeed were more likely to have PND than those who intended to breastfeed and breastfed.
• Women with shorter (vs. longer) than planned duration of breastfeeding on breast were more likely to have PND.
• Health care providers and policymakers should ensure tailored support is available for women's infant feeding plans and mental health.
The aim of this article was to report the associations between breastfeeding intention, breastfeeding practices, and PND considering COVID-19-related factors among post-natal women in five countries. This article formed part of a larger multi-country project examining various aspects (including infant feeding, PND, social support, maternity care, COVID-19 infection and COVID-19 vaccination acceptance) of post-natal women's experiences of having a baby during the COVID-19 pandemic.
| Study design and participants
A cross-sectional online survey was conducted in five countries: Brazil, South Korea, Taiwan, Thailand, and the UK from July 2021 to November 2021. The choice of countries was a convenience sample of countries that had similar pre-COVID PND rates (around 21%), except Thailand (12.52%; Wang et al., 2021). A convenience sampling technique was used to recruit participants. Inclusion criteria were post-natal women who were: (a) up to 6 months postpartum, (b) living in one of the participating countries, (c) aged 18-49 years old (except in Taiwan, 20-49 years old), and (d) literate in the residential country's official language. The recruitment information and the survey web link were advertised online and/or through hard copies of posters or flyers, with a quick response (QR) code where used, as planned by the study lead of each country. The researchers from each country distributed the survey information in the official language of the country (i.e., Portuguese, Korean, Mandarin with traditional Chinese characters, Thai, and English) via various channels such as emails, social media (e.g., Twitter, Facebook, WhatsApp groups, Line groups, etc.), parenting online forums, personal networks, relevant health care services, and not-for-profit organisations, including services supporting breastfeeding, women, children and families.
| Data collection
Data were collected anonymously via an online Google Form in each country's official language. Except for the outcome variable mentioned below, the survey questions were initially developed in English and subsequently translated into the official language of the participating country. Back translations were also undertaken. Online informed consent was obtained from all participants before they started the survey. All data were anonymised. Ethical approval was granted from each country's relevant ethical approval body (detailed in Section 2.4).
| Outcome variable
The outcome variable in this study is depressive symptoms. The EPDS, a 10-item self-report scale, was used to assess post-natal women's mental health in the last 7 days, as stated in the EPDS, using Likert scales (scoring 0-3, with a total score ranging between 0 and 30; Cox et al., 1987;Khalifa et al., 2016). An EPDS cutoff point of 13 or above was used to classify those with depression. Validated EPDS versions in each country's official language were used.
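As a minimal sketch of this classification step (assuming the 10 item scores are already coded 0-3 in the scoring direction of the validated EPDS versions):

```python
def epds_total(item_scores):
    """Total EPDS score from the 10 item scores (each coded 0-3)."""
    assert len(item_scores) == 10 and all(0 <= s <= 3 for s in item_scores)
    return sum(item_scores)

def probable_pnd(item_scores, cutoff=13):
    """Classify probable depression with the study's cutoff (EPDS >= 13)."""
    return epds_total(item_scores) >= cutoff

print(probable_pnd([2, 1, 2, 1, 2, 1, 2, 1, 1, 1]))  # total = 14 -> True
```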
| Independent variables
The independent variables were: (1) intention to breastfeed during pregnancy, (2) breastfeeding intention during pregnancy and actual breastfeeding practices during postpartum, (3) impact of COVID-19 on baby fed directly from breast, and (4) Impact of COVID-19 on feeding expressed breast milk.
1. Intention to breastfeed during pregnancy was asked with the response options 'yes', 'no' or 'don't know', and was categorised into 'yes' or 'no/don't know' in the analyses.
3. Impact of COVID-19 on baby fed directly on breast was asked using the question 'Does COVID-19 affect your infant feeding behaviour?' with response options (i) not intend to feed, (ii) shorter than intended, (iii) the same duration as intended, and (iv) longer than intended.
4. Impact of COVID-19 on baby fed expressed breast milk was asked using the same question with the same four response options.
| Covariates
Socio-demographic, obstetric, health and support characteristics
Socio-demographic variables of women included country, mother's age, education level, work status, residence and marital status. Obstetric variables were pregnancy intention, mode of childbirth, parity, birthweight and preterm birth. Health and support variables were (a) health problem of the mother during pregnancy, and (d) social support post-birth (score), measured using a six-item self-report scale, the Maternity Social Support Scale (MSSS; Webster et al., 2000).
COVID-19 knowledge, attitudes and practices (KAP), and beliefs of breastfeeding in relation to COVID-19
Questions regarding KAP on infection prevention and control measures against COVID-19 were drawn and modified from previous studies (Hussain et al., 2020;Islam et al., 2020). Nine questions were included to assess knowledge of COVID-19 by answering if a statement (e.g., 'COVID-19 can NOT spread through respiratory droplets of infected individuals') was 'true', 'false' or 'do not know'.
A correct answer was awarded one point (indicating an 'adequate answer').
A total score ranged from 0 to 9, with a higher score indicating better knowledge. Attitudes toward the severity and prevention of COVID-19 comprised seven questions (e.g., 'Social distancing is important to prevent COVID-19') with 5-point Likert scales ('strongly disagree', 'disagree', 'undecided', 'agree' and 'strongly agree'). A total score ranged from 7 to 35, with a higher score indicating a more positive attitude. Precaution practices regarding COVID-19 comprised six questions, such as 'During the last 7 days, did you avoid touching eyes, nose and mouth with unwashed hands?', with 4-point Likert scales ('never', 'occasionally', 'sometimes' and 'always'). A total score ranged from 6 to 24, with higher scores indicating more adequate practice.
Six questions (e.g., If the mother is confirmed or suspected to have COVID-19, the mother should not breastfeed) were developed for breastfeeding beliefs in relation to infection prevention and control measures for COVID-19 based on WHO's (2020) breastfeeding Q&A. The answer options were 'disagree', 'uncertain' and 'agree'.
A total score ranged from 0 to 12, with a higher score indicating a more positive breastfeeding belief.
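The scoring rules above can be summarised in code. This is a sketch: the 0/1/2 point mapping for belief items and the handling of any negatively worded items are assumptions, since the per-item directions are not spelled out in the text.

```python
def knowledge_score(answers, answer_key):
    """9 items: 1 point per answer matching the key ('true'/'false');
    'do not know' never matches the key, so it scores 0. Range 0-9."""
    return sum(a == k for a, k in zip(answers, answer_key))

def attitude_score(likert_1_to_5):   # 7 items, range 7-35
    return sum(likert_1_to_5)

def practice_score(likert_1_to_4):   # 6 items, range 6-24
    return sum(likert_1_to_4)

BELIEF_POINTS = {"disagree": 0, "uncertain": 1, "agree": 2}  # assumed mapping

def belief_score(responses):         # 6 items, range 0-12
    # Items worded against WHO guidance would need the mapping reversed.
    return sum(BELIEF_POINTS[r] for r in responses)
```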
COVID-19 impact variables
Impact of COVID-19 on food security was assessed before and during COVID-19 using two questions: 'Did you ever run out of food before the end of the month or cut down on the amount you ate to feed others in 2019 BEFORE COVID-19?' and 'Did you ever run out of food before the end of the month or cut down on the amount you ate to feed others DURING COVID-19 in 2020-2021?' The two variables were combined and categorised into (i) no change: insecure to insecure, (ii) worse: secure to insecure, (iii) better: insecure to secure and (iv) no change: secure to secure. Variables regarding COVID-19 positive diagnosis (yes or no), and COVID-19 vaccination uptake (yes or no) were also included.
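The derived four-category food-security variable follows mechanically from the two yes/no questions; a sketch:

```python
def food_security_change(insecure_before: bool, insecure_during: bool) -> str:
    """Combine the 2019 and 2020-2021 food-insecurity answers into the
    four analysis categories."""
    if insecure_before and insecure_during:
        return "no change: insecure to insecure"
    if insecure_during:
        return "worse: secure to insecure"
    if insecure_before:
        return "better: insecure to secure"
    return "no change: secure to secure"

print(food_security_change(False, True))  # -> "worse: secure to insecure"
```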
| Statistical analysis
The data analyses were conducted using the SAS statistical software package version 9.3 (SAS Institute Inc.). Descriptive statistics were used to analyse categorical variables (frequencies and percentages) and continuous variables (mean and standard deviation [SD]). To test the associations between PND (outcome variable) and the four independent variables ((1) intention to breastfeed during pregnancy; (2) breastfeeding intention during pregnancy and actual breastfeeding practices; (3) impact of COVID-19 on baby fed directly from breast; and (4) impact of COVID-19 on feeding expressed breast milk), bivariate associations were assessed using the chi-square test and simple logistic regression. Subsequently, to assess the impact of each independent variable on PND, adjusted for other variables, three multivariable logistic regression models were used: (i) model I adjusted for socio-demographic, obstetric, health and support characteristics; (ii) model II adjusted for the covariates in model I plus COVID-19-related KAP and beliefs towards breastfeeding during COVID-19; and (iii) model III adjusted for the covariates in model II plus the COVID-19 impact variables: food security status before and during COVID-19, positive COVID-19 diagnosis, and COVID-19 vaccination uptake. Adjusted odds ratios (AOR) with 95% confidence intervals (CI) were used to assess the strength of these associations.
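As a minimal sketch of this modelling strategy: the study used SAS 9.3, so the Python/statsmodels version below, with synthetic data and illustrative variable names, is a stand-in rather than the study code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "pnd": rng.integers(0, 2, n),          # EPDS >= 13 (yes/no)
    "intended_bf": rng.integers(0, 2, n),  # exposure of interest
    "age_group": rng.choice(["18-29", "30-39", "40-49"], n),
    "vaccinated": rng.integers(0, 2, n),
})

# Model I-style adjustment (socio-demographic covariate); models II/III
# would add the KAP scores and the COVID-19 impact variables.
fit = smf.logit("pnd ~ intended_bf + C(age_group) + vaccinated",
                data=df).fit(disp=0)
aor = np.exp(fit.params)        # adjusted odds ratios
ci95 = np.exp(fit.conf_int())   # 95% confidence intervals
print(pd.concat([aor.rename("AOR"), ci95], axis=1))
```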
| Ethical statement
Online informed consent was obtained from all participants before they started the survey. All data were anonymised. Ethical approval was granted by each country's relevant ethical approval body.
| RESULTS
A total of 3253 eligible responses were received from post-natal women living in five countries: Brazil, South Korea, Taiwan, Thailand and the UK. Table 1 shows the characteristics of the participants. The majority of all women were between 30 and 39 years old (61.6%), had a university or higher degree (75.8%), lived in urban areas (72.6%), were married (95.5%), were on either paid or unpaid maternity leave (59.3%), had a vaginal birth (61.0%), were primiparous (57.0%), and had received at least one dose of a COVID-19 vaccine (72.2%). About 74% (73.5%) of all women fed their baby directly on breast, 40.6% fed their baby with infant formula, and 38.3% with expressed breast milk in the 24 h before survey completion (Table 1). A proportion of women did not intend to breastfeed but actually breastfed, 2.2% intended to breastfeed but did not, and 1.7% neither intended to nor actually breastfed. The chi-square test revealed a significant association with PND in the pooled data (p < 0.0001). At the country level, statistical significance was seen only in Thailand (p < 0.0001).
Women were asked 'Does COVID-19 affect your infant feeding behaviour?' for two types of breastfeeding: directly on breast and expressed breast milk. About 54% (53.7%) of all women reported breastfeeding directly on breast for the same duration as planned, while 18.6% breastfed for a shorter and 12.7% for a longer duration than planned.
The bivariate analysis showed a significant association between the impact of COVID-19 on breastfeeding directly on breast and PND in the pooled model (p < 0.001) and, by country, in Taiwan, Thailand and the UK (p < 0.01). In terms of expressed breast milk feeding, 40.8% fed their baby expressed breast milk for the same duration as planned, 20.1% for a shorter and 11.5% for a longer duration than planned. The bivariate analysis revealed a significant association with PND in the pooled model (p < 0.0001) and in almost all countries (p < 0.05), except South Korea. (Table note: 'Longer than intended' was the reference category, with OR 1.00 in all models. Model I included covariates of country (Brazil, South Korea, Taiwan, Thailand or the UK), maternal age (18-29, 30-39 or 40-49), intended pregnancy (yes or no), mode of childbirth (vaginal or caesarean section), health problem of mother during pregnancy, delivery or postpartum (yes or no), work status (employed, on paid maternity leave, on unpaid maternity leave or housewife/unemployed), residence (urban or rural), marital status (married or others), parity (1 or 2+), birthweight (<2.5, 2.5-3.5 or >3.5 kg), preterm birth (yes or no), breastfeeding directly on breast in the last 24 h (yes or no), expressed breast milk in the last 24 h (yes or no), infant formula in the last 24 h (yes or no), solid, semi-solid or soft foods in the last 24 h (yes or no), social support (score) and number of post-natal care visits (never, 1-2, 3 or 4+). Model II added COVID-19 knowledge, attitudes and practices (scores) and breastfeeding belief (score). Model III added changes in food insecurity before and during COVID-19 (no change (insecure), worse, better, or no change (secure)), ever diagnosed as COVID-19 positive (yes or no), and COVID-19 vaccination uptake (yes or no).) After further adjusting for COVID-19 KAP and belief towards breastfeeding in relation to COVID-19 in model II and for the COVID-19 impact variables in model III, the associations became slightly greater (AOR 1.59, 95% CI 1.15-2.19 and AOR 1.58, 95% CI 1.15-2.18, respectively).
| DISCUSSION
To the best of our knowledge, this is the first international study investigating the associations between breastfeeding intention, breastfeeding practices and PND during the COVID-19 pandemic.
We focus the discussion on the associations between the outcome variable and the independent variables examined in this article. Discussions focusing specifically on (a) breastfeeding and (b) PND are presented as separate topics in separate articles (Coca et al., 2022). Some key pooled results from the five countries in this study were that women who intended to breastfeed during pregnancy had lower odds of having PND (p < 0.0001), while women who had no breastfeeding intention but actually breastfed (p < 0.0001) and women who ceased breastfeeding directly on breast earlier than planned (p < 0.0001) had higher odds of having PND; women with PND were also more likely to cease breastfeeding (directly on breast and expressed breast milk) earlier than they had planned (p < 0.0001) compared to those with no PND.
Similar findings were reported in pre-pandemic studies. For example, Gregory et al. (2015) found that women who met their breastfeeding expectations had lower odds of post-natal depressive symptoms compared to those who did not. However, our study did not establish causal relationships. For example, we do not know whether women already had depressive symptoms before childbirth and then developed PND, which may have affected their breastfeeding decisions, or whether women's decisions to stop breastfeeding earlier than planned due to the impact of COVID-19 may have triggered their PND, although previous pre-pandemic studies have reported bidirectional relationships between breastfeeding and PND (Dias & Figueiredo, 2015; Pope & Mazmanian, 2016). Nevertheless, our findings demonstrate the importance of supporting women to achieve their breastfeeding plans, and when their infant feeding plans change, while at the same time caring for their mental health.
We further analysed the association between the type of change in breastfeeding plans and PND. In pooled models, we found statistically significant results, before and after adjusting for confounders including COVID-19-related factors, showing that women with no intention to breastfeed who actually breastfed had greater odds of having PND than women who intended to breastfeed and breastfed.
This showed the importance of understanding women's breastfeeding intentions, working with women on those intentions, and recognising that their subsequent breastfeeding practices may differ from their intentions. Borra et al. (2015) reported a similar finding: among women who did not have depressive symptoms during pregnancy, breastfeeding increased the risk of PND in women who had not intended to breastfeed and decreased the risk of PND in women who had intended to breastfeed. Other statistically significant results in our pooled models, before and after adjusting for confounders, were that women who breastfed directly on breast for a shorter duration than they planned had greater odds of having PND than those who breastfed longer than they planned. Many pre-pandemic studies have shown that women who had PND were more likely to have a shorter breastfeeding duration than those who did not have PND (Butler et al., 2021; Dias & Figueiredo, 2015; Pope & Mazmanian, 2016). On the other hand, Costantini et al. (2021) conducted an online survey in the UK during the COVID-19 lockdown with women whose children were aged 0-3 years and found no statistically significant difference in post-natal depressive scores, as measured by the Patient Health Questionnaire (PHQ-9), between women who breastfed for more than 6 months and those who breastfed for less than 6 months. However, Costantini et al. (2021) did not investigate whether women changed their breastfeeding plans.
As mentioned above, statistically significant associations were found between COVID-19-related changes in breastfeeding practices and PND. An online survey of UK post-natal women over 4 weeks in May 2020 and June 2020 found that women ceased breastfeeding due to COVID-19-related concerns, such as lockdown and lack of support (Brown & Shenker, 2021). A survey by Piankusol et al. (2021) conducted between 17th July 2020 and 17th October 2020 reported that lack of family support with infant feeding was a risk factor associated with changed breastfeeding practices (e.g., reduced breastfeeding frequency). Further research should be conducted to understand the specific aspects of COVID-19 that impact women's breastfeeding plans and mental health outcomes, especially as 'living with COVID' becomes the norm. The importance of receiving support from health care professionals, partners, family members, and friends for breastfeeding and PND has consistently been shown in studies before and during COVID-19 (da Silva Tanganhito et al., 2020; Myers & Emmott, 2021; Pacheco et al., 2021). The statistically significant associations found in our study between breastfeeding intention, breastfeeding practices and PND further illustrate the importance of effective breastfeeding support from health care professionals, partners and families, tailoring infant feeding support to women's needs and decisions and minimising the risk of PND. A Canadian prospective pre-COVID study reported that women who experienced breastfeeding challenges had lower EPDS scores when they did not report a negative experience with breastfeeding support (Chaput et al., 2016). This highlights the need for positive and high-quality breastfeeding support for post-natal women to reduce the risk of PND (Chaput et al., 2016). There are few interventions to support women at risk of PND with breastfeeding. The Reach Out, Stand Strong, Essentials for new mothers (ROSE) study, which utilised group interpersonal therapy to promote social support and self-care in low-income pregnant women at risk of PND, indicated a positive outcome of increased breastfeeding duration among women receiving the therapy, although further evidence is needed (Kao et al., 2015). Those providing such support also need support themselves to be able to support women (Chang et al., 2021).
Breastfeeding peer supporters were found to be beneficial in providing not only practical breastfeeding support but also emotional/psychological support, as well as decreasing social isolation, which was reported as a challenge for post-natal women during the pandemic due to restrictive measures and lack of support (Ipsos MORI, 2021). Some peer support, including face-to-face in-person interactions, continued to be provided during the COVID-19 lockdown (Hann et al., 2021). Evaluating and learning from these support services may help inform the development of improved breastfeeding and mental health support, integrated into health services for women and their families.
| Limitations
This study has several limitations. Due to the use of convenience sampling, the findings may not generalise to all post-natal women in the participating countries or in other countries. Other limitations include the nature of assessing breastfeeding and PND with self-report, partly retrospective data, which increases the risk of social desirability bias and may lead to recall bias. We used the EPDS with a cutoff point of 13, although previous research has shown that identifying PND using a tool such as the EPDS may be insufficient to recognise women who need support (Fellmeth et al., 2019). Further, we did not ask whether women had antenatal depression or previous mental illness, or had received support for previous mental illness. Despite these limitations, this study has certain strengths. Our study is the first to address the relationships between (un)changed breastfeeding plans and PND.
Furthermore, being an international study across five countries on this important topic during COVID-19 is a strength and the results can also inform practices and policies for future pandemics.
| CONCLUSIONS
Our study highlighted that breastfeeding intention during pregnancy and changes to breastfeeding plans were associated with PND during the COVID-19 pandemic. Further investigation is needed to identify effective breastfeeding interventions for preventing PND and reducing post-natal depressive symptoms, compatible with the preventative measures of 'living with COVID'. Working with women on their breastfeeding intention during pregnancy is important. Post-natal care should include supporting women's breastfeeding decision-making and identifying breastfeeding/infant feeding support needs for women more likely to be at risk of PND. It is also essential that policymakers and health care providers provide guidance and take action to mitigate the long-term effects of unmet breastfeeding plans and PND, and to prevent or reduce occurrences of PND, post-natal depressive symptoms and negative breastfeeding outcomes for women during the pandemic and beyond. | 2022-11-10T06:17:00.168Z | 2022-11-09T00:00:00.000 | {
"year": 2022,
"sha1": "f390c547b3e1c4106efd6b07301c93d6b5979324",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/mcn.13450",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4fd1e964a292cbcd29ea3258893578088e8a794a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55547852 | pes2o/s2orc | v3-fos-license | The effective supergravity of Little String Theory
In this work we present the minimal supersymmetric extension of the five-dimensional dilaton-gravity theory that captures the main properties of the holographic dual of little string theory. It is described by a particular gauging of $\mathcal{N}=2$ supergravity coupled with one vector multiplet associated to the string dilaton, along the $U(1)$ subgroup of $SU(2)$ R-symmetry. The linear dilaton in the fifth coordinate solution of the equations of motion (with flat string frame metric) breaks half of the supersymmetries to $\mathcal{N}=1$ in four dimensions. The non-supersymmetric version of this model was found recently as a continuum limit of a discretised version of the so-called clockwork setup.
Introduction
Besides its own theoretical interest, Little String Theory provides a framework with interesting phenomenological consequences. On one hand, it offers a way to address the hierarchy problem when the string scale is at the TeV scale [1,2,3], without postulating large extra dimensions (in string units) but instead an ultra-weak string coupling [4,5]. On the other hand, LST appeared recently as a continuum limit of the so-called clockwork models [6], which address the hierarchy problem in an a priori different way [7,8].
Little String Theory (LST) corresponds to a non-trivial weak coupling limit of string theory in six dimensions with gravity decoupled and is generated by stacks of Neveu-Schwarz (NS5) branes [9]. Its holographic dual corresponds to a seven-dimensional gravitational background with flat string-frame metric and a dilaton linear in the extra dimension [10]. Its properties can be studied in a simpler toy model by reducing the theory to five dimensions. To reintroduce weakly coupled gravity, one has to compactify the extra dimension on an interval and place the Standard Model on one of the boundaries, in analogy with the Randall-Sundrum model [11] on a slice of a five-dimensional (5d) anti-de Sitter bulk [1].
Since we know that the bulk LST geometry preserves space-time supersymmetry, in this work we study the corresponding effective supergravity, which in the minimal case is N = 2. In principle, there should be a generalisation with more supersymmetries, or equivalently in higher dimensions. The N = 2 gravity multiplet contains the graviton, a graviphoton and the gravitino (8 bosonic and 8 fermionic degrees of freedom), while the heterotic (or type I) string dilaton sits in a vector multiplet containing a vector, a real scalar and a fermion. The corresponding supergravity action [13] admits a gauging of the U(1) subgroup of the SU(2) R-symmetry, which generates a potential for the single scalar field [13,14]. This potential depends on two parameters, allowing a multitude of possibilities with critical or non-critical points, or even a flat potential with supersymmetry breaking. Here, we observe that the vanishing of one of the parameters generates the runaway dilaton potential of the non-critical string. This potential has no critical point with 5d maximal symmetry, but it leads to the linear dilaton solution in the fifth coordinate that preserves 4d Poincaré symmetry. We show that this solution breaks one of the two supersymmetries, leading to N = 1 in four dimensions.
The outline of the paper is the following. In Section 2, we review the gauged N = 2 supergravity in five dimensions, based on the references [12,13,14], and specialize in the case of one vector multiplet using the results of the string effective action of ref. [15]. In Section 3, we present the 5d graviton-dilaton toy model that describes the holographic dual of LST and identify it with a particular choice of the gauging of the N = 2 supergravity. We also show that the linear dilaton solution preserves half of the supersymmetries, i.e. N = 1 in four dimensions. In Section 4, we write the complete Lagrangian, including the fermion terms, depending on three constant parameters. In Section 5, we derive the spectrum classified using the 4d Poincaré symmetry and we conclude with some phenomenological remarks. Finally, there are three appendices containing our conventions, the equations of motion with the linear dilaton solution, and some explicit calculations that we use in the study of supersymmetry transformations.
Gauged N = 2, D = 5 Supergravity
The references used in the following are [13], [12] and [14], while our conventions may be found in Appendix A. In D = 5 spacetime dimensions, the pure N = 2 supergravity multiplet contains the graviton e^m_M, the gravitino SU(2)-doublet ψ^i_M (where i is the SU(2) index) and the graviphoton, while the N = 2 Maxwell multiplet contains a real scalar φ, an SU(2) fermion doublet λ^i and a gauge field. Upon coupling n Maxwell multiplets to pure N = 2, D = 5 supergravity, the total field content of the coupled theory carries indices I = 0, 1, ..., n, a = 1, ..., n and x = 1, ..., n. The real scalars φ^x can be seen as coordinates of an n-dimensional space M with metric g_{xy}, which is symmetric for our purposes, while the spinor fields λ^{ia} transform in the n-dimensional representation of SO(n), the tangent space group of M, with f^a_x the corresponding vielbein. In the bosonic part of the Lagrangian, e = det(e^m_M), ω is the spacetime spin connection, G_{IJ} is the symmetric gauge kinetic metric, C_{IJK} are totally symmetric constants, and the gravitational coupling κ has been set equal to 1. In the supersymmetry transformations of the fermions, ǫ^i is the supersymmetry spinor parameter and the dots stand for terms that vanish in the vacuum. In fact, the n-dimensional space M can be seen as a hypersurface of an (n+1)-dimensional space E with coordinates ξ^I, where F is the additional coordinate of E compared to M. It can be shown that F is a homogeneous polynomial of degree three, with β = 2/3, and that on M the scalars φ^x satisfy the corresponding cubic constraint; further relations hold involving ∂_I = ∂/∂ξ^I and ∂_x = ∂/∂φ^x. Finally, we note that the symmetric third-rank tensor T_{xyz} on M is covariantly constant for the symmetric M that we will be concerned with, and thus satisfies an algebraic constraint. The gauging of the U(1) subgroup of SU(2) generates a scalar potential P, built from functions P_0 and P_a of the scalars φ^x that satisfy constraints imposed by supersymmetry, in which the symbols ',' and ';' denote differentiation and covariant differentiation respectively, and P_x = f^a_x P_a. The functions P_0 and P_a also appear in the fermion transformations, which get deformed due to the gauging: δ̃ denotes the supersymmetry transformation after the gauging (under which the deformed action is invariant), g is the U(1) coupling constant, Γ_µ is the Γ-matrix in five spacetime dimensions, and the dots stand again for terms that vanish in the vacuum. Now let us consider the case in which there is only one real physical scalar s. In the following, we use t to denote the additional coordinate on E, namely ξ^I = ξ^I(s, t), I = 0, 1. The effective supergravity related to the 5-dimensional model for the gravity dual of LST is specified by a cubic polynomial F depending on a constant parameter a. Indeed, in the graviton-dilaton system obtained from string compactifications in five dimensions, the first term of F corresponds to the tree-level contribution (identifying t with the inverse heterotic string coupling) and the second term to the one-loop correction [15] (note a change of notation between s and t compared to Ref. [15]). The solution of the constraint (7) then determines ξ^I(s, t), and with it the components of the gauge kinetic metric.
We then find the scalar metric, the Christoffel symbols and the third-rank tensor (each of which has only one component), where we have used (9) to compute T_{sss}. The system (11) then takes a form whose solution involves two constant parameters A and B. Using (10) we find the potential, so that, writing the kinetic term and the potential for s in canonical form, we obtain the Lagrangian for the canonically normalized scalar Φ.
The 5D dual of LST
The holographic dual of six-dimensional Little String Theory can be approximated by a five-dimensional model, in which the bulk Lagrangian takes the following form in the Einstein frame, where Φ̃ is the dilaton and Λ is a constant (we neglect the remaining spectator five dimensions of the string background, which play no role in the properties of the model relevant for our analysis). Upon redefining the dilaton and setting the gravitational coupling κ in five dimensions equal to one (κ² = 1/M₅³, where M₅ is the Planck mass in five dimensions), we obtain the Lagrangian for the canonically normalized dilaton Φ. We thus observe that the potential that arises from LST is equal to the potential in (22) for a scalar that belongs to a gauged N = 2, D = 5 Maxwell multiplet coupled to supergravity, upon making the corresponding identification of parameters. Moreover, it is known that the dilaton potential in (25) exhibits a runaway behaviour and does not have a 5-dimensional maximally symmetric vacuum, but has a 4-dimensional Poincaré vacuum in the linear dilaton background Φ = Cy, where y > 0 is the fifth dimension and C a constant parameter. The background bulk metric then involves η_{µν}, the Minkowski metric of four-dimensional space, under the fine-tuning condition of Appendix B. To have at least one unbroken supersymmetry, the fermion transformations must vanish in the vacuum for at least one linear combination of the supersymmetry parameters. Using equations (27), the fermion transformations (12) take a simple form on the four-dimensional brane (in the vacuum). Upon diagonalizing the second of equations (31) and using (30), we find that N = 2 supersymmetry is partially broken to N = 1. We thus identify λ^1 + iΓ_5 λ^2 with the fermion residing in a multiplet of the unbroken N = 1 supersymmetry and iΓ_5 λ^1 + λ^2 with the Goldstino of the broken N = 1 supersymmetry. To determine the dependence of ǫ^i on y, we consider the 5-th component of the first of the equations (12) in the vacuum, which expresses ǫ^i in terms of a constant symplectic spinor. The above relations are consistent with the direction of the unbroken supersymmetry ǫ^2 = −iΓ_5 ǫ^1 from eq. (32).
Final Lagrangian
In the Lagrangian of ungauged N = 2, D = 5 supergravity, Ω^{ab}_x is the spin connection of the scalar manifold, and h_I, h^I_x and Φ_{Ixy} are functions of the scalars that will be defined later.
Upon gauging the U(1), the Lagrangian acquires additional terms and the derivatives become covariant, with υ_I an arbitrary constant vector. Using (16) and (27) we find the corresponding expressions for a single scalar. In addition, after the gauging, the following equations hold [13]: P_0 = 2 h^I υ_I and P_a = √2 h^{Ia} υ_I, so using (27) we find the potential, where we have assumed that υ^I υ_I = 1 for simplicity. It then follows, using the fact that G_{IJ} raises and lowers the I, J indices, that the expressions simplify for a single scalar. Using (15), we find the final Lagrangian L̃ = L + L′, which includes a term ∝ Φ λ^i λ^j δ_{ij}, up to 4-fermion terms.
Here A^0_M and A^1_M correspond to the graviphoton and the gauge field of the vector multiplet respectively, and we have set ῡ = υ_0 + a υ_1. Since the parameter A appears only through the combination gA in the additional terms L′ induced by the gauging, we choose to set A = 1. Moreover, at tree level we may set a = 0, as discussed in Section 2. The final Lagrangian then again contains the term ∝ Φ λ^i λ^j δ_{ij}, up to 4-fermion terms.
This Lagrangian has three free parameters: g, υ_0 and υ_1.
Spectrum and concluding remarks
The spectrum of the above model can be decomposed using the 4d Poincaré invariance of the linear dilaton vacuum solution and should obviously form N = 1 supermultiplets. It is known that every 5d field gives rise to a 4d zero mode and a continuum starting from a mass gap fixed by the linear dilaton coefficient C = g/√2. Using the results of Ref. [1] and the correspondence (24), one finds that the parameter α of [1] is given by α = √3 C, which fixes the mass gap M_gap. The continuum becomes an ordinary discrete Kaluza-Klein (KK) spectrum on top of the mass gap when the fifth coordinate y is compactified on an interval [1], allowing the Standard Model (SM) to be introduced on one of the boundaries. This spectrum is valid for the graviton, the dilaton and their superpartners by supersymmetry. Notice that the 5d graviton zero mode has five polarisations, corresponding to the 4d graviton, a KK vector and the radion. For the rest of the fields, special attention is needed because of the gauging that breaks half of the supersymmetry around the linear dilaton solution.
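For orientation only, such a tower above the gap can be tabulated as below; the specific interval formula m_n = (M_gap² + (nπ/L)²)^{1/2} is an assumption standing in for the expression implied by Ref. [1], not a result derived here.

```python
import math

def kk_masses(M_gap, L, n_max):
    """Hypothetical discrete KK masses above the gap for an interval of
    length L (units with mass ~ 1/length)."""
    return [math.sqrt(M_gap**2 + (n * math.pi / L)**2)
            for n in range(1, n_max + 1)]

# Example: gap normalised to 1, interval length 10 in the same units.
print([round(m, 3) for m in kk_masses(M_gap=1.0, L=10.0, n_max=5)])
```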
Indeed, one of the 4d gravitini acquires a mass fixed by g, giving rise to a massive spin-3/2 multiplet together with two spin-1 vectors. These are the 5d graviphoton and the additional 5d vector, which have non-canonical, dilaton-dependent kinetic terms, as one can see from the Lagrangian (47). Using the background (28), (29), one finds that the y-dependence of the vector kinetic terms at the end of the first line of (47) is exp(±√3 C y), with the plus (minus) sign corresponding to the 5d graviphoton I = 0 (extra vector I = 1). It follows that both acquire a mass given by the mass gap.
We conclude with some comments on possible phenomenological implications of the above Lagrangian. One has to dimensionally reduce it from D = 5 to D = 4 upon compactification of the y-coordinate. Moreover, one has to introduce the SM, possibly on one of the boundaries, a radion stabilisation mechanism and the breaking of the leftover supersymmetry. An interesting possibility is to combine all of them along the lines of the stabilisation proposal of [3] based on boundary conditions. There are several possibilities for Dark Matter (DM) candidates in this gravitational sector. There are two gravitini that, upon supersymmetry breaking, can recombine to form a Dirac gravitino [16] or remain two distinct Majorana ones. Depending on the nature of their mass, the exact freeze-out mechanism will differ. There are three possible dark photons, A^0_µ, A^1_µ and the KK U(1) coming from the 5d metric, that could also be DM, or their associated gaugini could play a similar role, again depending on the compactification of the extra coordinate, on how supersymmetry breaking is implemented, and on the radion stabilisation mechanism. In general there could be a very rich phenomenology in the gravitational sector.
Regarding LHC or FCC phenomenology, much will depend on how the SM fields are included in this setup; we leave this to a forthcoming publication [17]. In general, this theory will have massive KK resonances that could be strongly coupled to the SM, in a similar fashion to Randall-Sundrum models [11].
Note Added
After the completion of this work, we received the paper [18] which contains very similar results.
A Conventions
Our convention for the five-dimensional Minkowski metric is η_{mn} = diag(−, +, +, +, +) (A.1), where m, n, ... are inert indices and m = 1, ..., 5. For the Γ-matrices we adopt the standard conventions, with γ_5 the standard γ_5 of four dimensions in the Dirac representation. The five-dimensional bulk metric of the LST dual is that of the linear dilaton background of Section 3.
B Einstein equation in 5D
In our conventions, the Einstein equation takes the form G_{MN} = T_{MN} (with κ = 1), where G_{MN} and T_{MN} are the Einstein and energy-momentum tensors respectively. Moreover, we have Φ = Cy in our case, which determines the source terms. In addition, since ∂_µ ǫ^i = 0, we have (in the vacuum) on the brane that, using the first of equations (27), the first of equations (12) simplifies, while its 5-th component takes the form used in Section 3. | 2017-10-16T08:45:56.000Z | 2017-10-16T00:00:00.000 | {
"year": 2018,
"sha1": "659a60c1980cc715917c7dfd507c8a1642a5ee3a",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-018-5632-4.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "659a60c1980cc715917c7dfd507c8a1642a5ee3a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
235481627 | pes2o/s2orc | v3-fos-license | Circ_0008542 in osteoblast exosomes promotes osteoclast-induced bone resorption through m6A methylation
With an increasingly aging society, China is the world's fastest-growing market for oral implants. Compared with traditional oral implants, immediate implants cause marginal bone resorption and increase the failure rate of osseointegration, but the mechanism remains unknown. It is therefore important to further study the mechanisms by which tension stimuli act on osteoblasts and osteoclasts at the early stage of osseointegration, in order to promote rapid osseointegration around oral implants. The results showed that exosomes containing circ_0008542 from MC3T3-E1 cells subjected to prolonged tensile stimulation promoted osteoclast differentiation and bone resorption. Circ_0008542 upregulated Tnfrsf11a (RANK) gene expression by acting as a miR-185-5p sponge. Meanwhile, the circ_0008542 1916-1992 bp segment exhibited increased m6A methylation levels. Inhibiting the RNA methyltransferase METTL3 or overexpressing the RNA demethylase ALKBH5 reversed the osteoclast differentiation and bone resorption induced by circ_0008542. Injection of circ_0008542 + ALKBH5 into the tail vein of mice reversed the same effects in vivo. A site-directed mutagenesis study demonstrated that position 1956 bp of circ_0008542 is the m6A functional site with the abovementioned biological functions. In conclusion, the RNA methylase METTL3 acts on the m6A functional site at 1956 bp in circ_0008542, promoting competitive binding of miRNA-185-5p by circ_0008542 and leading to an increase in the target gene RANK and the initiation of osteoclast bone resorption. In contrast, the RNA demethylase ALKBH5 inhibits the binding of circ_0008542 to miRNA-185-5p, correcting the bone resorption process. The potential value of this study lies in providing methods to enhance the resistance of immediate implants to marginal bone resorption through the use of exosomes releasing ALKBH5. Key words: Primordial... [sic] Circ_0008542, osteoblast exosomes, m6A methylation.
INTRODUCTION
Maximizing osseointegration is the key factor in successful implant placement. At the interface where osseointegration occurs, osteoclast-induced bone resorption and osteoblast-induced bone formation are in dynamic balance. However, under some pathological conditions osteoclast differentiation and bone resorption are enhanced and disrupt the balance of bone remodeling [1,2]. It is well known that immediate implantation after loss of a single tooth or partial loss of multiple teeth increases the difficulty of controlling the stress load and the possibility of osseointegration failure. Previous studies have found that immediate loading of implants can easily lead to bone resorption at the edge of the implant neck [3], indicating that stress concentration at the implant neck is potentially related to marginal bone resorption there.
Exosomes are widely distributed, contain proteins and nucleic acids and are involved in various physiological and pathological processes [4]. Since communication between osteoblasts and osteoclasts is crucial for regulating environmental stability in bone tissues, exosomes can be transported to adjacent or distant cells to send a series of signals affecting environmental stability in bone tissues and regulate bone growth and homeostasis [5]. The exosome levels released by cells are significantly increased under pathological conditions, and the contents are significantly different from those in physiological conditions. This demonstrates that the donor cells exhibit precise targeted regulation of exosomal contents and reflects the significance of differential molecules in the process of disease formation. In recent years, in addition to the proven classical signaling pathways RANKL-RANK-OPG and Ephrin-Eph, interactions between osteoblasts and osteoclasts regulated by exosomes have attracted increasing attention, revealing many mechanisms related to bone metabolism and bone diseases [6][7][8][9].
Exosomes contain a variety of noncoding RNAs, which have become new targets for the clinical diagnosis and prognostic assessment of diseases [10,11]. CircRNA is a noncoding RNA with a closed circular structure that is not easily degraded by ribonucleases [12,13]. Current research on circRNAs has primarily focused on their roles in regulating transcription and translation, encoding proteins, and acting as molecular sponges for miRNAs [14][15][16]. As circRNAs play important roles in a variety of diseases, they are of great research value [17,18]. In the skeletal system, studies have confirmed that circRNAs participate in the regulation of bone remodeling through the miRNA-mRNA axis, acting as molecular sponges [19][20][21].
M6A methylation is the most common form of RNA methylation and, as an epigenetic modification, participates in and regulates many important functions of RNA [22]. M6A methylation is a dynamic and reversible process: the METTL3/METTL14 methyltransferase complex and the WTAP cofactor are primarily responsible for depositing m6A, while demethylation is primarily carried out by the demethylases FTO and ALKBH5 [23][24][25][26][27]. At present, most studies have focused on the role of m6A modification of mRNA in posttranscriptional regulation; m6A modification of pri-miRNA promotes the generation of mature miRNA after transcription, and m6A modification of circRNA also promotes circRNA translation [28][29][30]. Recent studies have confirmed that m6A methylation regulates the molecular sponge effect between lincRNA1281 and the let-7 miRNA family, thus affecting the normal differentiation of mouse embryonic stem cells; this RNA-RNA interaction is m6A methylation-dependent [31]. Another study performed a motif search among m6A regions of multiple cell types and found that over two-thirds of the identified RRACH motifs were reverse complementary to the seed sequences of one or more miRNAs, indicating that m6A peak regions might be targeted by miRNAs [32]. However, there are currently no reports of m6A modification regulating the binding of circRNAs to miRNAs.
Based on the above theories, this study examined the molecular mechanism of the interaction between osteoblasts and osteoclasts in the osseointegration microenvironment. After improper tension stimulation, abnormal molecular signals within osteoblast exosomes promoted osteoclast-induced bone resorption, leading to osseointegration destruction around the implant.
Characterization of circ_0008542
Exosomes were separately collected from MC3T3-E1 cell supernatants with or without tension stimulation (Flexcell culture with 20% amplitude/1 Hz/24 h). Total RNA extracted from these two groups was subjected to high-throughput sequencing. Differentially expressed circRNAs between the two groups are shown in the heat map (Fig. 1A and Table S1). Circ_0008542 was highly expressed in exosomes with tension stimulation. Bioinformatics prediction showed that circ_0008542 targets and binds the miRNA-185-5p-RANK axis (Fig. 1B). Circ_0008542 is derived from the host gene Rrp15. It is located on chromosome 1: 188,559,994-188,563,742 (2475 nt), and the genomic structure suggests that circ_0008542 consists of three exons and one intron from the Rrp15 gene locus (Fig. 1C). Sanger sequencing demonstrated that circ_0008542 forms a head-to-tail splice junction matching the sequence reported for circ_0008542 in circBase (Fig. 1D and Table S2). In addition, divergent and convergent primers were designed for circ_0008542, and complementary DNA (cDNA) and genomic DNA (gDNA) were used as templates. A single, distinct product of the expected size (a 127 bp fragment) was amplified with the circ_0008542 divergent primers from cDNA only, with no amplification product from gDNA (Fig. 1E). Ribonuclease R (RNase R) digestion showed that circ_0008542 remained stable after RNase R treatment (Fig. 1F, G).
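The divergent/convergent primer logic hinges on the head-to-tail junction existing only in the circular transcript. A schematic check, using a made-up exon sequence rather than the real circ_0008542 sequence:

```python
def backsplice_junction(exon_seq, k=10):
    """Junction sequence of a circRNA: the last k nt of the concatenated
    exons joined head-to-tail to the first k nt. Divergent primers
    amplify across this junction, hence a product from cDNA but not
    from linear gDNA."""
    return exon_seq[-k:] + exon_seq[:k]

def read_spans_junction(read, junction, min_overlap=5):
    """True if the read covers the back-splice point with at least
    min_overlap nt on each side."""
    mid = len(junction) // 2
    i = junction.find(read)
    return (i != -1 and i <= mid - min_overlap
            and i + len(read) >= mid + min_overlap)

exons = "ATGGCTTACGATCCGTTAGCAAGGTCCATTGACT"   # toy sequence
junction = backsplice_junction(exons)
print(read_spans_junction("TTGACTATGGCT", junction))  # True
```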
Exosomes from MC3T3-E1 cells regulate osteoclast differentiation and bone resorption
Representative exosomes were observed under transmission electron microscopy. The size distribution indicated that particle diameters were mostly concentrated around 100 nm (Fig. 2A). Regardless of the presence or absence of additional tension stimulation, the characteristic membrane proteins HSP70, TSG101, and CD63 were positive in MC3T3-E1 cell lysates and exosomes, while the characteristic nuclear proteins TFIIB and Lamin A/C were positive in MC3T3-E1 cell lysates but negative in exosomes (Fig. 2B). RT-qPCR showed that expression levels of circ_0008542 in exosomes increased with extended tension stimulation time (Fig. 2C). Next, an immunofluorescence assay demonstrated that exosomes labelled with PKH26 could be taken up by osteoclast precursor cells (Fig. 2D). After adding exosomes from the tension stimulation group, expression levels of circ_0008542 increased in RAW264.7 cells according to RT-qPCR. Regardless of whether exosomes came from the group with or without tension stimulation, there was no change in expression of Rrp15, the host gene of circ_0008542, in RAW264.7 cells, indicating that the increased circ_0008542 was derived from the exosomes (Fig. 2E). Alizarin red S (ARS) and alkaline phosphatase (ALP) staining were applied to assess osteoblast bone formation after different tension stimulation times. Short-term tension stimulation (less than 18 h) promoted osteoblast bone formation, but long-term tension stimulation (more than 24 h) had the opposite effect in both MC3T3-E1 cells and bone marrow stromal cells (BMSCs) (Fig. 2F). In addition, expression levels of the bone formation marker genes ALP, Runx2, Bglap, and Col1α1 in MC3T3-E1 cells and BMSCs showed the same tendency in RT-qPCR (Fig. 2G). We then evaluated the direct effects of exosomes on RANKL-induced osteoclast formation. Tartrate-resistant acid phosphatase (TRAP) and F-actin band staining showed an increased number of osteoclasts with large TRAP-positive cell bodies, more nuclei, and enlarged F-actin bands in the tension stimulation group compared to the no tension stimulation group. To determine the effect of exosomes on osteoclastic activity, we employed a pit formation assay; in response, the area of resorption pits was remarkably enlarged in the tension stimulation group (Fig. 2H). Western blotting revealed that RANKL-induced expression of c-fos, NFATc1, RANK, and NFκB p-P65 was upregulated in response to exosomes from the tension stimulation group (Fig. 2I). The histograms showed that the coverage rate, number, and nuclei number of TRAP-positive osteoclasts were significantly increased after the addition of exosomes with tension stimulation. The opposite tendency was observed in the bone resorption area rate (Fig. 2J). RT-qPCR analysis revealed that RANKL-induced expression of Ctsk, MMP9, and TRAP mRNA was increased in response to exosomes from the tension stimulation group (Fig. 2K). These results implied that exosomes from the tension stimulation group positively regulate RANKL-induced osteoclastic differentiation and function.
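Relative expression values of the kind reported here are conventionally obtained with the 2^-ΔΔCt method; a sketch (the Ct values and the choice of reference gene are illustrative assumptions):

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method, normalising the
    target Ct to a reference gene in each condition."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(dct_treated - dct_control)

# Example: circ_0008542 Ct drops by two cycles relative to the reference
# after tension stimulation -> about 4-fold higher expression.
print(fold_change_ddct(24.0, 18.0, 26.0, 18.0))  # 4.0
```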
Circ_0008542 in exosomes upregulates osteoclast differentiation and bone resorption
We first transfected circ_0008542 into RAW264.7 cells or added exosomes overexpressing circ_0008542 to RAW264.7 cells. As shown by the results of the abovementioned experiments (Fig. 3A-E), circ_0008542 significantly increased osteoclast differentiation and bone resorption compared to the NC groups. Next, the circ_0008542 fragment or the RANK gene fragment, with wild-type or mutant complementary binding sites, was separately inserted into the luciferase reporter, and miRNA-185-5p mimic and NC were constructed. The luciferase activity of circ_0008542-wt or RANK-wt was significantly inhibited in the miRNA-185-5p mimic group compared to the NC group (Fig. 3F). In addition, circ_0008542 was predominantly localized in the cytoplasm, as evaluated by cytoplasmic and nuclear fractionation assay (Fig. 3G). RNA-induced silencing complexes (RISCs) are formed by miRNA ribonucleoprotein complexes (miRNPs), which are present in anti-AGO2 immunoprecipitates; anti-AGO2 immunoprecipitates therefore contain miRNAs and their interacting RNA components. An RNA immunoprecipitation (RIP) assay revealed that circ_0008542 was enriched in miRNPs containing AGO2 compared to anti-IgG immunoprecipitates (Fig. 3H). An RNA pulldown assay showed significant enrichment of circ_0008542 in the miRNA-185-5p-captured fraction compared with the NC group (Fig. 3I).
Fig. 1 B Bioinformatics prediction of the circ_0008542/miRNA-185-5p/RANK axis. C Schematic diagram showing the genomic location and backsplicing pattern of circ_0008542. D Sanger sequencing of circ_0008542. The arrow shows the "head-to-tail" splicing site. E Divergent primers detected circ_0008542 from cDNA by PCR and agarose gel electrophoresis, rather than from gDNA. F Circ_0008542 from cDNA was amplified with divergent primers, even after treatment with RNase R digestion, and the opposite results were observed for gDNA. G Relative expression level of circ_0008542 in exosomes between two groups with or without RNase R digestion. Data were representative of three independent experiments expressed as the mean ± SD (*p < 0.05).

Only the segment circ_0008542-9 reveals a high level of m6A methylation

We applied the SRAMP website to predict the abundance of m6A methylation loci on circ_0008542. Ten potential loci were observed over the full length of circ_0008542 (Fig. S2A). To determine the effective m6A methylation segments in circ_0008542, we
designed seven pairs of primers that amplified seven sections of the circ_0008542 sequence. From the m6A-RT-qPCR results, only the circ_0008542-9 segment revealed a high level of m6A methylation (Fig. S2B). To determine the effect of the methyltransferase or demethylase on circ_0008542 m6A methylation, we transfected si-METTL3 or ALKBH5 into MC3T3-E1 cells. The previously high level of m6A methylation of circ_0008542-9 decreased when METTL3 was knocked down or ALKBH5 was overexpressed in MC3T3-E1 cells, based on m6A-RT-qPCR results (Fig. S2C-F), indicating that both METTL3 and ALKBH5 regulate m6A methylation levels through the circ_0008542-9 segment.
Inhibiting METTL3 or overexpressing ALKBH5 in MC3T3-E1 cells reverses the effects on osteoclast differentiation and bone resorption induced by adding exosomes with circ_0008542 overexpression

We transfected circ_0008542 or circ_0008542+si-METTL3/ALKBH5 into MC3T3-E1 cells, and then added the different exosomes containing overexpressed circ_0008542 to RAW264.7 cells. The same experiments as above were performed in this part. Increased osteoclast differentiation and bone resorption were notably identified in the circ_0008542 group but sharply decreased in the circ_0008542+si-METTL3/ALKBH5 group (Fig. 4A-E, I-M). Meanwhile, the RIP assay demonstrated that circ_0008542 and circ_0008542-9 were not enriched in miRNPs containing AGO2 after si-METTL3/ALKBH5 treatment (Fig. 4F, N). Circ_0008542 remained predominantly localized in the cytoplasm, as evaluated by cytoplasmic and nuclear fractionation assay after si-METTL3/ALKBH5 treatment (Fig. 4G, O). Compared with the NC group, the enrichment of circ_0008542 or circ_0008542-9 showed no change in the miRNA-185-5p-captured fraction after si-METTL3/ALKBH5 treatment (Fig. 4H, P). In addition, we transfected RANK and then added exosomes (overexpressing circ_0008542, from MC3T3-E1 cells after si-METTL3/ALKBH5 treatment) into RAW264.7 cells. The results showed that osteoclast differentiation and bone resorption were notably increased in the circ_0008542+si-METTL3/ALKBH5+RANK group compared to the circ_0008542+si-METTL3/ALKBH5 group (Fig. S3E-L). This indicates that the circ_0008542 m6A methylation level, which is regulated by METTL3 and ALKBH5, is closely related to its function in promoting osteoclast differentiation and bone resorption. Next, after adding exosomes with tension stimulation (Flexcell culture with 20% amplitude/1 Hz/24 h), the same tendency was observed under miRNA-185-5p mimic transfection or si-METTL3/ALKBH5 treatment (Fig. S4A-I).
MUT1956 circ_0008542 in exosomes loses its promoting function on osteoclast differentiation and bone resorption

We next applied site-directed mutagenesis to change the "A" at 1956 bp of circ_0008542 to "G". The mutation site was confirmed by Sanger sequencing (Fig. 5A). In addition, RT-PCR results showed that a single, distinct product of the expected size was amplified using divergent primers from cDNA only, while there was no amplification product from gDNA. RNase R digestion showed that MUT1956 circ_0008542 remained stable after RNase R treatment (Fig. 5A, B). After adding exosomes containing overexpressed MUT1956 circ_0008542, expression levels of MUT1956 circ_0008542 increased in exosomes and in RAW264.7 cells, as shown by the RT-qPCR results (Fig. 5B). Furthermore, MUT1956 circ_0008542 was predominantly localized in the cytoplasm, as evaluated by cytoplasmic and nuclear fractionation assay (Fig. 5C). We next transfected MUT1956 circ_0008542 into RAW264.7 cells or added exosomes containing overexpressed MUT1956 circ_0008542 to RAW264.7 cells, and performed the same experiments as above. No change in osteoclast differentiation or bone resorption was observed in the MUT1956 circ_0008542 groups (Fig. 5D-H). Meanwhile, the RIP assay demonstrated that MUT1956 circ_0008542 was not enriched in miRNPs containing AGO2 compared with anti-IgG immunoprecipitates (Fig. 5I). In addition, the level of m6A methylation of circ_0008542-9 showed no difference between the anti-m6A group and the anti-IgG group after MUT1956 circ_0008542 treatment, based on the m6A-RT-qPCR results (Fig. 5J), indicating that after mutation of the 1956 bp "A" to "G", circ_0008542 loses the m6A modification that is required for its promotion of osteoclast differentiation and bone resorption through the miR-185-5p/RANK axis. Next, to exclude the possibility that the 1956 bp mutation abolished function by altering the binding sequence itself rather than by removing the m6A modification, we applied MUT988 circ_0008542 to test osteoclast function. The 988 bp locus was the eighth predicted locus and shares the same GGACA sequence with the 1956 bp locus but revealed a low level of m6A methylation. MUT988 circ_0008542 significantly increased osteoclast differentiation compared to the NC group (Fig. S4J-O).
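The sketch below illustrates, on an invented sequence fragment (not the real circ_0008542 sequence), the kind of point mutation described above: replacing the central "A" of a GGACA m6A motif with "G", which removes the DRACH consensus at that site.

```python
# Toy illustration only; the sequence and positions are made up.
def mutate(seq: str, pos_1based: int, new_base: str) -> str:
    """Return seq with the base at a 1-based position replaced."""
    i = pos_1based - 1
    return seq[:i] + new_base + seq[i + 1:]

def find_ggaca(seq: str):
    """Return 1-based start positions of GGACA motifs."""
    return [i + 1 for i in range(len(seq) - 4) if seq[i:i + 5] == "GGACA"]

toy = "TTGGACATTTTGGACATT"      # two GGACA motifs, starting at positions 3 and 12
print(find_ggaca(toy))           # [3, 12]

# The methylated adenosine is the central A of GGACA (motif start + 2);
# mutating it to G destroys the first motif but leaves the second intact.
mut = mutate(toy, 3 + 2, "G")
print(mut, find_ggaca(mut))      # 'TTGGGCATTTTGGACATT' [12]
```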
ALKBH5 in exosomes displays a clear rescue effect on bone loss induced by circ_0008542 in vivo
Circ_0008542, MUT1956 circ_0008542, or ALKBH5 was transfected into MC3T3-E1 cells, separately or in combination according to the experimental group, and exosomes were collected from the cell supernatants. Micro-CT and hematoxylin and eosin (H&E) staining were applied to measure bone histological parameters. After 8 weeks of exosome injection, the circ_0008542 group displayed clear bone loss compared to the other four groups, including a decreased trabecular bone number, thinner metaphyseal trabeculae, and increased trabecular spacing. The circ_0008542 + ALKBH5 group displayed a clear rescue effect on bone loss compared to the circ_0008542 group (Fig. 6A). Compared to the other four groups, the circ_0008542 group showed markedly decreased bone volume/total volume (BV/TV), bone mineral density (BMD), trabecular thickness (Tb.Th), and trabecular number (Tb.N), and significantly increased trabecular separation (Tb.Sp). Compared to the circ_0008542 group, the circ_0008542 + ALKBH5 group showed markedly increased BMD, BV/TV, Tb.Th, and Tb.N and significantly decreased Tb.Sp (Fig. 6B). H&E staining exhibited the same alterations in mice after exosome injection (Fig. 6C). There was no body weight change in mice among the five groups (Fig. 6D). As expected from TRAP staining, circ_0008542 considerably increased the number of TRAP-positive osteoclasts compared to the other four groups, and the circ_0008542 + ALKBH5 group markedly reversed this phenomenon (Fig. 6E). Significant differences among the five groups in the number of TRAP-positive osteoclasts and the integrated optical density of TRAP are presented in the histograms (Fig. 6F). A schematic diagram of this study is presented in Fig. 6G.
DISCUSSION
Compared to traditional oral implants, immediate implants can shorten the unloaded healing period and improve treatment satisfaction, and they may become the most commonly used approach.

Fig. 2 The effects of exosomes in vitro. A Electron microscopy images of exosomes. Scale bar, 100 nm. Size distribution of exosomes secreted by MC3T3-E1 cells with or without tension stimulation. B Protein levels of TFIIB, Lamin A/C, HSP70, TSG101, and CD63 in MC3T3-E1 cell lysates or exosomes secreted by MC3T3-E1 cells with or without tension stimulation analyzed by western blot. C Relative expression level of circ_0008542 in exosomes at different tension stimulation times. D Immunofluorescence images of exosomes from MC3T3-E1 cells to RAW264.7 cells. Exosomes were labeled with PKH26. E Relative expression level of circ_0008542 or Rrp15 in RAW264.7 cells after addition of different exosomes. F After different tension stimulation times, Alizarin Red S or ALP staining was applied to MC3T3-E1 cells and BMSCs after 21 days or 7 days of osteogenic induction, respectively. G Relative expression levels of ALP, Runx2, Bglap, and Col1α1 in MC3T3-E1 cells and BMSCs at different tension stimulation times. H TRAP staining, pit formation assay, and F-actin band staining were applied to detect osteoclast differentiation and bone resorption ability between the two groups with or without tension stimulation exosomes. I Protein levels of c-fos, NFATc1, RANK, and NFκB p-P65 in RAW264.7 cell lysates between the two groups with or without tension stimulation exosomes were analyzed by western blot. J Histograms of coverage rate, number and nuclei of TRAP-positive osteoclasts, and bone resorption area rate between the two groups. K Relative expression levels of Ctsk, MMP9, and TRAP in RAW264.7 cells after adding different exosomes. Data were representative of three independent experiments expressed as the mean ± SD (*p < 0.05). Different letters (a, b, c, d, and e) indicate significant differences among multiple groups (p < 0.05).
However, clinical studies have shown that this loading method simultaneously causes marginal bone resorption and increases the failure rate of osseointegration. For example, a previous study showed that, after a 3-year follow-up period, the immediate loading group recorded significantly greater vertical bone loss at distal and labial sites than the conventional loading group [33]. The degree of primary stability under immediate loading protocols depends on several factors, including bone density and quality, implant shape, design and surface characteristics, and surgical technique [34]. Compared with conventional loading, immediate loading is associated with a higher incidence of implant failure [35], and the risk of early implant loss in the immediate loading group is higher than in the delayed loading group [36]. Many clinical studies have focused on bone resorption around immediate implants, but there has been little basic research on the underlying mechanisms, especially under tension stimulation. The currently dominant view is that micromotion might hinder the proliferation of osteoblasts and lead to the formation of fibrous tissue at the bone-implant interface [34,37], but the specific mechanism is still unknown. It is therefore important to further study the mechanisms by which tension stimuli act on osteoblasts and osteoclasts at the early stage of osseointegration, in order to promote rapid osseointegration around oral implants.

Fig. 3 The effects of circ_0008542 on osteoclasts. After transfection of circ_0008542 in RAW264.7 cells or addition of exosomes containing circ_0008542 overexpression in RAW264.7 cells, A TRAP staining, pit formation assay, and F-actin band staining were applied to detect osteoclast differentiation and bone resorption ability between the two groups with or without circ_0008542. B Protein levels of c-fos, NFATc1, RANK, and NFκB p-P65 in RAW264.7 cell lysates between two groups with or without circ_0008542 were analyzed by western blot. C Relative expression level of RANK in RAW264.7 cells between two groups with or without circ_0008542. D Histograms of coverage rate, number and nuclei of TRAP-positive osteoclasts, and bone resorption area rate between the two groups. E Relative expression levels of Ctsk, MMP9, and TRAP in RAW264.7 cells between the two groups. F After construction of the circ_0008542 fragment luciferase reporter or RANK gene fragment luciferase reporter with wild-type or mutant complementary binding sites, relative luciferase activity was detected between the miRNA-185-5p group and the NC group. G Cytoplasmic and nuclear fractionation assay was applied to detect localization of circ_0008542. H RIP assay was performed to detect the enrichment rate of circ_0008542 and miRNA-185-5p. I RNA pulldown assay with 3′-end biotinylated miRNA-185-5p. The binding activities of circ_0008542 to 3′-end biotinylated miRNA-185-5p with circ_0008542 overexpression. Data were representative of three independent experiments expressed as the mean ± SD (*p < 0.05).
Compared to traditional biomechanical research, this study traced tension stimulation to a specific molecular signal, circ_0008542, which proved to be a novel pathogenic molecule in the osseointegration microenvironment. Specifically, exosomes were used as carriers for communication between osteoblasts and osteoclasts, with circ_0008542 acting as an important pathogenic molecule involved in the initiation and progression of disease. RNA m6A methylation, an important epigenetic modification, plays a key role in the posttranscriptional regulation of circ_0008542, which exerts a molecular sponge effect in osteoclasts through the miRNA-185-5p/RANK axis.
In this study, circ_0008542 was demonstrated to be a novel circular RNA contained in MC3T3-E1 cell exosomes under tension stimulation (Flexcell culture with 20% amplitude/1 Hz/24 h). It is characterized by a head-to-tail splice junction and remains stable under RNase R digestion. In addition, circ_0008542 increased with prolonged tension stimulation time in MC3T3-E1 cells; accordingly, its molecular sponge effect on miR-185-5p was gradually enhanced in RAW264.7 cells, and circ_0008542 gradually upregulated osteoclast differentiation and bone resorption through the miR-185-5p/RANK axis. This phenomenon represents a novel molecular mechanism by which the osseointegration microenvironment responds to unbalanced stress on the implant neck.
Next, inhibition of METTL3 or overexpression of ALKBH5 in MC3T3-E1 cells reversed the effect on osteoclast differentiation and bone resorption induced by adding exosomes with circ_0008542 overexpression. This means that the regulation by METTL3 or ALKBH5 occurs before circ_0008542 is processed and matures: once circ_0008542 has matured and is packaged within exosomes, neither METTL3 nor ALKBH5 influences its effect on osteoclasts. We postulate that METTL3 or ALKBH5 recognizes the local structure of circ_0008542 through the m6A functional site, changes its spatial structure, and affects the binding efficiency between circ_0008542 and miRNA-185-5p, thus impacting RANK expression and osteoclast function. If circ_0008542 is not processed by METTL3 or ALKBH5 during maturation, its spatial structure is not conducive to binding with miRNA-185-5p, and it loses its sponge effect. Pan T and Parisien M reached a similar conclusion in a previous study: m6A alters the local structure of mRNA and lncRNA to facilitate binding of heterogeneous nuclear ribonucleoprotein C (hnRNP C). Specifically, the 2577 m6A residue destabilizes the lncRNA MALAT1 hairpin stem to make its opposing U-tract more single-stranded and accessible, enhancing its interaction with hnRNP C. They term this mechanism, which regulates RNA-protein interactions through m6A-dependent RNA structural remodeling, an "m6A-switch" [38]. In addition, Tavazoie SF et al. demonstrated that the m6A mark acts as a key posttranscriptional modification that promotes the processing of primary microRNAs. Specifically, pri-miRNAs are marked by METTL3-dependent m6A modification, and METTL3 expression is required for the appropriate processing of most pri-miRNAs to mature miRNAs. The m6A mark thus plays an important role in the nucleus, allowing the microprocessor complex to recognize specific secondary structures bearing methylation sites, as opposed to unintended secondary structures in mRNA [39]. The MUT1956 circ_0008542 experiments validated this conclusion: after mutation of the 1956 bp "A" to "G" on circ_0008542, MUT1956 circ_0008542 in exosomes lost its promoting function on osteoclast differentiation and bone resorption. Thus, even without interference with METTL3 or ALKBH5 during circRNA maturation, mutation of the m6A functional site directly results in loss of its biological effects. Based on the above conclusions, m6A methylation determines the molecular sponge effect of circ_0008542. METTL3 or ALKBH5 may alter the spatial structure of circ_0008542 through specific m6A functional sites, facilitating its binding with target genes without changing its nucleotide sequence. The m6A functional site is the core of the relevant biological effects. This "m6A-switch" at circ_0008542 1956 bp is closely related to osteoclast differentiation and bone resorption in osteoclast precursor cells, and METTL3 or ALKBH5 appears to be a node that controls the "m6A-switch" to varying degrees.
From the results of the in vivo study, ALKBH5 overexpression in exosomes clearly rescued the bone loss induced by circ_0008542 after 8 weeks of exosome injection. Specifically, ALKBH5 acts on the circ_0008542 1956 bp "m6A-switch" through demethylation and renders circ_0008542 unsuitable for binding within the miR-185-5p/RANK axis. After the molecular sponge effect of circ_0008542 was decreased, osteoclast differentiation and bone resorption were also reduced. Returning to the clinical problem motivating this study: oral implants are widely used, and failure of osseointegration is the core reason for implant failure. The potential value of this study is therefore that it suggests a method to enhance the resistance of immediate implants to bone loss through the use of exosomes delivering ALKBH5.
MATERIALS AND METHODS

Cell culture, antibodies, Flexcell tension system application, and ethics statement
Murine RAW264.7 monocytic and MC3T3-E1 cell lines were purchased from the Shanghai Cell Center (Shanghai, China). Recombinant soluble mouse RANKL was purchased from R&D Systems (Minneapolis, USA). Specific antibodies against NFκB p65, phospho-NFκB p65, horseradish peroxidase-conjugated goat anti-rabbit, NFATc1, and RANK were obtained from Cell Signaling Technology (Shanghai, China). Specific antibodies against HSP70, TSG101, CD63, TFIIB, Lamin A/C, c-fos, METTL3, ALKBH5, β-actin, and GAPDH were obtained from Abcam (Shanghai, China). All animal experiments in this study were conducted according to the Guidelines for Animal Experimentation of Shanghai Jiao Tong University (Ethics number: SH9H-2020-A41-1). We isolated BMSCs from mice for culture. Briefly, we first isolated the tibiae and femurs from 8-week-old female C57BL/6 mice (Animal Center of Shanghai Jiao Tong University, Shanghai, China). D-Hanks solution was used to thoroughly wash out the marrow cells. The lavage was first passed through a cell strainer and then centrifuged (300 × g for 5 min). A single-cell suspension was collected in αMEM supplemented with 10% FBS and 1% penicillin/streptomycin. MC3T3-E1 cells were seeded into 6-well BioFlex Culture Plates-Untreated (BioFlex, USA). The medium was replaced with fresh basal medium when the cell density reached 80-90% confluency. Cells were then subjected to cyclic mechanical tension (20% amplitude/1 Hz/6-48 h) using an FX-5000T Flexcell Tension system (Flexcell, USA). Cells and supernatants were subsequently collected for further analysis.

Fig. 4 The effects of METTL3 inhibition or ALKBH5 overexpression on the circ_0008542/miR-185-5p/RANK axis. We first transfected circ_0008542 or circ_0008542+si-METTL3 into MC3T3-E1 cells and then added different exosomes containing circ_0008542 overexpression to RAW264.7 cells. A TRAP staining, pit formation assay, and F-actin band staining were applied to detect osteoclast differentiation and bone resorption ability among the four groups. B Protein levels of c-fos, NFATc1, RANK, and NFκB p-P65 in RAW264.7 cell lysates among the four groups were analyzed by western blot. C Relative expression level of RANK in RAW264.7 cells among the four groups. D Histograms of coverage rate, number and nuclei of TRAP-positive osteoclasts, and bone resorption area rate among the four groups. E Relative expression levels of Ctsk, MMP9, and TRAP in RAW264.7 cells among the four groups. F RIP assay was performed to detect the enrichment rate of circ_0008542, circ_0008542-9, and miRNA-185-5p after si-METTL3 treatment. G Cytoplasmic and nuclear fractionation assay was applied to detect localization of circ_0008542. H RNA pulldown assay with 3′-end biotinylated miRNA-185-5p after si-METTL3 treatment. The binding activities of circ_0008542 or circ_0008542-9 to 3′-end biotinylated miRNA-185-5p with circ_0008542 overexpression. We next transfected circ_0008542 or circ_0008542 + ALKBH5 into MC3T3-E1 cells and then added different exosomes containing circ_0008542 overexpression to RAW264.7 cells. I TRAP staining, pit formation assay, and F-actin band staining were applied to detect osteoclast differentiation and bone resorption ability among the four groups. J Protein levels of c-fos, NFATc1, RANK, and NFκB p-P65 in RAW264.7 cell lysates among the four groups were analyzed by western blot. K Relative expression level of RANK in RAW264.7 cells among the four groups. L Histograms of coverage rate, number and nuclei of TRAP-positive osteoclasts, and bone resorption area rate among the four groups. M Relative expression levels of Ctsk, MMP9, and TRAP in RAW264.7 cells among the four groups. N RIP assay was performed to detect the enrichment rate of circ_0008542, circ_0008542-9, and miRNA-185-5p after ALKBH5 overexpression. O Cytoplasmic and nuclear fractionation assay was applied to detect localization of circ_0008542. P RNA pulldown assay with 3′-end biotinylated miRNA-185-5p after ALKBH5 overexpression. The binding activities of circ_0008542 or circ_0008542-9 to 3′-end biotinylated miRNA-185-5p with circ_0008542 overexpression. Data were representative of three independent experiments expressed as the mean ± SD. Different letters (a and b) indicate significant differences among multiple groups (p < 0.05).
Exosome isolation, transmission electron microscopy, and particle size analysis

Cell supernatants were centrifuged at 2000 × g for 15 min at 4°C to exclude cell debris. Supernatants were filtered through 0.22 μm filters (Millipore, USA) and then centrifuged at 110,000 × g for 30 min in Amicon Ultra-3 kDa units (Millipore, USA). Pellets were resuspended in an appropriate volume of PBS and stored at −80°C. Freshly prepared exosomes were resuspended in 100 μl of 2% PFA, adsorbed onto Formvar-carbon coated EM grids, washed with PBS, fixed with 1% glutaraldehyde, and washed with water. Thereafter, grids were stained with 4% uranyl-oxalate solution, embedded in 1% methyl cellulose-UA, and observed under electron microscopy at 80 kV. A ZetaView PMX 110 instrument (Particle Metrix, Germany) was used to analyze the exosome size distribution. Freshly prepared exosome samples were resuspended in PBS and measured according to the manufacturer's instructions.
ALP staining and ARS staining
ALP staining was performed using a BCIP/NBT staining kit (Beyotime, China). After osteogenic induction for 7 days, cells were fixed and ALP staining was performed following the manufacturer's instructions. Mineralized nodule formation was determined by ARS staining. After osteogenic incubation for 21 days, cells were fixed and stained with 0.1% ARS (Sigma-Aldrich, USA) for 20 min.
Osteoclast differentiation, TRAP staining, F-actin band staining, and pit formation assay

RAW264.7 cells were seeded into 48-well plates and cultured in DMEM containing 10% FBS and 1% penicillin/streptomycin. Cells were subjected to treatments according to experimental requirements and then stimulated with RANKL (20 ng/ml) for 7 days. Culture medium was replaced with fresh medium every other day. After 7 days, TRAP staining was used to evaluate osteoclast differentiation. Cells were fixed and subjected to TRAP staining: briefly, cells were submerged in a mixture of 3.0 mg naphthol AS-BI phosphate, 18 mg red violet LB salt, and L(+)-tartaric acid (0.36 g, 100 mM) diluted in 30 ml of 0.1 M sodium acetate buffer (pH 5.0) for 15 min at 37°C. Multinucleated TRAP-positive cells with at least three nuclei were scored as osteoclasts. For F-actin band staining, RAW264.7 cells were seeded into 48-well plates and cultured for 7 days as previously described, then fixed and stained: briefly, cells were treated with 0.5% Triton X-100 for 5 min and submerged in 5 μl/ml phalloidin (Yeasen, China) for 30 min. The pit formation assay was performed using Corning osteo assay surface multiple well plates (Corning, USA). RAW264.7 cells were seeded into 96-well plates and cultured for 10 days as previously described. Plates were then stained with von Kossa stain to increase the contrast between pits and the surface coating and observed under a light microscope.
RT-qPCR analysis
Total RNA was extracted using TRIzol reagent (Takara, Japan) and reverse transcribed using the PrimeScript™ RT reagent Kit with gDNA Eraser and the Mir-X miRNA First-Strand Synthesis Kit (Takara, Japan). Primer sequences were designed and synthesized by Sangon Biotech (Table S3). RT-qPCR was conducted with SYBR Premix Ex Taq™ II (Takara, Japan). Relative expression levels of the target genes were calculated by the 2^−ΔΔCt method. GAPDH or U6 was used for normalization, and the data were compared to normalized control values.
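The 2^−ΔΔCt calculation mentioned above is a simple arithmetic formula; a minimal sketch is shown below, with invented Ct values for illustration (the reference gene stands in for GAPDH or U6).

```python
def ddct_relative_expression(ct_target_sample, ct_ref_sample,
                             ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method.

    dCt  = Ct(target) - Ct(reference gene)
    ddCt = dCt(sample) - dCt(control)
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values for illustration only
fold_change = ddct_relative_expression(24.1, 18.0, 26.3, 18.1)
print(f"relative expression: {fold_change:.2f}")  # ~4.3-fold vs control
```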
RNA isolation of nuclear and cytoplasmic fractions
RNA isolation of nuclear and cytoplasmic fractions was performed using the Nuclear/Cytoplasmic Isolation Kit (Biovision, USA). Expression levels of circ_0008542, U6, and GAPDH were analyzed by RT-qPCR.
Luciferase reporter assay
Luciferase reporters were generated by cloning circ_0008542, mutant circ_0008542, wild-type RANK-3′UTR, or mutant RANK-3′UTR into GP-miRGLO vectors. Briefly, luciferase reporter plasmids were transfected into RAW264.7 cells together with miR-NC or miR-185-5p mimic using Lipofectamine 3000. After transfection for 24 h, Renilla and firefly luciferase activities were measured separately using the Dual Luciferase Reporter Assay System (Promega, USA) following the manufacturer's instructions. Firefly luciferase activity was normalized to Renilla luciferase activity to control for transfection and reporter translation efficiency.
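A minimal sketch of the normalization just described is given below: firefly (reporter) is divided by Renilla (internal control), then expressed relative to the NC well. All numbers are invented for illustration.

```python
def normalized_activity(firefly: float, renilla: float) -> float:
    """Firefly reporter signal normalized to the Renilla control."""
    return firefly / renilla

nc = normalized_activity(12000, 4000)      # miR-NC well
mimic = normalized_activity(4500, 4100)    # miR-185-5p mimic well
print(f"relative activity vs NC: {mimic / nc:.2f}")  # ~0.37, i.e. repression
```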
RNA immunoprecipitation (RIP)
RIP assay was performed with a Magna RIP Kit (Millipore, USA) according to the manufacturer's protocol. Briefly, magnetic beads were mixed with anti-Argonaute 2 (AGO2) (Abcam, China) or anti-IgG (Cell Signaling Technology, China) before the addition of cell lysates. After the protein beads were removed, RNAs of interest were eluted from the immunoprecipitated complex and purified for further analysis using RT-qPCR. Relative enrichment was normalized to the input.
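One common way to express the "relative enrichment normalized to the input" mentioned above is the percent-of-input calculation sketched below; the Ct values and the 1% input fraction are hypothetical, and this is a generic formula rather than the authors' exact computation.

```python
import math

def percent_input(ct_input: float, ct_ip: float, input_fraction: float = 0.01) -> float:
    """Percent of input recovered in the IP fraction.

    The input Ct is first adjusted for the fraction of lysate saved as input,
    then compared with the IP Ct on the log2 (Ct) scale.
    """
    adjusted_input = ct_input - math.log2(1 / input_fraction)
    return 100 * 2 ** (adjusted_input - ct_ip)

ago2 = percent_input(ct_input=22.0, ct_ip=25.0)   # anti-AGO2 IP
igg = percent_input(ct_input=22.0, ct_ip=31.5)    # anti-IgG control
print(f"AGO2: {ago2:.3f}% of input; IgG: {igg:.5f}% of input")
# ~0.125% vs ~0.00138%: strong enrichment in the AGO2 fraction
```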
M6A immunoprecipitation (MeRIP)
MeRIP assays were conducted with the Magna MeRIP™ m6A Kit (Millipore, USA) to determine the m6A modification of individual transcripts. In brief, total RNA was isolated from pretreated cells and randomly fragmented into pieces of approximately 100 nucleotides. RNA samples were then immunoprecipitated with magnetic beads precoated with anti-m6A antibody (Millipore, USA) or anti-mouse IgG (Millipore). N6-methyladenosine 5′-monophosphate sodium salt was applied to elute the m6A-modified RNA fragments for further RT-qPCR analysis. Specific primers were designed for RT-qPCR analysis according to the SRAMP website (m6A loci predictor, http://www.cuilab.cn/sramp/) and are listed in Table S5. The relative enrichment of m6A was normalized to the input.

Fig. 5 The function of MUT1956 circ_0008542. A Schematic diagram showing the mutation site of MUT1956 circ_0008542. Sanger sequencing of MUT1956 circ_0008542; the arrow shows the mutation site. Divergent primers detected MUT1956 circ_0008542 from cDNA by PCR and agarose gel electrophoresis, rather than from gDNA. MUT1956 circ_0008542 from cDNA was amplified with divergent primers even after RNase R digestion, and the opposite results were observed for gDNA. B Relative expression level of MUT1956 circ_0008542 in exosomes with or without MUT1956 circ_0008542 overexpression. Relative expression level of MUT1956 circ_0008542 in RAW264.7 cells treated with different exosomes. Relative expression level of MUT1956 circ_0008542 in exosomes with or without RNase R digestion. C Cytoplasmic and nuclear fractionation assay was applied to detect localization of MUT1956 circ_0008542. After transfection of MUT1956 circ_0008542 into RAW264.7 cells or addition of exosomes containing MUT1956 circ_0008542 overexpression to RAW264.7 cells: D TRAP staining, pit formation assay, and F-actin band staining were applied to detect osteoclast differentiation and bone resorption ability between the two groups with or without MUT1956 circ_0008542. E Protein levels of c-fos, NFATc1, RANK, and NFκB p-P65 in RAW264.7 cell lysates between the two groups with or without MUT1956 circ_0008542 were analyzed by western blot. F Relative expression level of RANK in RAW264.7 cells between the two groups with or without MUT1956 circ_0008542. G Histograms of coverage rate, number and nuclei of TRAP-positive osteoclasts, and bone resorption area rate between the two groups. H Relative expression levels of Ctsk, MMP9, and TRAP in RAW264.7 cells between the two groups. I RIP assay was performed to detect the enrichment rate of MUT1956 circ_0008542 and miRNA-185-5p. J After transfecting MC3T3-E1 cells with MUT1956 circ_0008542, m6A-RT-qPCR assay was performed to detect the enrichment rate of the circ_0008542-9 segment between the anti-m6A group and the anti-IgG group. Data were representative of three independent experiments expressed as the mean ± SD (*p < 0.05).

Fig. 6 The effects of circ_0008542 in vivo. Circ_0008542, MUT1956 circ_0008542, or ALKBH5 was transfected into MC3T3-E1 cells, separately or in combination according to the experimental group, and exosomes were collected from the cell supernatants. A, C After 8 weeks of exosome injection, micro-CT and H&E staining were applied to measure the bone histological parameters. B, D Histograms of BV/TV, BMD, Tb.Th, Tb.N, Tb.Sp, and body weight among the five groups. E TRAP staining was applied to evaluate the osteoclast differentiation and bone resorption ability. F Histograms of TRAP-positive osteoclast number and integrated optical density of TRAP among the five groups. G Schematic diagram of this study. Data were representative of five independent experiments expressed as the mean ± SD. Different letters (a and b) indicate significant differences among multiple groups (p < 0.05).
RNA pulldown assay
The 3′-end biotinylated miRNA-185-5p mimic (RiboBio, China) was constructed for RNA pulldown. Streptavidin-coated magnetic beads (Invitrogen, USA) were incubated with the probe at 25°C for 1 h to generate probe-coated magnetic beads. Cells were harvested in lysis buffer, and the lysate was incubated with the probe-coated magnetic beads at 37°C for 4 h with constant rotation. After incubation, three washes with lysis buffer were performed and RNA was extracted using TRIzol reagent. The abundance of circ_0008542 in the bound fraction was evaluated by RT-qPCR analysis.
Animals and exosome injection
Eight-week-old female C57BL/6 mice were used in this study. All mice were randomly distributed into five groups: NC, in which mice were injected with exosomes containing mock vehicle into the tail vein (100 μg exosomes/week for 8 weeks, n = 5); circ_0008542, in which mice were injected with exosomes containing circ_0008542 into the tail vein (same dose, n = 5); MUT1956 circ_0008542, in which mice were injected with exosomes containing MUT1956 circ_0008542 into the tail vein (same dose, n = 5); ALKBH5, in which mice were injected with exosomes containing ALKBH5 into the tail vein (same dose, n = 5); and circ_0008542 + ALKBH5, in which mice were injected with exosomes containing circ_0008542 and ALKBH5 into the tail vein (same dose, n = 5). Circ_0008542, MUT1956 circ_0008542, or ALKBH5 was transfected into MC3T3-E1 cells according to the corresponding experimental groups.
Micro-CT, histological detection, and TRAP staining
Tibiae were scanned using Skyscan 1176 Micro-CT (Bruker, USA) with a scanning resolution of 9 μm, a voltage of 50 kV, and a current of 450 μA. Trabecular bone data were obtained at a region of interest along the long axis of the proximal tibiae and 1-3 mm away from the growth plate. The bone histomorphometry parameter analysis included BV/TV, BMD, Tb.Th, Tb.N, and Tb.Sp. Serial sections were made for subsequent histological analysis. H&E staining was applied to assess histological alterations. TRAP staining was performed as previously described.
Statistical analysis
Data analyses were performed using SAS version 9.4 (SAS Institute, USA). Normality tests were performed to assess the distribution of continuous data before appropriate methods of statistical description and analysis were chosen. Parametric tests (Student's t-test or one-way ANOVA) were used if the data were normally distributed; otherwise, nonparametric tests (the Wilcoxon rank-sum test or the Kruskal-Wallis H-test) were used. P < 0.05 was considered statistically significant. | 2021-06-20T06:17:06.745Z | 2021-06-18T00:00:00.000 | {
"year": 2021,
"sha1": "fbd1866282f9a6ccd9c16a87d80bbb2b6b459dfe",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41419-021-03915-1.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8b91ef313b582eb5bfc45ced7ce605654a15cc00",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118502748 | pes2o/s2orc | v3-fos-license | Subgroup decomposition in $\text{Out}(F_n)$, Part III: Weak attraction theory
This is the third in a series of four papers (with research announcement posted on this arXiv) that develop a decomposition theory for subgroups of $\text{Out}(F_n)$. In this paper, given an outer automorphism of $F_n$ and an attracting-repelling lamination pair, we study which lines and conjugacy classes in $F_n$ are weakly attracted to that lamination pair under forward and backward iteration respectively. For conjugacy classes, we prove Theorem F from the research announcement, which exhibits a unique vertex group system called the "nonattracting subgroup system" having the property that the conjugacy classes it carries are characterized as those which are not weakly attracted to the attracting lamination under forward iteration, and also as those which are not weakly attracted to the repelling lamination under backward iteration. For lines in general, we prove Theorem G that characterizes exactly which lines are weakly attracted to the attracting lamination under forward iteration and which to the repelling lamination under backward iteration. We also prove Theorem H which gives a uniform version of weak attraction of lines. v3: Contains a stronger proof of Lemma 2.19 (part of the proof of Theorem G) for purposes of further applications.
Introduction
Many results about the groups MCG(S) and Out(F n ) are based on dynamical systems. The Tits alternative ([BFH00], [BFH05] for Out(F n ); [McC85] and [Iva92] independently for MCG(S)) says that for any subgroup H, either H is virtually abelian or H contains a free subgroup of rank ≥ 2, and these free subgroups are constructed by analogues of the classical "ping-pong argument" for group actions on topological spaces. Dynamical ping-pong arguments were also important in Ivanov's classification of subgroups of MCG(S) [Iva92]. And they will be important in Part IV [HM13d] where we prove our main theorem about subgroups of Out(F n ), Theorem C stated in the Introduction [HM13a].
Ping-pong arguments are themselves based on understanding the dynamics of an individual group element φ, particularly an analysis of attracting and repelling fixed sets of φ, of their associated basins of attraction and repulsion, and of neutral sets which are neither attracted nor repelled. The proofs in [McC85,Iva92] use the action of MCG(S) on Thurston's space PML(S) of projective measured laminations on S.
The proof of Theorem C in Part IV [HM13d] will employ ping-pong arguments for the action of Out(F n ) on the space of lines B = B(F n ), which is just the quotient, under the action of F n , of the space of two-point subsets of ∂F n . The basis of those ping-pong arguments will be Weak Attraction Theory which, given φ ∈ Out(F n ) and a dual lamination pair Λ ± ∈ L ± (φ), addresses the following dynamical question regarding the action of φ on B: General weak attraction question: Which lines ℓ ∈ B are weakly attracted to Λ + under iteration of φ? Which are weakly attracted to Λ − under iteration of φ −1 ? And which are weakly attracted to neither Λ + nor Λ − ?
To say that ℓ is weakly attracted to Λ + (under iteration of φ) means that ℓ is weakly attracted to a generic leaf λ ∈ Λ + , that is, the sequence φ k (ℓ) converges in the weak topology to λ as k → +∞. Note that this is independent of the choice of generic leaf of Λ + , since all of them have the same weak closure, namely Λ + . Our answers to the above question are an adaptation and generalization of many of the ideas and constructions found in The Weak Attraction Theorem 6.0.1 of [BFH00], which answered a narrower version of the question above, obtained by restricting to a lamination Λ + which is topmost in L(φ) and to birecurrent lines. The answer was expressed in terms of the structure of an "improved relative train track representative" of φ.
In this paper we develop weak attraction theory to completely answer the general weak attraction question. Our theorems are expressed both in terms of the structure of a CT representative of φ, and in more invariant terms. The theory is summarized in Theorems F, G and H, versions of which were stated earlier in [HM13a]; the versions stated here are more expansive and precise. Theorem F focusses on periodic lines and on the nonattracting subgroup system; Theorems G and H are concerned with arbitrary lines. Each of these theorems has applications in Part IV [HM13d]. The nonattracting subgroup system: Theorem F. We first answer the general weak attraction question restricted to "periodic" lines in B, equivalently circuits in marked graphs, equivalently conjugacy classes in F n . The statement uses two concepts from Part I [HM13b]: geometricity of general EG strata (which was in turn based on geometricity of top EG strata as developed in [BFH00]), and vertex group systems.
(3) (Corollary 1.7) For each conjugacy class c in F n the following are equivalent: • c is not weakly attracted to Λ + φ under iteration of φ; • c is carried by A na (Λ ± ).
(5) (Corollary 1.10) For each conjugacy class c in F n , c is not weakly attracted to Λ + by iteration of φ if and only if c is not weakly attracted to Λ − by iteration of φ −1 .
Furthermore (Definition 1.2), choosing any CT f : G → G representing φ with EG-stratum H r corresponding to Λ + φ , the nonattracting subgroup system A na (Λ ± ) has a concrete description in terms of f and the indivisible Nielsen paths of height r (the latter are described in ( [FH11] Corollary 4.19) or Fact I.1.40). The description given in Definition 1.2 is our first definition of A na (Λ ± ), and it is not until Corollary 1.9 that we prove A na (Λ ± ) is well-defined independent of the choice of CT (item (4) above). Corollary 1.10 (item (5) above) shows moreover that A na (Λ ± ) is indeed well-defined independent of the choice of nonzero power of φ, depending only on the cyclic subgroup φ and the lamination pair Λ ± .
Notation: The nonattracting subgroup system A na (Λ ± ) depends not only on the lamination pair Λ ± but also on the outer automorphism φ (up to nonzero powers). Often we emphasize this dependence by building φ into the notation for the lamination itself, writing Λ ± φ and A na (Λ ± φ ).
The set of nonattracted lines: Theorems G and H. Theorem G, a vague statement of which was given in the introduction, is a detailed description of the set B na (Λ + φ ; φ) of all lines γ ∈ B that are not attracted to Λ + φ under iteration by φ. Theorem H is a less technical and more easily applied distillation of Theorem G, and is applied several times in Part IV [HM13d].
As stated in Lemma 2.1, there are three somewhat obvious subsets of B na (Λ + φ ; φ). One is the subset B(A na (Λ ± φ )) of all lines supported by the nonattracting subgroup system A na (Λ ± φ ). Another is the subset B gen (φ −1 ) of all generic leaves of attracting laminations for φ −1 . The third is the subset B sing (φ −1 ) of all singular lines for φ −1 : by definition these lines are the images, under the quotient map from the space of endpoint pairs to B, of those endpoint pairs {ξ, η} such that ξ, η are each nonrepelling fixed points for the action of some automorphism representing φ −1 .
In Definition 2.2 we shall define an operation of "ideal concatenation" of lines: given a pair of lines which are asymptotic in one direction, they define a third line by concatenating at their common ideal point and straightening, or what is the same thing by connecting their opposite ideal points by a unique line.
Theorem G should be thought of as stating that B na (Λ + φ ; φ) is the smallest set of lines that contains B(A na (Λ ± φ )) ∪ B gen (φ −1 ) ∪ B sing (φ −1 ) and is closed under this operation of ideal concatenation. It turns out that only a limited amount of such concatenation is possible, namely, extending a line of B(A na (Λ ± φ )) by concatenating on one or both ends with a line of B sing (φ −1 ), producing a set of lines we denote B ext (Λ ± φ ; φ −1 ) (see Section 2.2).

Theorem G (Theorem 2.6). If φ, φ −1 ∈ Out(F n ) are rotationless and if Λ ± φ ∈ L ± (φ) then
$$\mathcal{B}_{na}(\Lambda^+_\phi) = \mathcal{B}_{ext}(\Lambda^\pm_\phi; \phi^{-1}) \cup \mathcal{B}_{gen}(\phi^{-1}) \cup \mathcal{B}_{sing}(\phi^{-1})$$
Note that the first of the three terms in the union is the only one that depends on the lamination pair Λ ± φ ; the other two depend only on φ −1 . For certain purposes in Part IV [HM13d] the following corollary to Theorem G is useful in being easier to directly apply. In particular item (2) provides a topologically uniform version of weak attraction:

Theorem H (Corollary 2.17). Given rotationless φ, φ −1 ∈ Out(F n ) and a dual lamination pair Λ ± ∈ L ± (φ), the following hold:

(1) Any line ℓ ∈ B that is not carried by A na (Λ ± ) is weakly attracted either to Λ + by iteration of φ or to Λ − by iteration of φ −1 .
(2) For any neighborhoods V + , V − ⊂ B of Λ + , Λ − , respectively, there exists an integer m ≥ 1 such that for any line γ ∈ B at least one of the following holds: γ ∈ V − ; φ m (γ) ∈ V + ; or γ is carried by A na (Λ ± ).

1 The nonattracting subgroup system

Consider a rotationless φ ∈ Out(F n ) and a dual lamination pair Λ ± φ ∈ L ± (φ). Since φ is rotationless its action on L(φ) is the identity and therefore so is its action on L(φ −1 ); the laminations Λ + φ and Λ − φ are therefore fixed by φ and by φ −1 . In this setting we shall define the nonattracting subgroup system A na (Λ ± φ ), an invariant of φ and Λ ± φ . One can view the definition of A na (Λ ± φ ) in two ways. First, in Definition 1.2, we define A na (Λ ± φ ) with respect to a choice of a CT representing φ; this CT acts as a choice of "coordinate system" for φ, and with this choice the description of A na (Λ ± φ ) is very concrete. We derive properties of this definition in results to follow, from Proposition 1.4 to Corollary 1.8, including most importantly the proofs of items (1), (2) and (3) of Theorem F. Then, in Corollaries 1.9 and 1.10, we prove that A na (Λ ± φ ) is invariantly defined, independent of the choice of CT and furthermore independent of the choice of a positive or negative power of φ, in particular proving items (4) and (5) of Theorem F. The independence result is what allows us to regard the nonattracting subgroup system as an invariant of a dual lamination pair rather than of each lamination individually (but still with implicit dependence on φ up to nonzero power).
Weak attraction. Recall (Section I.1.1.5) the notation B for the space of lines of F n on which Out(F n ) acts naturally, and recall that a line ℓ ∈ B is said to be weakly attracted to a generic leaf λ ∈ Λ + φ ⊂ B under iteration by φ if the sequence φ n (ℓ) weakly converges to λ as n → +∞, that is, for each neighborhood U ⊂ B of λ there exists an integer N > 0 such that if n ≥ N then φ n (ℓ) ∈ U . Note that since any two generic leaves of Λ + φ have the same weak closure, namely Λ + φ , this property is independent of the choice of λ; for that reason we often speak of ℓ being weakly attracted to Λ + φ by iteration of φ. This definition of weak attraction applies to φ −1 as well, and so we may speak of ℓ being weakly attracted to Λ − φ under iteration by φ −1 . This definition also applies to iteration of a CT f : G → G representing φ on elements of the space of paths in G (Section I.1.1.6), which contains the subspace B(G), identified with B by letting lines be realized in G, and which also contains all finite paths and rays in G. We may speak of such paths being weakly attracted to Λ + φ under iteration by f . Whenever φ and the ± sign are understood, as they are in the notations Λ + φ and Λ − φ , we tend to drop the phrase "under iteration by ...".
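Displayed symbolically, with λ any generic leaf of Λ + φ (the condition being independent of this choice), the definition just stated reads:
$$\ell \ \text{is weakly attracted to}\ \Lambda^+_\phi \iff \forall\ \text{neighborhoods}\ U \subset \mathcal{B}\ \text{of}\ \lambda,\ \exists\, N > 0\ \text{such that}\ n \geq N \implies \phi^n(\ell) \in U.$$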
Remark 1.1. Suppose that φ is a rotationless iterate of some possibly nonrotationless η ∈ Out(F n ) and that Λ + φ is η-invariant. Then γ is weakly attracted to Λ + φ under iteration by η if and only if γ is weakly attracted to Λ + φ under iteration by φ. Our results therefore apply to η as well as to φ.
The nonattracting subgroup system
The Weak Attraction Theorem 6.0.1 of [BFH00] answers the "general weak attraction question" posed above in the restricted setting of a lamination pair Λ ± φ which is topmost with respect to inclusion, and under restriction to birecurrent lines only. The answer is expressed in terms of an "improved relative train track representative" g : G → G, a "nonattracting subgraph" Z ⊂ G, a (possibly trivial) Nielsen path ρ̂ r , and an associated set of paths denoted ⟨Z, ρ̂ r ⟩. The construction and properties of Z and ⟨Z, ρ̂ r ⟩ are given in [BFH00, Proposition 6.0.4].
In Definition 1.2 and the lemmas that follow, we generalize Z, ⟨Z, ρ̂ r ⟩, and the nonattracting subgroup system beyond the topmost setting.
Notation for the nonattracting subgroup system. We use various notations in various contexts. Before Corollary 1.10 we will use the notation A na (Λ + φ ) for the subgroup system, presuming from the start what we shall eventually show in Corollary 1.7 regarding its independence from the choice of a CT representative, but leaving open for a while the issue of whether it depends on the choice of ± sign. After the latter independence is established in Corollary 1.10 we will switch over to the notation A na (Λ ± φ ). When we wish to emphasize dependence on φ we sometimes use the notation A na (Λ ± ; φ) or A na (Λ + ; φ); and when we wish to de-emphasize this dependence we sometimes use A na (Λ ± ) or A na (Λ + ).
For the remainder of Section 1.1 we fix a rotationless φ ∈ Out(F n ) and a lamination pair Λ ± φ ∈ L ± (φ). For a review of CTs, completely split paths, and the terms of a complete splitting, we refer the reader to Section I.1.5.1, particularly Definition I.1.28.
Definitions 1.2. The graph Z, the path ρ̂ r , the path set ⟨Z, ρ̂ r ⟩, and the subgroup system A na (Λ + φ ).
Consider a CT f : G → G representing φ such that Λ + φ corresponds to the EG stratum H r ⊂ G. We shall define the nonattracting subgraph Z of G, and we shall define a path ρ̂ r , either a trivial path or a height r indivisible Nielsen path if one exists. Using these we shall define a graph K and an immersion K → G by consistently gluing together the graph Z and the domain of ρ̂ r . We then define A na (Λ + φ ) in terms of the induced π 1 -injection on each component of K. We also define a groupoid of paths ⟨Z, ρ̂ r ⟩ in G, consisting of all concatenations whose terms are edges of Z and copies of the path ρ̂ r or its inverse, equivalently all paths in G that are images under the immersion K → G of paths in K.
Definition of the graph Z. The nonattracting subgraph Z of G is defined as a union of certain strata of G: an irreducible stratum H i is contained in Z if and only if no edge of H i is weakly attracted to Λ + φ , and a zero stratum H i is contained in Z if and only if the EG stratum that envelops H i is contained in Z.

Remark. The nonattracting subgraph Z automatically contains every stratum H i which is a fixed edge, an NEG-linear edge, or an EG stratum distinct from H r for which there exists an indivisible Nielsen path of height i. For a fixed edge this is obvious. If H i is an NEG-linear edge E i then this follows from (Linear Edges) which says that f (E i ) = E i · u where u is a closed Nielsen path, because for all k ≥ 1 it follows that the path f k # (E i ) completely splits as E i followed by Nielsen paths of height < i, and no edges of H r occur in this splitting. For an EG stratum H i with an indivisible Nielsen path of height i this follows from Fact I.1.41 (3) which says that for each edge E ⊂ H i and each k ≥ 1, the path f k # (E) completely splits into edges of H i and Nielsen paths of height < i; again no edges of H r occur in this splitting.
Remark.
Suppose that H i is a zero stratum enveloped by the EG stratum H s and that H i ⊂ Z. Applying the definition of Z to H i it follows that H s ⊂ Z. Applying the definition of Z to H s it follows that no s-taken connecting path in H i is weakly attracted to Λ + φ . Applying (Zero Strata) it follows that no edge in Z is weakly attracted to Λ + φ .

Definition of the path ρ̂ r . If there is an indivisible Nielsen path ρ r of height r then it is unique up to reversal by Fact I.1.40 and we define ρ̂ r = ρ r . Otherwise, by convention we choose a vertex of H r and define ρ̂ r to be the trivial path at that vertex.
Definition of the path set ⟨Z, ρ̂ r ⟩. Consider B(G), the set of lines, rays, circuits, and finite paths in G (Definition I.1.1.5). Define the subset ⟨Z, ρ̂ r ⟩ ⊂ B(G) to consist of all elements which decompose into a concatenation of subpaths each of which is either an edge in Z, the path ρ̂ r , or its inverse ρ̂ r −1 .

Definition of the subgroup system A na (Λ + φ ). If ρ̂ r is the trivial path, let K = Z and let h : K → G be the inclusion. Otherwise, define K to be the graph obtained from the disjoint union of Z and an edge E ρ representing the domain of the Nielsen path ρ r : E ρ → G r , with identifications as follows. Given an endpoint x ∈ E ρ , if ρ r (x) ∈ Z then identify x ∼ ρ r (x). Given distinct endpoints x, y ∈ E ρ , if ρ r (x) = ρ r (y) ∉ Z then identify x ∼ y (these points are already identified if ρ r (x) = ρ r (y) ∈ Z). Define h : K → G to be the inclusion on Z and to be the map ρ r on E ρ . By Fact I.1.40 the initial oriented edges of ρ r and its reversal are distinct in H r , and since no edge of H r is in Z it follows that the map h is an immersion. The restriction of h to each component of K therefore induces an injection on the level of fundamental groups. Define A na (Λ + φ ), the nonattracting subgroup system, to be the subgroup system determined by the images of the fundamental group injections induced by the immersion h : K → G, over all noncontractible components of K.
Remark: The case of a top stratum. In the special case that H r is the top stratum of G, there is a useful formula for A na (Λ + φ ) which is obtained by considering three subcases. First, when ρ̂ r is trivial we have K = Z = G r−1 . Second is the geometric case, where ρ̂ r is a closed Nielsen path whose endpoint is an interior point of H r (Fact I.1.42 (2a)), and so the graph K is the disjoint union of Z = G r−1 with a loop mapping to ρ r . Third is the "parageometric" case, where ρ̂ r is a nonclosed Nielsen path having at least one endpoint which is an interior point of H r (Fact I.1.42 (1a)), and so K is obtained by attaching an arc to Z = G r−1 by identifying at most one endpoint of the arc to G r−1 ; note in this case that the union of noncontractible components of K deformation retracts to the union of noncontractible components of G r−1 . From this we obtain the following formula:
$$\mathcal{A}_{na}(\Lambda^+_\phi) = \begin{cases} [\pi_1 G_{r-1}] & \text{in the first and third subcases,} \\ [\pi_1 G_{r-1}] \cup \{[\langle \rho_r \rangle]\} & \text{in the geometric case,} \end{cases}$$
where in the geometric case [⟨ρ r ⟩] denotes the conjugacy class of the infinite cyclic subgroup generated by an element of F n represented by the closed Nielsen path ρ r . This completes Definitions 1.2.
Remark 1.3. In the special case that the stratum H r is geometric, the 1-complex K lives naturally as an embedded subcomplex of the geometric model X for H r (Definition I.2.4), as follows. By item (4a) of that definition, we may identify K with the subcomplex Z ∪ j(∂ 0 S) ⊂ G ∪ j(∂ 0 S) ⊂ X in such a way that the immersion K → G is identified with the restriction to K of the deformation retraction d : X → G. The subgroup system A na (Λ + φ ) = [π 1 K] may therefore be described as the conjugacy classes of the images of the inclusion induced injections π 1 K i → π 1 X ≈ F n , over all noncontractible components K i ⊂ X. Noting that j : S → X maps each boundary component ∂ 1 S, ..., ∂ m S to G r−1 ⊂ Z ⊂ K and maps ∂ 0 S to j(∂ 0 S) ⊂ K, we have j(∂S) ⊂ K. It follows in the geometric case that Proposition I.3.3 applies to [π 1 K], the conclusion of which will be used in the proof of the following proposition.
Recall the characterization of geometricity of Λ + φ given in Proposition I.2.18, expressed in terms of the free factor support of the boundary components of S. Our next result, among other things, gives a different characterization of geometricity of a lamination Λ + φ ∈ L(φ), expressed in terms of the nonattracting subgroup system A na (Λ + φ ).
Proposition 1.4 (Properties of the nonattracting subgroup system). Given a CT f : G → G representing φ with EG stratum H r corresponding to Λ + φ , the subgroup system A na (Λ + φ ) satisfies the following: (1) A na (Λ + φ ) is a vertex group system.
(2) A na (Λ + φ ) is a free factor system if and only if the stratum H r is not geometric.
Proof. First we show that any subgroup A for which [A] ∈ A na (Λ + φ ) is nontrivial and proper, as required for a vertex group system. Nontriviality follows because only noncontractible components of K are used. To prove properness: if ρ̂ r is trivial then any circuit containing an edge of H r is not carried by A na (Λ + φ ); if ρ̂ r = ρ r is nontrivial then any circuit containing an edge of H r but not containing ρ r is not carried by A na (Λ + φ ). We adopt the notation of Definition 1.2. By applying Fact I.1.42 and Proposition I.2.18, when Λ + φ is not geometric then ρ̂ r is either trivial or a nonclosed Nielsen path, and when Λ + φ is geometric then ρ̂ r is a closed Nielsen path. We prove (1)-(3) by considering these three cases of ρ̂ r separately.
Case 1: ρ̂ r is trivial. In this case K = Z, and A na (Λ + φ ) is the free factor system associated to the subgraph Z ⊂ G. Item (3) follows immediately.
Case 2: ρ̂ r = ρ r is a nonclosed Nielsen path. We prove that A na (Λ + φ ) is a free factor system following an argument of [BFH00] Lemma 5.1.7. By Fact I.1.42 (1) there is an edge E ⊂ H r that is crossed exactly once by ρ r . We may decompose ρ r into a concatenation of subpaths ρ r = σEτ where σ, τ are paths in G r \ int(E). Let G′ be the graph obtained from G \ int(E) by attaching an edge J, letting the initial and terminal endpoints of J be equal to the initial and terminal endpoints of ρ r , respectively. The identity map on G \ int(E) extends to a map h′ : G′ → G that takes the edge J to the path ρ r , and to a homotopy inverse h̄ : G → G′ that takes the edge E to the path σ̄Jτ̄ . We may therefore view G′ as a marked graph, pulling the marking on G back via h′. Notice that K may be identified with the subgraph Z ∪ J ⊂ G′, in such a way that the map h′ : G′ → G is an extension of the map h : K → G as originally defined. It follows that A na (Λ + φ ) is the free factor system associated to the subgraph Z ∪ J.
In this case, as in Case 1, item (3) follows immediately because of the identification of K with a subgraph of the marked graph G.
Case 3: ρ̂_r = ρ_r is a closed Nielsen path. In this case H_r is geometric. Adopting the notation of the geometric model X for H_r, Definition I.2.4, by Remark 1.3 we have A_na(Λ^+_φ) = [π_1 K] for a subcomplex K ⊂ X containing j(∂S). Applying Proposition I.3.3 it follows that [π_1 K] is a vertex group system.
If A_na(Λ^+_φ) = [π_1 K] were a free factor system then, since each of the conjugacy classes [∂_0 S], …, [∂_m S] is supported by [π_1 K], it would follow by Proposition I.2.18 (5) that [π_1 S] ⊏ [π_1 K]. However, since S supports a pseudo-Anosov mapping class, S contains a simple closed curve c not homotopic to a curve in ∂S. By Lemma I.2.7 (2), the conjugacy class [j(c)] is carried by [π_1 S] but not by [π_1 K]. This is a contradiction, and so A_na(Λ^+_φ) is not a free factor system.
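For later reference, the trichotomy underlying the proof can be collected into a single display. This is only a restatement of Cases 1–3 above, in the notation of Definitions 1.2, with F(·) denoting the free factor system determined by the noncontractible components of a subgraph:
\[
\mathcal A_{na}(\Lambda^+_\phi) \;=\;
\begin{cases}
\mathcal F(Z) & \text{if } \hat\rho_r \text{ is trivial,}\\
\mathcal F(Z \cup J) & \text{if } \hat\rho_r = \rho_r \text{ is a nonclosed Nielsen path,}\\
[\pi_1 K] & \text{if } \hat\rho_r = \rho_r \text{ is a closed Nielsen path } (H_r \text{ geometric}),
\end{cases}
\]
where in the first two cases the right hand side is a free factor system, and in the third it is a vertex group system that is not a free factor system.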
Item (2) in the next lemma states that ⟨Z, ρ̂_r⟩ is a groupoid, by which we mean that the tightened concatenation of any two paths in ⟨Z, ρ̂_r⟩ is also a path in ⟨Z, ρ̂_r⟩ as long as that concatenation is defined. For example, the concatenation of two distinct rays in ⟨Z, ρ̂_r⟩ with the same base point tightens to a line in ⟨Z, ρ̂_r⟩. Lemma 1.5. Assuming the notation of Definitions 1.2: (1) The map h induces a bijection between B(K) and ⟨Z, ρ̂_r⟩. (2) ⟨Z, ρ̂_r⟩ is a groupoid.
(3) The set of lines carried by ⟨Z, ρ̂_r⟩ is the same as the set of lines carried by A_na(Λ^+_φ). (4) The set of circuits carried by ⟨Z, ρ̂_r⟩ is the same as the set of circuits carried by A_na(Λ^+_φ). (5) The set of lines carried by ⟨Z, ρ̂_r⟩ is closed in the weak topology. (6) If a line in ⟨Z, ρ̂_r⟩ has a lift with an endpoint in ∂A for some subgroup A with [A] ∈ A_na(Λ^+_φ), then both endpoints of that lift are in ∂A.
Proof. We make use of four evident properties of the immersion h : K → G. The first is that every path in K with endpoints, if any, at vertices is mapped by h to an element of ⟨Z, ρ̂_r⟩. The second is that h induces a bijection between the vertex sets of K and of Z ∪ ∂ρ̂_r. The third is that for each edge E of Z there is a unique edge of K that projects to E, and that no other subpath of K has at least one endpoint at a vertex and projects to E. The last is that if ρ̂_r is non-trivial then it has a unique lift to K (because its unique illegal turn of height r does). Together these imply (1), which implies (2) and (3). Item (4) follows from (3) using the natural bijection between periodic lines and circuits. Item (5) follows from (1) and Fact I.1.8. Item (6) follows from Proposition 1.4 (3) and Fact I.1.2.
The following lemma is based on Proposition 6.0.4 and Corollary 6.0.7 of [BFH00].
Lemma 1.6. Assuming the notation of Definitions 1.2, we have: (1) If E is an edge of Z then f_#(E) ∈ ⟨Z, ρ̂_r⟩. (2) If σ ∈ ⟨Z, ρ̂_r⟩ then f_#(σ) ∈ ⟨Z, ρ̂_r⟩.
(3) If σ ∈ ⟨Z, ρ̂_r⟩ then σ is not weakly attracted to Λ^+_φ. (4) For any finite path σ in G with endpoints at fixed vertices, the converse to (3) holds: if σ is not weakly attracted to Λ^+_φ then σ ∈ ⟨Z, ρ̂_r⟩.
(5) f_# restricts to bijections of the following sets: lines in ⟨Z, ρ̂_r⟩; finite paths in ⟨Z, ρ̂_r⟩ whose endpoints are fixed by f; and circuits in ⟨Z, ρ̂_r⟩.
Proof. In this proof we shall freely use that ⟨Z, ρ̂_r⟩ is a groupoid (Lemma 1.5 (2)). ⟨Z, ρ̂_r⟩ contains each fixed or linear edge by construction. Given an indivisible Nielsen path ρ_i of height i, we prove by induction on i that ρ_i is in ⟨Z, ρ̂_r⟩. If H_i is NEG this follows from (NEG Nielsen Paths) and the induction hypothesis. If H_i is EG then Fact I.1.41 (3) applies to show that H_i ⊂ Z; combining this with Fact I.1.41 (3) again and with the induction hypothesis we conclude that ρ_i ∈ ⟨Z, ρ̂_r⟩.
Since all indivisible Nielsen paths and all fixed edges are contained in ⟨Z, ρ̂_r⟩, it follows that all Nielsen paths are contained in ⟨Z, ρ̂_r⟩, which immediately implies that ⟨Z, ρ̂_r⟩ contains all exceptional paths.
Suppose that τ = τ_1 · … · τ_m is a complete splitting of a finite path that is not contained in a zero stratum. Each τ_i is either an edge in an irreducible stratum, a taken connecting path in a zero stratum, or, by the previous paragraph, a term which is not weakly attracted to Λ and which is contained in ⟨Z, ρ̂_r⟩. If τ_i is a taken connecting path in a zero stratum H_t that is enveloped by an EG stratum H_s then, by definition of complete splitting, τ_i is a maximal subpath of τ in H_t; since τ ⊄ H_t it follows that m ≥ 2, and by applying (Zero Strata) it follows that at least one other term τ_j is an edge in H_s. In conjunction with the second Remark in Definitions 1.2, this proves that τ is contained in ⟨Z, ρ̂_r⟩ if and only if each τ_i that is an edge in an irreducible stratum is contained in Z, if and only if τ is not weakly attracted to Λ^+_φ. We apply this in two ways. First, this proves item (4) in the case that σ is completely split. Second, applying this to τ = f_#(E) where E is an edge in Z, item (1) follows in the case that f_#(E) is not contained in any zero stratum. Consider the remaining case that τ = f_#(E) is contained in a zero stratum H_t enveloped by the EG stratum H_s. By definition of complete splitting, τ = τ_1 is a taken connecting path. By Fact I.1.45 the edge E is contained in some zero stratum H_{t′} enveloped by the same EG stratum H_s. Since E ⊂ Z, it follows that H_s ⊂ Z, and so H^z_s ⊂ Z, and so τ ⊂ Z, proving (1). Item (2) follows from item (1), the fact that f_#(ρ̂_r) = ρ̂_r, and the fact that ⟨Z, ρ̂_r⟩ is a groupoid.
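In summary, the argument of the preceding paragraph yields the following three-way equivalence, for a completely split finite path $\tau = \tau_1 \cdot \ldots \cdot \tau_m$ not contained in a zero stratum (this merely displays the criterion just proved):
\[
\tau \in \langle Z, \hat\rho_r \rangle
\iff \text{every term } \tau_i \text{ that is an edge in an irreducible stratum lies in } Z
\iff \tau \text{ is not weakly attracted to } \Lambda^+_\phi.
\]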
Every generic leaf of Λ^+_φ contains subpaths in H_r that are not subpaths of ρ̂_r or ρ̂_r^{−1}, and hence not subpaths in any element of ⟨Z, ρ̂_r⟩. Item (3) therefore follows from item (2).
To prove (5), for lines and finite paths the implication (2) ⇒ (5) follows from Corollary 6.0.7 of [BFH00]. For circuits, use the natural bijection between circuits and periodic lines, noting that this bijection preserves membership in ⟨Z, ρ̂_r⟩.
It remains to prove (4). By (5), there is no loss of generality in replacing σ with f k # (σ) for any k ≥ 1. By Fact I.1.35 this reduces (4) to the case that σ is completely split which we have already proved.
Applications and properties of the nonattracting subgroup system.
We now show that the nonattracting subgroup system A_na(Λ^+_φ) deserves its name. Corollary 1.7. For any rotationless φ ∈ Out(F_n) and Λ^+_φ ∈ L(φ), a conjugacy class [a] in F_n is not weakly attracted to Λ^+_φ if and only if it is carried by A_na(Λ^+_φ). Proof. Let f : G → G be a CT representing the rotationless φ ∈ Out(F_n) and assume the notation of Definitions 1.2. By Lemma 1.5 (4), it suffices to show that a circuit σ in G is not weakly attracted to Λ^+_φ under iteration by f_# if and only if it is carried by ⟨Z, ρ̂_r⟩. Both the set of circuits in ⟨Z, ρ̂_r⟩ and the set of circuits that are not weakly attracted to Λ^+_φ are f_#-invariant. We may therefore replace σ with any f^k_#(σ) and hence may assume that σ is completely split. After taking a further iterate, we may assume that some coarsening of the complete splitting of σ is a splitting into subpaths whose endpoints are fixed by f. Lemma 1.6 (4) completes the proof.
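In symbols, writing $\sqsubset$ for "is carried by", Corollary 1.7 says that for every conjugacy class $[a]$ in $F_n$:
\[
[a] \sqsubset \mathcal A_{na}(\Lambda^+_\phi)
\iff [a] \text{ is not weakly attracted to } \Lambda^+_\phi \text{ under iteration of } \phi.
\]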
Corollary 1.8. For any rotationless φ ∈ Out(F n ) and Λ + φ ∈ L(φ), and for any finite rank subgroup B < F n , if each conjugacy class in B is not weakly attracted to Λ + φ then there exists a subgroup A < F n such that B < A and [A] ∈ A na (Λ + φ ). Proof. By Corollary 1.7 the conjugacy class of every nontrivial element of B is carried by the subgroup system A na (Λ + φ ) which, by Proposition 1.4 (1), is a vertex group system. Applying Lemma I.3.1, the conclusion follows.
Using Corollary 1.8 we can now prove some useful invariance properties of A na (Λ + φ ), for instance that A na (Λ + φ ) is an invariant of φ and Λ + φ alone, independent of the choice of CT representing φ.
Corollary 1.9. For any rotationless φ ∈ Out(F n ) and any lamination Λ + φ ∈ L(φ) we have: (1) The nonattracting subgroup system A na (Λ + φ ) is the unique vertex group system such that the conjugacy classes it carries are precisely those which are not weakly attracted to Λ + φ under iteration of φ.
(2) A_na(Λ^+_φ) depends only on φ and Λ^+_φ, not on the choice of a CT representing φ.
(3) A_na(θ(Λ^+_φ)) = θ(A_na(Λ^+_φ)) for each θ ∈ Out(F_n), where θ(Λ^+_φ) denotes the corresponding lamination in L(θφθ^{−1}).
Proof. Being a vertex group system, A_na(Λ^+_φ) is determined by the set of conjugacy classes of elements of F_n that are carried by A_na(Λ^+_φ), and by Corollary 1.7 these conjugacy classes are determined by φ and Λ^+_φ alone, independent of choice of a CT representing φ, namely they are the conjugacy classes that are not weakly attracted to Λ^+_φ under iteration of φ. This proves (1) and (2). Item (3) follows by choosing any CT f : G → G representing φ and changing the marking on G by the conjugator θ to get a CT representing θφθ^{−1}.
The following shows that not only is A_na(Λ^+_φ) invariant under change of CT, but it is invariant under inversion of φ and replacement of Λ^+_φ with its dual lamination.
Corollary 1.10. Given φ, φ^{−1} ∈ Out(F_n) both rotationless, and given a dual lamination pair Λ^±_φ ∈ L^±(φ), we have A_na(Λ^+_φ) = A_na(Λ^−_φ).
Notational remark. Based on Corollary 1.10, we introduce the notation A_na(Λ^±_φ) for the vertex group system A_na(Λ^+_φ) = A_na(Λ^−_φ).
Proof. For each nontrivial conjugacy class [a] in F_n, we must prove that [a] is weakly attracted to Λ^+ under iteration by φ if and only if [a] is weakly attracted to Λ^− under iteration by φ^{−1}. Replacing φ with φ^{−1} it suffices to prove the "if" direction. Applying Theorem I.1.30, choose a CT f : G → G representing φ having a core filtration element G_r such that [G_r] = F_supp(Λ^+), and so H_r ⊂ G is the EG stratum corresponding to Λ^+. We adopt the notation of Definitions 1.2.
Suppose that [a] is not weakly attracted to Λ^+ under iteration by φ. Then the same is true for all φ^{−k}([a]), and so φ^{−k}([a]) is carried by ⟨Z, ρ̂_r⟩ for all k ≥ 0 by Corollary 1.7.
Arguing by contradiction, suppose in addition that [a] is weakly attracted to Λ^− under iteration by φ^{−1}. Applying Lemma 1.5 (5) it follows that a generic line γ of Λ^− is contained in ⟨Z, ρ̂_r⟩. However, since F_supp(γ) = F_supp(Λ^−) = [G_r], it follows that γ has height r. If ρ̂_r is trivial then γ is a concatenation of edges of Z none of which has height r, a contradiction. If ρ̂_r = ρ_r is nontrivial then all occurrences of edges of H_r in γ are contained in a pairwise disjoint collection of subpaths each of which is an iterate of ρ_r or its inverse. By Fact I.1.42, at least one endpoint of ρ_r is disjoint from G_{r−1}. If ρ_r is not closed then we obtain an immediate contradiction. If ρ_r is closed then γ is a bi-infinite iterate of ρ_r, but this contradicts [BFH00] Lemma 3.1.16, which says that no generic leaf of Λ^+_ψ is periodic.
We conclude this section with the following result, needed for the proof of Corollary 2.17, which generalizes Fact I.1.12 from free factor systems to nonattracting subgroup systems. Given an end E, we say that E is carried by ⟨Z, ρ̂_r⟩ if some ray representing E is in ⟨Z, ρ̂_r⟩.
Lemma 1.11. Assume the notation of Definitions 1.2.
(1) Every sequence γ_i of lines in G not carried by ⟨Z, ρ̂_r⟩ has a subsequence that weakly converges to a line γ not carried by ⟨Z, ρ̂_r⟩.
(2) The weak accumulation set of every end not carried by ⟨Z, ρ̂_r⟩ contains a line not carried by ⟨Z, ρ̂_r⟩.
Proof. If H r is non-geometric then A na (Λ ± ) is a free factor system and the lemma follows from Fact I.1.12 and Lemma III.1.5 (3).
Suppose that H_r is geometric, and so ρ̂_r = ρ_r is a closed Nielsen path of height r. Write ρ_r = α * β, a concatenation at the unique illegal turn of height r in ρ_r; denote that turn t = {d_α, d_β}, where d_α is the terminal direction of α and d_β is the initial direction of β, and let L = max(Length(α), Length(β)). Let Σ be the set of all paths of length ≤ 2L that occur as subpaths of an element of ⟨Z, ρ_r⟩. Note that for any path of the form α′ * β′ ∈ Σ such that Length(α′) = Length(α) and Length(β′) = Length(β) and such that d_α is the terminal direction of α′ and d_β is the initial direction of β′, we have α′ = α and β′ = β, so α′ * β′ = ρ_r.
We claim that if γ ∈ B(G) and if every subpath of γ of length ≤ 2L is contained in Σ, then there is a subpath γ′ of γ that is contained in ⟨Z, ρ_r⟩ and that contains all of γ with the possible exception of initial and terminal subpaths of length ≤ L.
To prove the claim, let γ̃ be a lift of γ to the universal cover of G. Given t̃ = {d̃_α, d̃_β} an illegal turn in γ̃ that projects to t = {d_α, d_β}, we say that t̃ is buffered if γ̃ contains at least L edges on each side of the turn t̃; if this is the case then γ̃ has a subpath which contains the turn t̃ and is a lift of ρ_r. Suppose that t̃_1 and t̃_2 are buffered illegal turns in γ̃ that project to t, and that there are no other illegal turns that project to t between t̃_1 and t̃_2. Let γ̃_i ⊂ γ̃ be the lift of ρ_r or ρ_r^{−1} that contains t̃_i, and let σ be the projection of the subpath of γ̃ which starts at the turn t̃_1 and ends at the turn t̃_2. If σ has length ≤ 2L then σ is a subpath of a path contained in ⟨Z, ρ_r⟩, in which case it follows that γ̃_1 and γ̃_2 intersect in at most an endpoint; and the same evidently follows if σ has length > 2L.
We may therefore decompose γ̃ into subpaths γ̃_i as follows: any subpath of γ̃ that projects to either ρ_r or ρ_r^{−1} is a γ̃_i; each remaining edge is a γ̃_i. If γ̃ has an initial vertex, then remove the initial segment preceding the first subpath that projects to ρ_r or ρ_r^{−1} if this segment has length < L, and remove the initial segment of length L otherwise. Treat the terminal vertex, if there is one, similarly. The resulting subpath of γ̃ projects to the desired subpath γ′ of γ. This completes the proof of the claim.
If γ_i ∈ B(G) is a sequence of lines not in ⟨Z, ρ_r⟩ then it follows from the claim that each γ_i has a subpath α_i of edge length ≤ 2L that is not in Σ. There are only finitely many paths of edge length ≤ 2L, so by passing to a subsequence we may assume that α = α_i is independent of i. It follows that this subsequence has a weak limit which is a line containing the path α, and this line is not in ⟨Z, ρ_r⟩.
If r is a ray in G no subray of which is contained in ⟨Z, ρ_r⟩ then, focussing on subrays obtained by deleting initial paths of length > L, it follows from the claim that r has infinitely many pairwise nonoverlapping subpaths α_i of edge length ≤ 2L that are not in Σ. Again, passing to a subsequence, we may assume that α = α_i is independent of i. It follows that r has a weak limit which is a line containing the path α, and this line is not in ⟨Z, ρ_r⟩.
Nonattracted lines
In the previous section, given a rotationless φ ∈ Out(F_n) and Λ^+_φ ∈ L(φ), we described the set of conjugacy classes that are not weakly attracted to Λ^+_φ under iteration by φ: they are precisely the conjugacy classes carried by the nonattracting subgroup system A_na(Λ^±_φ). In this section we state Theorem 2.6, which characterizes those lines that are not weakly attracted to Λ^+_φ under iteration by φ. Our characterization starts with Lemma 2.1, which lays out three particular types of such lines: lines carried by A_na(Λ^±_φ); singular lines of φ^{−1}; and generic leaves of laminations in L(φ^{−1}). Theorem 2.6 will say that, in addition to these three subsets, by concatenating elements of these subsets in a very particular manner one obtains the entire set of lines not weakly attracted to Λ^+_φ. The proof of this theorem will occupy the remaining subsections of Section 2.
Theorem G - Characterizing nonattracted lines
From here up through Section 2.4 we adopt the following: Notational conventions: Assume φ, ψ = φ^{−1} ∈ Out(F_n) are rotationless and that Λ^±_φ ∈ L^±(φ) is a lamination pair. We also denote Λ^−_φ = Λ^+_ψ ∈ L(ψ). Applying [FH11] Theorem 4.28 (or see Theorem I.1.30), choose f : G → G and f′ : G′ → G′ to be CTs representing φ and ψ, respectively, the first with EG stratum H_r ⊂ G associated to Λ^+_φ, and the second with EG stratum H′_u ⊂ G′ associated to Λ^+_ψ, so that [G_r] = F_supp(Λ^±_φ) = [G′_u]. To check that this is possible, after choosing f : G → G to satisfy the one condition, choose f′ : G′ → G′ to satisfy the other. The reader may refer to Section I.1.4 for a refresher on basic concepts regarding the set P(φ) of principal automorphisms representing φ ∈ Out(F_n), and on the set Fix(Φ̂) ⊂ ∂F_n of points at infinity fixed by the continuous extension Φ̂ : ∂F_n → ∂F_n of an automorphism Φ ∈ Aut(F_n).
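For ease of reference, the standing conventions just fixed may be tabulated as follows (this collects the notation above and adds nothing new):
\[
\phi \;\leftrightarrow\; f : G \to G,\ \text{EG stratum } H_r \leftrightarrow \Lambda^+_\phi;
\qquad
\psi = \phi^{-1} \;\leftrightarrow\; f' : G' \to G',\ \text{EG stratum } H'_u \leftrightarrow \Lambda^+_\psi = \Lambda^-_\phi.
\]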
We also recall/introduce some notations and definitions related to a rotationless outer automorphism ψ ∈ Out(F n ).
• B_na(Λ^+_φ) denotes the set of all lines in B that are not weakly attracted to Λ^+_φ under iteration by φ. • B_sing(ψ) denotes the set of singular lines of ψ: by definition, ℓ ∈ B is a singular line for ψ if there exists Ψ ∈ P(ψ) (= the set of principal automorphisms representing ψ) such that ∂ℓ ⊂ Fix_N(Ψ̂).
• B gen (ψ) denotes the set of all generic leaves of all elements of L(ψ).
Lemma 2.1. Each of the following lines is contained in B_na(Λ^+_φ): (1) each line carried by A_na(Λ^±_φ); (2) each singular line of ψ, i.e. each element of B_sing(ψ); (3) each generic leaf of each element of L(ψ), i.e. each element of B_gen(ψ).
Proof. Case (1) is a consequence of the following: no conjugacy class carried by A_na(Λ^±_φ) is weakly attracted to Λ^+_φ (Corollary 1.7); axes of conjugacy classes are dense in the set of all lines carried by A_na(Λ^±_φ) (as is true for any subgroup system); and B_na(Λ^+_φ) is a weakly closed subset of B, which follows from the fact that being weakly attracted to Λ^+_φ is a weakly open condition on B, an evident consequence of the definition of an attracting lamination.
For Case (3), suppose that γ, and hence each φ i # (γ), is a generic leaf of some Λ − t ∈ L(ψ). Choose [a] to be a conjugacy class represented by a completely split circuit in G ′ such that some term of its complete splitting is an edge of H ′ t . By Fact I.1.59 (1), [a] is weakly attracted to γ under iteration by ψ. If γ were weakly attracted to Λ + φ under iteration by φ then, since the φ i # (γ)'s all have the same neighborhoods in B, the lamination Λ + φ would be in the closure of γ, and so [a] would be weakly attracted to Λ + φ under iteration by ψ = φ −1 , contradicting Fact I.1.59 (2).
For Case (2), choose Ψ ∈ P(ψ) and a lift γ̃ of γ with endpoints in Fix_N(Ψ̂) = ∂Fix(Ψ) ∪ Fix_+(Ψ̂). Assuming that γ is weakly attracted to Λ^+_φ under iteration by φ, we argue to a contradiction. Since γ is φ_#-invariant, Λ^+_φ is contained in the weak closure of γ. Let ℓ be a generic leaf of Λ^+_φ. Since ℓ is birecurrent, ℓ is contained in the accumulation set of at least one of the endpoints, say P, of γ̃. If P ∈ ∂Fix(Ψ) then ℓ is carried by A_na(Λ^±_φ), in contradiction to Case (1) and the obvious fact that ℓ is weakly attracted to Λ^+_φ. Thus P ∈ Fix_+(Ψ̂). By Lemma I.1.52, there is a conjugacy class [a] that is weakly attracted to every line in the weak accumulation set of P under iteration by ψ, and so is weakly attracted to Λ^+_φ under iteration by ψ. As in Case (3), this contradicts Fact I.1.59 (2).
As shown in Example 2.4 below, by using concepts of concatenation one can sometimes construct lines in B na (Λ) not accounted for in the statement of Lemma 2.1. In the next definition we extend the usual concept of concatenation points to allow points at infinity.
Definition 2.2. Given any marked graph K and oriented paths γ_1, γ_2 ∈ B(K), we say that γ_1, γ_2 are concatenable if there exist lifts γ̃_i ⊂ K̃ with initial endpoints P^−_i ∈ K̃ ∪ ∂F_n and terminal endpoints P^+_i ∈ K̃ ∪ ∂F_n satisfying P^+_1 = P^−_2 and P^−_1 ≠ P^+_2. The concatenation of γ̃_1, γ̃_2 is the oriented path with endpoints P^−_1, P^+_2, denoted γ̃_1 ⋄ γ̃_2. Its projection to K, denoted γ_1 ⋄ γ_2, is called a concatenation of γ_1, γ_2. This operation is clearly associative and so we can define multiple concatenations. This operation is also invertible; in particular any concatenation of the form γ = α ⋄ ν ⋄ β can be rewritten as ν = ᾱ ⋄ γ ⋄ β̄. Notice that "the" upstairs concatenation γ̃_1 ⋄ γ̃_2 is well-defined, but "the" downstairs concatenation γ_1 ⋄ γ_2 is not generally well-defined: this fails precisely when P^+_1 = P^−_2 is an endpoint of the axis of some element c of F_n and neither P^−_1 nor P^+_2 is the opposite endpoint, in which case one can replace either of γ̃_1, γ̃_2 by a translate under c to get a different concatenation downstairs. This is a mild failure, however, and it is usually safe to ignore.
A subset of B(K) is closed under concatenation if, for any oriented paths γ_1, γ_2 in that subset, every concatenation γ_1 ⋄ γ_2 that is defined also lies in that subset.
Lemma 2.3. Continuing with the Notational Convention above, the set of elements of B(G) that are not weakly attracted to Λ + φ under iteration by φ is closed under concatenation. In particular, B na (Λ + φ ) is closed under concatenation. Proof. Consider a concatenation γ 1 ⋄ γ 2 with accompanying notation as in Definition 2.2. For each m ≥ 0, the path f m # (γ 1 ⋄ γ 2 ) is the concatenation of a subpath of f m # (γ 1 ) and a subpath of f m # (γ 2 ). Letting ℓ be a generic leaf of Λ + φ , by Fact I.1.57 we may write ℓ as an increasing union of nested tiles α 1 ⊂ α 2 ⊂ · · · so that each α j contains at least two disjoint copies of α j−1 . By assumption γ 1 has the property that there exists an integer J so that if α j occurs in f m # (γ 1 ) for arbitrarily large m then j ≤ J, and γ 2 satisfies the same property. This property is therefore also satisfied by γ 1 ⋄ γ 2 (with a possibly larger bound J) and so γ 1 ⋄ γ 2 is not weakly attracted to Λ + φ .
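Thus, in symbols, whenever the concatenation on the left is defined:
\[
\gamma_1, \gamma_2 \in \mathcal B_{na}(\Lambda^+_\phi) \implies \gamma_1 \diamond \gamma_2 \in \mathcal B_{na}(\Lambda^+_\phi).
\]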
We account for these kinds of examples as follows. (See also Propositions 2.18 and 2.19.) For a subgroup A < F_n with [A] ∈ A_na(Λ^±_φ), define ∂_ext(A, ψ) = ∂A ∪ ⋃ Fix_N(Ψ̂), where the union is taken over all A-related Ψ ∈ P(ψ). Let B_ext(A, ψ) denote the set of lines that have lifts with endpoints in ∂_ext(A, ψ); this set is independent of the choice of A in its conjugacy class. Define B_ext(Λ^±_φ; ψ) to be the union of the sets B_ext(A, ψ) over all [A] ∈ A_na(Λ^±_φ). Basic properties of B_ext(Λ^±_φ; ψ) are established in the next section. We conclude this section with the statement of our main weak attraction result, Theorem 2.6: B_na(Λ^+_φ) = B_ext(Λ^±_φ; ψ) ∪ B_sing(ψ) ∪ B_gen(ψ). The proof is given in Section 2.5.
Remark 2.7. The sets B ext (Λ ± φ ; ψ), B sing (ψ), and B gen (ψ) need not be pairwise disjoint. For example, every line carried by G ′ u−1 is in B ext (Λ ± φ ; ψ) and some of these can be in B sing (ψ) or in B gen (ψ).
or is a generic leaf of some element of L(ψ). This shows that Theorem 2.6 contains the Weak Attraction Theorem (Theorem 6.0.1 of [BFH00]) as a special case.
B_na(Λ^+_φ) is closed under concatenation
We continue with the notation for an inverse pair of rotationless outer automorphisms φ, ψ = φ −1 ∈ Out(F n ) established at the beginning of Section 2.1. Much of the work in this section is devoted to revealing details of the structure of B ext (Λ ± φ ; ψ). After a few such lemmas/corollaries, the main result of this section is that the union of the three subsets of B na (Λ + φ ) occurring in Theorem 2.6 is closed under concatenation; see Proposition 2.14.
We shall abuse notation for elements of the set A_na(Λ^±_φ) as described in Section I.1. Since A_na(Λ^±_φ) is a malnormal subgroup system (Proposition 1.4), this notational abuse should not cause any confusion.
Proof. Suppose that for j = 1, 2 there exist A_j ∈ A_na(Λ^±_φ) and P_j ∈ Fix_N(Ψ̂) ∩ ∂A_j. The line γ̃ connecting P_1 to P_2 projects to a line γ ∈ B_sing(ψ) that by Lemma 2.1 is not weakly attracted to Λ^+. Since P_j ∈ ∂A_j, and since by Lemma 1.5 (3) each line that is carried by A_j is contained in ⟨Z, ρ̂_r⟩, the ends of γ are contained in ⟨Z, ρ̂_r⟩, and so we may assume that γ = ρ_− ⋄ γ_0 ⋄ ρ_+ where the rays ρ_−, ρ_+ are in ⟨Z, ρ̂_r⟩. After replacing γ with a φ_#-iterate we may also assume that the central subpath γ_0 has endpoints at fixed vertices. Since none of γ, ρ_−, ρ_+ are weakly attracted to Λ^+_φ, by Lemma 2.3 neither is γ_0 = ρ̄_− ⋄ γ ⋄ ρ̄_+. Lemma 1.6 (4) implies that γ_0 is contained in ⟨Z, ρ̂_r⟩ and Lemma 1.5 (2) then shows that γ is contained in ⟨Z, ρ̂_r⟩. By Lemma 1.5 (3) it follows that γ is carried by A_na(Λ^±_φ).
Proof. If Fix(Ψ) is trivial then by Lemma I.1.20 each point of Fix_N(Ψ̂) is an isolated attractor and we are done. Otherwise, noting that the conjugacy class of each nontrivial element of Fix(Ψ) is fixed by ψ and so is not weakly attracted to Λ^+_φ, Corollary 1.8 provides a subgroup A′ < F_n with Fix(Ψ) < A′ and [A′] ∈ A_na(Λ^±_φ). By Lemma 2.10 we have A′ = A, and applying Lemma I.1.20 completes the proof.
Proof. We assume that Q ∈ ∂_ext(A_1, ψ) ∩ ∂_ext(A_2, ψ) and argue to a contradiction. After interchanging A_1 and A_2 if necessary, we may assume by Proposition 1.4 (3) and Fact I.1.2 that Q ∉ ∂A_1, and hence that Q ∈ Fix_N(Ψ̂_1) for some A_1-related Ψ_1 ∈ P(ψ). Lemma 2.10 implies that Ψ_1 is not A_2-related and so Q ∉ ∂A_2. The only remaining possibility is that Q ∈ Fix_N(Ψ̂_2) for some A_2-related Ψ_2 ∈ P(ψ). But then Lemma 2.9 implies that Q ∈ ∂A_3 for some A_3 ∈ A_na(Λ^±_φ), and then Lemma 2.10 implies that A_1 = A_3 = A_2.
Proof. Suppose that Q ∈ ∂_ext(A, ψ). If Q ∈ ∂A we are done, so we may assume that Q ∈ Fix_N(Ψ̂′) for some A-related Ψ′. If Ψ = Ψ′ we are done. Otherwise, Lemma 2.9 implies that Q ∈ ∂A′ for some A′ ∈ A_na(Λ^±_φ) and Corollary 2.12 implies that A′ = A, so again we are done.
More precisely, given lifts γ̃_j with initial and terminal endpoints P^−_j and P^+_j respectively, if P^+_1 = P^−_2 and P^−_1 ≠ P^+_2 then either there exists Ψ ∈ P(ψ) such that the three points P^−_1, P^+_1 = P^−_2, P^+_2 are in Fix_N(Ψ̂), or there exists A ∈ A_na(Λ^±_φ) such that those three points are in ∂_ext(A, ψ).
Proof. The first sentence is an immediate consequence of the second, to whose proof we now turn.
This lemma extends Lemma 3.3 of [HM11] in which it is assumed that φ is irreducible.
Putting off the proof of Lemma 2.15 for a bit, we continue with the proof of Proposition 2.14. Having finished Case 1, by symmetry we may now assume that γ_j ∉ B_ext(Λ^±_φ; ψ) for j = 1, 2. Case 2: γ_1 ∈ B_sing(ψ). If γ_2 ∈ B_sing(ψ) we are done. By Corollary 2.12, the only remaining possibility is that γ_2 is a generic leaf of an element of L(ψ). If P^+_1 ∈ Fix_N(Ψ̂) for some Ψ ∈ P(ψ) then, as shown above, Lemma 2.15 implies that P^+_2 ∈ Fix_N(Ψ̂) and hence that γ_2 ∈ B_sing(ψ), which is a contradiction. Thus P^+_1 ∈ ∂A. Since γ_2 is birecurrent, Fact I.1.8 implies that γ_2 is carried by A_na(Λ^±_φ). Applying Lemma 1.5 (6) it follows that P^+_2 ∈ ∂A and we are done. By symmetry of γ_1 and γ_2, the only remaining case is: Case 3: γ_1 and γ_2 are generic leaves of elements of L(ψ). Since they have asymptotic ends, they are leaves of the same element of L(ψ) and so are singular lines by Lemma 2.15. As we have already considered this case, the proof is complete.
It remains to prove Lemma 2.15, but we first prove the following, which is similar to Lemma 5.11 of [BH92], Lemma 4.2.6 of [BFH00] and Lemma 2.7 of [HM11].
Lemma 2.16. Suppose that f : G → G is a CT, that H_r is an EG stratum, that γ ⊂ G is a line of height r with exactly one illegal turn of height r, and that f^{K_γ}_#(γ) is r-legal for some minimal K_γ. Then K_γ ≤ K for some K that is independent of γ.
Proof. If the lemma fails there exists a sequence γ_i such that K_i = K_{γ_i} → ∞. Write γ_i = σ̄_i τ_i where the turn (σ_i, τ_i) is the illegal turn of height r. After passing to a subsequence we may assume that σ_i → σ and τ_i → τ for some rays σ and τ. The line γ = σ̄τ has height r and f^k_#(γ) has exactly one illegal turn of height r for all k ≥ 0. Lemma 4.2.6 of [BFH00] implies that there exists m > 0 and a splitting f^m_#(γ) = R̄_− · ρ · R_+ where ρ is the unique indivisible Nielsen path of height r. It follows that for all sufficiently large i, f^m_#(γ_i) has a decomposition into subpaths f^m_#(γ_i) = R̄^−_i ρ R^+_i where the height r illegal turn in ρ is the only height r illegal turn in f^m_#(γ_i). Since any such decomposition is a splitting, f^k_#(γ_i) has an illegal turn of height r for all k, in contradiction to our choice of γ_i.
Proof of Lemma 2.15. By symmetry we need prove only that ℓ ′ ∈ B sing (θ). By Fact I.1.61, each end of each generic leaf of an element of L(θ) has the same free factor support as the whole leaf, and so ℓ ′ and ℓ ′′ must be generic leaves of the same Λ ∈ L(θ).
Let f : G → G be a CT representing θ and let H_r be the EG stratum corresponding to Λ. For each j ≥ 0, there are generic leaves ℓ′_j and ℓ′′_j of Λ such that f^j_#(ℓ′_j) = ℓ′ and f^j_#(ℓ′′_j) = ℓ′′. Fixing a common end of ℓ′ and ℓ′′, the corresponding common ends of ℓ′_j and ℓ′′_j determine a maximal common subray R_j of ℓ′_j and ℓ′′_j. Denote the rays in ℓ′_j and ℓ′′_j that are complementary to R_j by R′_j and R′′_j respectively. Let γ_j = R̄′_j R′′_j. Suppose at first that each γ_j is r-legal. Lemma 5.8 of [BH92] implies that no height r edges of γ_j are cancelled when f^j(γ_j) is tightened to γ_0. Let E_j, E′_j and E′′_j be the first height r edges of R_j, R′_j and R′′_j respectively, let w_j, w′_j and w′′_j be their initial vertices and let d_j, d′_j and d′′_j be their initial directions. Let µ′_j be the finite subpath of ℓ′_j connecting w′_j to w_j. To complete the proof in this case we will show that µ′_0 is a Nielsen path, that R_0 is the principal ray determined by iterating d_0 (see Definition I.1.50), and that R′_0 is the principal ray determined by iterating d′_0. If w_j = w′_j then d_j, d′_j determine distinct gates, and otherwise w_j, w′_j are each incident to an edge of height < r. A similar statement holds for w_j, w′′_j. In all cases it follows that w_j, w′_j and w′′_j are principal vertices of f. Moreover, the following hold for all j ≥ 0: f(w_{j+1}) = w_j, f(w′_{j+1}) = w′_j, Df(d_{j+1}) = d_j, Df(d′_{j+1}) = d′_j, and f_#(µ′_{j+1}) = µ′_j. The first two of these equalities imply that w = w_j, w′ = w′_j ∈ Fix(f) are independent of j; the third and fourth imply that E = E_j and E′ = E′_j are independent of j; in conjunction with Lemma I.1.57 (2), the last equality implies that µ′ = µ′_j is a Nielsen path that is independent of j. It follows that ℓ′ is the increasing union of the subpaths f^j_#(Ē′) µ′ f^j_#(E), and so ℓ′ is a pair of principal rays connected by a Nielsen path. Applying Fact I.1.47 completes the proof that ℓ′ ∈ B_sing(θ) when each γ_j is r-legal.
It remains to consider the case that some γ_l is not r-legal. Assuming without loss that γ_0 is not r-legal, each γ_j is not r-legal. Lemma 2.16 implies that f^k_#(γ_j) has an illegal turn of height r for all k ≥ 0, and Lemma 4.2.6 of [BFH00] implies that there is a splitting γ_j = τ′_j · ρ_j · τ′′_j where some f_#-iterate of ρ_j is the unique indivisible Nielsen path ρ with height r. Since f^i_#(ρ_{j+i}) = ρ_j for all i, j ≥ 0, ρ_j = ρ for all j. Let E′ be the first edge of height r in the ray τ̄′_0 and let E be the initial edge of ρ_0. Both of these edges are contained in ℓ′ and we let µ be the subpath of ℓ′ that connects their initial vertices. Arguing as in the previous case, µ is a Nielsen path and ℓ′ is the increasing union of the subpaths f^j_#(Ē′) µ f^j_#(E), which proves that ℓ′ ∈ B_sing(θ).
Application - Proof of Theorem H
Before turning in later sections to the proof of Theorem 2.6 (Theorem G), we use it to prove the following: Corollary 2.17 (Theorem H). Given rotationless φ, ψ = φ −1 ∈ Out(F n ), a dual lamination pair Λ ± φ ∈ L ± (φ), and a line γ ∈ B, the following hold: (1) If γ is not carried by A na (Λ ± φ ) then it is either weakly attracted to Λ + φ under iteration by φ or to Λ − φ under iteration by ψ.
(2) For any weak neighborhoods V^+ and V^− of generic leaves of Λ^+_φ and Λ^−_φ, respectively, there exists an integer m ≥ 1 (independent of γ) such that at least one of the following holds: γ ∈ V^−; or φ^{2m}_#(γ) ∈ V^+; or γ is carried by A_na(Λ^±_φ). Proof. For (1) we assume that γ is not weakly attracted to Λ^+_φ under iteration of φ and that γ is not weakly attracted to Λ^+_ψ = Λ^−_φ under iteration of ψ = φ^{−1}, and we prove that γ is carried by A_na(Λ^±_φ). Applying Theorem 2.6 to both ψ and φ, we have B_na(Λ^+_φ) = B_ext(Λ^±_φ; ψ) ∪ B_sing(ψ) ∪ B_gen(ψ) and B_na(Λ^−_φ) = B_ext(Λ^±_φ; φ) ∪ B_sing(φ) ∪ B_gen(φ), and using this we proceed by cases.
This completes the proof of (1).
We prove (2) by contradiction. If (2) fails then there are neighborhoods V^+, V^− of generic leaves of Λ^+_φ, Λ^−_φ respectively, a sequence of lines γ_i ∈ B and a sequence of positive integers m_i → ∞, such that for all i we have: γ_i ∉ V^−; φ^{2m_i}_#(γ_i) ∉ V^+; and γ_i is not carried by A_na(Λ^±_φ). We may assume that V^+ has the property φ(V^+) ⊂ V^+, because generic leaves of Λ^+_φ have a neighborhood basis of such sets. Similarly, we may assume that V^− ⊂ φ(V^−). By Lemma 1.11, some subsequence of φ^{m_i}_#(γ_i) has a weak limit γ that is not carried by A_na(Λ^±_φ). To contradict (1) we show that γ is not weakly attracted to Λ^+_φ or to Λ^−_φ. By symmetry we need show only the first, namely, that the sequence φ^m_#(γ) does not weakly converge to Λ^+_φ. If it does then φ^M_#(γ) ∈ V^+ for some M. Since V^+ is open there exists I such that φ^{m_i+M}_#(γ_i) ∈ V^+ for all i ≥ I. Since φ(V^+) ⊂ V^+, it follows that φ^m_#(γ_i) ∈ V^+ for all m ≥ m_i + M and i ≥ I. We can choose i ≥ I so that m_i ≥ M, and it follows that φ^{2m_i}_#(γ_i) ∈ V^+, a contradiction.
Nonattracted lines of EG height.
We continue the Notational Conventions established at the beginning of Section 2.1. By combining Proposition I.2.18 with the equations of free factor systems [G_r] = F_supp(Λ^±_φ) = [G′_u], we may conclude that the stratum H_r is geometric if and only if the stratum H′_u is geometric. The realizations of a line γ ∈ B in the marked graphs G, G′ will be denoted γ_G, γ_{G′} respectively, or just as γ, γ′ when we wish to abbreviate the notation, or even both just as γ when we wish for further abbreviation.
In this section we focus on the special case of Theorem 2.6 (Theorem G) concerned with those lines γ such that γ_G has height r, equivalently γ_{G′} has height u. We give necessary and sufficient conditions for γ to be weakly attracted to Λ^+_φ under iteration of φ, expressed in terms of the form of γ_{G′}. This is the analog of Proposition 6.0.8 of [BFH00], which has the additional hypothesis that γ is birecurrent, and the proof of which is separated into geometric and non-geometric cases. In our present setting we drop the birecurrence hypothesis, and we also separate the proof into the non-geometric case in Lemma 2.18 and the geometric case in Lemma 2.19. The conclusions of these two lemmas describe γ_{G′} in explicit detail which, while more than we need for our applications, is included because it is needed for the proof and it helps clarify the picture. Although in this section we do not yet derive the conclusions of Theorem 2.6 (Theorem G) for the height r case, that will be done as part of the general derivation of those conclusions in Section 2.5.
Lemma 2.18 (Height r lines in the nongeometric case). Assuming that the strata H r , H ′ u are not geometric, and with the notation above, if γ ∈ B has realization γ G in G of height r and γ G ′ in G ′ of height u, and if γ is not weakly attracted to Λ + φ , then its realization γ G ′ , satisfies at least one of the following: (1) γ G ′ is a generic leaf of Λ − φ .
(2) γ G ′ decomposes as R 1 µR 2 where R 1 and R 2 are principal rays for Λ − φ and µ is either the trivial path or a nontrivial path of one of the forms α, β, αβ, αβᾱ, such that β is a nontrivial path of height < s and α is a height s indivisible Nielsen path.
(3) γ G ′ or γ G ′ −1 decomposes as R 1 µR 2 where R 1 is a principal ray for Λ − φ , R 2 is a ray of height < s, and µ is either trivial or a height s Nielsen path.
Proof. By Proposition I.2.18, since H r is not a geometric stratum, neither is H ′ u . Let α denote the unique (up to reversal) indivisible Nielsen path of height s in G ′ u , if it exists; by Fact I.2.3 and Fact I.1.42 (1) it follows that α is not closed and that we may orient α so that its initial endpoint v is an interior point of H ′ u . We adopt the abbreviated notation γ for γ G and γ ′ for γ G ′ . We first show that γ has infinitely many edges in H r by proving that if a line γ has height r and only finitely many edges in H r then γ is weakly attracted to Λ + . To prove this, write γ = γ − γ 0 γ + where γ − , γ + ⊂ G r−1 and γ 0 is a finite path whose first and last edges are in H r . Since f # restricts to a bijection on lines of height r − 1, it follows that f k # (γ) has height r for all k, and so f k # (γ 0 ) is a nontrivial path of height r for all k. By Fact I.1.35 there exists K ≥ 0 such that f K # (γ 0 ) completely splits into terms each of which is either an edge or Nielsen path of height r or a path in G r−1 , with at least one term of height r.
Let γ′_0 be the maximal subpath of f^K_#(γ_0) whose first and last edges are in H_r; then γ′_0 is nontrivial and completely split. Since the complementary subpaths of γ′_0 in f^K_#(γ) are in G_{r−1}, this yields a splitting of f^K_#(γ). By Fact I.1.42 (1), each term in this splitting of f^K_#(γ) which is an indivisible Nielsen path of height r is adjacent to a term that is an edge in H_r. It follows that at least one term in the splitting of f^K_#(γ) is an edge in H_r, implying that γ is weakly attracted to Λ^+. Since γ contains infinitely many edges in H_r, and since [G_r] = [G′_u] and the graphs G_r, G′_u are both core subgraphs, the line γ_{G′} contains infinitely many edges in H′_u. In the part of the proof of the nongeometric case of Proposition 6.0.8 of [BFH00] that does not use birecurrence, and so is true in our context, it is shown that there exists M′ > 0 so that for every finite subpath γ′_i of γ′ there exists a line or circuit τ′_i in G′ that contains at most M′ edges of H′_s such that γ′_i is a subpath of g^{k_i}_#(τ′_i) for some k_i ≥ 0. If G_{r−1} = ∅ then τ′_i is a circuit; otherwise τ′_i is a line. (This is proved in two parts. First, in what is called step 2 of that proof, an analogous result is proved in G. Then the bounded cancellation lemma is used to transfer this result to G′; the case that G_{r−1} = ∅ is considered after the case that G_{r−1} ≠ ∅.) Choose a sequence of finite subpaths γ′_i of γ′ that exhaust γ′, and let τ′_i and k_i be as above, so that γ′_i is a subpath of g^{k_i}_#(τ′_i) and so that τ′_i contains at most M′ edges of H′_u. Since γ′ contains infinitely many H′_u edges, we have k_i → +∞ as i → +∞. By Lemma I.1.54 there exists d > 0 depending only on the bound M′ such that g^d_#(τ′_i) has a splitting into terms each of which is either an edge or indivisible Nielsen path of height s or a path in G′_{u−1}. By taking i so large that k_i ≥ d we may replace each τ′_i by g^d_#(τ′_i) and each k_i by k_i − d, and hence we may assume that τ′_i has a splitting (∗) τ′_i = τ′_{i,1} · … · τ′_{i,l_i}, each of whose terms is an edge or Nielsen path of height s or a path in G′_{u−1}. The number of edges that τ′_i has in H′_u is still uniformly bounded, and so l_i is uniformly bounded. Passing to a subsequence, we may assume that l_i and the ordered sequence of height s terms in τ′_i are independent of i. We may also assume that l = l_i is minimal among all such choices of γ′_i and τ′_i.
Case A: l = 1. In this case τ ′ i = E is a single edge of H ′ u and so, by I.1.58, γ ′ is a leaf of Λ − φ . If both ends of γ ′ have height s then γ ′ is generic by Fact I.1.61 and case (1) is satisfied.
Suppose one end of γ ′ , say the positive end, has height ≤ s−1. We have a concatenation γ ′ = R 1 R 2 where the ray R 1 starts with an edge of H ′ u and the ray R 2 is contained in G ′ u−1 . By Fact I.1.37, the concatenation point is a principal vertex. By Lemma I.1.57 (2), for each m there is an m-tile in γ ′ which is an initial segment of R 1 . By Corollary I.1.60 it follows that R 1 is a principal ray. This shows that (3) is satisfied with trivial µ.
Case B: l ≥ 2. Choose a subpath ν′_i ⊂ τ′_i, with endpoints not necessarily at vertices, such that g^{k_i}_#(ν′_i) = γ′_i. Let τ′′_i be the subpath obtained from τ′_i by removing the initial segment τ′_{i,1} and the terminal segment τ′_{i,l} of the splitting (∗), so either τ′′_i = τ′_{i,2} · … · τ′_{i,l−1} or, when l = 2, τ′′_i is the trivial path at the common vertex along which τ′_{i,1} and τ′_{i,2} are concatenated. After passing to a subsequence, we may assume that τ′′_i ⊂ ν′_i; if no such subsequence existed then we could reduce l by removing either τ′_{i,1} or τ′_{i,l} from τ′_i. For the same reason, we may assume that γ′ has a finite subpath that contains g^{k_i}_#(τ′′_i) for all i. After passing to a subsequence, we may assume that µ = g^{k_i}_#(τ′′_i) is independent of i. Since the sequence of height s terms in τ′′_i is independent of i, it follows that µ is either trivial or has a splitting into terms each of which is either α, or ᾱ, or a path in G′_{u−1}. Since the endpoints of α are distinct, no two adjacent terms in this splitting can both be α or ᾱ, and so each subdivision point of the splitting is in G′_{u−1}. Since v is an interior point of H′_u, for any occurrence of α or ᾱ as a term of µ, the endpoint v must be an endpoint of µ. It follows that µ can be written in one of the forms given in item (2), after possibly inverting γ′.
Write γ ′ as R 1 µR 2 . If τ ′ 1,1 is an edge E in H ′ u then E = τ ′ i,1 for all i, and the ray R 1 is the increasing union of g k 1 # (Ē) ⊂ g k 2 # (Ē) ⊂ · · · , so R 1 is a principal ray for Λ − φ . Otherwise τ ′ i,1 is a path in G ′ u−1 for all i and R 1 is a ray in G ′ u−1 . Using τ ′ i,l similarly in place of τ ′ i,1 , R 2 is either a principal ray for Λ − φ or a ray in G ′ u−1 . At least one of R 1 and R 2 is a principal ray. If they are both principal rays then item (2) holds, otherwise (3) holds.
For stating the geometric case, Lemma 2.19, we continue with the notation laid out in the opening paragraph of Section 2.4. Assuming the strata H_r, H′_u are geometric, we let ρ_r, ρ′_u be the closed indivisible Nielsen paths in G_r, G′_u of heights r, u, respectively; by applying Proposition I.2.18, up to reorienting these Nielsen paths, ρ_r and ρ′_u represent the same conjugacy class in F_n. Lemma 2.19 (Height r lines in the geometric case). Assuming that H_r, H′_u are geometric, and with notation as above, if γ ∈ B is a line of height r that is not weakly attracted to Λ^+_φ then its realization γ_{G′} has at least one of the following forms: (1) γ_{G′} or γ̄_{G′} is the bi-infinite iterate of ρ′_u. (2) γ_{G′} is a generic leaf of Λ^−_φ.
(3) γ G ′ decomposes as R 1 µR 2 where R 1 and R 2 are principal rays for Λ − φ and µ is either the trivial path, a finite iterate of ρ ′ u or its inverse, or a nontrivial path of height < u.
(4) γ G ′ orγ G ′ decomposes as R 1 R 2 where R 1 is a principal ray for Λ − φ and the ray R 2 either has height < u or is the singly infinite iterate or ρ ′ u or its inverse.
Proof. By restricting f : G → G to the component of G r that contains H r , restricting f ′ : G ′ → G ′ to the component of G ′ u that contains H ′ u , and replacing φ, ψ with their restrictions to Out(π 1 G) = Out(π 1 G ′ ) (Fact I.1.4), we may assume that H r , H ′ u are the top strata.
Throughout this proof, to simplify notation we shall identify a downstairs abstract line γ ∈ B with its realization in G′, and an upstairs abstract line γ̃ with its realization in the universal cover G̃′, and we shall elide the subscript G′ from γ_{G′} and γ̃_{G′}.
Here is a rough outline of the proof. We use a geometric model for f ′ and H ′ u , in particular the surface S and the pseudo-Anosov mapping class θ ∈ MCG(S) which are part of the geometric model. We also use Proposition I.2.15 which identifies the unstable and stable laminations Λ u , Λ s of θ with Λ + φ , Λ − φ respectively. For each line γ of height u in G ′ , we shall define a canonical decomposition of γ as an alternating concatenation of "overpaths" and "underpaths". Roughly speaking an overpath is a subpath that begins and ends with edges of H ′ u , that has a homotopy pullback to the surface S, and that is maximal with respect to these properties. An overpath can be a finite subpath of γ, a subray, or the whole line. We show that overpaths have disjoint interiors, and we define underpaths to be the components of the complements of the interiors of the overpaths. We show that this "over-under" decomposition is natural with respect to the action of f ′ , which is captured by the slogan "the iterates of an overpath of γ are overpaths of the iterates of γ". Combining this with the hypothesis that γ is not weakly attracted to Λ + φ under iteration of φ, we shall conclude that the homotopy pullbacks to S of the overpaths of γ are not weakly attracted to Λ u under iteration of θ. Combining this with an application of Nielsen-Thurston theory (Proposition I.2.14) we will by a case analysis derive strong consequences on the structure of the overpath-underpath decomposition of γ, showing that this structure directly yields one of conclusions (1)-(4).
In order to formalize overpath-underpath decompositions we use the peripheral Bass-Serre tree F_n ↷ T associated to a geometric model as a bookkeeping device. We begin with a review of these topics from Part I [HM13b].
Geometric model. In our present context where H ′ u is the top stratum, a geometric model for f ′ and H ′ u (given in Definition I.2.4) is the same thing as a weak geometric model (Definitions I.2.1).
To simplify notation we denote L = G′_{u−1} and its total lift to G̃′ as L̃. We recall the following elements of the data comprising a (weak) geometric model for f′ and H′_u, separated into static data and dynamic data. The static data includes: a compact surface S with a distinguished upper boundary component ∂_0 S and with lower boundary B = ∂S − ∂_0 S; a quotient complex Y obtained by gluing S and L using an attaching map α : B → L which restricts to a homotopically nontrivial closed edge path on each component of B; an embedding G′ ↪ Y extending the embedding G′_{u−1} = L ↪ Y; and a deformation retraction d : Y → G′ which takes ∂_0 S, regarded as a closed curve based at the unique point p′_u = G′ ∩ ∂_0 S, to the closed indivisible height u Nielsen path ρ′_u. Let j : S ∐ L → Y denote the quotient map. The set int(S) is homeomorphically identified by j with its image, an open subset of Y. Fixing appropriate choices of base points and paths amongst them, the map j induces π_1-injective maps to π_1(Y) = F_n from the fundamental groups of S, of the components of B, and of the components of L. In the case of S we identify its fundamental group with the image under this injection, obtaining π_1 S < F_n.
The dynamic data of the geometric model consists of a pseudo-Anosov mapping class θ ∈ MCG(S) represented by a homeomorphism Θ : S → S.
The Bass-Serre tree F_n ↷ T. We describe the Bass-Serre tree of the peripheral splitting associated to the geometric model Y (Definition I.2.10). Justifying this description is a straightforward consequence of Bass-Serre theory ([SW79]) and Definition I.2.10.
Consider the following pushout diagram:

  S̃ ∐ L̃ --j̃--> Ỹ
    |              |
    q̃              q
    ↓              ↓
  S ∐ L --j--> Y

where q is the universal covering map, S̃ ∐ L̃ is the subspace of the Cartesian product (S ∐ L) × Ỹ consisting of all pairs (x, ỹ) such that j(x) = q(ỹ), and the upper and left arrows are restrictions of the two projection maps of the Cartesian product. The group F_n acts on Ỹ by deck transformations and on S ∐ L trivially, inducing a diagonal action on S̃ ∐ L̃, such that j̃ is F_n-equivariant and q̃ is a covering map with deck transformation group F_n. The components of S̃ are called S-vertex spaces, the components of L̃ are called L-vertex spaces, and the components of B̃ (the preimage of B = ∂S − ∂_0 S) are called edge spaces. These components are indexed as follows, together with their respective images in Ỹ and stabilizer groups: the S-vertex spaces are denoted S̃_s, with image Ŝ_s = j̃(S̃_s) ⊂ Ỹ and stabilizer Γ_s; the L-vertex spaces are denoted L̃_l, with image L̂_l = j̃(L̃_l) and stabilizer Γ_l; and the edge spaces are denoted B̃_b, with image B̂_b = j̃(B̃_b) and stabilizer Γ_b. Note that S̃_s is connected and the action of Γ_s on it is cocompact, so the same is true of Ŝ_s; similar statements hold for the actions of Γ_l and Γ_b. Also, by restricting q̃ we get universal covering maps of each S̃_s over S with deck transformation group Γ_s, of each L̃_l over some component of L with deck group Γ_l, and of each B̃_b over some component of B with deck group Γ_b. The domain and range restrictions of j̃ are also denoted with subscripts, e.g. j̃_s : S̃_s → Ŝ_s. Although each j̃_l is a homeomorphism, j̃_s and j̃_b need not be injective. However j̃_s restricts to a homeomorphism from the manifold interior int(S̃_s) onto an open subset of Ỹ contained in Ŝ_s, denoted by convention int(Ŝ_s), and j̃_s restricts to a bijection of components between ∂S̃_s and Ŝ_s − int(Ŝ_s).
The Bass-Serre tree T is a bipartite tree with: one S-vertex denoted V_s for each S̃_s; one L-vertex denoted V_l for each L̃_l; and one edge denoted E_b for each B̃_b, attached to the unique V_s and V_l having the properties B̃_b ⊂ S̃_s and B̂_b ⊂ L̂_l. The action F_n ↷ Ỹ induces the action F_n ↷ T. Note that T can be characterized algebraically: the conjugacy class [π_1 S] equals the set {Γ_s} of S-vertex stabilizers, and the latter corresponds bijectively to {V_s} since π_1 S is its own normalizer in F_n (Lemma I.2.7 (2)). Also, the union of the conjugacy classes constituting the subgroup system [π_1 L] equals the set {Γ_l}, which corresponds bijectively to {V_l} since [π_1 L] is malnormal (Lemma I.2.7 (1)). The tree T thus has one S-vertex V_s for each Γ_s, one L-vertex V_l for each Γ_l, with an edge connecting V_s to V_l if and only if Γ_s ∩ Γ_l is a nontrivial subgroup of F_n.
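To summarize the algebraic characterization just given, the tree T is determined by the subgroup data alone:
\[
V_s \leftrightarrow \Gamma_s \in [\pi_1 S], \qquad
V_l \leftrightarrow \Gamma_l \in [\pi_1 L], \qquad
\{V_s, V_l\} \text{ spans an edge} \iff \Gamma_s \cap \Gamma_l \neq \{1\}.
\]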
Further remarks. We are abusing the terminology and notation of Definition I.2.10 by stripping away ∂_0 S from the graph L, with the effect (see Remark I.2.11) that valence 1 vertices of the Bass-Serre tree of Definition I.2.10 have been stripped away in forming T. Other valence 1 vertices may remain in T, namely those associated to "free lower boundary circles" of S.
The failure of the maps j̃_s : S̃_s → Ŝ_s and j̃_b : B̃_b → B̂_b to be homeomorphisms stems from non local injectivity of the attaching map α : B → L. The map α factors on each component of B as a finite sequence of Stallings folds followed by a local injection, which lifts to equivariant factorizations of each j̃_s and each j̃_b, each term of which is a homotopy equivalence, and so each j̃_s and j̃_b is a homotopy equivalence. In particular each Ŝ_s is contractible.
Action of the subgroup Aut_ψ(F_n) = Aut_φ(F_n) on T. This subgroup of Aut(F_n) consists of all automorphisms representing all outer automorphisms ψ^i = φ^{−i}, i ∈ Z. The exponent of Ψ ∈ Aut_ψ(F_n) is the integer i ∈ Z for which Ψ represents ψ^i; this defines an epimorphism Aut_ψ(F_n) → Z with kernel Inn(F_n). For Ψ of non-negative exponent i ≥ 0, the lift of the topological representative (f′)^i : G′ → G′ that corresponds to Ψ is denoted f̃′_Ψ : G̃′ → G̃′. We extend the action Inn(F_n) ≈ F_n ↷ T to an action Aut_ψ(F_n) ↷ T as follows. First, under the action of Out(F_n) on conjugacy classes of subgroups of F_n, the group Aut_ψ(F_n) preserves the conjugacy class [π_1 S] = {Γ_s}, inducing the S-vertex action Aut_ψ(F_n) ↷ {Γ_s} ↔ {V_s}; we denote Ψ(V_s) = V_{Ψ(s)}. Similarly, the group Aut_ψ(F_n) preserves each component of [π_1 L], inducing the L-vertex action Aut_ψ(F_n) ↷ {Γ_l} ↔ {V_l}, which we denote Ψ(V_l) = V_{Ψ(l)}. Finally, for each Ψ ∈ Aut_ψ(F_n), if V_s, V_l are connected by an edge E_b then Γ_s ∩ Γ_l is nontrivial, so Ψ(Γ_s) ∩ Ψ(Γ_l) is nontrivial, and so V_{Ψ(s)}, V_{Ψ(l)} are connected by an edge, denoted E_{Ψ(b)}. We need a topological characterization of the action Aut_ψ(F_n) ↷ T expressed in terms of the lifts f̃′_Ψ defined for non-negative exponent. For this purpose we need some additional notation.
The embedding G′ ⊂ Y and deformation retraction d : Y → G′ lift to an F_n-equivariant embedding G̃′ ⊂ Ỹ and deformation retraction d̃ : Ỹ → G̃′. For each S-vertex space let G̃′_s = d̃(Ŝ_s); note that G̃′_s is connected since S̃_s is connected, and Γ_s acts cocompactly on G̃′_s since it acts cocompactly on S̃_s; it follows that we may naturally identify ∂Γ_s = ∂Ŝ_s = ∂G̃′_s ⊂ ∂F_n, from which it follows in turn that each line in G̃′ with ideal endpoints in ∂Γ_s is contained in the subgraph G̃′_s. Let H′_{u,s} be the subgraph consisting of all edges of the total lift of H′_u that are contained in G̃′_s (that intersection may contain some isolated vertices, which we avoid by defining H′_{u,s} in this manner). Note that the components of the subgraph G̃′_s \ H′_{u,s} are precisely the components of Ŝ_s − int(Ŝ_s), each a component B̂_b corresponding bijectively to the edges E_b incident to V_s. Given a topological representative of an outer automorphism of F_n, recall the usual correspondence between lifts of that topological representative to the universal cover and automorphisms representing that outer automorphism, defined by inducing the same continuous extension on ∂F_n (see e.g. Section I.1.5.3). If Ψ ∈ Aut_ψ(F_n) represents ψ^i with i ≥ 0 then the map (f′)^i : G′ → G′ is a topological representative of ψ^i, and the lift of (f′)^i that corresponds to Ψ is denoted f̃′_Ψ : G̃′ → G̃′. We use the same symbol for the continuous extension to the Gromov compactification f̃′_Ψ : G̃′ ∪ ∂F_n → G̃′ ∪ ∂F_n. Full height lines, realization in T, and over-under decompositions. Consider a line γ̃ in G̃′ of full height, meaning that γ̃ contains an edge of the total lift of H′_u. We define the realization γ̃_T of γ̃ in T, and in parallel we define the over-under decomposition of γ̃ in G̃′.
To start, we define the S-vertices in γ̃_T and their associated overpaths in γ̃. Given an S-vertex V_s ∈ T, we put V_s in γ̃_T if and only if γ̃ ∩ int(Ŝ_s) ≠ ∅, equivalently γ̃ ∩ H′_{u,s} contains an edge. For any S-vertex V_s ∈ γ̃_T its associated overpath γ̃_s ⊂ γ̃ is defined to be the longest subpath of γ̃ having the property that each ideal endpoint of γ̃_s is in ∂Γ_s and the edge of γ̃_s incident to each finite endpoint of γ̃_s is in H′_{u,s}. We note that γ̃_s ⊂ G̃′_s. We note also that distinct overpaths have disjoint interiors; that is, given overpaths γ̃_s, γ̃_{s′} ⊂ γ̃ with s ≠ s′, their intersection γ̃_s ∩ γ̃_{s′}, which is a path in G̃′_s ∩ G̃′_{s′} ⊂ L̃, is either empty or a common endpoint. For if this were not true then: if γ̃_s = γ̃_{s′} we would contradict the fact that L̃ contains no edges of the lift of H′_u; whereas if γ̃_s ≠ γ̃_{s′} then the intersection γ̃_s ∩ γ̃_{s′} would have a finite endpoint x with incident edge E such that x is also a finite endpoint of one of γ̃_s or γ̃_{s′} with incident edge E, and hence E is an edge of the lift of H′_u, also a contradiction. Next we define the underpaths of γ̃ and their associated T-vertices. The underpaths are the components of γ̃ − ∪_s int(γ̃_s), a disjoint union of possibly degenerate subintervals of γ̃, each contained in L̃. Given an L-vertex V_l ∈ T, we put V_l in γ̃_T if and only if one of the underpaths, denoted γ̃_l, is contained in the L-vertex space L̃_l.
Finally, given an edge E_b ⊂ T with endpoints V_s, V_l, we put E_b in γ̃_T if and only if γ̃_s ∩ γ̃_l is nonempty, in which case that intersection is a point that we denote p_b.
This completes the definition of γ̃_T, although we must still check that it is indeed a path in the tree T. Choosing an orientation of γ̃, by construction we have decomposed γ̃ into an alternating concatenation of overpaths and underpaths, what we call the over-under decomposition of γ̃, and associated to this decomposition we have an expression of γ̃_T as a concatenation of edges of T. To prove that γ̃_T is a path it suffices to show that this concatenation is locally injective at each vertex. Supposing that in γ̃_T the S-vertex V_s is preceded by an edge E_b and followed by an edge E_{b′}, it follows that the overpath γ̃_s is a finite path with endpoints p_b ∈ B̂_b and p_{b′} ∈ B̂_{b′}; the desired inequality E_b ≠ E_{b′} follows from the inequality B̂_b ≠ B̂_{b′}, which is true because, otherwise, it would follow that γ̃_s ⊂ B̂_b = B̂_{b′}, contradicting that γ̃_s contains an edge of the lift of H′_u. And supposing that in γ̃_T the L-vertex V_l is preceded by an edge E_b with opposite S-vertex V_s and followed by an edge E_{b′} with opposite S-vertex V_{s′}, by construction the over-under decomposition has three successive terms γ̃_s γ̃_l γ̃_{s′}, and so by construction γ̃_s and γ̃_{s′} have disjoint interiors; but γ̃_s contains every H′_{u,s} edge in γ̃ including at least one such edge, and γ̃_{s′} contains every H′_{u,s′} edge in γ̃ including at least one such edge, and it follows that V_s ≠ V_{s′} and so E_b ≠ E_{b′}.
Aut_ψ(F_n)-equivariance. We show that realization of lines in T is equivariant under the actions of Aut_ψ(F_n) on lines and on T, in the sense that the following equation holds for each Ψ ∈ Aut_ψ(F_n) and each top height line γ̃: (∗) (Ψ(γ̃))_T = Ψ(γ̃_T). Since this equation is obviously true for Ψ ∈ Inn(F_n) of exponent zero, it suffices to consider the case of positive exponent 1, meaning Ψ corresponds to f̃′_Ψ : G̃′ → G̃′ which is a lift of f′. In this case the line on the left hand side of this equation is the realization in T of the line (f̃′_Ψ)_#(γ̃), which by the Bounded Cancellation Lemma (see Fact I.1.5) is characterized as the unique line in G̃′ at finite Hausdorff distance from the image f̃′_Ψ(γ̃), and so (f̃′_Ψ)_#(γ̃) is the unique line in G̃′ contained in the image f̃′_Ψ(γ̃). We may therefore prove the above equation by examining directly how to obtain the over-under decomposition of the line (f̃′_Ψ)_#(γ̃) from the over-under decomposition of γ̃.
If γ̃_T = V_s degenerates to a single S-vertex then γ̃ ⊂ G̃′_s, hence (f̃′_Ψ)_#(γ̃) ⊂ G̃′_{Ψ(s)}, which implies that (Ψ(γ̃))_T = V_{Ψ(s)} = Ψ(γ̃_T) and we are done. We turn to the remaining case that γ̃_T does not degenerate to a single S-vertex. Write the over-under decomposition of γ̃ as the concatenation of paths α_j, j ∈ J, where J ⊂ Z is some subinterval, so that if j is even then α_j = γ̃_{s(j)} is an overpath and if j is odd then α_j = γ̃_{l(j)} is an underpath. For each j ∈ J let p_j, p_{j+1} ∈ G̃′ ∪ ∂F_n be the initial and terminal endpoints of α_j. We shall define for each j ∈ J a path δ_j ⊂ G̃′ which will turn out to be the overpath or underpath in (f̃′_Ψ)_#(γ̃) corresponding to α_j. Set β_j = (f̃′_Ψ)_#(α_j), with initial and terminal endpoints q_j = f̃′_Ψ(p_j) and q_{j+1} = f̃′_Ψ(p_{j+1}), using the continuous extension of f̃′_Ψ over ∂F_n at any ideal endpoints. We first define δ_j when j ∈ J is even. Note that if j − 1 ∈ J then p_j ∈ L̃_{l(j−1)} and hence q_j ∈ L̃_{Ψ(l(j−1))}; and similarly for q_{j+1}, depending on whether or not j + 1 ∈ J. It follows that β_j is contained in the connected subgraph St(G̃′_{Ψ(s(j))}) ⊂ G̃′ consisting of the union of G̃′_{Ψ(s(j))} with whichever of the L-vertex spaces L̃_{Ψ(l(j−1))}, L̃_{Ψ(l(j+1))} is defined. Note that the edges of the lift of H′_u in St(G̃′_{Ψ(s(j))}) are precisely those in H′_{u,Ψ(s(j))}. The path β_j must contain at least one of those edges, because q_j, q_{j+1} are separated by at least one edge of the lift of H′_u. To see why: if j − 1, j + 1 are both defined then L̃_{l(j−1)}, L̃_{l(j+1)} are distinct components of L̃ and so L̃_{Ψ(l(j−1))}, L̃_{Ψ(l(j+1))} are distinct components and are separated by at least one edge of the lift of H′_u; whereas if only one of j − 1, j + 1 is defined, say j − 1 is defined, then the ideal point p_{j+1} ∈ ∂Γ_{s(j)} is not contained in the Gromov boundary of the component L̃_{l(j−1)} that contains p_{j−1}, and so the ideal point q_{j+1} ∈ ∂Γ_{Ψ(s(j))} is not contained in the Gromov boundary of the component L̃_{Ψ(l(j−1))} that contains q_{j−1}, and so L̃_{Ψ(l(j−1))} is separated from q_{j+1} ∈ ∂G̃′ by at least one edge of the lift of H′_u. It therefore makes sense to define δ_j to be the longest subpath of β_j having the property that each ideal endpoint of δ_j is contained in ∂Γ_{Ψ(s(j))} and each finite endpoint of δ_j is incident to an edge of δ_j in H′_{u,Ψ(s(j))}. Let r_j, r_{j+1} be the initial and terminal endpoints of δ_j. Note that whichever of r_j, r_{j+1} is infinite, it is equal to the corresponding q_j, q_{j+1}, and in particular is contained in ∂Γ_{Ψ(s(j))} = ∂G̃′_{Ψ(s(j))}. Note also that whichever of the points r_j, r_{j+1} is finite, it is an endpoint of an edge of H′_{u,Ψ(s(j))} and so is contained in G̃′_{Ψ(s(j))}. It follows that δ_j ⊂ G̃′_{Ψ(s(j))}.
By construction, the paths δ j concatenate without cancellation and form the over-under decomposition of some line. Also by construction that line is at finite Hausdorff distance from f ′ Ψ (γ), and so that line is equal to (f ′ Ψ ) # (γ), whose realization in T equals (Ψ(γ)) T . Also by construction, the path in T corresponding to the concatenation of the δ j is precisely Ψ(γ T ). This completes the proof of the equivariance equation ( * ).
Proper geodesic overpaths do not cross the stable lamination. Fix a hyperbolic structure on S with totally geodesic boundary, and let Λ u , Λ s ⊂ int(S) be the unstable and stable laminations associated to the pseudo-Anosov mapping class θ. To avoid confusion with the label "s" we shall speak of "stable principal regions" rather than "principal regions of Λ s ". For each S-vertex V s ∈ T this lifts to a hyperbolic structure with totally geodesic boundary on the universal covering space S s with deck transformation group Γ s acting by isometries. We will use the notions of proper geodesics (lines, rays, and arcs) and proper equivalence of proper geodesics, in S and in each S s , as given in Definition I.2.13. Choose once and for all a homeomorphism Θ : S → S representing the mapping class θ, such that Θ preserves Λ s and Λ u and their principal regions; since φ is rotationless it follows that Θ preserves each individual principal region and fixes its cusps ([Mil82], Theorem 9).
We have the following composition of quasi-isometries with uniform constants independent of s, and the associated composition of continuous extensions to Gromov compactifications. Let Λ u s ⊂ int( S s ) be the total lift of Λ u , and let Λ + φ also denote its total lift to G ′ . It follows by Proposition I.2.15 that for each leaf ℓ ⊂ Λ u s its image d s • j s (ℓ) is uniformly Hausdorff close in G ′ to a unique leaf of Λ + φ . Given a full height line γ in G ′ we define its proper geodesic overpaths in S. Choose a lift γ ⊂ G ′ of γ. For each of the overpaths γ s ⊂ G ′ s ⊂ S s , choose γ s ⊂ S s to be a geodesic whose endpoints in ∂ S s ∪ ∂Γ s map to the endpoints of γ s under the map d s • j s (we shall address below the non-uniqueness of the choice of γ s , which may occur due to noninjectivity of the restriction of j s to lower boundary components). It follows that any finite endpoints of γ s are contained in the lower boundary ∂ S s ∩ B. We may immediately rule out the possibility that γ s ⊂ ∂ S s as follows. If γ s were contained in the lower boundary ∂ S s ∩ B then it would follow that γ s ⊂ L ∩ G ′ s , and so γ is a line in L = G ′ u−1 , contradicting that γ has full height. If γ s is contained in the upper boundary ∂ S s − B then it has no finite endpoints and so must be an entire upper boundary component, from which it follows that γ is a bi-infinite iterate of ρ ′ u or ρ̄ ′ u , verifying conclusion (1) of Lemma 2.18. Having reduced to the case γ s ⊄ ∂ S s , and recalling Definition I.2.13, it follows that: • Each γ s is a proper geodesic, with finite endpoints in ∂ S s and interior in int( S s ).
Projecting γ s down to S we obtain a downstairs proper geodesic γ s , which we shall call a proper geodesic overpath of γ. Note that, once an orientation of γ is fixed, the linearly ordered sequence of proper geodesic overpaths of γ is well-defined independent of the choice of a lift γ. Note also that γ s , although not well-defined, is well-defined up to proper equivalence (Definition I.2.13), meaning that its infinite endpoints are well-defined and its finite endpoints are contained in well-defined lower boundary components.
Assuming now the hypothesis that the top height line γ ∈ B is not weakly attracted to Λ + φ under iteration of φ, we prove: • No proper geodesic overpath γ s of γ crosses the stable lamination.
We shall directly prove the contrapositive: if γ s crosses the stable lamination then γ is weakly attracted to Λ + φ . Choose a lift γ ⊂ G ′ of γ with overpath γ s projecting to γ s . Choose Φ ∈ Aut ψ (F n ) representing φ so that Φ(Γ s ) = Γ s , so Φ(s) = s. Denote Ψ = Φ −1 representing ψ, and so Ψ(s) = s. Consider the sequence of lines Φ i (γ), i ≥ 0, represented by lines in G ′ denoted γ i , with realization in T denoted γ i,T . It follows by equation ( * ) that V s ∈ γ i,T for all i, with associated overpaths denoted γ i,s ⊂ G ′ s , associated proper geodesic overpaths upstairs denoted γ i,s ⊂ S s , and associated proper geodesic overpaths downstairs denoted γ i,s . We may choose a lift Θ : S s → S s of Θ : S → S whose action on ∂Γ s agrees with the action of Φ; it follows that Θ and Φ also act compatibly on boundary components, in that Θ( B b ) = B Φ(b) . Combining this with equation ( * ) it follows that for each i the proper geodesics Θ(γ i,s ) and γ i+1,s are properly equivalent in the sense of Definition I.2.13, meaning that they have the same ideal endpoints and their finite endpoints are in the same boundary components. Using the notation [·] for proper equivalence, it follows that the sequence of proper geodesics γ i,s represents the iterated sequence of proper equivalence classes θ i [γ s ].
Assuming that γ s crosses the stable lamination, it follows by Nielsen-Thurston theory, Proposition I.2.14, that γ s is weakly attracted to the unstable lamination under iteration of θ, as stated in item (3) of that proposition: for each ǫ > 0 and M > 0 there exists I such that if i ≥ I then γ i,s has a subpath of length ≥ M at Hausdorff distance < ǫ from some subsegment of some leaf of the unstable lamination in S s . From this it follows that as i → ∞ the image d s • j s (γ i,s ) contains longer and longer segments in common with leaves of Λ + φ , and so γ i itself contains longer and longer such segments, and so γ i contains longer and longer segments of leaves of Λ + φ . Since all leaves of Λ + φ are generic (Proposition I.3) it follows that γ is weakly attracted to Λ + φ . Completion of the proof. Consider a line γ ∈ B of top height in G ′ which is not weakly attracted to Λ + φ , and choose a lift γ ⊂ G ′ . Consider an S-vertex V s ∈ γ T with corresponding proper geodesic overpaths γ s ⊂ G ′ s upstairs and γ s downstairs. Since γ s does not cross Λ s it follows that γ s and γ s are not proper geodesic arcs with endpoints in the boundary, and so γ s is not a finite arc. It follows that V s is not an interior point of the geodesic γ T . This being true for all S-vertices in γ T , and γ T having at least one S-vertex by virtue of γ being of top height, up to choice of orientation the path γ T in T has one of the following forms, with additional descriptions to follow: Degenerate S-point: γ T = V s . In this case γ = γ s is a geodesic line in int( S s ) and is either a leaf of Λ s or contained in some stable principal region P .
One S-endpoint: In this case γ s is a proper geodesic ray contained in some stable principal region P projecting to a stable crown principal region P .
Two S-endpoints: In this case, for each choice of ±, each of γ s± is a proper geodesic ray contained in some stable principal region P ± projecting to some stable crown principal region P ± .
We justify the additional descriptions in each case. In the case "Degenerate S-point" it follows immediately that γ s is a proper geodesic line in S s (resp. γ s in S), and since it does not cross Λ s it must be either a leaf of Λ s or contained in a principal region of Λ s ; the rest follows. In the cases "One S-endpoint", "Two S-endpoints" it follows immediately that γ s , γ s± (resp.) is a proper geodesic ray, and since it does not cross Λ s and has an endpoint in ∂S it follows that this ray is contained in some crown principal region; the rest follows. We now prove in each case that one of conclusions (2), (3), or (4) of Lemma 2.19 holds (the case leading to conclusion (1) has already been handled above). In each case of the proof, we assert that certain rays are principal rays, and this assertion is justified by application of Fact I.1.49 (3b). We consider first the case "Degenerate S-point" as it is denoted above. If γ s is a leaf of Λ s then by Proposition I.2.15 the line γ is a leaf of Λ − φ , which is conclusion (2). Suppose then that γ s is contained in a principal region P of Λ s lifting P . Since ψ is rotationless, we may choose a representative Ψ ∈ Aut ψ (F n ) representing ψ so that Ψ(Γ s ) = Γ s and so that Ψ fixes the points of ∂F n corresponding to points at infinity of P , namely its cusps and, in the case P is a crown, the points of ∂F n corresponding to the ideal endpoints of the component of ∂ S s contained in P . Corresponding to Ψ there is a lift Θ −1 : S s → S s of Θ −1 that preserves the principal region P , fixing each of its points at infinity. Let ξ 1 , ξ 2 be the ideal endpoints of γ s . If ξ i is not a cusp then P is a crown and ξ i is one of the ideal endpoints of the component of ∂ S s contained in P ; it follows that at least one of ξ 1 , ξ 2 is a cusp. If ξ i is a cusp of P then the following hold: ξ i is an attracting point for the action of Θ −1 on ∂Γ s (Proposition I.2.12), and so ξ i is an attracting point for the action of Ψ on ∂Γ s ; ξ i is an attracting point for the action of Ψ on all of ∂F n (Fact I.1.20); ξ i is represented by a principal ray R i generated by an oriented edge E i with fixed initial direction and fixed initial vertex ṽ i (Fact I.1.49), and furthermore E i ⊂ H ′ u,s , for otherwise either ξ i ∈ ∂Γ s or ξ i ∈ ∂ L l for some l.
If ξ 1 , ξ 2 are both cusps, choose such principal rays R 1 , R 2 , and note that μ = [ṽ 1 , ṽ 2 ] is either a trivial path or a Nielsen path of f ′ Ψ . The path μ, if not trivial, decomposes uniquely into fixed edges and indivisible Nielsen paths of f ′ Ψ . Choose R 1 , R 2 so as to minimize the number of copies of lifts of ρ ′ u or ρ̄ ′ u in μ. We claim that the interior of μ is disjoint from the interiors of R 1 and R 2 , implying that γ = R 1 μ R 2 and proving conclusion (3). If the claim fails, if say int(μ) ∩ int( R 1 ) ≠ ∅, then the first term of the decomposition of μ contains an edge of H ′ u and so by Fact I.1.40 that term is a lift of ρ ′ u or ρ̄ ′ u that we denote αβ, and so μ = αβ μ ′ . Applying Lemma I.1.51 it follows that ( R 1 − α) ∪ β is also a principal ray representing ξ 1 , whose base point is connected to the base point of R 2 by the path μ ′ , contradicting minimality.
If only one of ξ 1 , ξ 2 is a cusp of P , say ξ 1 , then P is the universal cover of a crown, its boundary contains a component l of ∂ S s , and ξ 2 is an ideal endpoint of l. If l = B b is a lower boundary component then γ = R 1 R 2 where the ray R 2 is the maximal subpath of γ contained in j s (l) = B b . The ray R 1 = γ − R 2 is the minimal subpath containing every H ′ u,s edge of γ. Since f ′ Ψ is a principal lift fixing ξ 1 , ξ 2 it follows that f ′ Ψ fixes the common base point of R 1 and R 2 and fixes the initial direction of R 1 , and therefore R 1 is a principal ray representing ξ 1 , proving conclusion (4). If l is an upper boundary component the same analysis works except that the line j s (l) is a lift of the bi-infinite iterate of ρ ′ u or ρ̄ ′ u , the subray R 2 ⊂ γ is the maximal subray that is a lift of a singly infinite iterate of ρ ′ u or ρ̄ ′ u , and R 1 = γ − R 2 ; it still holds that f ′ Ψ fixes the common base point of R 1 and R 2 and the initial direction of R 1 , and that R 1 is a principal ray representing ξ 1 , also verifying conclusion (4).
The remaining cases "One S-endpoint" and "Two S-endpoints" are very similar to each other, the first leading to conclusion (4) and the second to (3).
Consider the case "One S-endpoint" as it is denoted above. We have γ = R 1 R 2 where R 1 = γ s and R 2 = γ l . We know that γ s is contained in a stable principal region P ⊂ S s covering a crown principal region P ⊂ S. We may choose Ψ ∈ Aut ψ (F n ) and Θ : S s → S s as in the case "Degenerate S-point", fixing points at infinity of P . Note also that Θ preserves the component B b of ∂ S s corresponding to E b , that f ′ Ψ is a principal lift preserving B b , and that the set Fix( f ′ Ψ ) ∩ B b is nonempty and invariant under Γ b . From the hypothesis of the case "One S-endpoint" the rays R 1 = γ s ⊂ S s and R 2 = γ l ⊂ L l have a common finite endpoint x ∈ B b , and the initial edge of γ s is in H ′ u,s . It follows that R 1 is the principal ray representing ξ 1 , proving conclusion (4).
In the remaining case "Two S-endpoints" one carries out the analysis of the previous paragraph on each of the rays R 1 = γ s− and R 2 = γ s+ , proving that they are principal rays, and setting µ = γ l (which may be trivial) we have proved conclusion (3).
General nonattracted lines and the Proof of Theorem G
We are given rotationless φ, ψ = φ −1 ∈ Out(F n ), a lamination pair Λ ± φ ∈ L ± (φ), and a CT f : G → G representing φ with EG stratum H r corresponding to Λ + φ (note that we are abandoning the notational conventions of Section 2.1).
From Definition 1.2 we have the path set ⟨Z, ρ̂ r ⟩ ⊂ B. From Lemma 1.5 this path set is a groupoid and each line γ ∈ ⟨Z, ρ̂ r ⟩ is carried by A na (Λ ± φ ). Recall also Lemma 2.1 which says that γ ∈ B na (Λ + φ ) as long as it satisfies at least one of the following conditions.
Since each line carried by A na (Λ ± φ ) is contained in B ext (Λ ± φ ; ψ), it follows that for each line γ ∈ ⟨Z, ρ̂ r ⟩ we have γ ∈ B ext (Λ ± φ ; ψ); we use this repeatedly in this section.
To simplify the notation of the proof we define the set of good lines in B, denoted B good (Λ ± φ ; ψ), and we repeatedly use Proposition 2.14 which with this notation says that B good (Λ ± φ ; ψ) is closed under concatenation.
The conclusion of Theorem G says that B na (Λ + φ ) = B good (Λ ± φ ; ψ). One direction of inclusion, namely B na (Λ + φ ) ⊃ B good (Λ ± φ ; ψ), follows from Lemma 2.1 and the fact that each line in B ext (Λ ± φ ; ψ) is a concatenation of lines in B sing (ψ) and lines carried by A na (Λ ± φ ). We turn now to the proof of the opposite inclusion B na (Λ + φ ) ⊂ B good (Λ ± φ ; ψ). Given γ ∈ B na (Λ + φ ), if the height of γ is less than r then γ ∈ ⟨Z, ρ̂ r ⟩ and we are done. Henceforth we proceed by induction on height. Define an inductive concatenation of γ to be an expression of γ as a concatenation of finitely many lines in B good (Λ ± φ ; ψ) and at most one line ν of height lower than γ. If we can show that γ has an inductive concatenation, we prove that γ is good as follows. In some cases ν does not occur in the concatenation and so γ is good by Proposition 2.14. Otherwise, using invertibility of concatenation, it follows that ν is expressed as a concatenation of good lines plus the line γ, all of which are known to be in B na (Λ + φ ). Applying Lemma 2.3 we therefore have ν ∈ B na (Λ + φ ). Applying induction on height it follows that ν is good, and so again γ is good by Proposition 2.14.
The induction step breaks into two major cases, depending on whether or not the stratum of the same height as γ is NEG or EG. For the case of an NEG stratum we will use the following: Lemma 2.20. Suppose that φ, ψ = φ −1 ∈ Out(F n ) are rotationless, that f : G → G is a CT representing φ, that E s is the unique edge in an NEG stratum H s , and that both endpoints of E s are contained in G s−1 . Let E s be a lift of E s , let f : G → G be the lift of f that fixes the initial endpoint of E s and let Φ be the automorphism corresponding to f . Then Ψ = Φ −1 is principal. Moreover there is a line σ ∈ B sing (ψ) that has height s, that crosses E s exactly once and that lifts to a line with endpoints in Fix N (Ψ).
Proof. By Fact I.1.44 and Definition I.1.29 (6), no component of G s−1 is contractible. Letting C 1 , C 2 ⊂ G be the components of the full pre-image of G s−1 that contain the initial and terminal endpoints of E s respectively, there are nontrivial free factors B 1 , B 2 that satisfy ∂B j = ∂ C j . Each of C 1 , C 2 is preserved by f and so each of B 1 , B 2 is Ψ-invariant. By Fact I.1.21 applied to Ψ| B j , there exist m > 0 and points P j ∈ Fix N ( Ψ m ) ∩ ∂ C j for j = 1, 2. Since the line σ connecting P 1 to P 2 is not birecurrent, it does not project to either an axis or a generic leaf of some element of L(φ −1 ). Thus Ψ m ∈ P (ψ). Since ψ is rotationless, Ψ ∈ P (ψ) and σ ∈ B sing (ψ).
Fix now s ≥ r and assume as an induction hypothesis that all lines in B na (Λ + φ ) of height less than s are contained in B good (Λ ± φ ; ψ), and let γ ∈ B na (Λ + φ ) have height s.
Let γ be a lift of γ and let P and Q be its initial and terminal endpoints respectively.
Case 1: H s is NEG. Let E s be the unique edge in H s . If E s is closed and γ is a bi-infinite iterate of E s then E s ⊂ Z and γ ∈ ⟨Z, ρ̂ r ⟩, so γ ∈ B ext (Λ ± φ ; ψ). We may therefore assume that both endpoints of E s belong to G s−1 .
Orient E s so that its initial direction is fixed. Recall (Lemma 4.1.4 of [BFH00]) that for each occurrence of E s or E s in the representation of γ as an edge path, the line γ splits at the initial vertex of E s , and we refer to this as the highest edge splitting vertex determined by the occurrence of E s . We also use this terminology for lifts of E s in the universal cover. By Fact I.1.37, highest edge splitting vertices are principal.
Case 1A: Both ends of γ have height s. In this case γ has a splitting in which each term is finite. Since γ is not weakly attracted to Λ + φ , neither is any of the terms in the splitting. Lemma 1.6 (4) implies that each term is contained in ⟨Z, ρ̂ r ⟩ and so γ is contained in ⟨Z, ρ̂ r ⟩ and we are done.
Case 1B: Exactly one end of γ has height s. We assume without loss that the initial end of γ has height s. Pick a lift γ, let E s be the last lift of E s crossed by γ, let x ∈ γ be the highest edge splitting vertex determined by E s , and let γ = R −1 − · R + be the splitting at x. The ray R − has height s and crosses lifts of E s infinitely often, and as in case 1A the projected ray R − is contained in ⟨Z, ρ̂ r ⟩. It follows that there exists a subgroup A ∈ A na (Λ + φ ) such that P ∈ ∂A. Let f be the lift of f that fixes x and let Φ be the corresponding element of P (φ). Lemma 2.20 implies that Ψ = Φ −1 ∈ P (ψ).
We claim that A is Φ-invariant. By Lemma 1.5 (6) it suffices to show that Φ(∂A) ∩ ∂A ≠ ∅. This is obvious if P ∈ Fix( Φ) so assume otherwise. The ray f # ( R − ) is contained in ⟨Z, ρ̂ r ⟩ by Lemma 1.6 (2), so P and Φ(P ) bound a line that projects into ⟨Z, ρ̂ r ⟩ and so is carried by A na (Λ ± φ ). Another application of Lemma 1.5 (6) implies that Φ(P ) ∈ ∂A as desired.
By Fact I.1.21 applied to Ψ| A, the set Fix N ( Ψ m ) ∩ ∂A is nonempty for some m > 0. Since ψ is rotationless and Ψ ∈ P (ψ), we may take m = 1, from which it follows that Ψ is A-related. By Lemma 2.20, there exist P ′ , Q ′ ∈ Fix N (Ψ) so that the line σ = P ′ Q ′ crosses E s in the same direction as γ and crosses no other edge of height ≥ s, and σ projects to σ ∈ B sing (ψ). Let σ = R ′ − −1 · R ′ + be the highest edge splitting determined by x. Assuming that P ≠ P ′ , the line μ = P P ′ has endpoints in ∂A ∪ Fix N (Ψ) and so projects to µ ∈ B ext (Λ + φ ). If γ crosses E s in the backwards direction then E s is the last edge of both R − and R ′ − and each of R + and R ′ + have height ≤ s − 1; otherwise each of R + and R ′ + is a concatenation of E s followed by a ray of height ≤ s − 1. In either case, assuming that Q ≠ Q ′ , it follows that the line ν = Q ′ Q has height ≤ s − 1. We therefore have an inductive concatenation γ = µ ⋄ σ ⋄ ν, with µ omitted when P = P ′ and ν omitted when Q = Q ′ , and Case 1B is completed.
Case 1C: Neither end of γ has height s. We induct on the number m of height s edges in γ. The base case, where m = 0, follows from induction on s. Let γ = R −1 − · R + be the splitting determined by the last highest edge splitting vertex x in γ, let f be the lift of f that fixes x, and let Φ ∈ P (φ) correspond to f . As in Case 1B, from Lemma 2.20 it follows that Ψ = Φ −1 ∈ P (ψ) and that there exist P ′ , Q ′ ∈ Fix N (Ψ) so that the line σ connecting P ′ to Q ′ crosses the last height s edge of γ in the same direction as γ and crosses no other edge of height ≥ s. Let σ = R ′ − −1 · R ′ + be the highest edge splitting determined by x. The line µ 1 = P P ′ is obtained by tightening R −1 − R ′ − , and the line µ 2 = Q ′ Q is obtained by tightening R ′ + −1 R + . These lines have height ≤ s, cross fewer than m edges of height s, and are not weakly attracted to Λ + φ by Lemma 2.3, because the rays R − , R + , R ′ − and R ′ + are not weakly attracted to Λ + φ . By induction on m we have µ 1 , µ 2 ∈ B good (Λ ± φ ; ψ). Since σ ∈ B sing (ψ), it follows that γ = µ 1 ⋄ σ ⋄ µ 2 ∈ B good (Λ ± φ ; ψ), completing Case 1C.
Case 2: H s is EG. Let Λ + s ∈ L(φ) be the lamination associated to H s with dual lamination denoted Λ − s ∈ L(ψ). Applying Theorem I.1.30 with C being [G r ] ⊏ [G s ], let f ′ : G ′ → G ′ be a CT representing ψ with EG stratum H ′ r ′ associated to Λ − φ and EG stratum H ′ s ′ associated to Λ − s so that [G r ] = [G ′ r ′ ] and [G s ] = [G ′ s ′ ]. Let γ ′ be the realization of γ in G ′ , a line of height s ′ . Using the F n -equivariant identification ∂ G s ≈ ∂ G ′ s ′ , there is a lift γ ′ of γ ′ with endpoints P, Q.
Case 2A: γ is not weakly attracted to Λ + s . This is the case where we apply Lemmas 2.18 and 2.19. In the situation where γ ′ is a singular line of ψ or a generic leaf of Λ − s , or in the geometric situation where γ ′ is a bi-infinite iterate of the height s ′ closed indivisible Nielsen path ρ ′ s ′ , we have γ ′ ∈ B good (Λ ± φ ; ψ) and we are done. The situation where γ ′ is a singular line of ψ includes all cases of Lemmas 2.18 and 2.19 where γ ′ = R −1 − µR + , each of R − , R + is either a height s ′ principal ray or a singly infinite iterate of a height s ′ closed indivisible Nielsen path, and µ is either trivial or a height s ′ Nielsen path. We may therefore assume that none of these situations occurs. In all remaining situations, we divide into two subcases depending on whether one or two ends of γ ′ have height s ′ .
Consider first the subcase where only one end of γ ′ , say the initial end, has height s ′ . By applying Lemma 2.18 (3) or Lemma 2.19 (4) we obtain a decomposition γ ′ = R −1 − µR + where R − is a height s ′ principal ray, µ is either a trivial path or a height s ′ Nielsen path, and the ray R + has height < s ′ . Lifting the decomposition of γ ′ we obtain a decomposition γ ′ = R −1 − μ R + where R − , R + have endpoints P, Q. Let x be the initial point of R + , lifting to the initial point x of R + . The component Γ of the full pre-image of G ′ s ′ −1 that contains x is invariant under a principal lift f ′ with corresponding Ψ ∈ P (ψ), and Γ is infinite, so there is a free factor B such that Q ∈ ∂B = ∂Γ. Since Ψ is principal, Lemma I.1.21 implies the existence of Q ′ ∈ Fix N ( Ψ) ∩ ∂Γ. The line τ ′ connecting P to Q ′ projects to τ ′ ∈ B sing (ψ); let τ be the realization of τ ′ in G. The line ν ′ = τ ′−1 ⋄ γ ′ is contained in Γ and so projects to a line ν ′ = τ ′−1 ⋄ γ ′ of height < s ′ whose realization in G is a line ν of height < s. We obtain an inductive concatenation γ = τ ⋄ ν, completing the first subcase of Case 2A.
Consider next the subcase where both ends of γ ′ have height s ′ . Applying Lemma 2.18 (2) or Lemma 2.19 (3), and keeping in mind the situations that we have assumed not to occur, there is a decomposition γ ′ = R −1 1 µR 2 where R 1 , R 2 are both height s ′ principal rays, and µ has one of the forms β, αβ, βᾱ, αβᾱ where β is a nontrivial path of height < s ′ and α (if it occurs) is a height s ′ nonclosed indivisible Nielsen path oriented to have initial vertex in the interior of H ′ s ′ and terminal vertex in G ′ s ′ −1 . Absorbing occurrences of α into the incident principal rays R 1 , R 2 , we obtain rays R − , R + containing R 1 , R 2 respectively, and a decomposition γ ′ = R −1 − βR + which lifts to a decomposition γ ′ = R −1 − β R + where R − has endpoint P and R + has endpoint Q. Let x be the initial point of R − . There is a principal lift f ′ : G ′ → G ′ with associated Ψ ∈ P (ψ) such that R 1 is a principal ray for f ′ fixing the initial point ỹ of R 1 . Since either x = ỹ or the segment [x, ỹ] is a lift of α, it follows that f ′ fixes x and that f ′ # ( R − ) = R − . As in the previous subcase there is a ray based at x with height < s ′ and terminating at some Q ′ ∈ Fix N ( Ψ). The line τ ′ connecting P to Q ′ projects to τ ′ ∈ B sing (ψ) which is good, and the line σ ′ = τ ′−1 ⋄ γ ′ has only one end with height s ′ . By the previous subcase, the realization σ of σ ′ in G is good and hence γ = τ ⋄ σ is good.
Case 2B: γ is weakly attracted to Λ + s . In this case H s ⊂ Z, for otherwise γ is weakly attracted to Λ + φ as well, contrary to hypothesis.
Special case: We first consider the special case that γ decomposes at a fixed vertex v into two rays γ = γ 1 γ 2 so that γ 1 has height < s and γ 2 ∈ ⟨Z, ρ̂ r ⟩. In G there is a corresponding decomposition γ = γ 1 γ 2 at a vertex ṽ, and there is a lift f fixing ṽ with corresponding Φ ∈ Aut(F n ) representing φ. Let Ψ = Φ −1 .
Recall the notation established in Definition 1.2 of the graph immersion h : K → G used to define A na (Λ + φ ). Since the ray γ 2 is an element of the path set ⟨Z, ρ̂ r ⟩, it follows from Definition 1.2 that γ 2 lifts via the immersion h : K → G to a ray in the finite graph K. The image of this lifted ray must therefore be contained in a noncontractible component K 0 of K. There is a lift of universal covers h : K 0 → G such that h( K 0 ) contains γ 2 and such that the stabilizer of h( K 0 ) is a subgroup A ∈ A na (Λ ± φ ) whose conjugacy class is the one determined by the immersion h : K 0 → G. By construction we have Q ∈ ∂A. If Φ(Q) ≠ Q then Q and Φ(Q) bound a line that projects into ⟨Z, ρ̂ r ⟩ and so is carried by A na (Λ ± φ ), and by applying Lemma 1.5 (6) it follows that Φ(Q) ∈ ∂A; this is also true if Φ(Q) = Q. In particular Φ, and therefore also Ψ, preserves A. By Fact I.1.21 applied to Ψ| A there exists an integer q ≥ 1 so that Fix N ( Ψ q ) ∩ ∂A ≠ ∅; we choose q to be the minimal such integer and then we choose Q ′ ∈ Fix N ( Ψ q ) ∩ ∂A. If Q ≠ Q ′ then the line β connecting Q to Q ′ is carried by A and so β ∈ B good (Λ ± φ ; ψ). The component C of G s−1 that contains the ray γ 1 is noncontractible, and letting C be the component of the full pre-image of C that contains γ 1 , the stabilizer of C is a nontrivial free factor B such that ∂B = ∂ C. By construction we have P ∈ ∂B. Also C is invariant under f and so B is invariant under Ψ. By Fact I.1.21 applied to Ψ| B there exists an integer p ≥ 1 so that Fix N ( Ψ p ) ∩ ∂B ≠ ∅; we choose p to be the minimal such integer and then we choose P ′ ∈ Fix N ( Ψ p ) ∩ ∂B. If P ≠ P ′ then the line ν connecting P to P ′ has height < s.
For some least integer m > 0 we have P ′ , Q ′ ∈ Fix N (Ψ m ). If P ′ ≠ Q ′ , consider the line µ connecting P ′ to Q ′ . By hypothesis ψ is rotationless and so Ψ is principal if and only if Ψ m is principal. It follows that if Ψ is principal then m = 1 and µ ∈ B sing (ψ), whereas if Ψ is not principal then Fix N (Ψ m ) = {P ′ , Q ′ }, so m = p = q = 1 or 2 and either µ ∈ B gen (ψ) or µ is a periodic line corresponding to a conjugacy class that is invariant under φ 2 . In all cases, µ ∈ B good (Λ ± φ ; ψ). We therefore have an inductive concatenation of the form γ = ν ⋄ µ ⋄ β, where ν is omitted if P = P ′ , µ is omitted if P ′ = Q ′ , and β is omitted if Q ′ = Q, but at least one of them is not omitted because P ≠ Q. This completes the proof in the special case.
General case. First we reduce to the subcase that γ has a subray of height s in ⟨Z, ρ̂ r ⟩. To carry out this reduction, after replacing γ with some φ k # (γ) we may assume that γ contains a long piece of Λ + s and so has a splitting γ = R − · E · R + where E is an edge of H s whose initial vertex and initial direction are principal. Lifting this splitting we have γ = R − · E · R + . Let f be the principal lift that fixes the initial vertex of E and let R ′ be the principal ray determined by the initial direction of E. Neither the line R − R ′ nor the line obtained by tightening R + R ′ is weakly attracted to Λ + r , because R − and R + are not weakly attracted and the ray R ′ is contained in ⟨Z, ρ̂ r ⟩. Each of these lines contains a subray of R ′ , and any subray of R ′ contains a further subray of height s in ⟨Z, ρ̂ r ⟩, and so it suffices to show that each of these lines is contained in B good (Λ ± φ ; ψ), which completes the reduction.
Let t be the highest integer in {r, . . . , s − 1} for which H t is not contained in Z. Using that γ has a subray of height s in ⟨Z, ρ̂ r ⟩, after making it a terminal subray by possibly inverting γ, there is a decomposition γ = . . . ν 2 µ 1 ν 1 µ 0 into an alternating concatenation where the µ l 's are the maximal subpaths of γ of height > t that are in ⟨Z, ρ̂ r ⟩, and the ν l 's are the subpaths of γ that are complementary to the µ l 's. Each subpath ν l has fixed endpoints, is contained in G t , and is not an element of ⟨Z, ρ̂ r ⟩. Further, ν l is finite unless the decomposition of γ is finite and ν l is the leftmost term of the decomposition. Since H t is not a zero stratum, each component of G t is non-contractible and hence f -invariant. We prove that the above decomposition of γ is finite by assuming that it is not and arguing to a contradiction.
We claim that for all l and all m ≥ 1 the following hold: (1) If ν l is finite, not all of f m # (ν l ) is cancelled when f m # (µ l ) f m # (ν l ) f m # (µ l−1 ) is tightened to f m # (µ l ν l µ l−1 ). Moreover, as m → ∞ the part of f m # (ν l ) that is not cancelled contains subpaths of leaves of Λ + φ which cross arbitrarily many edges of H r .
| 2015-11-21T16:59:06.000Z | 2013-06-19T00:00:00.000 | {
"year": 2013,
"sha1": "de2dd37f16d513ef39323ebf40b6ca507caaa381",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "de2dd37f16d513ef39323ebf40b6ca507caaa381",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
257663901 | pes2o/s2orc | v3-fos-license | Multisimplicial chains and configuration spaces
This paper presents a generalization to multisimplicial sets of previously defined $E_\infty$-coalgebra structures on the chains of simplicial and cubical sets. We focus on the surjection chain complexes of McClure--Smith as a main example and construct a zig-zag of complexity preserving quasi-isomorphisms of $E_\infty$-coalgebras relating these to both the singular chains on configuration spaces and the Barratt--Eccles chain complexes.
Introduction
The cochain complex of a simplicial set is equipped with the classical Alexander-Whitney product defining the ring structure in cohomology. This cochain level structure has several explicit extensions to an E ∞ -algebra [MS03; BF04; Med20a] encoding commutativity and associativity up to coherent homotopies. The importance of E ∞ -algebras in homotopy theory is well known. For example, Mandell showed that finite type nilpotent spaces are weakly equivalent if and only if their singular cochains are quasi-isomorphic as E ∞ -algebras [Man06]. Our first objective is to define a natural product together with an E ∞ -algebra extension on the cochains of multisimplicial sets [Gug57]. These are generalizations of both simplicial and cubical sets which are useful for concrete computations since they can model homotopy types using fewer cells. For example, the proof of the non-formality of the cochain algebra of planar configuration spaces [Sal20] used a simplicial model and the Alexander-Whitney product on its cochains. By using a multisimplicial model and the product defined here, these computations become simpler and faster, paving the way for extending this result to higher dimensions.
Multisimplicial sets are contravariant functors from products of the simplex category △ to Set. Explicitly, for any positive integer k the category mSet (k) of k-fold multisimplicial sets is the presheaf category Fun((△ op ) ×k , Set). There is a notion of geometric realization for multisimplicial sets, which results in a CW complex having, for each non-degenerate multisimplex, a cell modeled on a product of geometric simplices ∆ n1 × • • • × ∆ n k . We are interested in modeling homotopy types algebraically, for which we consider the composition of the geometric realization and the functor of cellular chains C. This composition defines N : mSet (k) → Ch, the functor of (normalized) chains. In §2.5 we define a lift of N to the category of E ∞ -coalgebras, and, consequently, a lift of the functor of cochains to the category of E ∞ -algebras. We do so using the finitely presented E ∞ -prop introduced in [Med20a] and its monoidal properties. Specifically, using the isomorphism N(△ n1,...,n k ) ∼= N(△ n1 ) ⊗ • • • ⊗ N(△ n k ) we extend the image of the prop generators constructed in [Med20a] from the chains of standard simplices to those of standard multisimplices. These generators are the Alexander-Whitney coproduct, the augmentation map, and an algebraic version of the join product. The resulting E ∞ -coalgebra structure generalizes those defined in [MS03; BF04; Med20a] for simplicial chains and in [KM22] for cubical chains.
As an application, we study the Steenrod construction for multisimplicial chains in §2.7 emphasizing the explicit nature of our construction.
Let us now focus on the relationship between multisimplicial and simplicial theories. The restriction to the image of the diagonal inclusion △ op → (△ op ) ×k of any k-fold multisimplicial set X defines its associated diagonal simplicial set X D . There is a natural homeomorphism of realizations |X| ∼= |X D | [Qui73]. Under this homeomorphism the cells of |X D | arise from those of |X| through subdivision, a procedure described algebraically by the Eilenberg-Zilber quasi-isomorphism EZ : N(X) → N(X D ). The functor induced by the diagonal restriction has a right adjoint N (k) , the multisimplicial nerve of a simplicial set, making the categories of k-fold multisimplicial and simplicial sets equivalent in Quillen's sense. Furthermore, there is a natural inclusion I : N(Y ) → N(N (k) Y ) which is also a quasi-isomorphism. On one hand, the EZ map preserves the counital coalgebra structure, but it does not respect the higher E ∞ -structure. On the other, the map I is an E ∞ -coalgebra quasi-isomorphism as proven in §3.3. We use this fact to prove in §3.4 that, for any topological space X, the linear map from its singular simplicial chains to its singular k-fold multisimplicial chains, given by precomposing a continuous map (∆ n → X) with the projection ∆ n × ∆ 0 × • • • × ∆ 0 → ∆ n , is a quasi-isomorphism of E ∞ -coalgebras.
In the second part of the paper, we use these constructions to study a multisimplicial model of configuration spaces. McClure and Smith [MS03] studied a chain complex X (r) of Z[S r ]-modules with a canonical filtration X 1 (r) ⊂ X 2 (r) ⊂ • • • and showed that X (r) is connected to the singular chains of Conf r (R ∞ ) via a zigzag of filtration preserving S r -equivariant quasi-isomorphisms. Presumably it was observed by both McClure-Smith and Berger-Fresse that X (r) can be interpreted as the chains of an r-fold multisimplicial set Sur(r), which we introduce in §4.2 with a filtration Sur 1 (r) ⊂ Sur 2 (r) ⊂ • • • . There is an operad structure on {X d (r)} r≥1 for each d ≥ 1, but we do not focus on it since it is not induced from one at the multisimplicial level. By the constructions in Section 2 the complex N Sur(r) is equipped with an E ∞ -coalgebra structure, which we connect to the singular chains of Conf r (R ∞ ) via an explicit zig-zag of filtration preserving S r -equivariant quasi-isomorphisms of E ∞ -coalgebras.
In a similar way, Berger and Fresse [BF04] studied a chain complex E(r) of Z[S r ]-modules with a filtration E 1 (r) ⊂ E 2 (r) ⊂ • • • . This complex comes from the chains on a simplicial set introduced by Barratt and Eccles [BE74] equipped with a filtration [Smi89]. (As before we disregard the operadic structure.) Since E(r) is induced from a simplicial set, it is endowed with an E ∞ -coalgebra structure, and it is not hard to see that the zig-zag of filtration preserving S r -equivariant quasi-isomorphisms used to compare it to the singular chains of Conf r (R ∞ ) respects this higher structure. Consequently, X (r) and E(r) can be related by an explicit zig-zag of such maps.
It is desirable to have a direct map between the multisimplicial and simplicial models. Berger-Fresse constructed two such filtration preserving S r -equivariant quasi-isomorphisms TR : N E(r) → N Sur(r) and TC : N Sur(r) → N E(r).
The first one, introduced in [BF04, 1•3], is unfortunately not a coalgebra map. Therefore we will focus on the second one, which was introduced in [BF02]. Our contribution, presented in §4.4, is the construction of a factorization TC : N Sur(r) → N(Sur(r) D ) → N E(r), where the first map is EZ and the second map is induced from a filtration preserving S r -equivariant weak equivalence of simplicial sets. Therefore, we prove that TC is a coalgebra map since EZ is one.
Multisimplicial algebraic topology
2.1. Multisimplicial sets. Let us consider an arbitrary positive integer k. The k-fold multisimplex category △ ×k is the k-fold Cartesian product of the simplex category △. The category mSet (k) = Fun((△ op ) ×k , Set) is referred to as the category of k-fold multisimplicial sets. We remark that mSet (1) and mSet (2) are naturally equivalent to the categories of simplicial and bisimplicial sets respectively. A representable k-fold multisimplicial set is denoted by △ n1,...,n k . Explicitly, a k-fold multisimplicial set X consists of a collection of sets indexed by k-tuples of non-negative integers (m 1 , . . . , m k ) together with face maps d j i : X m1,...,mj ,...,m k → X m1,...,mj −1,...,m k and degeneracy maps s j i : X m1,...,mj ,...,m k → X m1,...,mj +1,...,m k for 1 ≤ j ≤ k and 0 ≤ i ≤ m j such that, referring to j as the direction of these maps, two of them satisfy the simplicial identities when they have the same direction and commute when they do not. An element of X m1,...,m k is called an (m 1 , . . . , m k )-multisimplex and it is said to be degenerate if it is in the image of a degeneracy map.
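For the reader's convenience, the identities referred to are the standard simplicial ones; within any fixed direction they read (a standard recollection, not taken from this paper):

  d_i d_j = d_{j-1} d_i \ (i < j), \qquad s_i s_j = s_{j+1} s_i \ (i \le j),
  \qquad d_i s_j = \begin{cases} s_{j-1} d_i & i < j \\ \mathrm{id} & i = j,\, j+1 \\ s_j d_{i-1} & i > j+1. \end{cases}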
The geometric realization functor has a right adjoint Sing (k) : Top → mSet (k) defined on a topological space X, as usual, by the expression Sing (k) (X) n1,...,n k = Top(∆ n1 × • • • × ∆ n k , X).
Algebraic realization. The functor of chains
is the Yoneda extension of the functor defined on representable objects by N(△ n1,...,n k ) = N(△ n1 ) ⊗ • • • ⊗ N(△ n k ). It is naturally isomorphic to the composition of the geometric realization functor and the functor of cellular chains with respect to the canonical cellular structure.
Explicitly, for a k-fold multisimplicial set X the k-module N(X) n is freely generated by the non-degenerate (n 1 , . . . , n k )-multisimplices with n 1 + • • • + n k = n. For any topological space X the chain complex N Sing (k) (X) is denoted S (k) (X) and referred to as the k-fold singular chains of X.
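Since on representables N is a tensor product of simplicial chain complexes, the differential is forced by the Koszul sign rule; a common normalization (our assumption about this paper's conventions, not a quote from it) is

  \partial(x) \;=\; \sum_{j=1}^{k} (-1)^{\,n_1 + \cdots + n_{j-1}} \sum_{i=0}^{n_j} (-1)^{i}\, d^{j}_{i}(x)

for an (n_1, \dots, n_k)-multisimplex x.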
Coalgebra structure.
A counital coalgebra structure on a chain complex C is a pair of chain maps ∆ : C → C ⊗ C and ǫ : C → k satisfying the usual coassociativity and counit relations. The tensor product of two counital coalgebras C and C ′ is itself a counital coalgebra with structure maps ∆ C⊗C ′ = (id ⊗ τ ⊗ id) ◦ (∆ ⊗ ∆ ′ ) and ǫ C⊗C ′ = ǫ ⊗ ǫ ′ , where τ transposes the second and third factors.
For each n ∈ N, the complex N(△ n ) is naturally equipped with a counital coalgebra structure defined on basis elements by ∆([v 0 , . . . , v q ]) = Σ q i=0 [v 0 , . . . , v i ] ⊗ [v i , . . . , v q ] and ǫ([v 0 , . . . , v q ]) = 1 if q = 0 and 0 otherwise. We will refer to it as the Alexander-Whitney structure.
Using the tensor product structure, we deduce a natural counital coalgebra structure on the chains of representable multisimplicial sets and, via a Yoneda extension, one on the chains of general multisimplicial sets.
Explicitly, for a k-fold multisimplicial set X and (m 1 , . . . , m k )-multisimplex x, the coproduct ∆(x) is the sum, with Koszul signs, over all (i 1 , . . . , i k ) with 0 ≤ i j ≤ m j , of the front (i 1 , . . . , i k )-face of x tensored with the complementary back face, where the front (i 1 , . . . , i k )-face of x is the multisimplex obtained by applying, in each direction j, the last face maps d j ij +1 , . . . , d j mj . As shown in [Med20a], the collection of all maps {C → C ⊗r } r∈N generated by ∆, ǫ and * makes C into an E ∞ -coalgebra, that is to say, a coalgebra over a certain operad UM that is a cofibrant resolution of the terminal operad.
As proven in [MR21], the counital coalgebra structure on the tensor product of two M-bialgebras C and C ′ can be naturally extended to an M-bialgebra structure using a join product built from the joins and counits of the two factors. For any integer n, the join product * : N(△ n ) ⊗ N(△ n ) → N(△ n ) sends a pair of basis elements with disjoint sets of vertices to the basis element given by their union, up to a sign involving sign(π), where π is the permutation that orders the vertices, and sends all other pairs to 0. It is proven in [Med20a] that on the chains of representable simplicial sets the Alexander-Whitney structure together with the join product make N(△ n ) into a natural M-bialgebra and, consequently, a natural E ∞ -coalgebra. We mention that this structure is induced by one present at the level of geometric realizations [Med21b].
Using the tensor product structure, we deduce a natural M-bialgebra structure on the chains of representable multisimplicial sets and consequently a natural E ∞ -coalgebra structure, which extends along the Yoneda inclusion to the chains on any multisimplicial set X.
Explicitly, the join of two basis elements of N(△ n1,...,n k ) is computed from the joins, coproducts and counits of the tensor factors, with the convention x <1 = x >n = 1 ∈ k. We remark that since the category of M-bialgebras is not cocomplete, we do not necessarily have an M-bialgebra structure on N(X) for a general multisimplicial set X. An example for which such structure does not exist is given by one such X whose geometric realization consists of just two points.
2.6. Cubical theory. Since the complex of chains of the k-fold multisimplicial set △ 1 × • • • × △ 1 is isomorphic to the chains on the standard cubical set □ k , it is natural to compare the E ∞ -coalgebra structure defined here with that presented in [KM22] for cubical sets. As counital coalgebras N(△ 1 × • • • × △ 1 ) and N(□ k ) are isomorphic, and, denoting the product of the M-bialgebra defined there by * ′ , we have x * y = (−1) |x| x * ′ y under this chain isomorphism. The sign convention used here is more natural, used for example to endow Adams' cobar construction with the structure of a monoidal E ∞ -coalgebra [MR21].
2.7. Steenrod construction. In [Ste47], Steenrod introduced natural operations on the mod 2 cohomology of spaces, the celebrated Steenrod squares, via an explicit construction of natural linear maps ∆ i : N(X) → N(X) ⊗ N(X) for any simplicial set X, satisfying up to signs the homological relations ∂ ∆ i − ∆ i ∂ = ∆ i−1 + τ ∆ i−1 , with the convention ∆ −1 = 0. These so-called cup-i coproducts appear to be fundamental, as they are axiomatically characterized [Med22] and induce the nerve of strict infinity categories [Med20b]. A description of cup-i coproducts for multisimplicial sets can be deduced from our E ∞ -coalgebra structure, where they admit a recursive description in terms of the generators ∆, ǫ and * . Steenrod also introduced operations on the mod p cohomology of spaces when p is an odd prime [Ste52; Ste53]. To define these effectively, generalizations of the cup-i coproducts were introduced in [KM21]. After the present work, these so-called cup-(p, i) coproducts are defined on multisimplicial chains, and their formulas are explicit enough to be implemented in the computer algebra system ComCH [Med21a], where constructions of Cartan and Adem coboundaries [Med20c; BMM21] for multisimplicial sets can also be found.
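Since the recursive multisimplicial formula did not survive extraction here, we illustrate the classical simplicial case instead: a minimal, self-contained Python sketch (the function names are ours) of the cup-0 (Alexander-Whitney) and cup-1 coproducts mod 2 on a standard simplex, verifying the Steenrod relation ∂ ∆ 1 + ∆ 1 ∂ = ∆ 0 + τ ∆ 0 stated above.

    from itertools import combinations

    def boundary(chain):
        # mod-2 simplicial boundary of a set of simplices (tuples of vertices)
        out = set()
        for s in chain:
            if len(s) == 1:
                continue  # the boundary of a vertex vanishes
            for i in range(len(s)):
                out ^= {s[:i] + s[i + 1:]}  # symmetric difference = mod-2 sum
        return out

    def cup0(s):
        # Alexander-Whitney coproduct: sum of front face tensor back face
        return {(s[:i + 1], s[i:]) for i in range(len(s))}

    def cup1(s):
        # classical cup-1 coproduct: sum over j < k of [0..j, k..n] tensor [j..k]
        return {(s[:j + 1] + s[k:], s[j:k + 1])
                for j, k in combinations(range(len(s)), 2)}

    def tensor_boundary(tchain):
        # mod-2 boundary on the tensor product of chain complexes
        out = set()
        for a, b in tchain:
            out ^= {(fa, b) for fa in boundary({a})}
            out ^= {(a, fb) for fb in boundary({b})}
        return out

    x = tuple(range(4))  # the standard 3-simplex [0, 1, 2, 3]
    lhs = tensor_boundary(cup1(x))
    for face in boundary({x}):
        lhs ^= cup1(face)
    rhs = cup0(x) ^ {(b, a) for (a, b) in cup0(x)}
    assert lhs == rhs  # cup-0 is cocommutative up to the cup-1 chain homotopy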
Comparison with the simplicial theory
We will use sSet to denote the category of 1-fold multisimplicial sets mSet (1) , referring to its objects and morphisms as simplicial sets and simplicial morphisms as usual.
3.1. Diagonal simplicial set. For any k ∈ N, the diagonal X D of a k-fold multisimplicial set X is the simplicial set with (X D ) n = X n,...,n . The functor (−) D : mSet (k) → sSet admits a right adjoint N (k) : sSet → mSet (k) , defined, as usual, by the expression N (k) (Y ) n1,...,n k = sSet(△ n1 × • • • × △ n k , Y ). These functors define a Quillen equivalence. A proof of this fact can be given using [Mal05, Proposition 1.6.8] or adapting that in [Moe89, Proposition 1.2].
3.2. Eilenberg-Zilber map. Recall that an (n 1 , . . . , n k )-shuffle σ is a permutation in S n , with n = n 1 + • • • + n k , whose restriction to each of the blocks {n 1 + • • • + n j−1 + 1, . . . , n 1 + • • • + n j } is increasing. We denote the set of such permutations by Sh(n 1 , . . . , n k ). For any σ ∈ Sh(n 1 , . . . , n k ) there is an associated affine inclusion i σ : ∆ n → ∆ n1 × • • • × ∆ n k . If e is the identity permutation, we denote i e simply as i. The set {i σ | σ ∈ Sh(n 1 , . . . , n k )} defines a triangulation of ∆ n1 × • • • × ∆ n k making it isomorphic, in the category of cellular spaces, to the geometric realization of the simplicial set △ n1 × • • • × △ n k . Using this identification, the identity map induces a cellular map whose associated chain map EZ : N(X) → N(X D ) agrees, under the natural identifications, with the traditional Eilenberg-Zilber map.
Since the traditional Eilenberg-Zilber map preserves counital coalgebra structures we have the following.Theorem 3.2.1.For every multisimplicial set X the map EZ : N(X) → N(X D ) is a quasi-isomorphism of counital coalgebras.
We remark that the Eilenberg-Zilber map is not a morphism of E ∞ -coalgebras; an explicit counterexample is given in [KM22, §5.4].
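To make the shuffle combinatorics concrete, here is a small Python sketch (ours, using one of the two standard conventions for writing a shuffle as a permutation word) enumerating (n 1 , n 2 )-shuffles together with the signs that enter the Eilenberg-Zilber map:

    from itertools import combinations

    def shuffles(n1, n2):
        # yield each (n1, n2)-shuffle as a word in which the values
        # 0..n1-1 and n1..n1+n2-1 each appear in increasing order,
        # together with its sign (parity of the number of inversions)
        n = n1 + n2
        for pos in combinations(range(n), n1):  # slots for the first block
            sigma = [None] * n
            for val, p in enumerate(pos):
                sigma[p] = val
            rest = (i for i in range(n) if i not in pos)
            for val, p in enumerate(rest):
                sigma[p] = n1 + val
            inv = sum(1 for i, j in combinations(range(n), 2)
                      if sigma[i] > sigma[j])
            yield tuple(sigma), (-1) ** inv

    # For example, list(shuffles(1, 1)) == [((0, 1), 1), ((1, 0), -1)].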
Canonical inclusion.
Let Y be a simplicial set and n an integer. Consider the function Y n → N (k) Y n,0,...,0 sending a simplex with characteristic map ζ : △ n → Y to the composition of the projection △ n × △ 0 × • • • × △ 0 → △ n with ζ. These functions induce a chain map I : N(Y ) → N(N (k) Y ), and we have the following: the map I is a quasi-isomorphism of E ∞ -coalgebras.
Proof. The structure-preserving properties of this map are immediate. It remains to be shown that it induces a homology isomorphism. Consider the composition of quasi-isomorphisms N(N (k) Y ) → N((N (k) Y ) D ) → N(Y ), where the first map is EZ and the second map is induced by the counit of the adjunction. We will now verify that it is left inverse to I. Consider a simplex y with characteristic map ζ : △ n → Y . The multisimplex I(y) is given by the simplicial map obtained by composing ζ with the projection △ n × △ 0 × • • • × △ 0 → △ n . Since the only (n, 0, . . . , 0)-shuffle is the identity, the simplex EZ ◦ I(y) is the corresponding simplicial map △ n → Y . Finally, the image of this simplex under the counit is the evaluation of ζ, that is, y itself.
3.4. Singular chains.
Theorem 3.4.1. Let X be a topological space. The chain map S(X) → S (k) (X) defined by precomposing a continuous map (∆ n → X) with the projection ∆ n × ∆ 0 × • • • × ∆ 0 → ∆ n is a quasi-isomorphism of E ∞ -coalgebras.
Proof. This map factors as the composition of two quasi-isomorphisms of E ∞ -coalgebras. The first is I : S(X) → N(N (k) Sing(X)), which was studied in §3.3. The second is induced by a multisimplicial isomorphism defined as follows. Using the adjunction of §2.2, any simplicial map △ n1 × • • • × △ n k → Sing(X) corresponds to a continuous map on geometric realizations, which precomposing with ez gives a continuous map ∆ n1 × • • • × ∆ n k → X, that is, an (n 1 , . . . , n k )-multisimplex of Sing (k) (X). It is not hard to see that every such map arises this way since ez is a homeomorphism.
Models of configuration spaces
We are interested in modeling algebraically the S r -equivariant homotopy type of the space of configurations of r labeled and distinct points in Euclidean d-dimensional space. Multisimplicial sets can be used to provide an explicit chain complex model with a small number of generators, which, using the E ∞ -structure defined in this paper, retains all homotopical information by Mandell's theorem [Man06].
In the first subsection, we recall a method due to Berger detecting spaces homotopy equivalent to euclidean configuration spaces by means of a filtration indexed by a complete graph poset. In the second subsection, we construct the multisimplicial model and show that it is equipped with such a filtration. In the third subsection, we recall the construction of the simplicial Barratt-Eccles model and show that it is equipped with a similar filtration. In the fourth subsection, we relate the multisimplicial and simplicial chain models by an explicit map. In the last subsection, we give some examples of the sizes of the two models, showing that the multisimplicial one is smaller.
4.1. Recognition of configuration spaces. Let Conf r (R d ) denote the configuration space of r-tuples of pairwise disjoint vectors in R d . This space is equipped with a free action of the symmetric group S r of permutations of {1, . . . , r} swapping elements of a r-tuple.
Definition 4.1.1. A complete graph on r vertices is a pair (µ, σ) with µ a collection of non-negative integers µ ij for all 1 ≤ i < j ≤ r, and σ an ordering of {1, . . . , r}. We write σ ij for the restriction of the ordering σ to the set {i, j}. Graphically, (µ, σ) is a simple directed graph with the edge corresponding to i < j directed according to σ ij and labeled by µ ij . Please consult Figure 1 for an example. Let us denote the set of complete graphs with r vertices by K(r), equipped with the poset structure where (µ, σ) ≤ (µ ′ , σ ′ ) if, for each pair i < j, either µ ij < µ ′ ij or (µ ij , σ ij ) = (µ ′ ij , σ ′ ij ). It is equipped with an exhaustive filtration by subposets K 1 (r) ⊂ K 2 (r) ⊂ • • • where K d (r) consists of those graphs with max(µ ij ) < d.
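A small Python sketch (ours, based on the order relation as just recalled, which is itself a reconstruction) comparing two complete graphs represented as pairs of dictionaries indexed by the edges (i, j) with i < j:

    def leq(g, h):
        # g = (mu_g, sig_g), h = (mu_h, sig_h); mu maps edges to labels,
        # sig maps edges to the orientation induced by the ordering
        mu_g, sig_g = g
        mu_h, sig_h = h
        return all(
            mu_g[e] < mu_h[e] or (mu_g[e] == mu_h[e] and sig_g[e] == sig_h[e])
            for e in mu_g
        )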
Definition 4.1.2. For a given poset A, a cellular A-decomposition of a topological space X is a family of subspaces {X a } a∈A satisfying closure, intersection and contractibility conditions which we do not recall here. The relevance of this notion is the well-known fact that if a topological space X admits a cellular A-decomposition, then the natural maps comparing the realization of the poset A with X are homotopy equivalences. Please consult [Ber97, §1.7] for the precise definition and a proof. Let C d (r) be the space of r little d-dimensional cubes, which is equipped with an equivariant homotopy equivalence to Conf r (R d ) picking the centers of the cubes. Brun and others in [BFV07] show that C d (r) has a cellular K ex d (r)-decomposition {C a }, where K ex d (r) is a poset containing the poset K d (r) and the inclusion of posets induces an equivariant homotopy equivalence on realizations. Combining these results we have the following.
Proposition 4.1.3. There is a zig-zag of equivariant homotopy equivalences connecting the realization of the poset K d (r) and the configuration space Conf r (R d ).
Definition 4.1.4. Let X be a multisimplicial (or simplicial) set. A K(r)-filtration of X is a family of (multi)simplicial subsets {X a } indexed by a ∈ K(r) so that (1) a ≤ b implies X a ⊆ X b ; (2) {|X a |} is a cellular K(r)-decomposition of the realization |X|. In particular this implies that X = colim a∈K(r) X a . Let X d = colim a∈K d (r) X a . There is a nested sequence X 1 ⊂ X 2 ⊂ • • • . For a given (multi)simplex x ∈ X we will refer to min{d | x ∈ X d } as the complexity of x.
4.2. Multisimplicial model. We define for each positive integer r a multisimplicial set Sur(r) equipped with a K(r)-filtration. The functor of chains applied to the nested sequence Sur 1 (r) ⊂ Sur 2 (r) ⊂ • • • will recover the algebraic models X 1 (r) ⊂ X 2 (r) ⊂ • • • . An (n 1 , . . . , n r )-multisimplex of Sur(r) is a function f : {1, . . . , m + r} → {1, . . . , r}, where m = n 1 + • • • + n r , taking the value i exactly n i + 1 times; the face and degeneracy maps in direction i remove and double occurrences of i respectively. Spaces Y r 0 homeomorphic to |Sur(r)| were studied in the work of McClure-Smith [MS03]. A homeomorphism between Y r 0 and |Sur(r)| is described explicitly in the appendix of [Sal09].
Next we define a K(r)-filtration on Sur(r). For i < j, let f ij be the subsequence of f (1) • • • f (m + r) obtained by omitting all occurrences of elements different from i and j. For (µ, σ) ∈ K(r) we say that f ∈ Sur(r) (µ,σ) if for each i < j, either i and j alternate strictly less than µ ij times in the sequence f ij , or they do so exactly µ ij times and the ordering formed by the first occurrences of i and j in f ij agrees with σ ij .
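As an illustration of this filtration, the following Python sketch (ours; here "alternations" counts value changes within f ij , a convention that may differ by one from the alternation number used in the next paragraph) computes the maximal alternation count of a surjection, which governs the complexity discussed below:

    def complexity(f, r):
        # f is a sequence of values in {1, ..., r}
        worst = 0
        for i in range(1, r + 1):
            for j in range(i + 1, r + 1):
                f_ij = [v for v in f if v in (i, j)]
                alternations = sum(1 for a, b in zip(f_ij, f_ij[1:]) if a != b)
                worst = max(worst, alternations)
        return worst

    # For example, complexity((1, 2, 1, 2), 2) == 3 under this counting.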
The surjection f has complexity d or less if the alternation number of each f ij is less than d + 1, i.e., if the non-degenerate dimension of f ij in Sur(2) is d or less for each i < j. We notice that the action of S r on Sur(r) preserves the nested sequence Sur 1 (r) ⊂ Sur 2 (r) ⊂ • • • . For the proof that |Sur(r)| has indeed an induced cellular K(r)-decomposition we refer to Lemma 14.8 in [MS04]. Applying the functor of singular chains to the zig-zag of Proposition 4.1.3 produces a zig-zag of equivariant quasi-isomorphisms of UM-coalgebras connecting S|Sur d (r)| and S Conf r (R d ). We can extend it using the following zig-zag of maps of the same kind: S|Sur d (r)| ∼= S|Sur d (r) D | → N(Sur d (r) D ) → N(N (r) (Sur d (r) D )) ← N Sur d (r). The first map is induced by the homeomorphism |Sur d (r)| ∼= |Sur d (r) D |, the second by the unit of the Quillen equivalence between simplicial sets and topological spaces, the third is the comparison map of §3.3, and the last one is induced by the unit of the Quillen equivalence between multisimplicial sets and simplicial sets. As announced in the introduction, this construction relates the chains on the multisimplicial model of configuration space and its singular chains via an explicit zig-zag of equivariant quasi-isomorphisms of E ∞ -coalgebras.
4.3. Simplicial model. We recall the Barratt-Eccles simplicial set E(r), defined for each r ∈ N, which is equipped with a K(r)-filtration. Applying the functor of chains to the nested sequence E 1 (r) ⊂ E 2 (r) ⊂ • • • will provide the algebraic models of configuration spaces studied by Berger and Fresse in [BF04].
The n-simplices of E(r) are tuples of n + 1 elements of the symmetric group S r . Its face and degeneracy maps are defined by removing and doubling elements respectively. There is an operad structure on these simplicial sets, but we do not consider it here.
Given a simplex w = (w 0 , . . . , w n ) ∈ E(r) and i < j, let w ij = ((w 0 ) ij , . . . , (w n ) ij ) ∈ E(2) be formed by the restrictions of the orderings. In particular, w has complexity d or less if the non-degenerate dimension of w ij in E(2) is d or less for each i < j. We notice that the action of S r on E(r) preserves the nested sequence E 1 (r) ⊂ E 2 (r) ⊂ • • • . For a proof that this is a K(r)-filtration we refer to Example 2.8 in [Ber97]. Please consult [Smi89; Kas93; Ber97] for more details.
Applying the functor of singular chains to the zig-zag of Proposition 4.1.3 produces a zig-zag of equivariant quasi-isomorphisms of UM-coalgebras connecting S|E d (r)| and S Conf r (R d ). Using the unit of the Quillen equivalence extends this zig-zag to one relating N E d (r) and S Conf r (R d ), which can be combined with the zig-zag constructed in the previous subsection. As announced in the introduction, this construction relates the chains on the multisimplicial model of configuration space and those of the simplicial model via an explicit zig-zag of equivariant quasi-isomorphisms of E ∞ -coalgebras.
Table completion.
It is desirable to have a direct S r -equivariant quasi-isomorphism between these algebraic models. Two filtration preserving quasi-isomorphisms were constructed by Berger-Fresse: TR : N E(r) → N Sur(r) and TC : N Sur(r) → N E(r).
The first one, introduced in [BF04, 1•3], is not a coalgebra map, as the reader familiar with its definition can easily verify. We will focus on the second one which was introduced in [BF02] and termed table completion. We will construct a factorization TC : N Sur(r) → N(Sur(r) D ) → N E(r), where the first map is EZ and the second map is induced from a simplicial map defined below. This factorization proves that TC is a coalgebra map since both factors are. We warn the reader that since EZ does not respect the E ∞ -coalgebra structure, neither does TC.
The restriction to the diagonal defines a K(r)-filtration of Sur(r) D and a nested sequence Sur 1 (r) D ⊂ Sur 2 (r) D ⊂ • • • on Sur(r) D that is preserved by the action of S r on Sur(r) D . In terms of cellular K(r)-decompositions, given (µ, σ) ∈ K(r) then f ∈ Sur(r) D (µ,σ) if for each i < j, either i and j alternate strictly less than µ ij times in the sequence f ij , or they do so exactly µ ij times and the ordering formed by the first occurrences of i and j in f ij agrees with σ ij .
Since the complexity of an element is unchanged by degeneracy maps, it can easily be seen that EZ : N Sur(r) → N(Sur(r) D ) preserves K(r)-filtrations.
Let us now define the simplicial map tc. For f as above, let tc(f ) = (σ 0 , . . . , σ m ) with σ j represented by the subsequence of f containing the (j + 1) st occurrence of each ℓ ∈ {1, . . . , r}. For example, we have tc(122333112) = (123, 231, 312).
Theorem 4.4.1. The simplicial map tc : Sur(r) D → E(r) satisfies TC = N(tc) ◦ EZ and induces a weak equivalence tc d : Sur d (r) D → E d (r) for every r, d ∈ N.
Proof. It is clear that tc is a simplicial map, and verifying its relationship with Berger-Fresse's chain map is straightforward using the prism interpretation of a surjection as in [BF02, §2.1] and the explicit combinatorial formula for TC in [BF02, §3.1].
To check that tc preserves K(r)-filtrations let us first notice that tc(f ) ij = tc(f ij ), so without loss of generality we can assume r = 2. In this case it is clear that non-degenerate simplices are sent to non-degenerate simplices (of the same dimension), so for these the complexity is preserved. We conclude the same for degenerate simplices using that tc is a simplicial map and that degeneracy maps leave complexity unchanged.
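The map tc is simple enough to implement directly; the following Python sketch (ours) reproduces the example above. For a simplex of the diagonal each value occurs the same number of times, so every row assembled below is a permutation:

    def tc(f, r):
        # f: sequence of values in {1, ..., r}; the j-th output row collects
        # the (j+1)-st occurrences of each value, in the order they appear
        seen = {l: 0 for l in range(1, r + 1)}
        rows = []
        for v in f:
            j = seen[v]  # v is having its (j+1)-st occurrence
            if j == len(rows):
                rows.append([])
            rows[j].append(v)
            seen[v] += 1
        return [tuple(row) for row in rows]

    assert tc([1, 2, 2, 3, 3, 3, 1, 1, 2], 3) == [(1, 2, 3), (2, 3, 1), (3, 1, 2)]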
| 2020-12-04T03:13:43.060Z | 2020-12-03T00:00:00.000 | {
"year": 2020,
"sha1": "9cdca208794a7a265ca3e4a080b00ed180b06a46",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40062-024-00344-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "9cdca208794a7a265ca3e4a080b00ed180b06a46",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
232121976 | pes2o/s2orc | v3-fos-license | Combinations of Drinking Occasion Characteristics Associated with Units of Alcohol Consumed among British Adults: An Event-Level Decision Tree Modeling Study
Background: Alcohol consumption is influenced by the characteristics of drinking occasions, for example, location, timing, or the composition of the drinking group. However, the relative importance of occasion characteristics is not yet well understood. This study aims to identify which characteristics, and combinations of characteristics, are associated with units consumed within drinking occasions. It also tests whether accounting for occasion characteristics improves the prediction of consumption com- pared to using demographic information only. Methods: The data come from a cross-sectional, nationally representative, online market research survey. Our sample includes 18,409 British drinkers aged 18 + who recorded the characteristics of 46,072 drinking occasions using 7-day retrospective drinking diaries in 2018. We used decision tree modeling and nested linear regression to predict units consumed in occasions using information on drinking location/venue, occasion timing, company, occasion type (e.g., a quiet night in), occasion motivation, drink type and packaging, food eaten and entertainment/ other activities during the occasion. We estimated models separately for 6 age-sex groups and controlled for usual drinking frequency, and social grade in nested linear regression models. Open Science Framework preregistration: https:// osf.io/42epd. Results: Our 6 final models accounted for between 55% and 71% of the variance in drinking occasion alcohol consumption. Beyond demographic characteristics (1 to 9%) and occasion duration (24 to 60%), further occasion characteristics and combinations of characteristics accounted for 31 to 70% of the total explained variance. The characteristics most strongly associated with heavy alcohol consump- tion were long occasion duration, drinking spirits as doubles, and drinking wine. Spirits were also consumed in light occasions, but as singles. This suggests that the serving size is an important differentiator of light and heavy occasions. Conclusions: Combinations of occasion duration and drink type are strongly predictive of alcohol consumption in adults’ drinking occasions. Accounting for characteristics of drinking occasions, both individually and in combination, substantially improves the prediction of alcohol consumption.
THERE IS A growing literature using event-level methods to study the relationships between characteristics of drinking occasions and drinking behavior (Stevely et al., 2019). The existing literature has identified occasion characteristics associated with increased alcohol consumption such as predrinking, drinking with multiple friends, and drinking at the weekend (Labhart et al., 2013, 2014; Thrul and Kuntsche, 2015; Thrul et al., 2017). Research in this area can help to shape our thinking about which occasions are likely to involve problematic drinking, how policies may affect these occasions, and how to develop and refine occasion-specific interventions for occasions associated with heavy consumption (Clapp et al., 2008; Kuntsche and Labhart, 2013; Stanesby et al., 2019; Stevely et al., 2019, 2020a; Thrul and Kuntsche, 2015). However, it is not yet clear which characteristics are most strongly associated with alcohol consumption and whether occasion characteristics combine to produce important effects on outcomes, or whether there are interaction effects between characteristics (Stevely et al., 2019).
In our study, we were particularly interested in exploring the importance of joint effects of different drinking occasion characteristics on units of alcohol consumed. We conceptualized drinking occasions as social practices, since this theoretical perspective is well suited to studying combinations of characteristics (Blue et al., 2016;Meier et al., 2017;Shove et al., 2012). Reckwitz defines practices as: a routinised type of behaviour which consists of several elements, interconnected to one another: forms of bodily activities, forms of mental activities, 'things' and their use, a background knowledge in the form of understanding, know-how, states of emotion and motivational knowledge. (Reckwitz, 2002, p. 249) For example, in the UK "going out with friends" tends to make us think of occasions that involve characteristics of socializing and drinking with a group of friends in licensed premises, typically on a weekend evening. Crucially for the current paper, using this approach emphasizes the relationships between different aspects of the drinking context that come together to form a practice (Meier et al., 2017). So far, research in this area has tended to rely on linear regression models which assume independence of effects of characteristics on outcomes (Clapp et al., 2008;Stevely et al., 2019;Wells et al., 2008). Instead, we need conceptual and analytical approaches that properly account for the combined effects of occasion characteristics, which may improve our understanding of their cumulative effects on alcohol consumption.
Our study aims to identify the combinations of characteristics that are associated with units consumed within adults' drinking occasions in Great Britain and which characteristics are the strongest predictors of alcohol consumption. It also aims to test whether accounting for occasion characteristics (individually and in combinations) improves the prediction of consumption relative to models including only demographic characteristics.
Data
We used data from the 2018 Alcovision survey, collected by the market research company Kantar. Alcovision is a continuous online survey that includes a detailed retrospective 7-day drinking diary and measures of socio-demographic characteristics and usual drinking frequency. The drinking diary collects information about drinking occasions, defined by Kantar as periods of drinking in only the on-trade (in licensed premises such as pubs) or only the off-trade (such as at home). Our analysis instead redefined drinking occasions as periods of drinking with no 2-hour gaps between drinks. This allowed occasions in the dataset to be combined to include both on- and off-trade locations (e.g., preloading before a night out).
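This merging rule is straightforward to operationalize. Below is a minimal sketch of grouping timestamped drinks into occasions with a 2-hour gap rule; the record layout and field names are illustrative assumptions, not Kantar's actual schema.

```python
from datetime import datetime, timedelta

def group_into_occasions(drink_times, max_gap_hours=2):
    """Group a respondent's drink timestamps into occasions, starting a
    new occasion whenever the gap between consecutive drinks exceeds
    max_gap_hours."""
    occasions, current = [], []
    for t in sorted(drink_times):
        if current and (t - current[-1]) > timedelta(hours=max_gap_hours):
            occasions.append(current)
            current = []
        current.append(t)
    if current:
        occasions.append(current)
    return occasions

# Example: three drinks in one evening plus one the next afternoon
times = [datetime(2018, 6, 1, 19, 0), datetime(2018, 6, 1, 20, 30),
         datetime(2018, 6, 1, 21, 0), datetime(2018, 6, 2, 14, 0)]
print(len(group_into_occasions(times)))  # -> 2 occasions
```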
The sample was taken from an online market research panel using monthly quotas based on age, sex, social grade, and geographic region. Survey invitations were sent continuously throughout the year to ensure that responses covered every day of the year. The original sample was 29,599 adults (18+) resident in Great Britain. Our analytic sample included 18,409 drinkers, excluding nondrinkers (respondents who report usually drinking "Less often than once in 12 months" or "Never"). Weighting was applied based on age, sex, social grade, and geographic region using Great Britain census data. Our analysis included 46,072 drinking occasions reported by the sample. Informed consent was given by all participants in the survey.
Measures
Outcome Measure. Our primary outcome measure was alcohol consumption in UK units within each drinking occasion (1 unit = 8 grams of alcohol). Units were calculated based on the number of servings reported by participants, serving size, and the alcohol by volume (ABV). Participants reported brands for most servings, and we used this information to identify actual ABVs via web searches. Where brand information was not available, we used standard ABVs for some beverage types.
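The unit arithmetic itself is standard: one UK unit is 10 ml (about 8 g) of pure ethanol, so units equal total volume in millilitres times ABV (as a percentage) divided by 1000. A small sketch of the per-serving calculation, with illustrative serving sizes and ABVs:

```python
def uk_units(servings, serving_ml, abv_percent):
    """UK units = total volume (ml) x ABV(%) / 1000,
    since one unit is 10 ml of pure ethanol."""
    return servings * serving_ml * abv_percent / 1000.0

# e.g., two 175 ml glasses of 13% wine vs. one pint (568 ml) of 4% beer
print(uk_units(2, 175, 13.0))  # -> 4.55 units
print(uk_units(1, 568, 4.0))   # -> 2.272 units
```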
Occasion Characteristics. A wide range of occasion characteristics were used in our analysis. This reflects our conceptualization of drinking occasions as social practices made up of materials (e.g., a glass of wine), meanings (e.g., drinking to chill out), and timings (e.g., the duration of the occasion) (Ally et al., 2016;Meier et al., 2017;Shove et al., 2012;Southerton, 2006;Stevely et al., 2019).
Occasion characteristics used in our analyses are day of the week, start time of the occasion (11 categories), duration (measured in 9 bands and we use mid-points as point estimates), month of the year, trade type (on-trade, off-trade, preloading, postloading, mixed, unclear), company type (6 categories; e.g., with friends, with family members), group structure (7 categories; e.g., male pair, female group, with children), entertainment (42 categories; e.g., watching television, listening to music), food consumption (11 categories; e.g., having a formal meal), drink type (10 categories; e.g., spirits or wine), drink packaging (20 categories; e.g., a 440ml can), venue (29 categories; e.g., a modern bar), motivation for drinking (12 categories; e.g., to wind down or chill out), type of occasion (31 categories; e.g., a sociable night in), and reason for the choice of venue (30 categories; e.g., "it's my local"). Preloading occasions involved drinking in the off-trade and then the on-trade and postloading occasions started in the on-trade and moved to the off-trade. We defined mixed occasions as switching between the on-and off-trade more than once and labeled occasions as "unclear" when the order of on-and off-trade drinking was not reported.
The full set of occasion characteristics and their responses categories are shown in Table S1. The table also indicates that many of these characteristics are not mutually exclusive and/or are allowed to change across the course of an occasion. We have treated categories within variables as separate binary variables where necessary in the analyses to account for this. For example, if a participant reported drinking with friends at the start of an occasion, but later drank with family, both friends and family were classed as present using separate variables.
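One way to implement this binary treatment is a long-to-wide pivot that marks each category reported at any point in the occasion. The sketch below uses a hypothetical two-column layout (occasion id, company category); the real survey data are richer, so this only illustrates the encoding step.

```python
import pandas as pd

# Hypothetical long-format records: one row per (occasion, reported
# category), since company can change over the course of an occasion.
occ = pd.DataFrame({
    "occasion_id": [1, 1, 2, 3],
    "company": ["friends", "family", "partner", "friends"],
})

# Mark each category as a separate binary indicator per occasion:
# 1 if that company type was present at any point, else 0.
indicators = (pd.crosstab(occ["occasion_id"], occ["company"]) > 0).astype(int)
print(indicators)
```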
Controls and Stratifying Variables. We used measures of sex, age in years, usual drinking frequency, and social grade. Usual drinking frequency was measured by the question "Over the year as a whole, about how often do you drink any alcoholic drink of any kind?" with 10 response options (e.g., "3 to 5 times a week"). Social grade was recorded using National Readership Survey (NRS) categories, an occupation-based measure ranging from workers in higher managerial positions to semi- or unskilled workers and those who are unemployed.
Statistical Analysis
Preregistered Analyses. This study was preregistered using Open Science Framework (https://osf.io/42epd (Stevely et al., 2020b)). The frequency of drinking in different contexts varies by age and sex, and there may also be differences in the relationships between occasion characteristics and consumption (Ally et al., 2016). All analyses were therefore stratified across 6 age-sex groups (18 to 35, 36 to 64, 65+).
The first stage of our analysis used decision tree modeling (recursive partitioning in JMP Pro 14.3) to predict units of alcohol per drinking occasion based on occasion characteristics (details of these are in Table S1). Decision tree models start with all drinking occasions and then choose the best characteristic by which to split the data. The best split will create 2 groups of roughly equal size with the maximum difference in mean consumption (Hawkins et al., 2011; Kass, 1980; SAS Institute Inc., 1989). For example, occasions could be split into under vs over 2 hours in duration. The modeling process is recursive as the created groups are then successively split on the next best characteristic. These models therefore inherently consider complex combinations of occasion characteristics. The final groups created by a decision tree model are referred to as leaves and are defined by the combination of all of the splits in predictor variables. We used k-fold cross validation (5-fold) to prevent over-fitting. We also restricted the model so that the leaves would include a minimum of 1% of the sample of drinking occasions to avoid generating very small groups.
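A rough analogue of this procedure can be sketched with scikit-learn, although the paper used JMP Pro's recursive partitioning; the synthetic data, variable names, and coefficients below are invented for illustration. The key settings mirror the text: leaves holding at least 1% of occasions and 5-fold cross-validation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for one age-sex stratum: binary occasion
# characteristics plus duration mid-points (hours).
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.integers(0, 2, n),            # spirits drunk as doubles (0/1)
    rng.integers(0, 2, n),            # any wine drunk (0/1)
    rng.choice([0.5, 1.5, 3, 5], n),  # duration band mid-points
])
units = 1 + 2.5 * X[:, 2] + 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 1, n)

# Leaves must hold at least 1% of occasions (min_samples_leaf as a
# fraction); 5-fold cross-validation guards against over-fitting.
tree = DecisionTreeRegressor(min_samples_leaf=0.01, random_state=0)
print(cross_val_score(tree, X, units, cv=5, scoring="r2").mean())

tree.fit(X, units)
# feature_importances_ is sklearn's analogue of the "proportion of
# explained variance attributable to each characteristic".
print(tree.feature_importances_)
```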
The second stage of our analysis estimated nested linear regression models (i.e., a series of models adding predictors to the previous model) to predict units consumed per occasion. We used clustered standard errors in Stata 15 to account for the clustering of drinking occasions within participants. The simplest models included age (within the age-sex strata), usual drinking frequency, and social grade. We then sequentially added: occasion duration, all of the occasion characteristics selected by decision tree models for each age-sex group, and the leaves generated by decision tree modeling. Occasion duration was added in a separate step as it showed a very strong association with consumption in decision tree models. For continuous predictors (age and duration), we included polynomial terms (to model nonlinear relationships) where these were significant at α = 0.1.
The number of units per drinking occasion (our outcome variable) had a positive skew. We therefore log-transformed this variable for regression analyses. Occasions in the top 1% of the distribution of units per occasion were excluded due to concerns about extreme and possibly unreliable values. We used weighted data for all analyses.
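The nested-model step, with the top 1% of occasions excluded, a log-transformed outcome, and standard errors clustered by respondent, could be sketched roughly as follows; the data are synthetic, the variable names are placeholders, and only the first two model steps are shown.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4000
df = pd.DataFrame({
    "pid": rng.integers(0, 800, n),         # respondent id (cluster)
    "age": rng.integers(18, 66, n),
    "freq": rng.integers(1, 11, n),         # usual drinking frequency
    "duration": rng.choice([0.5, 1.5, 3, 5], n),
    "units": rng.gamma(2.0, 2.5, n) + 0.1,  # positively skewed outcome
})

# Exclude the top 1% of occasions and log-transform the skewed outcome,
# as in the paper.
df = df[df["units"] < df["units"].quantile(0.99)].copy()
df["log_units"] = np.log(df["units"])

# Nested models: demographics only, then adding occasion duration (with
# a quadratic term for nonlinearity). Clustered standard errors account
# for multiple occasions per respondent.
m1 = smf.ols("log_units ~ age + freq", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["pid"]})
m2 = smf.ols("log_units ~ age + freq + duration + I(duration**2)",
             data=df).fit(cov_type="cluster", cov_kwds={"groups": df["pid"]})
print(m1.rsquared, m2.rsquared)  # the increment is duration's contribution
```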
Unplanned Analyses. We noted during decision tree modeling that the duration of the drinking occasion accounted for a large proportion of the variance in units of alcohol consumed. Prior studies have also found that occasion characteristics can be associated with longer occasion duration (and therefore increased consumption) (Labhart et al., 2014). We therefore repeated the decision tree analysis with duration as the splitting criterion, rather than alcohol consumption, to identify characteristics that predict longer drinking occasions. We interpreted the findings from both sets of decision tree models to identify occasion characteristics with both direct effects on alcohol consumption in units and effects mediated by duration.
Ethics Approval
This study was approved by the University of Sheffield's ethics committee and conforms to the principles embodied in the Declaration of Helsinki. Use of this data is allowed under the terms of the contract and nondisclosure agreement between Kantar and the University of Sheffield, which requires research outputs to be submitted to the data provider ahead of publication. The data providers' right to request changes is limited to matters of accuracy regarding the data.
Decision Tree Modeling of Alcohol Consumption in Units
To identify the strongest predictors of units consumed, we consider the proportion of explained variance that is attributable to each predictor in decision tree models. Figure 1 shows the variables selected by the decision tree modeling of alcohol consumption in drinking occasions and their predictive contributions (results also reported in Table S2).
The duration of drinking occasions accounts for the highest proportion of explained variance in units consumed across all age-sex groups (ranging from 37.3% to 72.2%), with longer drinking occasions predictive of heavier consumption. Other important predictors are drinking spirits as doubles (particularly for 18 to 35 year olds: 24.4% of explained variance for 18 to 35 year old men and 28.6% for women) and drinking wine (4.1 to 15.4%) (Table S2). There are other patterns across age-sex groups; for example, the type of beer/cider packaging is more important in models of units per occasion for 18 to 35 year old men. Drinking large bottles (500ml/1 pint) of beer or cider in the off-trade and draught beer or cider in the on-trade is associated with increased consumption in this group.
Combinations of Occasion Characteristics Associated With Heavy Alcohol Consumption
Decision tree modeling produces a set of terminal nodes, or leaves, that are a combination of the splits throughout the tree. In our analysis, these represent combinations of characteristics of drinking occasions. Figure 2 shows the heaviest and lightest drinking leaves for each age-sex group (i.e., the combinations of occasion characteristics associated with the highest and lowest number of units consumed), following the branches of the decision tree models and showing the mean units consumed at each node. We present only the lightest and heaviest occasions as the full decision trees produce many leaves and cannot be easily summarized. This section describes an example leaf in detail to illustrate their structure before presenting the overarching findings.
The lightest drinking leaf for men aged 36 to 64 has a mean consumption of 1.2 units. The most important predictor is that these occasions last less than an hour and a half. Within those that were shorter than 1.5 hours, the next most important determinant of consumption is not drinking spirits as doubles, followed by not drinking wine, drinking beer or cider in standard sized bottles (275/330ml) in the off-trade, the respondent considering the occasion type to be a regular/everyday drink, and starting the occasion before 2pm.
Comparing across the age-sex groups reveals many commonalities, particularly within heavy drinking occasions, which are longer in duration and typically involve drinking spirits as doubles. However, among young adults (aged 18 to 35 years) the heaviest drinking occasions also involve drinking wine. Light drinking occasions are generally shorter, spirits are drunk as singles, and no wine is consumed.
Fig. 2. Pathways through decision trees to the heaviest and lightest occasions (leaves) for 6 age-sex groups. The pathways shown lead to the types of drinking occasions identified by decision tree models with the lowest and highest mean alcohol consumption (in units). As has happened for men aged 18-35, 1 or more of the steps in the process may move the mean consumption in a counterintuitive direction as long as this branch ends up with the lowest mean consumption.
Interestingly, spirits are drunk in both the heaviest and lightest occasion types in different ways (i.e., doubles vs. singles), suggesting that serving sizes may represent important material components of drinking practices, rather than simply incremental differences in consumption levels. The patterns by age-sex group in mean alcohol consumption in the heaviest drinking occasions are as expected: men and younger people consume more units in their heaviest occasions. Conversely, there is little variation in mean units consumed across the lightest drinking occasions, suggesting that all age-sex groups have very light drinking occasions.
Decision Tree Modeling of Occasion Duration
The duration of drinking occasions accounts for a large proportion of the explained variance in units consumed (Figure 1, Table S2). Since some characteristics may influence, or be associated with consumption through longer occasions, we also used decision tree modeling to predict the duration of occasions using all of the other characteristics as predictors.
The trade type of drinking occasions accounts for the highest proportion of variance in occasion duration across all age-sex groups (Figure 3). Drinking in both the on- and off-trade (preloading or postloading) predicts longer occasions than drinking in the on- or off-trade only. Other important predictors are the start time and drinking with friends. There is also an interaction effect between start time and trade type: When drinking occasions start earlier, mixed trade type drinking is more strongly associated with longer duration than it is in occasions that start later (Table S3). Overall, drinking with friends is also an important predictor of longer drinking occasions.
There are patterns in the results across age-sex groups. For example, drinking in a mixed sex group and drinking spirits are more important predictors of female occasion duration, and general use of a computer in the off-trade is more important for male occasion duration.
Nested Models Predicting Occasion Alcohol Consumption in Units
We used a series of nested linear regression models to predict the natural log of units consumed per occasion. Firstly, individual-level factors (age in years, usual drinking frequency, and social grade) accounted for between 1 and 9% of the final R², depending on the age-sex subgroup (Table 1). Sequentially adding occasion duration, all other occasion characteristics selected by decision tree models, and the combinations of variables within the terminal groups (leaves) of decision tree models, accounted for 24 to 60%, 28 to 54%, and 3 to 16% of variance, respectively. These findings suggest that each set of predictors accounted for additional variance over and above previous models.
Individual-level factors and occasion duration accounted for more of the variance among 36 to 64 year olds than the other age groups, while other occasion characteristics improved prediction less. Adding occasion characteristics and leaves as predictors had a particularly large effect on the R² for women aged over 65.
DISCUSSION
This study is the first to estimate units of alcohol consumed in adults' drinking occasions using a wide range of occasion characteristics. We found that occasion duration, beverage type, and serving size are strongly predictive of units consumed. Occasion characteristics improve the prediction of alcohol consumption both individually and in combination relative to models including only demographic characteristics. Combinations of characteristics are therefore useful for understanding levels of alcohol consumption within drinking occasions.
Fig. 3. The proportion of explained variance attributable to each characteristic in models of occasion duration for 6 age-sex groups.
The occasion characteristics measured in the Alcovision survey were not informed by a specific theoretical perspective and our review of previous literature suggests this is common with event-level alcohol research. However, the characteristics measured appear to be suitable for interpretation through a theories of practice lens. In our previous work, we have drawn on Shove et al.'s description of the main elements of social practice (materials, meanings and competences) and extended these to include temporal elements (Ally et al., 2016; Meier et al., 2017; Stevely et al., 2019). In this study, we find that temporal factors are particularly important: duration is the strongest predictor of units of alcohol consumed, and start time is strongly related to occasion duration. The day of the week was a less important predictor than might be expected given the cultural association of binge drinking with Friday and Saturday nights in Britain. Our findings suggest that weekend drinking is not heavier once occasion duration is accounted for. However, weekend occasions will involve heavier drinking if they have characteristics that are associated with longer occasions, such as drinking in both the on- and off-trade, with friends, and starting earlier in the day. Material elements are also important predictors of occasion consumption and duration, particularly drink type, drink packaging, and venue type. The measures of meaning included in the Alcovision survey were not strong predictors of consumption or duration. This may have been due to the limitations of the market research-oriented measures, as we have some findings that suggest the importance of meaning elements. For example, spirits were drunk in both the heaviest and lightest occasions in different ways (i.e., as doubles vs. singles). These differences are evocative of different meanings; perhaps the light occasions involve enjoying a relaxing tipple of whiskey for an hour or so while the heavy ones involve downing shots, which could be linked to "determined drunkenness" (Haydock, 2016; Measham and Brain, 2005). We did not have measures of competencies, such as round-buying or downing drinks.
Exploring the relative importance of different factors in predicting units of alcohol consumed per occasion across demographic groups may also speak to the social organization of practices. Our nested linear models found that occasion duration accounted for the most variance in units consumed among 36 to 64 year olds, while other occasion characteristics were less predictive. A possible explanation is that their daily lives and drinking occasions are more established and routinized so there is less variation in wider occasion characteristics.
Our findings offer some important insights that build on the existing literature. A recent mapping review by Stevely and colleagues (2019) found that the most commonly studied characteristics in event-level alcohol research are the day of the week, affect/mood, and venue type (e.g., pub or restaurant). Just 8.6% of the included papers studied duration of drinking occasions. Based on this analysis, the occasion characteristics commonly studied may not be the most important predictors of alcohol consumption, and greater attention should be given to other material and temporal elements. The effects of occasion characteristics also vary across age-sex groups (moderation effects); however, Stevely et al. found that few studies on drinking contexts and acute alcohol-related harm tested for mediation or moderation effects, partly because the literature has a heavy focus on young adult populations (Stevely et al., 2020a).
We used detailed data on the characteristics of drinking occasions collected by the Alcovision survey to estimate units of alcohol consumed. Although it offers novel analytical possibilities, there are important limitations of the Alcovision dataset (Ally et al., 2016). The variables are designed for market research purposes and are often not well-aligned with measures designed for scientific purposes. For example, the drinking motivation measures used are not based on a standard validated survey tool such as the Drinking Motives Questionnaire. There were also no measures of drinking companions' behavior, drinkers' expectancies, or drinkers' intentions, which previous studies have linked to consumption in drinking occasions (Fillo et al., 2017; Larsen et al., 2009; Monk and Heim, 2013; Stevens et al., 2017). Furthermore, we have not analyzed factors that are associated with having a drinking occasion in the first place. For example, people may be much more likely to drink at the weekend, but weekend drinking occasions may not involve heavier consumption.
[Table 1 notes: occasion characteristics were those selected by decision tree models out of the full set listed in Table S1; leaves are the terminal groups of occasions produced by decision tree models, representing combinations of characteristics. Models used clustered standard errors to account for individuals reporting multiple occasions.]
Our findings suggest future research and prevention efforts may benefit from using theories of practice to systematically consider elements of drinking occasions. Prevention campaigns building on these findings could promote shorter occasions (or shorter forms of existing practices, such as knowing "when to call it a night"), drunk people could be more stringently excluded from entering on-trade venues to prevent very long occasions across multiple venues, and on-trade venue licensing could restrict the availability of spirits as doubles. Future research could contribute to developing, testing, and evaluating interventions in these areas. It would be particularly valuable to follow up this exploratory work by testing for causal mechanisms that link occasion characteristics and alcohol consumption including combinations, mediation via occasion duration, and moderation by age-sex group.
SUPPORTING INFORMATION
Additional supporting information may be found online in the Supporting Information section at the end of the article.
Table S1. Characteristics of drinking occasions.
Table S2. The proportion of explained variance attributable to each characteristic in models of alcohol consumption per occasion for six age-sex groups.
Table S3. The proportion of explained variance attributable to each characteristic in models of occasion duration for six age-sex groups. | 2021-03-06T06:16:29.432Z | 2021-03-05T00:00:00.000 | {
"year": 2021,
"sha1": "2686b6e2a7e21a233b71dcf5d9d28afaa0a11c92",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/acer.14560",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "34cafde3f19eb76fd8c8de8048cb76de2e4f4aa9",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
256471367 | pes2o/s2orc | v3-fos-license | A new species of Hirudo (Annelida: Hirudinidae): historical biogeography of Eurasian medicinal leeches
Species of Hirudo are used extensively for medicinal purposes, but are currently listed as endangered due to population declines from economic utilization and environmental pollution. In total, five species of Hirudo are currently described throughout Eurasia, with Turkey being one of the major exporters of medicinal leech, primarily H. verbana. To define the distribution of Hirudo spp. within Turkey, we collected 18 individuals from six populations throughout the country. Morphological characters were scored after dorsal and ventral dissections, and Maximum Likelihood (ML) and Bayesian Inference (BI) analyses resolved phylogenetic relationships using mitochondrial cytochrome c oxidase subunit I (COI), 12S ribosomal RNA (rRNA), and nuclear 18S rRNA gene fragments. Our results identify a new species of medicinal leech, Hirudo sulukii n. sp., in Kara Lake of Adiyaman, Sülüklü Lake of Gaziantep and Segirkan wetland of Batman in Turkey. Phylogenetic divergence (e.g., 10–14 % at COI), its relatively small size, unique dorsal and ventral pigmentation patterns, and internal anatomy (e.g., small and pointed atrium, medium-sized epididymis, relatively long tubular and arc formed vagina) distinguish H. sulukii n. sp. from previously described Hirudo sp. By ML and BI analyses, H. sulukii n. sp. forms a basal evolutionary branch of Eurasian medicinal leeches. Phylogeographic interpretations of the genus identify a European Hirudo "explosion" during the upper Miocene followed by geological events (e.g., Zanclean flood, mountain building) that likely contributed to range restrictions and regional speciation of extant members of the clade.
Background
Hirudinid leeches are parasitic to a variety of vertebrates leading many to regard them with distaste, but their medicinal utility is well established. For centuries, Hirudo medicinalis and related species (e.g., H. verbana, H. troctina) were prescribed to treat virtually every human ailment from arthritis to yellow fever, most without efficacy. In 1830, during their peak usage, a Paris hospital employed more than five million medicinal leeches [30]. Consequently, populations of H. medicinalis in Central Europe were depleted, and non-sustainable collecting led to their extinction in many areas. Pollution and habitat drainage further added to their decline, forcing Europe to import medicinal leeches from the Ottoman Empire (Anatolia), North Africa and Russia [31] to meet demand. By the late 1900s, the advent of "modern" medicine drastically reduced clinical demand for leeches, allowing some threatened populations to rebound.
Leech therapy languished for most of the 20th century, considered "quackery" by mainstream medical practitioners [66], but the discovery of various bioactive compounds in leech saliva [27,39], and recognition of the leech's superior ability to relieve venous congestion (e.g., [58]), has led to renewed interest in clinical applications. Current clinical applications include reconstructive microsurgery and the treatment of hypertension and gangrene [24]. In light of 19th century threats to medicinal leech populations as demand increased, considerable conservation steps were implemented to ensure their continued availability. Pursuant to these efforts, much confusion resulted regarding the taxonomic status of different morphological forms [18,28,56,65]. Phylogenetic analysis of nuclear and mitochondrial DNA sequences suggests that the genus Hirudo is monophyletic [60], and that species or morphological varieties can be readily identified by coloration patterns. Molecular studies have shown that European medicinal leeches, although usually marketed as H. medicinalis, comprise a complex of at least three species: H. orientalis, the commonly sold H. verbana and the relatively rare H. medicinalis [4,37,54,55,60]. Kutschera and Elliott [36] analyzed the behavior of adult H. medicinalis, but could not find differences with respect to its sister taxon H. verbana. Morphological and molecular data demonstrate that commercially available medicinal leeches are generally not H. medicinalis [35,56,60], but rather specimens belonging to the Eastern phylogroup of H. verbana [61,62], which is predominantly bred in leech farms and used as a modern 'medicinal' stock.
Turkey is rich in wetlands and known to support at least two species of medicinal leech, H. medicinalis and H. verbana. Prior to ~2000, it was believed that medicinal leeches from Turkey's wetlands were only H. medicinalis [21,31]. Molecular characterization of Turkish leeches was not performed until the turn of the century, however, and leeches from the Kızılırmak and Yesilirmak Deltas on the Black Sea coast, comprising the majority of leech specimens destined for export, have proven to be H. verbana [4,51,55].
Mapped localities of all Hirudo species show extensive, belt-shaped ranges extending from east to west. To establish the distribution of Hirudo species in Turkey, one of the major exporters of medicinal leeches worldwide, we sampled broadly in three representative localities within the western, eastern and southeastern regions of Turkey. Our data identify a new species for the genus, H. sulukii n. sp., that forms a basal evolutionary branch among European medicinal leeches and sheds light on the evolutionary history of the genus.
Specimen collection and maintenance
Leech specimens collected throughout Turkey (Kara Lake, Beyaz Cesme Marsh, Uluabat Lake, Segirkan wetland, Balik Lake, Sülüklü Lake) were transported to Fırat University, Fisheries Faculty (Elazig, Turkey) and maintained in separate 600 L fiberglass tanks based on collection location. Tank bottoms were elevated with peat soil 10 cm on one side to create a terrestrial to aquatic continuum. Leeches were fed one adult frog (e.g., Pelophylax ridibunda) blood meal per month (others have utilized mammalian blood), and typically survived 2+ years in the laboratory. Specimens were fixed in 70 % ethanol for molecular analysis and some were fixed with 10 % formaldehyde in PBS for dissection. External traits of live specimens were observed by stereomicroscopy. Preserved specimens were dissected dorsally and ventrally, with representative sketches of internal morphology derived directly from the type specimen.
DNA extraction
Tissue samples from live specimens were obtained by placing the leech in a 10 % ethanol sedating solution until it was unresponsive to touch. Approximately half of the caudal sucker was removed with a scalpel, and tissue cuttings were immediately processed using the E.Z.N.A.™ Tissue DNA kit (Omega Bio-Tek) following the manufacturer's instructions. Whenever possible, tissue from postmortem specimens was taken from the caudal sucker to avoid contamination from gut contents.
DNA sequence amplification of target genes
Nuclear 18S rRNA, mitochondrial 12S rRNA and partial cytochrome c oxidase subunit 1 (COI) DNA fragments were amplified from genomic DNA using the polymerase chain reaction (PCR). All 12S sequences were obtained under conditions described by Borda and Siddall [8]. PCR amplification protocols were conducted as described by Wirchansky and Shain [67] employing primers listed in Table 1. PCR products were purified using the Wizard SV Gel and PCR Clean-Up System kit (Promega, Inc.) according to the manufacturer's protocol.
DNA sequencing and editing
Purified PCR products were shipped to GeneWiz, Inc. (South Plainfield, NJ) for Sanger DNA sequencing using an ABI 3730xl DNA analyzer. Each PCR product was sequenced in both directions using amplification primers, and sequence chromatograms were viewed and manually adjusted in ChromasPro (Technelysium, Queensland, Australia) or BioEdit [26]. Sequence alignments were made with MUSCLE [17] or CLUSTAL W [29,38]. Accession numbers for all CO1, 12S and 18S sequences are listed in Suppl. Data (Table 1).
Phylogeny
Maximum-likelihood (ML) analyses were performed for all DNA comparisons using the following pipeline: MEGA 7 [34] to align corresponding sequences from multiple individuals or homologous DNA across species, Gblocks [9] for alignment curation, PhyML [25] for tree building and TreeDyn [11] for tree drawing, as configured in the Phylogeny.fr platform [14]. The aLRT statistical test (approximation of the standard Likelihood Ratio Test; [3]) embedded in PhyML determined branch support values. Default settings were used for all parameters. Bayesian Inference (BI) analysis was performed on the combined data set (morphological parameters, 18S, 12S, COI in Nexus format) in MrBayes v. 3.2.1x64 [48,49]. Data were partitioned for 18S and 12S, and by codon position for COI. ModelTest [47] via FindModel was used to determine the optimal model of evolution for each gene under the Akaike Information Criterion (AIC; [46]). The general time reversible (GTR) model with a gamma distributed rate parameter was used for COI, 12S and 18S. Two analyses were run simultaneously with all parameter sets unlinked by partition for two million generations each, sampling every 100 generations, with a burn-in achieved by <50,000 generations. Setting the burn-in to 500,000 generations left a total of 7413 trees sampled for assessment of posterior probabilities. Gaps were treated as missing data, and default settings were used for all other parameters.
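For readers unfamiliar with how pairwise sequence divergences such as those later reported in Table 4 are computed, a minimal uncorrected p-distance over an alignment looks like the sketch below. This is a simplification: published distances are typically model-corrected, and the toy sequences here are not real Hirudo data.

```python
def p_distance(seq1, seq2):
    """Uncorrected pairwise distance: the fraction of aligned sites
    that differ, ignoring positions with gaps or missing data."""
    assert len(seq1) == len(seq2), "sequences must be aligned"
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]
    diffs = sum(a != b for a, b in pairs)
    return diffs / len(pairs)

# Toy aligned COI fragments (illustrative only)
print(p_distance("ATGACCGTTA-CGGA", "ATGTCCGTCAACGGA"))  # -> ~0.14
```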
Results
Specimens of Hirudo were collected from multiple locations in Turkey (Fig. 1; Tables 2 and 3). These localities are separated by 1312 km (Uluabat Lake to Kara Lake), 1306 km (Uluabat Lake to Beyaz Cesme Marsh) and 289 km (Kara Lake to Beyaz Cesme Marsh). Leeches were typically found in muddy bottoms, as well as underwater and in aquatic/terrestrial vegetation (typically reedbeds), with banks of water proving the most prevalent habitat.
Male reproductive apparatus notably large, with thick muscular penis sheath terminating in a bulbous prostate, located at ganglion in segment XI. Epididymis medium-sized, spherical, more than twice the size of the pearlescent-sheened ejaculatory bulb, with tightly packed masses of ducting standing upright on either side of the atrium. Testisacs ovoid and larger than ovisacs, located posterior to ganglion in segment XIII. Female reproductive system composed of relatively coiled tubing. The pearlescent-sheened vagina long and upright, an evenly bowed tube entering directly into the ventral body wall. Oviducts thin ducts forming several coils, covered with a thick layer of glandular tissue; ovaries bi-lobed. Ovisacs globular ovoid or small bean seed-shaped (Fig. 6).
Remarks
Despite similarities between Hirudo sulukii n. sp. and other Hirudo species, the former can be distinguished from its closest relatives using internal and external features. Hirudo sulukii n. sp. differs from H. medicinalis and H. orientalis by the form of black spots on the dorsal, paramedian stripes of the body. Hirudo sulukii n. sp. has black, segmentally-arranged, united ellipsoid and elongated spots, and, along the dorsal lateral margins of the body, a pair of zigzagged black dorsolateral longitudinal stripes (Fig. 4a). The ventral coloration pattern of H. sulukii n. sp. has a variable number of irregular spots (Fig. 4b) [65]. Hechtel and Sawyer [28] considered external pigmentation to be not only the most useful, but also arguably the best character to distinguish species of Hirudo.
In this study we used the approach of Hechtel and Sawyer [28] and Utevsky and Trontelj [65] regarding the size of the epididymis in relation to the ejaculatory duct. The epididymes of Hirudo sulukii n. sp. (Fig. 6) and H. orientalis are medium-sized. In contrast, the epididymes of H. verbana are relatively small, whereas H. troctina and H. medicinalis have massive epididymes [65]. The vagina of Hirudo sulukii n. sp. is relatively long, tubular, and arc-formed (Fig. 6), while in H. orientalis the vagina is tubular and evenly curved. The former two species do not show the central swelling and sharp folding typical for H. verbana. In H. medicinalis, the vagina can have two conditions: straight and tubular, or terminally curved [65]. Hirudo troctina has a bulbous vagina [28].
Moquin-Tandon [40] described at least five species of Hirudo including H. verbana and H. medicinalis, but later concluded that they were all varieties of the same leech species. The medicinal leech H. sulukii n. sp. considered here was determined to be morphologically different from all species described by Moquin-Tandon [40,41].
Phylogenetic analyses
To determine the relationship of specimens to other Hirudo species, we subjected them to the comparative analysis of CO1 (cytochrome c oxidase subunit 1) and 12S rRNA from mitochondria, and nuclear 18S rRNA. Combined COI, 12S and 18S rRNA analysis contained 13 terminals with 1514 aligned characters. Maximum Likelihood analysis of the combined data set yielded five equally parsimonious trees with 500-1000 steps (Fig. 7; Additional file 1); concordant trees were generated independently with COI data (Fig. 8; Additional file 1). Collectively, H. sulukii n. sp. formed a basal branch among European medicinal leeches with strong bootstrap support, while resolution among H. medicinalis, H. orientalis and H. verbana lineages was ambiguous, as noted in previous studies [45,56]. Population structure was shallow among the collected specimens (<2 % divergence at CO1; Table 4), suggesting recent invasions into field sites sampled in the current study (see Fig. 1). The Asian species, H. nipponia, fell outside the Hirudo clade in combined sequence analyses (Fig. 7), suggesting a deep ancestral split with European species, and calling into question the designation of H. nipponia within the Hirudo phylogroup. Interestingly, H. nipponia was equidistant to European Hirudo species (~22-25 % at CO1), the latter of which were approximately equidistant to each other (i.e., ~10-14 % at CO1; Table 4). Inferring a divergence rate of ~2 % per million years at the CO1 locus based on combined geological and molecular data within Oligochaeta [10,15,67], we estimate a lower Miocene split between lineages leading to H. nipponia and European Hirudo sp., and radiation of the latter species during the upper Miocene. Branch patterns of remaining species were consistent with those reported previously [45].
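Taking the cited ~2 % per million years at face value as a pairwise COI rate, the divergence-time arithmetic behind these estimates reduces to a one-line calculation. The sketch below reproduces it for the distances reported in Table 4; this is a deliberately crude clock that ignores rate variation, saturation, and calibration uncertainty.

```python
# Crude molecular clock: time since divergence taken as
# (pairwise % divergence) / (rate in % per million years),
# using the ~2 %/My COI rate cited in the text.
RATE_PCT_PER_MY = 2.0

def split_time_mya(percent_divergence):
    return percent_divergence / RATE_PCT_PER_MY

# European Hirudo species vs. each other (~10-14 % at COI):
print(split_time_mya(10), split_time_mya(14))  # ~5-7 Mya (upper Miocene)
# European species vs. the H. nipponia lineage (~22-25 % at COI):
print(split_time_mya(22), split_time_mya(25))  # a considerably deeper split
```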
Discussion
Maximum Likelihood and Bayesian Inference analyses yielded trees with concordant topologies and strong support for H. sulukii as a basal branch of the European medicinal leeches. Relationships between H. medicinalis, H. verbana and H. orientalis were less conclusive, consistent with confusion regarding their morphological identification [45,56]. The relatively small size of H. sulukii, unique dorsal and ventral pigmentation patterns, and internal anatomy (e.g., small and pointed atrium, medium-sized epididymis, relatively long tubular and arc formed vagina) are distinguishing features of this previously undescribed leech. Note that H. sulukii has thus far been collected only from relatively high elevation field sites (i.e., Kara Lake-Adiyaman 1233 m, Sülüklü Lake-Gaziantep 877 m, and Segirkan wetland-Batman 525 m), and its small size in comparison with other Hirudo species may reflect an adaptation to this environment (e.g., reduced foraging season/food supply), as suggested for other annelid species (e.g., [15]). Previously, only two medicinal leeches were thought to occur in Turkey, H. verbana and H. medicinalis, while a total of five are currently described throughout Eurasia. The range of H. verbana occurs to the south of H. medicinalis in an almost parapatric fashion with little overlap [5,32,42,43,51]. The former is subdivided into an Eastern (southern Ukraine, North Caucasus, Turkey and Uzbekistan) and a Western phylogroup (Balkans and Italy) that do not overlap, suggesting distinct postglacial colonization from separate refugia [61,64]. Easternmost records are from Samarqand Province in Uzbekistan [61,64,65], resulting in an east-to-west extent of ~4600 km. Leeches supplied by commercial facilities belong to the Eastern phylogroup, originating mostly from Turkey and the Krasnodar Territory in Russia, two leading areas of leech export.
[Fig. 1 caption fragment: see Tables 2 and 3 for specimen descriptions.]
Hirudo medicinalis is distributed from Britain and southern Norway to the southern Urals and probably as far as the Altai Mountains, occupying the deciduous arboreal zone [6,12,16,21,22,31,43,51,52,59,63,68]. Hirudo orientalis is associated with mountainous areas in the sub-boreal eremial zone and occurs in Transcaucasian countries, Iran and Central Asia, while H. troctina has been found in northwestern Africa and Spain in the Mediterranean zone [64]. Hirudo verbana and H. medicinalis have recently experienced range expansions while H. orientalis has remained geographically isolated within arid and alpine areas of Central Asia and Transcaucasia [61].
By molecular clock inference using divergence estimates at the CO1 locus [10,15,67], our data suggest a deep, ancestral split between European and Asian (i.e., the lineage leading to H. nipponia) medicinal leeches somewhere in the lower Miocene, followed by an "explosion" of Hirudo species upon their putative arrival to the European continent during the upper Miocene, 5-10 mya (Fig. 9). The possible misclassification of H. nipponia does not affect this evolutionary scenario since it represents a basal, sister branch to the European Hirudo phylogroup (see Fig. 8). This evolutionary timeline is supported by tree topologies and relative genetic distance of European Hirudo species to each other at the COI locus (i.e., 10-14 % divergence; see Table 4). The time frame of these events suggests the presence of an open habitat corresponding with, for example, formation of Levantine land bridges, which may have facilitated mammalian-based, passive dispersal of an ancestral Hirudo archetype throughout Europe. Thereafter, tectonic activity at the onset of the Pliocene ~5.3 mya broke the land bridge between Morocco and Spain causing the Zanclean Flood that filled the Mediterranean basin, and in combination with mountain building throughout the European continent [7], appears to have restricted panmixia among extant Hirudo lineages, leading in part to their speciation and current geographic ranges. For instance, concurrent with the closing of the Tethys Sea by continental drift of the African and Arabian plates, mountain building events occurred in Southern Turkey forming the Taurus Mountain chain [13]. Species of Hirudo have had broad applications in medicine, ranging from reconstructive surgeries (e.g., facial, finger reattachment, ear flap) to anticoagulants/analgesics secreted from salivary glands [2,24]. Thus the discovery of a new Hirudo species, particularly a basal member of this phylogroup, has considerable value in the context of medical potential. Specifically, natural variants of known bioactive factors (e.g., hirudin, antistasin, etc.) are logical candidates to explore for their potentially enhanced or novel pharmaceutical properties. The current study has prompted a more systematic survey of Hirudo throughout Turkey and surrounding regions with the collective aims of refining the evolutionary history of the genus, facilitating conservation efforts, and identifying species that may expand the repertoire of medicinal applications for this important Hirudinid genus.
Conclusions
By phylogenetic and morphological criteria, specimens collected from Kara Lake of Adiyaman, Sülüklü Lake of Gaziantep and Segirkan wetland of Batman in Turkey comprise a new species, Hirudo sulukii. Geographic isolation by the Taurus Mountain chain has likely contained H. sulukii within the regional sampling area. By ML and BI analyses, H. sulukii n. sp. forms a basal evolutionary branch of Eurasian medicinal leeches, preceded by a deeper ancestral split with the Asian medicinal leech, H. nipponia. Phylogeographic interpretations of the genus identify a European Hirudo "explosion" during the upper Miocene followed by geological events (e.g., Zanclean flood, mountain building) that likely contributed to range restrictions and regional speciation of extant members of the European clade. | 2023-02-02T14:10:32.861Z | 2016-08-23T00:00:00.000 | {
"year": 2016,
"sha1": "da6915a36dd44dac808bd372b4dfe0abdad62fbb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s40850-016-0002-x",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "da6915a36dd44dac808bd372b4dfe0abdad62fbb",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
261013362 | pes2o/s2orc | v3-fos-license | Emotional Intelligence in Spanish Elite Athletes: Is There a Differential Factor between Sports?
Emotional intelligence is a determinant factor in sports performance. The present study analysed differences in total emotional intelligence and its four dimensions in 2166 Spanish athletes (25.20 ± 10.17 years) from eight sports (volleyball, track and field, shooting, football, basketball, handball, gymnastics, and judo). A total of 1200 men and 966 women answered anonymously using a Google Forms questionnaire sent via WhatsApp about demographics and psychological variables. A Pearson correlation was conducted to assess the age–emotional intelligence relationship. An independent T-test and One-Way ANOVA were carried out to check for age differences between biological sex and sport and a One-Way ANCOVA to determine differences between sports controlled by age. Age differences were observed by sex and sport (p < 0.001). An association was found between age and emotional intelligence dimensions (p < 0.001), except for other’s emotional appraisal (p > 0.05). Judo was the sport with the highest levels of regulation of emotions, other’s emotional appraisal, use of emotion, and total emotional intelligence (p < 0.05). Generally, emotional intelligence was found to be more developed in individual sports than in team sports, except football. Consequently, psychological skills like emotional intelligence could be critical to achieving high performance, depending on the sport.
Introduction
Sport psychology focuses on the study of different variables that have an impact on an athlete's performance, with the aim of improving it. Different studies have proposed that variables such as anxiety [1,2], stress [3,4], motivation [5,6], and self-confidence [7] have a relationship with sporting performance. Thus, sport psychology seeks to understand how psychological factors affect sport performance and vice versa in order to develop strategies that can help improve these variables [8].
Although emotional intelligence has been a popular research topic, there is a wide diversity of paradigms and consequent assessment tools [9-13]. However, there is some consensus on the definition of emotional intelligence, which is a person's ability to manage emotions. This ability is studied in four dimensions: (i) self-emotional appraisal (SEA), which refers to an individual's ability to understand his or her deep emotions and to be able to express them naturally; (ii) other's emotional appraisal (OEA), connected to an individual's ability to perceive and understand the emotions of the people around him or her; (iii) use of emotion (UOE), associated with an individual's ability to make use of emotions by directing them towards constructive activities and personal performance; and (iv) regulation of emotions (ROE), related to an individual's ability to regulate emotions and control behaviour when experiencing extreme moods [11,14].
Even though it is acknowledged that women show higher values compared with men [15-17], some investigations show that these differences are not significant [18-20]. However, specifically in the field of physical activity and sport, several results point to the opposite, with women showing significantly lower values [21-23]. In any case, Fernandez-Berrocal et al. [24] argued that when age is controlled for, these relationships tend to disappear. This is because there appears to be a significantly positive correlation between emotional intelligence and age [25]. Some comparative studies have shown that adults have better emotional regulation than young people, which allows them to develop better mental health [26,27].
In the sports field, Laborde et al. [28] described an emotion as an organised psychophysiological reaction that evaluates ongoing contextual relationships. Numerous authors have studied emotional intelligence in sport and concluded that it is a variable that can significantly influence sport performance [22,29-33]. Specifically, Laborde et al. [34] stated that each type of sport has different psychological requirements. Thus, in individual sports, what each individual does or decides will not be compensated by any partner. This means that the athlete will bear the full weight of his or her decisions. In team sports, by contrast, teammates can compensate for these psychological demands. Along these lines, Crombie et al. [35] argued that team emotional intelligence (i.e., the sum of the members' emotional intelligence) predicts sport performance. Some studies have looked at emotional intelligence differences between sports. In relation to whether the type of sport was an individual or team sport, most studies found no significant differences [36-39]. However, Castro-Sánchez et al. [40] found that athletes in team sports with contact (e.g., handball, football, or basketball) showed higher levels of emotional management than those in individual sports (with or without contact). Szabo and Urban [41], in their study of combat sports, concluded that boxing and combat sports in general may foster EI development. This could be because certain task-oriented motivational climates positively influence levels of emotional intelligence and anxiety, especially in contact sports [42]. EI can be developed through sport, insofar as many stages of sport take place in educational or training contexts [43]. Furthermore, it is important to focus attention on the risks that athletes present in terms of EI and well-being, as optimal performance is associated with pleasant emotions and dysfunctional performance with unpleasant emotions [44].
Although these studies have sought to categorise sports, it is recognised that each sport is different. For example, a study of volleyball has shown that the higher the emotional intelligence, the better the performance of male and female players [45]. With regard to track and field, Lu et al. [46] correlated emotional intelligence with lower levels of precompetitive anxiety. In relation to shooting and archery sports, Sudarshan and Nagre [47] conducted research comparing both sports and found that archers showed significantly higher emotional intelligence. However, it should be noted that the sample was very small. It is worth highlighting the work developed by Berastegui-Martínez and Lopez-Ubis [48] on intervention in professional female football players, achieving an improvement in emotional intelligence levels and subsequent performance. In relation to gymnastics, a study conducted by Tatsi et al. [49] with acrobatic gymnasts aged 9 to 18 years showed that these athletes presented high emotional intelligence levels despite their age. Similar results have also been found with athletes in extreme and high-intensity sports such as ultramarathon runners, climbers, and cyclists [50-53] or combat sports [22,23,29,41]. However, no research has been found that treats the characteristics of each sport as unique and compares levels of emotional intelligence.
Due to the limited number of studies, a meta-analysis by Kopp and Jekauc [54] was unable to obtain conclusive results in the comparison between sport types, highlighting the need for further research. Therefore, the aim of this research was to analyse the differences in total emotional intelligence and its four dimensions (SEA, OEA, UOE, and ROE) in a large sample of Spanish-federated athletes from eight different sports (volleyball, track and field, shooting, football, basketball, handball, gymnastics, and judo), controlling for sex and age.
Participants
A total of 2232 athletes participated in the study by completing the questionnaires. Athletes participated in eight sports disciplines: volleyball, track and field, Olympic shooting, football, basketball, handball, gymnastics, and judo. The inclusion criteria required that all athletes hold a license from their official Spanish federation and have competed in the 2020 season. The exclusion criteria were the following: (a) records with missing data (n = 13); and (b) responses from staff and coaches (n = 53). The final sample was 2166 athletes (age 25.20 ± 10.17 years), of whom 1200 were men (age 27.85 ± 11.10 years) and 966 were women (age 21.90 ± 7.68 years). Considering the data provided by the Consejo Superior de Deportes-CSD [55], this study was carried out with a more balanced sample in relation to the sex comparison (77%/23% federation men/women licences, respectively). In addition, athletes were divided according to their nationality (Spanish or not), their residence country, and whether they had been called up to represent their national team (Table 1). This study was approved by the ethics committee of the Universidad Politécnica de Madrid.
Instrument and Variables
A questionnaire was used to collect information from the eight sports cited above. The demographic and training questionnaire designs were carried out independently by two professional coaches with wide international experience. Later, both questionnaires were discussed by the two researchers, who developed an initial version of the combined questionnaire. This version was tested in a pilot study involving four athletes (two men and two women), and their feedback was used to revise and modify the survey by a third external sports expert. The definitive version was prepared by consensus among the three experts and consisted of 23 items structured as follows: Q1-Q7 were demographic questions adapted from [56,57] and were single-choice. Moreover, Q8-Q23 were Likert-type scales from 1 (totally disagree) to 7 (totally agree) that belong to the Wong Law Emotional Intelligence Scale Short Form (WLEIS-S), adapted and validated in Spanish by Extremera-Pacheco et al. [58]. Although the approximate time to complete the questionnaire was 5 min, unlimited time to fill out the survey was provided to the athletes.
The study variables were distributed into two areas: demographic and psychological. Demographic variables were sport (volleyball, track and field, Olympic shooting, football, basketball, handball, gymnastics, or judo), age (years), sex (male or female), nationality (Spanish or foreign), residence country (Spain or other countries), play role (coach or athlete), and sport level (athlete called up by the national team in the last two years, yes or no). With regard to the psychological variables, emotional intelligence (EI) was analysed in five areas: self-emotional appraisal (SEA, α = 0.855), other's emotional appraisal (OEA, α = 0.779), use of emotion (UOE, α = 0.852), regulation of emotions (ROE, α = 0.883), and total emotional intelligence (EI Total, α = 0.873).
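Scoring the WLEIS-S amounts to averaging the Likert responses within each dimension. The sketch below assumes the conventional four-items-per-dimension layout and computes EI Total as the overall mean; the exact item-to-dimension mapping used for Q8-Q23 in this survey is an assumption here.

```python
import numpy as np

# Hypothetical item-to-dimension mapping: the WLEIS conventionally has
# four items per dimension, but the ordering of Q8-Q23 in this survey
# is assumed, not taken from the paper.
DIMENSIONS = {"SEA": slice(0, 4), "OEA": slice(4, 8),
              "UOE": slice(8, 12), "ROE": slice(12, 16)}

def score_wleis(responses):
    """responses: 16 Likert answers (1-7). Returns per-dimension means
    and the overall mean as EI Total."""
    r = np.asarray(responses, dtype=float)
    scores = {dim: r[idx].mean() for dim, idx in DIMENSIONS.items()}
    scores["EI_Total"] = r.mean()
    return scores

print(score_wleis([6, 5, 6, 7, 4, 5, 5, 4, 6, 6, 7, 6, 3, 4, 4, 3]))
```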
Procedures
The snowball sampling technique was used to send the final version of the survey through a Google Forms questionnaire to the athletes and technical staff [59]. A follow-up was sent two days later with the aim of increasing the response rate [60]. All survey invitations and follow-ups were sent via WhatsApp. The questionnaire was open for ten days (after which no surveys were accepted) and anonymous, to encourage honest answers. The minimum estimated response rate was 55.70%, with 210 surveys finally registered against a maximum hypothetical number of 377 responses associated with the invitations sent out. According to Deutskens et al. [60] and Mavletova and Couper [61], this estimated response rate can be considered adequate or good. However, since it is not possible to know the exact response rate, there are still convincing reasons to consider the actual responses from 175 Spanish players a very good data set [62]. All participants signed an informed consent form before completing the survey.
Data Analysis
Variables were described using the mean and the standard deviation (X ± SD). The normal distribution of the variables was checked using the Kolmogorov-Smirnov test, and the homogeneity of variance was tested using Levene's test. Values more than three standard deviations from the mean were excluded to avoid extreme outliers. An independent t-test was carried out to check for age differences between sexes, while a one-way ANOVA was used to compare ages between sports. Furthermore, the Pearson correlation coefficient was used to assess the linear relationship between age and the EI dimensions. Lastly, a one-way ANCOVA was conducted to determine significant differences between sports in EI, controlling for age. Post-hoc analysis was conducted using the Bonferroni test. The effect size for sex comparisons was estimated using Cohen's d index (d), with two cut-off points: medium effect (0.30) and large effect (0.60). For sport groups, effect sizes were determined using eta squared (η²), based on the following criteria [63]: small effects (<0.06), moderate effects (0.06-0.14), and large effects (>0.14). The collected data were analysed using the Statistical Package for the Social Sciences (SPSS version 25.0; IBM Corporation, Armonk, NY, USA). The level of significance was set at 0.05.
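A minimal sketch of this pipeline in Python (pandas/SciPy/statsmodels), rather than the SPSS software the study used, could look as follows; the file name "athletes.csv" and the column names are illustrative assumptions.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("athletes.csv")  # assumed columns: sport, sex, age, EI_total

# Exclude extreme outliers beyond three standard deviations
z = (df["EI_total"] - df["EI_total"].mean()) / df["EI_total"].std()
df = df[z.abs() <= 3]

# Homogeneity of variance across sports (Levene's test)
groups = [g["EI_total"].to_numpy() for _, g in df.groupby("sport")]
levene_stat, levene_p = stats.levene(*groups)

# Independent t-test for age differences between sexes
t, p_t = stats.ttest_ind(df.loc[df.sex == "M", "age"],
                         df.loc[df.sex == "F", "age"])

# Pearson correlation between age and total emotional intelligence
r, p_r = stats.pearsonr(df["age"], df["EI_total"])

# One-way ANCOVA: EI_total by sport, controlling for age as a covariate
ancova = smf.ols("EI_total ~ C(sport) + age", data=df).fit()
print(ancova.summary())
```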
Results
Age was checked by sex and sport (Table 2). Differences were observed in age by sex, with men older than women (t(2164) = 14.17; p < 0.001; d = 0.613). Moreover, age also presented significant differences by sport (F(7, 2158) = 186.38; p < 0.001; η² = 0.377). Bonferroni post hoc analysis showed that gymnastics presented the lowest age, with significant differences from the rest of the groups (p < 0.001, in all comparisons). In contrast, shooting and judo athletes were the oldest in all comparisons (p < 0.001, both). In addition, handball players showed differences with football (p < 0.01), with higher values for footballers; volleyball and track and field also presented differences with football (p < 0.001, both) and basketball (p < 0.01, both), being younger than them. Table 2 notes: Dif. = differences between groups; differences between sexes are marked separately; A = differences with volleyball; B = differences with track and field; C = differences with Olympic shooting; D = differences with football; E = differences with basketball; F = differences with handball; G = differences with gymnastics; H = differences with judo. * = p < 0.05; ** = p < 0.01; *** = p < 0.001.
Additionally, a correlation was found between EI and athletes' age (Table 3). A positive association was found in men between age and the EI dimensions SEA, UOE, and ROE, as well as EI Total, with values ranging from r = 0.159 to r = 0.218 (all p < 0.001). Similarly, SEA, UOE, ROE, and EI Total presented a positive association with age in women, with values ranging from r = 0.192 to r = 0.272 (all p < 0.001). However, OEA did not show any relationship with age, regardless of sex (p > 0.05). Table 3. Correlation between age and total emotional intelligence (EI Total) and dimensions (SEA, OEA, UOE, and ROE). Table 4 presents the relationship between sport and EI, controlling for age as a covariable, by sex and for the general population. Moreover, an analysis of age's effect was also included to confirm the relation between age and EI. Table 4 notes: Dif = differences between groups; A = differences with volleyball; B = differences with track and field; C = differences with shooting; D = differences with football; E = differences with basketball; F = differences with handball; G = differences with gymnastics; H = differences with judo; SEA = self-emotional appraisal; OEA = other's emotional appraisal; UOE = use of emotion; ROE = regulation of emotions; EI Total = total emotional intelligence; * = p < 0.05, ** = p < 0.01, *** = p < 0.001.
Discussion
The purpose of this study was to analyse the differences in total emotional intelligence and its four dimensions (SEA, OEA, UOE, and ROE) in a large sample of federated Spanish athletes in eight different sport modalities, controlling for sex and age.
In relation to women, shooting showed significantly higher values in the SEA variable than the rest of the sports evaluated: volleyball, track and field, football, basketball, handball, gymnastics, and judo. Although no preliminary studies comparing these sports have been found, Dal and Dogan [64] reported that shooters with higher emotional intelligence levels perceived projectile-induced physical and psychological stress as a challenge, leading to better performance. On the other hand, judokas also showed significantly higher values in SEA than track and field, handball, and gymnastics athletes. Likewise, judo showed significantly higher values than handball in OEA, ROE, and total EI. Piskorska et al. [65] described combat sports as having characteristics that differentiate them from other sports and that may be related to working on and improving emotional intelligence. Similar results were found by Reche-García et al. [66], who reported that combat-sport athletes had significantly higher emotional intelligence and resilience levels than athletes in individual sports (i.e., track and field and gymnastics) or team sports (i.e., handball). Furthermore, handball players showed significantly lower values in total emotional intelligence and each of its dimensions; they also showed lower OEA values in relation to track and field, basketball, and judo, and lower total EI in relation to track and field and judo.
The results for men showed that judokas have the highest levels of emotional intelligence. Specifically, they seem to have a greater ability to perceive and understand the emotions of the people around them (OEA) than track and field, football, basketball, and handball athletes. They also have a superior ability to make use of their emotions by directing them towards constructive activities and personal performance (UOE) compared with volleyball, basketball, and handball athletes. Likewise, judokas have a greater ability to regulate their emotions and control their behaviour when in extreme moods (ROE) compared with volleyball, shooting, track and field, football, basketball, and handball athletes. Moreover, they showed significantly higher EI Total values than volleyball, track and field, basketball, and handball. These results are in line with Mitic et al. [67], who affirm that judokas, compared with other athletes, had greater control over their emotions. According to these authors, this is because the sport requires its competitors to sustain a high emotional charge while controlling their emotions throughout the fight in order to avoid mistakes. Furthermore, Stankovic et al. [68] showed, in a study comparing team sports athletes and judokas, that team sports athletes scored higher on emotionality and aggressiveness than judokas; that is, team athletes experience fear of physical danger, experience anxiety in response to life stresses, feel a need for emotional support from others, and feel empathy and sentimental bonds with others. Similarly, these results again support those of Reche-García et al. [66]. On the other hand, it seems that, in general, team sports (handball, basketball, and volleyball) show lower values compared with individual sports (gymnastics, shooting, or judo), with the exception of football. Nevertheless, some studies have found no differences in emotional intelligence levels according to the type of sport [36,69]. In fact, Akelaitis and Malinauskas [70] stated, in a study with 204 individual and 212 team athletes, that team athletes showed higher self-awareness and self-regulation skills; it is worth noting that their sample was aged between 15 and 18 years, which could be relevant. However, Acebes-Sánchez [71], with a similar sample in terms of size and nationality (1784 Spaniards of legal age who practised some type of sport), found that individual athletes showed significantly higher values than athletes in collective sports.
Controlling for comparisons by sex and age, the results presented judo as a sport with high OEA levels compared with collective sports (i.e., handball, basketball, and football); ROE significantly higher than collective (i.e., volleyball and handball) and individual sports (i.e., track and field and gymnastics); UOE higher than volleyball, basketball, handball, and gymnastics; and total EI higher than volleyball, basketball, handball, track and field, and gymnastics. Similar results were found previously by Acebes-Sánchez et al. [22], where judo athletes and high-performance judo athletes showed better EI than the rest of the studied groups (from different sports, active, and non-active individuals). According to Szabo and Urban [41], judo, as a combat sport, may foster emotional intelligence. It is worth noting that football, despite being a team sport, does not follow the same pattern. In fact, football players score higher in ROE than handball and gymnastics; in UOE than volleyball, basketball, handball, and gymnastics; and in total emotional intelligence than volleyball and handball. No similar results have been found previously. These results could be due to the fact that this sport enjoys greater resources (i.e., the presence of a sports psychologist).
The present study has several limitations. Firstly, the cross-sectional design meant that we were unable to infer causal relationships between the variables analysed; secondly, longitudinal studies would be necessary to establish cause-and-effect relationships and to study changes in EI during sports practice and in different competition contexts; and thirdly, differences with regard to control groups could not be analysed. On the other hand, EI was assessed with a self-report tool, which means that it is treated as trait EI, even though the tool defines each dimension (SEA, OEA, UOE, and ROE) as ability EI; this should be taken into account when interpreting the results. It would also be interesting to carry out intervention research to assess whether the mere practice of sport develops emotional competences or whether the specific development of these skills by professionals is necessary. Similarly, it would be interesting to carry out discriminant analyses to determine the general differences between these types of sports, or factor analyses that could explain the variance between them.
This study presents the following strong points: on the one hand, the quantity and quality of the sample are high; no previous research is known to have made such a large comparison in terms of sample size and number of sports and athletes. On the other hand, this research highlights the realities of emotional intelligence across different sports. Building on these results, interventions can be developed in different sports to improve emotional intelligence levels. Also, it seems that some sports, such as judo, develop better emotional intelligence levels; this could be a reason to promote this type of sport among young people, as emotional intelligence is a protective variable for mental health. As Gimeno et al. [58] highlighted, these results suggest the importance of psychological skills training to favour sports performance and injury prevention.
Conclusions
The main conclusion is that judokas have higher emotional intelligence levels than athletes from other sports, in both women and men. These results hold when controlling for sex and age. Although significant differences have been found between other sports, they have not been as noticeable or consistent as those between judo and the other sports. It should also be noted that female shooters show significantly higher SEA levels than athletes in the rest of the sports analysed.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Universidad Politécnica de Madrid as "Factores psicológicos y actividad física en la población residente en España".
"year": 2023,
"sha1": "23ba76a7997ca5bc655412f49eba386e36595e14",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4663/11/8/160/pdf?version=1692365253",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "48c5108edb6480e3ab8d4089cdc39cd92a44202a",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Matrix metalloproteinase 14 participates in corneal lymphangiogenesis through the VEGF-C/VEGFR-3 signaling pathway
The aim of the present study was to investigate the roles of matrix metalloproteinase 14 (MMP-14) in corneal inflammatory lymphangiogenesis. The expression of MMP-14 in vivo was detected by immunohistochemistry, reverse transcription-quantitative polymerase chain reaction (RT-qPCR) and western blot assays, under various corneal conditions. pCMV-MMP-14 or empty pCMV vectors were injected into mouse corneal stroma, 3 days after suture placement in a standard suture-induced inflammatory corneal neovascularization assay. The outgrowth of blood and lymphatic vessels and macrophage recruitment were analyzed using immunofluorescence. The expression levels of vascular endothelial growth factor (VEGF) subtypes were tested by RT-qPCR. MMP-14 expression was upregulated significantly following various corneal injuries. The results demonstrated, for the first time, that MMP-14 strongly promotes corneal lymphangiogenesis and macrophage infiltration during inflammation. Furthermore, expression levels of VEGF-C and VEGF receptor-3, but not other VEGF components, were significantly upregulated by the intrastromal delivery of MMP-14 during corneal lymphangiogenesis. In conclusion, this study indicates that MMP-14 is critically involved in the processes of lymphangiogenesis. Inhibition of MMP-14 may provide a viable treatment for transplant rejection and other lymphatic disorders.
Introduction
The healthy cornea is devoid of blood and lymphatic vessels. However, vascularization of the cornea can occur in a number of pathological disorders, such as chronic corneal inflammation, alkali burns and graft rejection (1,2). The outgrowth of blood and lymphatic vessels from the limbus into the cornea, defined as corneal hemangiogenesis (HG) and lymphangiogenesis (LG), reduces transparency and visual acuity (3,4). It is also widely recognized that corneal HG and LG are strong risk factors for graft failure. Studies have shown that inhibition of corneal neovascularization (NV) can promote graft survival in a murine model of corneal transplantation (5,6). Thus, inhibition of corneal NV is a key point not only for optimal clarity and vision, but also for higher graft survival rates.
Unlike blood vessels, corneal lymphatic vessels are not easily visible; therefore, systematic lymphatic research started much later than the study of blood vessels. With the advancement of technologies and the discovery of several lymphatic endothelial-specific markers, such as lymphatic vessel endothelial hyaluronan receptor-1 (LYVE-1), prospero homeobox protein 1 (Prox-1) and vascular endothelial growth factor receptor-3 (VEGFR-3), significant progress has been made (7-9). A number of factors involved in LG have been identified to date, including VEGF-C and VEGF-D (10-12); however, their underlying mechanisms remain unknown.
Matrix metalloproteinases (MMPs), which are a family of zinc-dependent enzymes, play key roles in degrading the extracellular matrix (ECM) associated with metastasis, angiogenesis and tumor invasion (13,14). MMP-14 (also known as membrane type-1 matrix metalloproteinase), a transmembrane type MMP, has been reported to possess a broad spectrum of activity against ECM components such as type I and II collagen, fibronectin, vitronectin, laminin, fibrin and proteoglycans (15,16). A previous study suggested that MMP-14 also enhanced cancer cell migration and invasion by the shedding of cluster of differentiation (CD)44 and syndecan-1 from the cell surface, which indicated that MMP-14 participated in tumor invasion and metastasis (17). MMP-14 may promote angiogenesis by degrading the ECM (18), and may also regulate corneal HG by cleaving decorin, an anti-angiogenic factor in corneas with basic fibroblast growth factor (bFGF)-induced vascularization (19). Furthermore, MMP-14 expression has been shown to be enhanced in wounded corneal epithelium and stroma (20). In contrast to HG, LG is known to also participate in corneal injuries, but the contribution of MMP-14 to this process remains unclear.
Hence, the present study aimed to determine if MMP-14 promotes corneal LG, in vivo. The expression of MMP-14, which is able to induce the outgrowth of lymphatic vessels after various corneal injuries, was determined by immunohistochemistry, reverse transcription-quantitative polymerase chain reaction (RT-qPCR) and western blot assays. In addition, whether corneal LG was increased in a murine model of suture-induced inflammatory corneal NV was investigated, by the intrastromal injection of MMP-14 plasmid. Finally, whether VEGF signaling was involved in the molecular pathway leading to corneal LG was investigated, through the intrastromal delivery of MMP-14.
Materials and methods
Plasmid construction. A cDNA encoding the MMP-14 gene was subcloned into the pCMV vector (Invitrogen; Thermo Fisher Scientific, Inc., Waltham, MA, USA) as previously described (21). Plasmids were purified using a Qiagen plasmid purification kit (Qiagen, Inc., Santa Clarita, CA, USA) according to the manufacturer's protocol. The concentration of plasmid was ~2 µg/µl for naked DNA injection.
Animals and anesthesia. All animal protocols were approved by the local animal care committee of the First Affiliated Hospital of Harbin Medical University (Harbin, China), and were in accordance with the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research. A total of 153 male C57BL/6 mice (6-8 weeks old; 18-20 g) were purchased from the Laboratory Animal Center of Harbin Medical University (Harbin, China). All mice were housed at constant room temperature, with free access to food and water, on a 12-h light/dark cycle. Mice were anesthetized with an intraperitoneal injection of a combination of ketamine and xylazine (120 and 20 mg/kg bodyweight, respectively). The mice were sacrificed by CO2 inhalation overdose at the end of the experiment.
Suture-induced corneal NV and corneal intrastromal injections. The mouse model of suture-induced inflammatory corneal NV was established as previously described (22). The central cornea was marked with a 2-mm diameter trephine and three 11-0 nylon sutures (Lingqiao; Ningbo Medical Needle Co., Ltd., Ningbo, China) were placed in the intrastromal position. The outer point of suture placement was near the limbus, and the inner point was near the corneal center, equidistant from the limbus. Sutures were removed after 7 days. Corneal intrastromal injections were performed on day 3 after suture placement, as previously described (23). A 0.5-inch 33-gauge needle on a 10-µl gas tight syringe (Hamilton Robotics, Reno, NV, USA) was introduced into the corneal stroma, and plasmid (5 µg pCMV-MMP14 or pCMV) solution was injected into the stroma to separate corneal lamellae and disperse the plasmid. Thus, MMP-14 and empty vector (pCMV) groups were established. Normal mice were entirely untreated; saline control mice received only standard suture placement, and treated eyes were then rinsed with sterile physiological saline (0.9% NaCl, 1 ml, twice daily for 1 week). A total of 35 male C57BL/6 mice were used, including 8 mice in the untreated group, 7 mice in the saline control group, 8 mice in the empty vector group and 10 mice in the MMP-14 group.
Mouse alkali injury model. The alkali burn injury model in the mouse cornea was used, as described previously (n=9) (24). In brief, a 2-mm diameter filter paper disc was wetted with 1 mol/l NaOH solution for 20 sec and placed on the central cornea of the mouse for 30 sec. Injured eyes were rinsed with sterile physiological saline (0.9% NaCl, 20 ml) immediately.
Immunohistochemistry. Corneas were cut and fixed in 10% neutral buffered formalin for 24 h. Paraffin-embedded tissue sections (4 µm) were deparaffinized, rehydrated, and treated with 0.3% hydrogen peroxide in methanol for 30 min, to eliminate endogenous peroxidase activity. The tissue sections were then incubated for 60 min at room temperature with a rabbit anti-mouse MMP-14 monoclonal antibody (1:2,000; ab51074; Abcam, Cambridge, UK). After three washes (3 min each) with phosphate-buffered saline (PBS), a DAB Detection kit (PV9000; ZSGB-Bio, Beijing, China) was used for MMP-14 staining, according to the manufacturer's protocol. Images were acquired with the Leica DM4000B biological microscope equipped with a Leica DFC 550 digital camera and Leica Application Suite version 4.2.0 software (Leica Biosystems, GmbH, Heidelberg, Germany).
RNA isolation and RT-qPCR. Total RNA was extracted from the corneas using TRIzol reagent (Invitrogen; Thermo Fisher Scientific, Inc.) according to the manufacturer's protocol. Total RNA (400 ng) was reverse transcribed using the PrimeScript™ RT reagent kit with gDNA Eraser (Takara Biotechnology Co., Ltd., Dalian, China). qPCR was performed using SYBR® Premix Ex Taq™ II (Takara Biotechnology Co., Ltd.) with a LightCycler 480 Real-time PCR System (Roche Diagnostics, Basel, Switzerland). qPCR was performed under the following conditions: an initial denaturation step of 95˚C for 30 sec; 40 cycles of 95˚C for 5 sec and 60˚C for 30 sec; followed by an additional step of 95˚C for 5 sec and 60˚C for 60 sec for a subsequent melt curve analysis to check amplification specificity. All assays were conducted three times and were performed in triplicate. Results were derived using the comparative threshold cycle method (25,26) and normalized to glyceraldehyde-3-phosphate dehydrogenase (GAPDH) as an internal control. The primers used for qPCR are shown in Table I.
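As a worked illustration of the comparative threshold cycle (2^-ΔΔCt) calculation referenced above, the sketch below computes a GAPDH-normalized fold change; the Ct values are invented for illustration and are not measurements from this study.

```python
# Hedged sketch of the comparative threshold cycle (2^-ΔΔCt) method.
def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Fold change of target gene vs. control, normalized to GAPDH."""
    d_ct_sample = ct_target - ct_gapdh              # ΔCt of treated sample
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl   # ΔCt of control sample
    dd_ct = d_ct_sample - d_ct_control              # ΔΔCt
    return 2 ** (-dd_ct)

# e.g. MMP-14 in a sutured vs. normal cornea (made-up Ct values)
print(relative_expression(24.1, 18.0, 27.3, 18.2))  # ≈ 8-fold upregulation
```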
Western blot analysis. The corneas were harvested and lysed in ice-cold radioimmunoprecipitation assay lysis buffer (Beyotime Institute of Biotechnology, Shanghai, China) with the addition of protease inhibitors. After separation by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis, proteins were transferred onto nitrocellulose membranes. The membranes were incubated in blocking solution [2% bovine serum albumin (BSA) in Tris-buffered saline with Tween-20; Beyotime Institute of Biotechnology] for 1 h at room temperature, then incubated with a rabbit anti-mouse MMP-14 monoclonal antibody (1:2,000; ab51074; Abcam) overnight at 4˚C. Each step was followed by extensive washing with PBS. The membranes were then incubated with the corresponding horseradish peroxidase-conjugated secondary antibody (1:5,000; A0545; Sigma-Aldrich) for 1 h at room temperature, and developed using an enhanced chemiluminescence system (RPN2108; GE Healthcare, Little Chalfont, UK). β-actin was used as the loading control, detected with a mouse anti-mouse β-actin monoclonal antibody (1:1,000; A00702-100; GenScript, Nanjing, China).
Corneal immunofluorescence microscopy and quantification. The immunofluorescence experiments were performed as previously described (27). The excised corneas were rinsed in PBS and fixed in acetone for 30 min. After washing and blocking with 2% BSA in PBS for 2 h, the corneas were stained overnight at 4˚C with a rabbit anti-mouse LYVE-1 antibody (1:500; Abcam) and a rat anti-mouse CD31 antibody (1:100; BD Pharmingen, San Diego, CA, USA). On day 2, the tissue was washed three times with PBS, and was stored at 4˚C in the absence of light. The LYVE-1 antibody (ab14917) was detected with an Alexa Fluor 647-conjugated goat anti-rabbit IgG antibody (1:200; A-21244; Invitrogen; Thermo Fisher Scientific, Inc.) and the CD31 antibody (550274) was detected with an Alexa Fluor 488-conjugated goat anti-rat IgG antibody (1:200; A-11006; Invitrogen; Thermo Fisher Scientific, Inc.). The corneas were stained with the Alexa Fluor antibodies overnight at 4˚C. To detect the recruitment of macrophages into the inflamed corneas, a fluorescein isothiocyanate-conjugated rat anti-mouse CD11b antibody (1:100; 557396; BD Pharmingen) was used. The excised corneas were rinsed by PBS and fixed in acetone for 30 min. After washing and blocking with 2% BSA in PBS, for 2 h, corneas were stained overnight at 4˚C, with a CD11b antibody.
The stained whole mounts were analyzed with a fluorescence microscope (EVOS f1; AMG; Thermo Fisher Scientific, Inc.). Each whole-mount image was quantified using ImageJ software (version 1.24o; National Institutes of Health, Bethesda, MD, USA). A detailed explanation of this method has been described previously (28). The mean vascularized area of the control groups was defined as 100%; vascularized areas were then expressed relative to this value (vessel ratio). For macrophage analysis, the mean number of macrophages of the control groups was set as 100%; the numbers of macrophages per whole mount were then expressed relative to this value (cell ratio).
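The normalization just described amounts to simple relative scaling; a minimal sketch, with invented area values rather than measurements from this study, is shown below.

```python
# Control-group mean vascularized area is set to 100% and each
# measurement is expressed relative to it (illustrative values only).
control_areas = [0.42, 0.38, 0.45, 0.40]   # fraction of cornea, controls
treated_areas = [0.61, 0.66, 0.58]         # MMP-14-treated corneas

baseline = sum(control_areas) / len(control_areas)
vessel_ratios = [100.0 * a / baseline for a in treated_areas]
print(vessel_ratios)   # percent of control mean, e.g. ~148%
```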
Statistical analysis. Statistical analysis was performed using Student's t-test with SPSS version 13.0 software (SPSS, Inc., Chicago, IL, USA). Results were expressed as the mean ± standard error of the mean, and a value of P<0.05 was considered to indicate a statistically significant difference. Graphs were drawn using GraphPad Prism, version 5.0 software (GraphPad Software, Inc., La Jolla, CA, USA).
MMP-14 expression in the cornea.
Immunohistochemical staining revealed weak expression of MMP-14 in the normal cornea, and high expression levels of MMP-14 in the cornea on day 3 after standard suture placement or after alkali injury (Fig. 1A-C). As shown, these corneal injuries resulted in evident corneal NV. Moreover, the levels of corneal MMP-14 expression were also increased on the first day after the intrastromal injection of MMP-14 plasmid when compared with those of the normal cornea, but increased levels were not detected in the vector-injected cornea. The data indicated that the intrastromal delivery of MMP-14 plasmid resulted in increased MMP-14 expression. The levels of MMP-14 expression had almost returned to normal 3 days after the injection of MMP-14 DNA (Fig. 1D-F).
The results of the RT-qPCR assays demonstrated that MMP-14 expression was significantly upregulated in corneas treated with an intrastromal injection of MMP-14 plasmid (P=0.007), suture placement (P=0.003) or alkali injury (P=0.002), whereas MMP-14 expression in corneas treated with empty vector injection (P=0.442) or in corneas 3 days after MMP-14 vector injection (P=0.239) showed no significant changes, in comparison with the normal cornea ( Fig. 2A). Furthermore, western blot assay confirmed the significant upregulation of MMP-14 expression in the aforementioned corneas (Fig. 2B).
Intrastromal delivery of MMP-14 DNA promotes corneal LG and HG following suture placement. The standard suture-induced corneal NV assay and corneal intrastromal injection model were used to investigate the effect of MMP-14. Mice were randomized to receive intrastromal injections of either pCMV-MMP-14 or pCMV empty vector on day 3 after suture placement. On day 7, the densities of CD31-positive blood vessels and LYVE-1-positive lymphatic vessels were detected by immunohistochemistry as described previously (29). Quantitative immunohistochemical and morphometric analyses clearly revealed that corneal LG and HG were induced by standard suture placement, and injection of MMP-14 led to a significant promotion of LG and HG (Fig. 3A-I). The numbers of lymphatic vessels in mice treated with MMP-14 were significantly increased (P=0.009), and the numbers of blood vessels were also markedly increased (P=0.011), in comparison with those in the mice treated with empty vector (Fig. 3J and K).
Treatment with MMP-14 promotes the recruitment of inflammatory macrophages into the cornea. To test whether the and VEGFR-2 (P=0.321) showed no significant changes, in comparison with those in the empty vector group (Fig. 5).
Discussion
For the first time, to the best of our knowledge, the present study provides evidence that MMP-14 promotes in vivo LG in a corneal suture-induced mouse model. Although previous studies have reported the effects of MMP-14 on several angiogenesis-related properties, including degradation of ECM and cleavage of decorin (18), its contribution during LG has received less attention and so further investigation is merited. It has previously been shown that proMMP-2 activation can be blocked by a specific monoclonal antibody against MMP-14, which resulted in a marked reduction of lymphatic vessel sprouting (31). However, the impact of MMP-14 on LG in vivo was not determined in that study. In the present study, corneal LG and HG were significantly increased in the suture-induced inflammatory corneal NV model when naked MMP-14 DNA was added. Thus, it may be concluded that MMP-14 plays an important role in the development of new lymphatic vessels.
To assess the association between MMP-14 and corneal NV, MMP-14 expression was investigated under various corneal conditions using immunohistochemical analysis, RT-qPCR and western blot analysis. In the present study, corneal intrastromal injection of MMP-14 plasmid was an effective method of increasing the amount of the protein, consistent with published reports (32). Additionally, the present study showed that significantly increased MMP-14 expression existed in the standard corneal suture model and the alkali burn model. Through the examination of these models, it was shown that corneal HG and LG were significantly induced. This was in agreement with previous results showing that keratocytes and myofibroblasts express MMP-1, -2 and -9 following corneal injuries (33-35). Collectively, these results demonstrate that MMP-14 is involved in corneal NV, at least under certain pathophysiological conditions. The MMP-14 overexpression in corneal tissues implied that MMP-14 plays an important role in corneal HG and LG.
Macrophages are acknowledged to have a key role in corneal LG. Previous studies have confirmed that large numbers of activated CD11b + macrophages induce LG during corneal inflammation, by transdifferentiating into lymphatic endothelium and by releasing lymphangiogenic growth factors (36,37). As shown in the present study, the numbers of CD11b + macrophages infiltrating the inflammatory corneas in MMP-14-treated mice were significantly greater than in vehicle-treated mice. It may be speculated that the lymphangiogenic effect of MMP-14 might also be partially caused by an indirect effect on macrophages.
To further investigate the mechanism through which MMP-14 regulates corneal HG and LG, the associations between MMP-14 and VEGF proteins and receptors were examined. A marked upregulation of VEGF-C and VEGFR-3 expression levels was detected in sutured corneas treated with MMP-14, but other members of the VEGF family exhibited no significant changes. The outgrowth of lymphatic vessels is primarily triggered by VEGF-C and its receptor VEGFR-3 (38,39), and the specific inhibition of VEGFR-3 alone is sufficient to block corneal LG (40). In previous studies, corneal LG induced by fibroblast growth factor-2 or hepatocyte growth factor could be blocked by VEGFR-3 inhibition (41,42). These investigations suggest that the VEGFR-3 signaling pathway is critical for corneal LG.
In addition, the data presented in the present study indicate that VEGF-C and its receptor VEGFR-3 might induce corneal HG in addition to LG. Among the VEGFs, VEGF-A is widely studied and has been found to be responsible for HG by binding to its receptors, VEGFR-1 and VEGFR-2 (43,44), while VEGF-C is thought to be a dominant factor stimulating LG through binding to VEGFR-3. However, certain studies have provided evidence that VEGF-C is also associated with HG (45-47). An earlier study reported that VEGF-C enhanced microvascular endothelial cell migration, branching and capillary sprouting in association with MMP-14 overexpression (48), which is consistent with the findings of the present study. Several studies have also shown that VEGF-C and VEGFR-3 are associated with angiogenesis in cancer (49,50). A possible mechanism by which VEGF-C induces HG has been shown to be through a RhoA-mediated pathway (51). Furthermore, it may be speculated that the increased expression of VEGF-C is associated with the large numbers of activated CD11b + macrophages in the inflammatory corneas of MMP-14-treated mice. A previous study has shown that significantly increased numbers of dermal CD11b + macrophages expressed higher levels of VEGF-C (52), which indicates their close association. It is possible, therefore, that MMP-14 induced corneal HG and LG by upregulating the expression levels of VEGF-C and VEGFR-3 in vivo. Additional studies to elucidate the relationship between MMP-14 and VEGF-C/VEGFR-3 are necessary.
In summary, through the use of a corneal suture model, the data in the present study demonstrate that MMP-14 promotes corneal HG and LG. The important role of MMP-14 in corneal LG was closely associated with CD11b + macrophage infiltration, and with the VEGF-C/VEGFR-3 signaling pathway. Based on these findings, MMP-14 inhibition could be a promising new target for the management of inflammatory corneal disorders and for maintaining the transparency of the cornea.
"year": 2016,
"sha1": "fda0e4d74b06700a4f626cb2db83329f3d0fb233",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/etm.2016.3601/download",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fda0e4d74b06700a4f626cb2db83329f3d0fb233",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
A Strategy for Training Set Selection in Text Classification Problems
An issue in text classification problems involves the choice of good samples on which to train the classifier. Training sets that properly represent the characteristics of each class have a better chance of establishing a successful predictor. Moreover, sometimes data are redundant or take large amounts of computing time for the learning process. To overcome this issue, data selection techniques have been proposed, including instance selection. Some data mining techniques are based on nearest neighbors, ordered removals, random sampling, particle swarms or evolutionary methods. The weaknesses of these methods usually involve a lack of accuracy, lack of robustness when the amount of data increases, overfitting and a high complexity. This work proposes a new immune-inspired suppressive mechanism that involves selection. As a result, data that are not relevant for a classifier's final model are eliminated from the training process. Experiments show the effectiveness of this method, and the results are compared to other techniques; these results show that the proposed method has the advantage of being accurate and robust for large data sets, with less complexity in the algorithm.
I. INTRODUCTION
Nowadays, most information is stored electronically, in the form of text databases. Text databases are growing rapidly due to the increasing amount of information available in electronic form, such as electronic publications, various kinds of electronic documents, e-mails, and the World Wide Web.

Text mining, also known as knowledge discovery from textual databases, is a semi-automated process of extracting knowledge from large amounts of unstructured data. Traditional information retrieval techniques have become inadequate for the increasingly vast amounts of text data. Typically, only a small fraction of the many available documents will be relevant to a given individual user. Without knowing what could be in the documents, it is difficult to formulate effective queries for analyzing and extracting useful information from the data. Users need tools to compare different documents, rank the importance and relevance of these documents, or find patterns and trends across multiple documents. Thus, text mining has become an increasingly popular and essential theme in data mining (Feldman 1995).

There are many types of statistical and artificially intelligent classifiers, as can be seen in [1], [2]. One of the main issues in classification problems involves the choice of good samples to train a classifier. A training set capable of representing well the characteristics of a class has a better chance of establishing a successful predictor.
II. OBJECTIVES
This paper proposes a new approach for addressing training data reduction in text mining classification problems. This new algorithm was inspired by suppression mechanisms found in biological immune systems [3]. The suppression concept is applied to the training process to eliminate very similar data instances and to keep only representative data. The proposal consists of a non-statistical method to select samples for training. The main objectives of this work are to find a subset of samples for training without spending excessive processing time and to simultaneously maintain good accuracy.

In order to do this, this paper is set out as follows. Section III presents a literature review of what has been done to solve the reduction problem, as well as the features and problems associated with each approach. Section IV introduces a detailed description of the proposed algorithm and the suppression mechanism. Section V explains the methodology used in the experiments. Finally, the last sections point out the conclusions and give some directions for future work.
III. PREVIOUS WORK
An important contribution in the area of data reduction for structured data (data mining) can be found in (Cano et al. 2003). In this work, the authors present a review of the main instance selection algorithms. In addition, they perform an empirical performance study that compares the classical instance selection methods with four major evolutionary-based strategies. The authors divide the instance selection methods into four sets. The first set involves techniques based on nearest neighbor (NN) rules: Cnn [4], Enn, Renn [5], Rnn [6], Vsm [7], Multedit [8], Mcs [9], Shrink, Icf [10], Ib2 [11], and Ib3 [12]. The second set involves methods based on ordered removal: Drop1, Drop2 and Drop3 [13]. Two methods based on random sampling were considered, i.e., Rmhc [14] and Ennrs [15]. The evolutionary-based methods are the generational genetic algorithm (GGA) [17], the steady-state genetic algorithm (SGA) [18], and the CHC adaptive search algorithm [19]. The authors in [19] claim that the execution time associated with evolutionary algorithms (EAs) represents a greater cost compared to the execution time of the classical algorithms. However, when compared to non-EAs that have a short execution time, EA-based algorithms offer more reduction without overfitting. The authors concluded that the best algorithm corresponds to CHC, whose time is lower compared to the rest of the EAs, the probabilistic algorithms and some of the classical instance selection algorithms. The classical and evolutionary algorithms are affected when the size of the data set increases, whereas CHC is more robust. In CHC, the chromosomes select a small number of instances from the beginning of the evolution, so that the fitness function based on 1-NN has to perform a smaller number of operations. There are many other strategies in the literature [20], [21], [22], [23], [24], [25], [26] and [27].
IV. THE SUPPRESSION MECHANISM
The suppression concept of the proposed algorithm SeleSup (selection by suppressor) is employed in the training set to eliminate very similar data instances and to keep those instances that are truly representative of a certain class [28]. To perform such tasks, the mechanism divides the training database into two subsets. The first subset represents the white blood cells (WBCs), or antibodies, in the organism, representing the training set. The second subset represents a set of pathogens, or antigens, used to select the WBCs with the highest affinity; hence, this method performs suppression. The algorithm starts with the idea that the system's model must identify the best subset of WBCs to recognize pathogens, i.e., the training set, and be able to identify new pathogens that are presented.
Both antibodies and antigens were represented as vectors containing the most relevant terms of the documents. Each vector was normalized so that its values belong to the same scale, mapped to the interval [0, 1]. The affinity between antibodies and antigens was determined by the cosine similarity, a measure commonly used to quantify the similarity between two documents.

Given two vectors representing documents, WBC (W) and Pathogen (P), their similarity is the cosine of the angle between them: cos(θ) = (W · P) / (‖W‖ ‖P‖).

As the angle between the vectors shortens, the cosine approaches 1, meaning that the two vectors are getting closer, or more similar.
According to [28], the algorithm aims to identify the best subset of antibodies to recognize the antigens, i.e., the new training set must be able to identify new antigens. Finally, the surviving antibodies are characterized by an evaluation measure (fitness value) and are selected to be part of the new reduced training set.
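A minimal, runnable sketch of this suppressive selection (formalized in Algorithm 1 at the end of the paper) is shown below; the function names and the brute-force nearest-neighbor search are our own, while the WBC fraction f = 0.9 and the cosine affinity follow the text.

```python
import random
import math

def cosine(a, b):
    """Cosine similarity between two term-weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def selesup(data, f=0.9, seed=0):
    """data: list of (vector, class_label); returns the reduced training set."""
    random.seed(seed)
    shuffled = data[:]
    random.shuffle(shuffled)
    split = int(f * len(shuffled))
    wbcs, pathogens = shuffled[:split], shuffled[split:]
    fitness = [0] * len(wbcs)
    for p_vec, p_label in pathogens:
        # nearest WBC = the one with the highest cosine affinity
        nearest = max(range(len(wbcs)),
                      key=lambda i: cosine(wbcs[i][0], p_vec))
        if wbcs[nearest][1] == p_label:   # survival signal
            fitness[nearest] += 1
    # keep only WBCs that recognized at least one same-class pathogen
    return [w for w, fit in zip(wbcs, fitness) if fit > 0]
```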
In other words, those WBCs able to recognize pathogens from the suppression set remain, while the others are eliminated from the population. The signals for a WBC's survival are represented by a fitness variable. Each time the nearest WBC recognizes a pathogen with the same class label, a survival signal is sent and the fitness is incremented. Every WBC with a fitness greater than zero is selected to be part of the new suppressed repertoire. The pseudo-code for this technique can be seen in Algorithm 1.

The Reuters-21578 Text Collection contains documents collected from the Reuters newswire in 1987. It is a standard text categorization benchmark that contains 135 classes. The collection was divided into subsets: one consisting of the four more balanced classes, identified as Reuters-4, and another consisting of the ten most frequent classes, identified as Reuters-10. A third data set consists of the sixty-two classes and is identified as Reuters-Original.
The last data set, the NewsGroup (20NG) data set, contains approximately 20000 articles evenly divided among 20 Usenet newsgroups. Over a period of time, 1000 articles were taken from each of the newsgroups, which makes an overall number of 20000 documents in this collection. Except for a small fraction of the articles, each document belongs to exactly one newsgroup (Joachims 1997).
The performance of two classification algorithms, Naive Bayes and Support Vector Machine (SVM), over the reduced training and test subsets produced by SeleSup is compared to the performance over the subsets selected by the CHC algorithm, which is based on genetic algorithms [19], and by random sampling (RS) using the reduction percentages obtained in the experiments of each algorithm.
For each of these subsets, the SeleSup and RS algorithms were each run ten times, and the reduced sets of training data were submitted to the classification algorithms (Naive Bayes and Support Vector Machine). Due to its computational cost, the CHC reduction percentage was obtained in just one execution. For each experiment, the average over the ten runs was taken as the final result.
A. REUTERS
The first experiment performed in this paper makes use of the Reuters collection. The SGML files were transformed into XML format and were pre-processed in Microsoft Excel, joining all documents in one single file. The resulting file, containing a collection of 8250 records sorted into 62 categories, was used as the input for the mining process.
Then, the usual text mining data preparation techniques were performed. From this set, two further subsets were partitioned: Reuters-4 and Reuters-10, as explained in the next section. The four more balanced and the ten most frequent classes are indicated in Tables 2 and 3.
C. Parameters
The parameter settings are given in Table 5 and remained constant throughout the experiments. Stopword removal and stemming were used in the document preparation stage. In addition, a filter was applied to keep keywords with more than 50% significance, and keyword relevance was used to generate the vector space model.
D. Significance Test
Statistical evaluation of experimental results has been considered an essential part of the validation of new machine learning methods [29], [30]. A statistical test has the objective of rejecting a false null hypothesis [31].
This paper presents a comparison between two non-parametric tests, the Wilcoxon signed rank test [32] and the Mann-Whitney test [33], for comparing two classifiers, Naive Bayes and SVM. The authors in [29] mention the Wilcoxon signed rank test as a safe and robust non-parametric test for statistical comparisons of classifiers.
Data sets with a high-dimensional space were used, which demand a high processing time. Therefore, the training data set of each of the four data sets (see Table 1) was chosen and run with the 10-fold cross-validation method to obtain a random sample of 10 results. The test is two-tailed, with a significance level of 0.05. The results were obtained through the KEEL software [34], [30], [29].
Generally, when the p value is greater than 0.05, the null hypothesis is accepted, indicating no evidence that the samples are significantly different. However, rejection of the null hypothesis (p < 0.05) indicates that the difference between the samples is statistically significant.
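For illustration, the two-tailed tests described above can be reproduced with SciPy instead of the KEEL software; the accuracy values below are invented 10-fold cross-validation results, not figures from this paper.

```python
from scipy import stats

bayes_acc = [0.91, 0.89, 0.92, 0.90, 0.88, 0.93, 0.90, 0.91, 0.89, 0.92]
svm_acc   = [0.93, 0.92, 0.94, 0.91, 0.92, 0.95, 0.93, 0.92, 0.91, 0.94]

# Wilcoxon signed rank test: paired, fold-by-fold comparison
w_stat, w_p = stats.wilcoxon(bayes_acc, svm_acc, alternative="two-sided")

# Mann-Whitney U test: treats the two result samples as independent
u_stat, u_p = stats.mannwhitneyu(bayes_acc, svm_acc, alternative="two-sided")

for name, p in [("Wilcoxon", w_p), ("Mann-Whitney U", u_p)]:
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{name}: p = {p:.4f} -> difference {verdict} at the 0.05 level")
```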
VII. RESULTS AND ANALYSIS
The first experiment was carried out on the Reuters-4 data set. This data set is characterized by balanced classes (see Tables 6 and 7). The accuracy of SeleSup is just as good as that of CHC-100 and of the same data set without reduction; the results are very similar. CHC-100 produces the best performance; however, it does not reach nearly as high a reduction rate as SeleSup.
CHC-1000 achieves a bigger reduction, but its accuracy does not come close to that of SeleSup. In the tests, there was only one case (CHC-1000) where the performance was not significantly different. The second experiment was carried out with the Reuters-10 data set. This data set is characterized by an imbalance in its classes.
As can be seen in Table 8, all the classifiers produced satisfactory results when their learning process used the full training and test data sets. In addition, as expected, the same behavior occurs when the suppression mechanism is applied.
The accuracy of SeleSup is just as good as the results obtained with the same data set without reduction, with Random Sampling and with CHC-100. The results are very similar between the classifiers. However, CHC-100 does not have nearly as high a reduction rate as SeleSup.
It can be noticed that when the number of evaluations increases, the test accuracy of CHC-1000 decreases and the execution time grows (more than 50 times higher). Thus, CHC-1000 does not produce nearly as good results as SeleSup.
The results (Table 9) indicate that the Wilcoxon test is more powerful than the Mann-Whitney test, in line with [29]. The third experiment was carried out with the Reuters-Original data set. This data set is characterized by a great imbalance in its classes and high dimensionality (Tables 10 and 11). SeleSup produced results almost as good as CHC-1000 on the training set, but the Reuters-Original set without suppression produced the best results on the test set.
It can be noticed once more that CHC-1000 produces the best data reduction percentages, but it is not nearly as fast as SeleSup. According to (Cano et al. 2003), the main limitation of CHC is its long processing time, which makes it difficult to apply this algorithm to very large data sets.
This experiment exposed the limitations of the SVM with the largest data set (Reuters-Original), whose results were therefore omitted.

Finally, the last experiment was carried out using the NewsGroup data set. This data set is an example of a very large data set, with 18300 instances (see Table 12). This is the largest data set in our experiments.
The results obtained by SeleSup and CHC are very similar in accuracy. In addition, the SeleSup algorithm was easily applied to this data set, and its results were just as good as those of CHC-1000. Its processing time was markedly lower than that of CHC, which produces a very similar percentage of reduction (92.09% and 93.29%, respectively).
It can be observed that RS had, in general, results very similar to the SeleSup and CHC algorithms, but it has the clear disadvantage of not reducing the data by itself. Therefore, another algorithm has to be used to define the reduction percentage.

VIII. CONCLUSIONS

This paper presented a new method for instance selection (IS) that suppresses data in the original training set. IS can be very useful to reduce costs, improve computational performance and eliminate non-informative data. The proposed technique was designed to work together with different types of classifiers. The goal was to improve the performance related to the time spent on training without losing accuracy. This approach was inspired by the suppression mechanisms found in biological immune systems.
The experiments were conducted by testing the SeleSup algorithm on four data sets. The performance of the classification algorithms over the resulting training subsets of SeleSup was compared with the performance over the subsets selected by the CHC algorithm and random sampling (RS).
In order to test whether the algorithms' performances were significantly different or not, a comparison using the non-parametric Mann-Whitney U and Wilcoxon signed rank tests was adopted. In the tests, there was only one case where the performances were not significantly different. Therefore, the statistical tests provided strong evidence concerning the results obtained when comparing the evaluated algorithms.
The SeleSup algorithm significantly reduces the data set size. It is just as good as the CHC algorithm and offers the advantage of being faster; thus, it consumes less processing time. Although CHC has a higher reduction rate, it does not produce the best results with high-dimensionality data sets, and it showed a long execution time. Moreover, contrary to CHC, the presented approach was applied to all the data sets on a less powerful computer and, overall, its results were better than those of RS.
IX. FUTURE WORK
An alternative method for performing a faster test would be to insert into the WBC population the pathogen-specific WBC whose distance is minimal. This technique should provide the system with the capability of keeping rare cases or rare classes in the training set.

An additional improvement to the original algorithm could be to insert some probabilistic information into the choice of the WBCs to be eliminated. The way the mechanism currently works is deterministic with regard to data selection.
_______________________________________________
Algorithm 1: The Suppressive Algorithm
________________________________________________
input: The normalised (in [0, 1]) full training data set T and the fraction f of WBCs (default f = 0.9)
output: A reduced training data set T
// Initialisation phase
Shuffle T and assign [f · |T|] samples as WBCs (training set); the remaining samples are assigned as pathogens (suppression set);
for all the WBCs do fitness = 0;
// Suppression phase
for each pathogen p do
    NearestWBC ← Find the nearest WBC with regard to p;
    if NearestWBC's class = p's class then
        // NearestWBC was able to recognize the pathogen
        Increment the NearestWBC's fitness by one;
    endif;
end;
// Output phase
Eliminate those WBCs whose fitness value is 0;
Output the set of surviving WBCs as the reduced training set T
__________________________________________________

V. EXPERIMENTAL STUDY

In this section, the experiments presented aim to evaluate the reduced training instances selected by the SeleSup algorithm on four data sets (shown in Table 1) frequently used in information retrieval research.
Reuters collection (Zeidat et al. 2006; Yang et al. 1996; Schapire 1990; Schapire et al. 2000; Sebastiani 2002). The Reuters-21578 collection is a collection of documents from the Reuters news agency that was released in 1987. By 1990, the collection was given to the scientific community to perform research related to text categorisation. The rights of authorship belong to Reuters Ltd. and the Carnegie Group, which promoted its free distribution for research activities. The document base consists of 21578 Reuters articles in the SGML language, grouped into 22 separate files. Each document possesses several attributes that indicate different characteristics. The attributes used in this work are: Lewissplit (related to the information of the experiments done by Lewis, who defines the values Test, Training and Not-Used); Oldid, which represents the identification number of the collection (before Reuters-21578); D, which represents the categories or classes; and Body, which presents the text content of the news item. The number of documents per class varies from the class "earnings" (3964 documents) to the class "castor-oil" (which contains a single document). Furthermore, some documents are not associated with any of the classes, and others are associated with up to 12 of the classes.
TABLE II. FOUR MORE BALANCED CLASSES OF REUTERS DATA SET.
TABLE VI. RESULTS FOR REUTERS-4 DATA SET.
TABLE VIII. RESULTS FOR REUTERS-10 DATA SET.
TABLE IX. MANN-WHITNEY U AND WILCOXON TESTS COMPARING BAYES VS SVM FOR REUTERS-10 DATA SET.
TABLE X. RESULTS FOR ORIGINAL REUTERS DATA SET.
TABLE XI. MANN-WHITNEY U AND WILCOXON TESTS COMPARING BAYES VS SVM FOR ORIGINAL REUTERS DATA SET.
TABLE XIII. MANN-WHITNEY U AND WILCOXON TESTS COMPARING BAYES VS SVM FOR NEWSGROUP DATA SET.

To carry out the training of classifiers over large collections of text efficiently, the training set must be selected carefully. Using an excessive number of documents can make the computational effort prohibitive, while using a very small sample leads to an inaccurate classifier.
"year": 2013,
"sha1": "8f26e0c09b1a9ad29d1cd5c18e42d1d3f9966733",
"oa_license": "CCBY",
"oa_url": "http://thesai.org/Downloads/Volume4No6/Paper_8-A_Strategy_for_Training_Set_Selection_in_Text_Classification_Problems.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "8f26e0c09b1a9ad29d1cd5c18e42d1d3f9966733",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Leakage Current Analysis of 6T & 7T-SRAM using FINFETs at 22nm Technology
Persistent scaling of the planar MOSFET results in increased transistor packing density and chip performance. However, at the nanometer regime this has become a very challenging issue due to the increase in short channel effects. In nanoscaled MOSFETs, the gate terminal loses control of the channel due to the potential at the drain. Because of this, it is difficult to turn the MOSFET off completely, which in turn leads to leakage currents. Since cache memory occupies a large area of processors, it is difficult to reduce leakage power in microprocessors. Double gate transistors have become a replacement for MOS transistors at the nano level. Since FinFETs have double gates, leakage currents can be controlled more effectively than in planar MOSFETs. In this paper, the leakage currents of 6T and 7T-SRAM memory cells are analyzed using FinFETs at 22nm technology in the hspice software.
INTRODUCTION
The major concern in battery-operated portable devices is the increase in power consumption due to increased transistor packing density and operating frequency at the nano scale. Power consumption in non-portable electronic systems is also significant, due to packaging problems and the cost of cooling the systems. Thus, the main design concern in the semiconductor industry is to meet performance requirements with low power consumption. The unceasing scaling of MOS transistors with every new process technology has provided enhanced system performance for years. However, continuous miniaturization of the MOS transistor is not a good choice beyond the 22nm technology node due to limitations in fabrication [1]. The significant design difficulties at the nano regime are: (a) reduction of short channel effects and (b) minimization of device-to-device variability to increase yield. Double gate transistors have evolved as an alternative to solve the problems posed by continuous miniaturization [2]. The steps involved in FinFET fabrication are similar to those of planar MOS transistors, which enables the manufacturing industry to move to fabrication rapidly. Here, the ON current in a FinFET flows parallel to the wafer plane, and the channel is formed perpendicular to the plane; due to this structure, the FinFET is called a quasi-planar device. The front and back gates of the DG-FET are often controlled independently by etching away the gate conductor at the top of the channel. The effective gate width of a DG transistor is 2nh, where n is the number of fins and h is the fin height. A transistor with high ON current can be obtained by using a larger number of fins. The pitch p of the fin can be made as small as half the lithography pitch using lithography techniques [3].
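Because the effective width is quantised by the fin count through W_eff = 2nh, drive strength is tuned by adding fins rather than by widening a planar channel. A trivial illustrative calculation (the numerical values are ours, not the paper's):

def effective_gate_width_nm(n_fins, fin_height_nm):
    # W_eff = 2 * n * h: each fin contributes two sidewall channels of height h
    return 2 * n_fins * fin_height_nm

# Example: 4 fins of 30 nm height give an effective gate width of 240 nm
print(effective_gate_width_nm(4, 30))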
A. Shorted Gate:
In this mode, the two gates of the FinFET are connected together, resulting in a three-terminal device [4]. This acts as a direct substitute for conventional CMOS devices.
B. Independent Gate:
Here, the upper part of the gate is etched away to provide two separate gates [5]. Since the two gates can be controlled independently, the IG-FinFET offers more design possibilities.
C. Low Power:
In this mode, the threshold voltage of the FinFET can be altered by applying a low voltage to the n-type FinFET and a high voltage to the p-type FinFET. This reduces static power consumption at the cost of increased delay [1]. The hybrid IG/LP-DGFET combines both LP and IG modes.
DG-FETs have the following advantages over planar MOS transistors [6]:
1) Reduced leakage currents
2) Admirable sub-threshold slope
3) Applicable for radio frequency applications
4) High performance
5) Lower power consumption
6) Exceptional electrostatic control over the channel
III. FINFET BASED SRAM
Static Random Access Memory (SRAM) is a volatile memory which holds information as long as the power supply is provided. It offers quick access to data and is extremely reliable. Static RAM arrays are customarily used as cache memory in high-end processor chips and in Application-Specific Integrated Circuits (ASICs), where they occupy a large portion of the die area. Large arrays of fast static RAM help improve the performance of the system [7].
The basic 6T static RAM memory cell is shown in Fig. 4. A typical 6T static RAM cell consists of two CMOS inverters connected back to back and two NMOS transistors, called access transistors, that act as gates to access the information in the memory cell. These cross-coupled inverters form a latch and work as the storage element. The word line drives the pass transistors to store information into the cell. The drain terminals of the pass transistors are connected to the memory cell, and the source terminals are coupled to the bit lines BL and BLB. The access transistors are deactivated once the word line is pulled low; at that moment, read or write operations cannot be performed, and the memory cell holds its data bits because the internal node voltages stay at Vdd and Gnd. Once the word line is set high, the pass transistors are activated and read or write operations can be performed.
Fig. 4 Conventional 6T-SRAM Memory
CMOS scaling technology helps greatly in designing efficient SRAMs; however, when scaling below 32nm, limitations of CMOS such as sub-threshold currents and gate tunneling currents make it more sensitive to short channel effects and variations in threshold voltage. High-end processor chips, high-performance digital systems, biomedical instruments, and medical implants all need efficient SRAM with low power consumption and high performance. It is therefore essential to design efficient low-power static RAMs with acceptable stability and small size, since these occupy a large portion of the processor chip.
To reduce short channel effects, multi-gate transistors have evolved [8,9]. The processing steps for FinFET fabrication are similar to those of conventional MOS transistors. Since this structure gives the gate full control over the channel, it offers reduced SCEs. FinFET SRAM provides better results than planar bulk CMOS SRAM.
IV. SIMULATION RESULTS
Here, 6T and 7T FinFET static RAM cells are designed in shorted-gate mode and simulated using HSPICE at 22nm technology.
V. CONCLUSION
In CMOS circuits, the flow of static currents at the nano scale has become a significant contributor to overall power consumption due to short channel effects. FinFET-based SRAMs can be used as a substitute for conventional CMOS devices. FinFETs are suitable for nano-scale memory circuits because of minimized Short Channel Effects (SCE). In this paper, 6T and 7T SRAM cells are designed using FinFETs at 22nm technology. The simulation results show that FinFET designs offer low power consumption and high performance at the nano regime beyond 45nm. | 2019-10-31T08:58:02.267Z | 2019-10-10T00:00:00.000 | {
"year": 2019,
"sha1": "8d469f21085e91c696cf515d975863761f026468",
"oa_license": null,
"oa_url": "https://doi.org/10.35940/ijitee.k2292.1081219",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b1106a81448122fffb9e95c2639c0c0faf5d8b3c",
"s2fieldsofstudy": [
"Engineering",
"Physics",
"Computer Science"
],
"extfieldsofstudy": []
} |
254044430 | pes2o/s2orc | v3-fos-license | Reconciling model-X and doubly robust approaches to conditional independence testing
Model-X approaches to testing conditional independence between a predictor and an outcome variable given a vector of covariates usually assume exact knowledge of the conditional distribution of the predictor given the covariates. Nevertheless, model-X methodologies are often deployed with this conditional distribution learned in sample. We investigate the consequences of this choice through the lens of the distilled conditional randomization test (dCRT). We find that Type-I error control is still possible, but only if the mean of the outcome variable given the covariates is estimated well enough. This demonstrates that the dCRT is doubly robust, and motivates a comparison to the generalized covariance measure (GCM) test, another doubly robust conditional independence test. We prove that these two tests are asymptotically equivalent, and show that the GCM test is optimal against (generalized) partially linear alternatives by leveraging semiparametric efficiency theory. In an extensive simulation study, we compare the dCRT to the GCM test. These two tests have broadly similar Type-I error and power, though dCRT can have somewhat better Type-I error control but somewhat worse power in small samples or when the response is discrete. We also find that post-lasso based test statistics (as compared to lasso based statistics) can dramatically improve Type-I error control for both methods.
Conditional independence testing and the model-X assumption
Given a predictor X ∈ R, response Y ∈ R, and high-dimensional covariate vector Z ∈ R p drawn from a joint distribution (X, Y , Z) ∼ L n (potentially varying with n to accommodate growing p), consider testing the hypothesis of conditional independence (CI) at level α ∈ (0, 1) using n i.i.d. data points (X i , Y i , Z i ): H 0n : X ⊥⊥ Y | Z. (1) In a high-dimensional regression setting, H 0n is a model-agnostic way of formulating the null hypothesis that the predictor X is unimportant in the regression of Y on (X, Z) (Candès et al., 2018). In a causal inference setting with treatment X, outcome Y , observed confounders Z, and no unobserved confounders, H 0n is the null hypothesis of no causal effect of X on Y (Pearl, 2009). As Shah and Peters (2020) showed, the CI null hypothesis is too large in the sense that any test controlling Type-I error on H 0n must be powerless against all alternatives (unless Z is supported on a finite set). Therefore, additional assumptions must be placed on L n to make progress. One such assumption is the model-X (MX) assumption (Candès et al., 2018), which states that L n (X|Z) is known exactly. Under the MX assumption, Candès et al. (2018) propose the MX knockoffs and conditional randomization test (CRT) methodologies, which have elegant finite-sample Type-I error control guarantees. These MX methodologies have since exploded in popularity, undergoing active methodological development and deployment in a range of applications.
One of the primary challenges in the practical application of MX methods is to obtain the required conditional distribution L n (X|Z). Outside the context of randomized controlled experiments (Aufiero and Janson, 2022;Ham, Imai, and Janson, 2022), the MX assumption is an approximation (Barber, Candès, and Samworth, 2020;Huang and Janson, 2020;Li and Liu, 2022). In genome-wide association studies, a realistic parametric distribution can be postulated for this conditional law (Sesia, Sabatti, and Candès, 2019), but the parameters of this distribution must still be learned from data. In practice, the conditional law is usually fit in sample on the same data that is used for testing, and then treated as if it were known (Candès et al., 2018;Sesia, Sabatti, and Candès, 2019;Sesia et al., 2020;Bates et al., 2020;Liu et al., 2022;Li et al., 2021;Sesia et al., 2021;Barry et al., 2021). Such adaptations of MX methodologies are widely deployed, but their robustness and power properties have not been thoroughly investigated.
Our contributions
In this paper, we address this gap by investigating the properties of MX methods with L n (X|Z) learned in sample. This investigation leads us to establish close connections between these methods and double regression approaches to CI testing, and to explore the optimality of CI tests against semiparametric alternatives. We focus our analyses on the distilled conditional randomization test (dCRT ), a fast and powerful instance of the CRT (Liu et al., 2022), and the generalized covariance measure (GCM) test, a prototypical double regression approach to CI testing (Shah and Peters, 2020). Both tests involve learning L n (X | Z) and L n (Y | Z) in sample. Our main contributions are outlined next: 1. The dCRT with L n (X | Z) learned in sample can have poor Type-I error control if L n (Y | Z) is learned poorly. If L(X | Z) is known exactly, then the dCRT has finite-sample Type-I error control regardless of L(Y | Z) or the quality of its estimate. This is no longer the case once L(X | Z) is fit in sample, as we demonstrate in a numerical simulation and a theoretical counterexample (Section 3).
2. The dCRT is doubly robust, in the sense that errors in L n (X | Z) can be compensated for by better approximations of L n (Y | Z). The MX assumption shifts the modeling burden entirely from L n (Y | Z) to L n (X | Z).
When the latter is fit in sample, shifting the modeling burden partially back towards L n (Y | Z) helps recover asymptotic Type-I error control, as we demonstrate theoretically (Section 4.2).
3. The dCRT resampling distribution approaches normality, making this test asymptotically equivalent to the GCM test. The dCRT is a resampling-based test, whereas the GCM test is asymptotic. In large samples, however, the resampling-based null distribution of the former converges to the N (0, 1) null distribution of the latter (Section 2). We show that these two tests are asymptotically equivalent against local alternatives (Section 4.1).
4. The GCM test is asymptotically uniformly most powerful against local non-interacting alternatives. Optimality results are widely prevalent in the semiparametric literature, but not in the CI testing literature. We leverage semiparametric optimality theory to prove that the GCM is the optimal CI test against local (generalized) partially linear alternatives (Section 5), a broad class of alternatives in which X and Z do not interact.
5. In finite samples, the dCRT and GCM test have broadly similar Type-I error and power, with some exceptions. The asymptotic equivalence between the dCRT and GCM test largely carries over to finite samples, as we demonstrate in numerical simulations (Section 6). The two tests have broadly similar Type-I error and power, although there is some divergence in small samples or when Y is discrete: in these cases dCRT can have somewhat better Type-I error control but somewhat worse power.
6. In finite samples, replacing the lasso with the post-lasso markedly improves Type-I error control for both dCRT and GCM test. In MX applications, the lasso is perhaps the most common approach for learning both L n (X | Z) and L n (Y | Z). However, we demonstrate in numerical simulations (Section 6) that the bias reduction offered by the post-lasso greatly improves Type-I error control in the context of both GCM test and dCRT, though at some cost in power.
On the way to making the aforementioned primary contributions, we make a few secondary contributions of independent interest: 7. We reexamine numerical simulation setups from prior MX papers, finding that many have only low levels of marginal dependence between X and Y . Prior works have used numerical simulations to establish that MX methods are fairly robust when fitting L n (X|Z) in sample. However, we note that the conditional independence testing problem (1) is difficult to the extent that Z induces spurious marginal dependence between X and Y (a "confounding" effect). We find simulation setups in prior works have low levels of this marginal dependence (Section 6.1), potentially leading to optimistic conclusions.
8. We collate a number of conditional analogs of classical convergence theorems (some but not all novel). The dCRT involves resampling conditionally on the observed data, so its asymptotic analysis requires reasoning about convergence after conditioning on a σ-algebra that changes with n. We state and prove conditional analogs of Slutsky's theorem, the law of large numbers, the central limit theorem, and other classical convergence theorems (Appendix B). These results are not surprising, but at least some appear novel.
9. We prove a sharpened theorem on optimality in semiparametric testing.
In the literature on semiparametric estimation, an estimator need only be regular in the vicinity of a point for efficiency bounds to hold, whereas popular textbooks (Van Der Vaart, 1998;Kosorok, 2008) state semiparametric testing optimality results globally: a test must control Type-I error on the entire semiparametric null, rather than just in the vicinity of a point, for efficiency bounds to hold. We address this gap by proving a stronger local optimality result for semiparametric testing (Appendix E.1).
Related work
The question of robustness of existing MX methods to misspecification of L n (X | Z) has been investigated before, though not specifically in the context of learning this distribution in sample. Berrett et al. (2020) proved that, in the worst case over all possible test statistics and all possible distributions L n (Y | Z), the excess Type-I error of the CRT based on an approximation to L n (X | Z) is bounded below by the total variation error in approximating the product measure ∏ i=1..n L n (X i | Z i ). This error is O(1) when fitting L n (X | Z) in sample. We show (see contribution 1) that, even when specializing to the dCRT test statistic, Type-I error control can be poor when L n (Y | Z) is estimated poorly. Berrett et al. (2020) provided a matching upper bound on the Type-I error of the CRT, while Barber, Candès, and Samworth (2020) proved a similar upper bound for MX knockoffs. These worst-case bounds guarantee Type-I error control only when an additional unlabeled sample of size N ≫ n is available. Another kind of robustness to misspecification of the MX assumption was proposed by Katsevich and Ramdas (2022); they showed that if only the first two moments of L n (X | Z) are known exactly, then the dCRT has asymptotic Type-I error control. Even this weaker assumption cannot be expected to hold when L n (X | Z) is fit in sample, however.
Other MX methods have been designed specifically to have improved robustness to misspecifications of L n (X | Z). For example, if this law is known to belong to a parametric family with a low-dimensional sufficient statistic, MX inference can be carried out conditionally on this sufficient statistic without needing to accurately estimate the parameters themselves (Huang and Janson, 2020;Barber and Janson, 2022). The former methodology enjoys a double robustness property, related to but different from the one we state for the dCRT (see contribution 2). The conditional permutation test (Berrett et al., 2020) was proposed as a more robust variant of the CRT, though this additional robustness has yet to be formalized theoretically. Finally, the Maxway CRT (Li and Liu, 2022) has recently been proposed as a doubly robust analog of the dCRT. We argue that the dCRT itself is doubly robust. We conjecture that the improved empirical performance of the Maxway CRT over the (lasso-based) dCRT is primarily due to the post-lasso step in the former. Indeed, our inspiration to apply the dCRT with the post-lasso (see contribution 6) comes from the Maxway CRT; we find in simulations that this variant of the dCRT is actually more robust than the Maxway CRT.
Asymptotic analysis of MX methodologies has also been undertaken before (Weinstein, Barber, and Candes, 2017;Liu and Rigollet, 2019;Weinstein et al., 2020;Katsevich and Ramdas, 2022;Wang and Janson, 2022), although primarily for the purposes of power analyses and none in the context of fitting L n (X | Z) in sample. All but Katsevich and Ramdas (2022) assume that L n (X | Z) is known exactly (the full MX assumption), whereas the latter assumes that the first two moments of this distribution are known exactly. In some ways, the current work generalizes the results of Katsevich and Ramdas (2022). For example, the convergence of the dCRT resampling distribution to normality (see contribution 3) was shown in a fixed-dimensional setting where L n (X | Z) is learned out of sample and the first two moments of L n (X | Z) are known. Here, we allow growing dimension, and learning both L n (X | Z) and L n (Y | Z) in sample.
Notation, definitions, and preliminaries
Notation We use boldface font to denote population quantities and regular font to denote sample quantities. We denote by L 0 n the set of laws satisfying conditional independence, L 0 n ≡ {L n : X ⊥⊥ Y | Z}, and by R n a class of distributions satisfying some regularity assumptions. For example, the MX assumption is that R n = {L n : L n (X|Z) = L * n (X|Z)}, where L * n (X|Z) is a fixed, known distribution. For any regularity class R n , we consider testing the null hypothesis L n ∈ L 0 n ∩ R n . A sequence of tests φ n : (X, Y, Z) → [0, 1] of this null hypothesis has asymptotic Type-I error control if lim sup n→∞ sup L n ∈ L 0 n ∩ R n E L n [φ n ] ≤ α. (4)
The dCRT A simple approach to CI testing under the MX assumption is the conditional randomization test (CRT, Candès et al., 2018), which controls Type-I error not just asymptotically (4) but in finite samples as well. The CRT is based on constructing a null distribution for any test statistic T n (X, Y, Z) by resampling X conditionally on Z using the known conditional law L n (X|Z). While the CRT is in general computationally costly, using a test statistic of the form T n (X, Y, Z) = (1/√n) Σ i=1..n (X i − µ n,x (Z i ))(Y i − µ n,y (Z i )) gives a fast and powerful test called the distilled CRT (dCRT, Liu et al., 2022). Here, µ n,x (Z) ≡ E L n [X|Z] is known under the MX assumption and µ n,y is learned in sample. Variants of the dCRT have now been deployed in genetics and genomics (Barry et al., 2021) applications. As discussed in Section 1.1, MX methodologies (including the dCRT) are usually deployed by learning L n (X | Z) in sample. For clarity, we treat the dCRT with L n (X | Z) fit in sample as a distinct procedure; it is this in-sample variant that we study throughout. It is based on the test statistic T dCRT n (X, Y, Z) = (1/√n) Σ i=1..n (X i − µ n,x (Z i ))(Y i − µ n,y (Z i )), (7) where now both conditional means µ n,x and µ n,y are estimated in sample. The dCRT procedure is outlined in Algorithm 1; one of the primary goals of this paper is to study this procedure.
Algorithm 1: The dCRT with L n (X|Z) fit in sample.
1 Learn L n (X|Z) based on (X, Z) and µ n,y (Z) based on (Y, Z);
2 for m = 1, . . . , M, draw X̃ (m) from the fitted law L n (X|Z) and compute the resampled statistic T dCRT n (X̃ (m) , X, Y, Z);
3 output the p-value (1 + Σ m=1..M 1{T dCRT n (X̃ (m) , X, Y, Z) ≥ T dCRT n (X, X, Y, Z)}) / (1 + M).
The resampled test statistics T dCRT n (X̃ (m) , X, Y, Z) (7) have four arguments instead of three in order to emphasize that the conditional mean µ n,x (·) is not refit upon resampling.
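To make Algorithm 1 concrete, here is a minimal Python sketch under simplifying assumptions of our own: a Gaussian working model for L n (X|Z) with homoscedastic variance, and regression routines passed in as callables that return in-sample predictions. All names are ours.

import numpy as np

def dcrt_pvalue(X, Y, Z, fit_predict_x, fit_predict_y, M=500, rng=None):
    # Step 1: learn the working law for X | Z and the conditional mean of Y | Z in sample
    rng = np.random.default_rng(rng)
    mu_x = fit_predict_x(Z, X)                 # estimate of E[X|Z]
    mu_y = fit_predict_y(Z, Y)                 # estimate of E[Y|Z]
    sigma = np.std(X - mu_x)                   # homoscedastic working variance for X | Z
    n = len(X)
    T_obs = np.sum((X - mu_x) * (Y - mu_y)) / np.sqrt(n)
    # Step 2: resample X | Z from the fitted law; conditional means are NOT refit
    exceed = 0
    for _ in range(M):
        X_tilde = mu_x + sigma * rng.standard_normal(n)
        T_m = np.sum((X_tilde - mu_x) * (Y - mu_y)) / np.sqrt(n)
        exceed += (T_m >= T_obs)
    # Step 3: resampling-based p-value
    return (1 + exceed) / (1 + M)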
The GCM test and double robustness Another CI test is the GCM test (Shah and Peters, 2020), defined as φ GCM n ≡ 1{|T GCM n | > z 1−α/2 }, where T GCM n ≡ (1/√n) Σ i=1..n R i / S GCM n , R i ≡ (X i − µ n,x (Z i ))(Y i − µ n,y (Z i )), (9) and (S GCM n )² is the empirical variance of the product-of-residual summands: (S GCM n )² ≡ (1/n) Σ i=1..n R i ² − ((1/n) Σ i=1..n R i )². It controls Type-I error if the following in-sample mean-squared error quantities are small (Shah and Peters, 2020): E n,x ² ≡ (1/n) Σ i=1..n (µ n,x (Z i ) − E Ln [X|Z i ])² and E n,y ² ≡ (1/n) Σ i=1..n (µ n,y (Z i ) − E Ln [Y |Z i ])². In particular, Shah and Peters (2020) require that E n,x E n,y = o Ln (n −1/2 ) (SP1) and, for some constants c 1 , c 2 , δ > 0, mild bounds on the conditional variances and higher conditional moments of the residuals (SP2). The GCM test is therefore doubly robust in the sense that it controls Type-I error if the product of the estimation errors for E[X|Z] and E[Y |Z] (E n,x E n,y ) converges to zero at the o Ln (n −1/2 ) rate. Note that this is a rate double robustness property rather than a model double robustness property; see Smucler, Rotnitzky, and Robins (2019) for a discussion of this distinction.
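For comparison, a matching Python sketch of the GCM test, with the same helper conventions as in the dCRT sketch above; the studentisation and two-sided normal calibration follow the display:

import numpy as np
from scipy.stats import norm

def gcm_pvalue(X, Y, Z, fit_predict_x, fit_predict_y):
    # Product of in-sample regression residuals, one summand per observation
    R = (X - fit_predict_x(Z, X)) * (Y - fit_predict_y(Z, Y))
    T = np.sqrt(len(R)) * np.mean(R) / np.std(R)   # studentised mean of the R_i
    return 2 * norm.sf(np.abs(T))                  # two-sided N(0, 1) calibration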
dCRT resampling distribution converges to normal
To make it easier to analyze the asymptotic properties of the dCRT, in this section we prove that it is asymptotically equivalent to the resampling-free MX(2) F -test, a variant of the MX(2) F -test (Katsevich and Ramdas, 2022) where the first two moments of L n (X|Z) are estimated in sample. This equivalence was already shown by these authors in the case when µ n,x is known and µ n,y is fit out of sample (see their Theorem 2). They conjectured that the equivalence continues to hold when µ n,y is fit in sample. Here, we prove this conjecture, not just when µ n,y is fit in sample, but also when the first two moments of L n (X|Z) are unknown and also fit in sample. Note that the variance of the resampling distribution of T dCRT n is (S dCRT n )² ≡ (1/n) Σ i=1..n Var L n [X|Z i ] (Y i − µ n,y (Z i ))², (11) where Var L n [X|Z] denotes the conditional variance under the fitted law. It will be convenient to reformulate the dCRT as the test that rejects when the resampling-based p-value falls below α: φ dCRT n ≡ 1{ P( T dCRT n (X̃, X, Y, Z) ≥ T dCRT n (X, X, Y, Z) | X, Y, Z ) ≤ α }, where X̃ is drawn from the fitted law L n (X|Z). Note that this test is obtained from that in Algorithm 1 by sending M → ∞; we focus our theoretical analysis here and throughout on this infinite-resamples limit of the dCRT.
One would expect, based on the central limit theorem, that the conditional distribution of the ratio T dCRT n (X̃, X, Y, Z)/S dCRT n tends to N (0, 1). This statement is complicated by the conditioning event, which requires us to be careful to define conditional convergence in distribution: Definition 1. For each n, let W n be a random variable and let F n be a σ-algebra. Then, we say W n converges in distribution to a random variable W conditionally on F n if P(W n ≤ w | F n ) → P(W ≤ w) in probability, for every continuity point w of the map w ↦ P(W ≤ w). Based on an extension of the Lyapunov central limit theorem to conditional convergence in distribution (Theorem 8), we get the following result: Theorem 1. Suppose the sequences of true and learned laws L n and L n satisfy two nondegeneracy properties, (NDG1) and (NDG2), which keep the true and fitted conditional variances entering the resampling distribution away from degeneracy. If the conditional Lyapunov condition (Lyap-1) is satisfied for some δ > 0, then T dCRT n (X̃, X, Y, Z)/S dCRT n converges in distribution to N (0, 1) conditionally on (X, Y, Z), and therefore sup t |P( T dCRT n (X̃, X, Y, Z)/S dCRT n ≤ t | X, Y, Z ) − Φ(t)| → 0 in probability. (14) This suggests that the dCRT is asymptotically equivalent to the MX(2) F -test, defined as φ MX(2) n ≡ 1{ T dCRT n (X, X, Y, Z)/S dCRT n ≥ z 1−α }. (16) Indeed, we have the following corollary.
Corollary 1. Consider a sequence of laws L n satisfying the assumptions (NDG1), (NDG2), and (Lyap-1) of Theorem 1, and assume that the normalized test statistic T dCRT n (X, X, Y, Z)/S dCRT n does not accumulate near the rejection threshold z 1−α . Then, the dCRT is asymptotically equivalent to the MX(2) F -test: P L n (φ dCRT n ≠ φ MX(2) n ) → 0. This result extends Katsevich and Ramdas (2022, Theorem 2) by allowing µ n,x and µ n,y to be fit in sample, rather than assuming µ n,x is known and µ n,y is fit out of sample. It is a first indication that the dCRT approximates a test based on asymptotic normality.
3 dCRT is not robust for general µ n,y
One of the hallmarks of MX inference is that it requires "no restriction on the dimensionality of the data or the conditional distribution of [L n (Y |Z)]" (Candès et al., 2018). For the CRT, this means that Type-I error is controlled in finite samples, regardless of the test statistic used or the distribution of the response variable. If L n (X|Z) is described by a parametric model with k unknown parameters and we have N ≫ n · k unlabeled samples to learn this model, then at least asymptotic Type-I error control is still possible without assumptions on L n (Y |Z) (Berrett et al., 2020). By contrast, in this section we show that when L n (X|Z) is approximated in sample, we cannot expect Type-I error control without assumptions on the response variable.
Let us consider a simple null model L n with Z ∼ N (0, I p ), X | Z ∼ N (Z T β, 1), Y | Z ∼ N (Z T β, 1), X ⊥⊥ Y | Z. (19) Suppose we fit L n (X|Z) via a ridge regression while using the trivial estimate µ n,y (Z) ≡ 0 for E[Y |Z]. To build intuition while avoiding technical difficulties, we loosely approximate the ridge regression estimator as β n ≡ (1 − c/√n)β, where the 1/√n error term reflects that we are fitting β n in sample (and is optimistic in the sense that it ignores possible growth in p). Then, consider the dCRT based on L n (X|Z) = N (Z T β n , 1) and µ n,y (Z) ≡ 0. In this case, the normality of L n (X|Z) leads to normality of the resampling distribution holding not just asymptotically (14) but in finite samples as well. Therefore, the dCRT is equal to the MX(2) F -test: φ dCRT n = φ MX(2) n = 1{ T dCRT n (X, X, Y, Z)/S dCRT n ≥ z 1−α }. On the other hand, it is easy to derive that T dCRT n (X, X, Y, Z)/S dCRT n → d N ( c‖β‖²/√(1 + ‖β‖²), 1 ). Therefore, the limiting Type-I error of the dCRT in this case is 1 − Φ( z 1−α − c‖β‖²/√(1 + ‖β‖²) ), which can be made arbitrarily close to one as c → ∞. This issue is caused by a combination of the O(1/√n) shrinkage bias in the estimator for µ n,x and the failure to estimate µ n,y . This leaves an O(1/√n) correlation between X − µ n,x (Z) and Y induced by Z, which shifts the mean of the null distribution of the dCRT test statistic away from zero by a nontrivial amount.
Numerical simulations (although with lasso instead of ridge regression) confirm this phenomenon. We constructed a numerical simulation based on the null model (19) with n = 1600, p = 400, and β having only s = 5 nonzero entries (see Section 6.2 below for more on our data-generating model). In this setting, we applied the dCRT using the cross-validated lasso and intercept-only models to estimate µ n,x and µ n,y , respectively. As we increased the magnitude of the coefficient vector β, this test exhibited significant loss of Type-I error control (Figure 1). By contrast, using the lasso instead of the intercept-only model to estimate µ n,y reduced the Type-I error to nearly the nominal level.
Figure 1: Type-I error of the dCRT under the null model (19), depending on which method is used to estimate µ n,y , when the lasso is used to estimate µ n,x . Improved estimation of µ n,y leads to markedly reduced Type-I error.
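The stylised ridge example can be checked directly by Monte Carlo. The sketch below (dimensions and the shrinkage constant c are our own choices) evaluates the Type-I error of the equivalent MX(2) F-test under a null model of the kind in (19):

import numpy as np
from scipy.stats import norm

def shrinkage_dcrt_type1(n=1600, p=5, c=5.0, reps=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    beta = np.ones(p)
    z = norm.ppf(1 - alpha)
    rejections = 0
    for _ in range(reps):
        Z = rng.standard_normal((n, p))
        X = Z @ beta + rng.standard_normal(n)
        Y = Z @ beta + rng.standard_normal(n)       # null: X and Y independent given Z
        beta_hat = (1 - c / np.sqrt(n)) * beta      # stylised shrinkage-biased fit of E[X|Z]
        T = np.sum((X - Z @ beta_hat) * Y) / np.sqrt(n)   # mu_y(Z) = 0, never fit
        S = np.sqrt(np.mean(Y ** 2))                # resampling sd with sigma_x = 1
        rejections += (T / S > z)
    return rejections / reps                        # far above alpha for large c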
So even when L n (X|Z) is estimated at a parametric rate (albeit with regularization), the dCRT can have inflated Type-I error rate for certain test statistics. A similar observation was made by Li and Liu (2022) (see the discussion after Theorem 3). Similar phenomena have been noted in the contexts of causal inference (Dukes and Vansteelandt, 2020) and doubly robust estimation (Chernozhukov et al., 2018;Chernozhukov et al., 2022); in the latter literature this issue is called "regularization bias." We note that poor estimation of E[Y |Z], in conjunction with the plug-in resampling scheme of the dCRT can also lead to conservative inference rather than liberal inference. This happens in cases when β n is an efficient estimator of β, e.g. that derived from ordinary least squares. In the causal inference context, this conservatism is a consequence of the fact that using estimated propensity scores can lead to more efficient estimates than using known propensity scores (Robins, Mark, and Newey, 1992;Henmi and Eguchi, 2004). If the propensity score is estimated but the standard error is constructed as though it were known, then conservative inference would result.
As already alluded to, the Type-I error inflation in the above example stems from the fact that E n,x E n,y = O(n −1/2 ) · O(1) = O(n −1/2 ), a rate insufficient for Type-I error control. If we had at least consistency of µ n,y (Z), then this rate would improve to o(1/√n) and Type-I error control would be restored. This intuition is supported by the simulation results in Figure 1, where estimating E[Y |Z] via lasso brought the Type-I error down to nearly the nominal level. This discussion suggests that, if L n (X|Z) is learned in sample (or on an external sample of similar size), then assumptions must be placed not only on L n (X|Z) but also on L n (Y |Z) for Type-I error control. This motivates us to investigate the double robustness of the dCRT and compare it to the GCM test.
4 dCRT is doubly robust and equivalent to GCM test
Of course, in practice µ n,y is not fit as naively as in the counterexample from Section 3. The conditional mean E[Y |Z] is usually approximated via a machine learning algorithm, as improved approximation of this quantity improves the power of the dCRT (Katsevich and Ramdas, 2022). In the context where L n (X|Z) must be approximated, we claim that more accurate estimation of E[Y |Z] can improve not just the power but also the Type-I error control of the dCRT. We formalize this by showing that the dCRT is doubly robust (recall Section 1.4). This property is a consequence of the fact that, under the null, the dCRT is asymptotically equivalent to the GCM test, which itself is doubly robust. This equivalence also implies that the dCRT and GCM test have the same asymptotic power against contiguous alternatives.
Equivalence between GCM test and dCRT
When comparing the GCM test (8) to the MX(2) F -test (16), which is asymptotically equivalent to the dCRT (Corollary 1), the only difference is the normalization term. Under the null hypothesis, this difference vanishes asymptotically as long as the estimated variance Var L n [X|Z] is consistent in the following averaged sense: (1/n) Σ i=1..n |Var L n [X|Z i ] − Var Ln [X|Z i ]| (Y i − µ n,y (Z i ))² = o Ln (1). (23) In preparation to state our equivalence result, we augment the assumption (SP1) to require, in addition, that each of E n,x and E n,y is o Ln (1) on its own (SP1'). Theorem 2. Suppose L n ∈ L 0 n is a sequence of laws satisfying the assumptions (SP1') and (SP2), the nondegeneracy condition (NDG2), the variance consistency property (23) and the Lyapunov condition (Lyap-2). Then, the dCRT and GCM variance estimates are asymptotically equivalent: S dCRT n / S GCM n → 1 in L n -probability, as are the dCRT and GCM tests themselves: P Ln (φ dCRT n ≠ φ GCM n ) → 0. The variance consistency property (23) is relatively easy to achieve, given the other assumptions of Theorem 2. The following proposition states two sufficient conditions for this property.
Proposition 1. If the assumptions of Theorem 2 other than variance consistency (23) hold, then the latter property holds in the following two cases: (i) the variance estimate Var L n [X|Z] is the pooled squared-residual estimate (1/n) Σ i=1..n (X i − µ n,x (Z i ))²; (ii) Var Ln [X|Z] = f(E Ln [X|Z]) for a known function f that is Lipschitz on Conv(supp(L n (X))), the estimate Var L n [X|Z] ≡ f(µ n,x (Z)) is used, and supp(µ n,x (Z)) ⊆ Conv(supp(L n (X))) almost surely for every n. The first variance estimate given in the proposition can always be applied; the second applies to cases when the mean-variance relationship for L n (X|Z) is known and Lipschitz on the convex hull of the support of X, denoted Conv(supp(L n (X))). This is the case, for example, if X is binary and we define f (t) ≡ t(1 − t).
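Under our reading of Proposition 1, the two admissible variance estimates can be written as follows (the pooled-residual form of case (i) is our interpretation of the "first variance estimate"):

import numpy as np

def var_hat_pooled(X, mu_x_hat):
    # Case (i): one pooled residual variance, applied to every Z_i
    return np.full(len(X), np.mean((X - mu_x_hat) ** 2))

def var_hat_binary(mu_x_hat):
    # Case (ii): known mean-variance map f(t) = t(1 - t) for binary X,
    # with the fitted means clipped into Conv(supp(X)) = [0, 1]
    m = np.clip(mu_x_hat, 0.0, 1.0)
    return m * (1.0 - m)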
One consequence of Theorem 2 is that the dCRT and GCM test are also asymptotically equivalent against local alternatives, so in particular have the same power.
Corollary 2. If L̃ n is a sequence of alternative distributions that is contiguous to a sequence L n ∈ L 0 n satisfying the assumptions of Theorem 2, then the dCRT and GCM tests are asymptotically equivalent against L̃ n : P L̃n (φ dCRT n ≠ φ GCM n ) → 0, and therefore have the same asymptotic power: E L̃n [φ dCRT n ] − E L̃n [φ GCM n ] → 0. By constructing a null distribution via resampling, the CRT allows for arbitrarily complicated test statistics whose asymptotic distributions are not known. For the dCRT, however, the resampling-based null distribution simply recapitulates the asymptotic normal distribution used by the GCM test (Theorems 1 and 2). Therefore, at least in large samples, the extra computational burden of resampling is unnecessary, as the equivalent GCM test can be applied instead.
Double robustness of dCRT
Another consequence of Theorem 2 is that the dCRT is doubly robust under the variance consistency condition (23), since it is equivalent under the null hypothesis to the doubly robust GCM test.
Corollary 3. Let R n be a sequence of regularity conditions such that for any sequence L n ∈ R n , we have the nondegeneracy condition (NDG2), the Lyapunov condition (Lyap-2), the conditions (SP1') and (SP2), and consistent variance estimates (23). Then, the dCRT has asymptotic Type-I error control over L 0 n ∩ R n in the sense of the definition (4).
Therefore, Type-I error control requires accuracy of only the first two moments of L n (X|Z), in parallel to Theorem 2 of Katsevich and Ramdas (2022). The condition on the second moment of L n (X|Z) is needed because the variance of the resampling distribution must not be smaller (asymptotically) than the true variance of the test statistic. This condition does not require much more than accurate estimation of the first moments (Proposition 1). It can be dropped altogether if we build normalization directly into the dCRT test statistic. We explore this possibility in Appendix A.
Our conclusion that dCRT is doubly robust initially appears at odds with the statement that "the model-X CRT...does not pursue such double robustness through learning and adjusting for both X|Z and Y |Z..." (Li and Liu, 2022). This statement is in reference to the worst-case performance of the CRT across all possible test statistics (Berrett et al., 2020). We agree that this worst-case performance can be poor when learning L n (X|Z) in sample (Section 3). However, the test statistics applied in conjunction with the CRT (such as the dCRT statistic) do usually involve learning and adjusting for L n (Y |Z). In this sense, practical applications of the (d)CRT do learn and adjust for both L n (X|Z) and L n (Y |Z); the former is learned when approximating the "model for X" and the latter when computing the test statistic. If the quality of these estimates is sufficiently good, then the dCRT will control Type-I error (Corollary 3).
GCM test is optimal against certain alternatives
We have shown that, in large samples, the dCRT has the same power against local alternatives as the resampling-free GCM test. Of course, other instances of the much more general CRT paradigm have better power than the GCM test against certain alternatives. We show in this section, however, that this is not the case for generalized partially linear models (GPLMs), a broad class of alternatives. In fact, the GCM test is asymptotically most powerful against GPLM alternatives. We leverage classical semiparametric efficiency theory (Choi, Hall, and Schick, 1996;Van Der Vaart, 1998;Kosorok, 2008) to prove this result. We state our optimality result in Section 5.1, give an example of its application in Section 5.2, and then compare it to existing semiparametric optimality results in Section 5.3.
Optimality result
To facilitate the link with semiparametric theory, in this section of the paper we operate in a fixed-dimensional setting. Accordingly, we drop the subscript n from L 0 n and R n . For each value of n, we have (X, Y , Z) ∈ R 1+1+p for fixed p. We will seek power against semiparametric GPLM alternatives of the form L (β,g) : (X, Z) ∼ L x,z , Y | X, Z ∼ f βX+g(Z) , β ∈ R, g ∈ H g . (29) Here, L x,z is a fixed law, f η is a one-parameter exponential family with natural parameter η ∈ R and log-partition function ψ, β ∈ R, and H g is a linear subspace of the L 2 space of functions on R p with the measure L x,z (Z). The alternatives (29) are those where Y |X, Z follows an exponential family distribution with natural parameter linear in X and potentially nonlinear in Z. Note that GPLMs include linear and generalized linear models as special cases, and therefore cover a broad range of alternative distributions. We focus on power against local alternatives L θn(h) near θ 0 ≡ (0, g 0 ), defined for h ≡ (h β , h g ) ∈ (0, ∞) × H g by θ n (h) ≡ (h β /√n, g 0 + h g /√n). We leave the dependence of θ n (h) on g 0 implicit. Next, we define asymptotic optimality against such local alternatives following Choi, Hall, and Schick (1996): Definition 2. For h ∈ (0, ∞) × H g , we say a test φ * n is the locally asymptotically most powerful level α test of H 0 : L ∈ L 0 ∩ R against the local alternatives L θn(h) (32) if φ * n has asymptotic Type-I error control over R at level α and for any other test φ n satisfying the same property we have lim sup n→∞ ( E L θn(h) [φ n ] − E L θn(h) [φ * n ] ) ≤ 0. If this is true for every h ∈ (0, ∞) × H g , such a test is locally asymptotically uniformly most powerful at g 0 , or LAUMP(g 0 , R, α). We are now ready to state our main optimality result. Theorem 3. Consider the conditional independence testing problem (32), with a collection of null distributions R ⊆ L 0 satisfying some regularity conditions, a linear subspace H g ⊆ L 2 (L x,z (Z)) specifying possible values for the nonparametric component g in the GPLM alternative model (29), and some subset S ⊆ H g . Suppose the following four assumptions hold: (35) assumptions (SP1) and (SP2) hold for all L ∈ R; (36) the alternative family (29) satisfies regularity conditions sufficient for semiparametric theory to apply; (37) the conditional expectation Z ↦ E L x,z [X|Z] belongs to H g ; and (38) for each g 0 ∈ S, the null point L (0,g 0 ) is an interior point of R in a suitable sense. Then, the GCM test is LAUMP(g 0 , R, α) for every g 0 ∈ S. Let us discuss each of the four assumptions of Theorem 3: • The assumption (35) is a set of regularity conditions on the null distributions R. It is the same set of assumptions made by Shah and Peters (2020) to ensure Type-I error control of the GCM test over R, including the assumption that the conditional means µ n,x and µ n,y are fit accurately enough (SP1) and fairly mild moment assumptions (SP2).
• The assumption (36) is a set of regularity conditions on the alternative distribution (29). These conditions are required for the semiparametric optimality theory to apply. These assumptions allow for GPLMs based on the normal distribution (assuming X has second moment) or any other exponential family (assuming (X, Z) is compactly supported and the functions g are continuous).
• The assumption (37) states that the conditional expectation Z → E Lx,z [X|Z] must belong to the subspace H g . It guarantees that the "least favorable" value of the nonparametric component g is in the space H g , yielding the optimality of the GCM statistic.
• The assumption (38) connects the semiparametric alternative hypothesis to the conditional independence null hypothesis. In some sense it requires L θ 0 ≡ L (0,g 0 ) (derived from the semiparametric alternative distribution (29)) to be an interior point of R (the conditional independence null) for each g 0 ∈ S.
We give an example of when these assumptions hold in the next section.
Example: Kernel ridge regression
We illustrate Theorem 3 with a kernel ridge regression example, borrowed from Shah and Peters (2020, Section 4). Suppose the conditional expectations µ x and µ y lie in a reproducing kernel Hilbert space H (Wainwright, 2019, Example 12.16). Consider the kernel ridge estimators µ n,x ≡ argmin f∈H { (1/n) Σ i=1..n (X i − f(Z i ))² + λ‖f‖² H } (and analogously µ n,y ), with λ tuned as described in Shah and Peters (2020, Section 4). Using Shah and Peters (2020, Theorem 11), the following result can be derived as a consequence of Theorem 3.
Hence, the GCM test based on kernel ridge regression does not just control Type-I error (Shah and Peters, 2020, Theorem 11); it is also optimal against local alternatives.
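As an illustration of this setting, one can plug kernel ridge estimates of both conditional means into the GCM statistic. The sketch below uses scikit-learn's KernelRidge with an RBF kernel and ad hoc tuning parameters of our choosing, rather than the tuning rule of Shah and Peters (2020):

import numpy as np
from scipy.stats import norm
from sklearn.kernel_ridge import KernelRidge

def kernel_ridge_gcm(X, Y, Z, penalty=1e-2, gamma=1.0):
    # Kernel ridge estimates of E[X|Z] and E[Y|Z], fit and predicted in sample
    mu_x = KernelRidge(alpha=penalty, kernel="rbf", gamma=gamma).fit(Z, X).predict(Z)
    mu_y = KernelRidge(alpha=penalty, kernel="rbf", gamma=gamma).fit(Z, Y).predict(Z)
    R = (X - mu_x) * (Y - mu_y)
    T = np.sqrt(len(R)) * np.mean(R) / np.std(R)
    return T, 2 * norm.sf(np.abs(T))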
Discussion of Theorem 3
Theorem 3 states that the GCM test of Shah and Peters (2020) is the optimal test of conditional independence against a broad class of semiparametric GPLM alternatives, including linear and generalized linear models. To our knowledge, it is the first result at the intersection of conditional independence testing and semiparametric optimality, although Shah and Peters (2020) have already noted the connection between the GCM test and nonparametric estimation of the expected conditional covariance between X and Y given Z. Our result complements another line of work on minimax optimality for conditional independence testing (Canonne et al., 2018;Neykov, Balakrishnan, and Wasserman, 2021;Kim et al., 2022). In the related model-X context, few optimality results are available. Two existing works show optimality statements based on likelihood ratio statistics; one in the context of the CRT (Katsevich and Ramdas, 2022) and the other in the context of model-X knockoffs (Spector and Fithian, 2022).
Theorem 3 closely parallels results on estimation in semiparametric regression (Robinson, 1988;Bickel et al., 1993;Donald and Newey, 1994;Härdle, Liang, and Gao, 2000;Robins and Rotnitzky, 2001;Van De Geer et al., 2014;Ning and Liu, 2017;Janková and Van De Geer, 2018;Chernozhukov et al., 2018). It follows from Bickel et al. (1993) and Robins and Rotnitzky (2001) that the GCM statistic with the true conditional means µ x and µ y is the efficient score under the null hypothesis β = 0 in the context of GPLMs based on one-parameter exponential families with canonical link. Existing results on semiparametric optimality for hypothesis testing state that tests based on optimal estimators are themselves optimal (Choi, Hall, and Schick, 1996;Van Der Vaart, 1998;Kosorok, 2008).
Despite the similarity between Theorem 3 and existing semiparametric optimality results, we emphasize that this theorem is a statement about optimality for conditional independence testing rather than for semiparametric testing. The semiparametric model (29) plays the role of the alternative distribution with respect to which power is evaluated, and need not hold under the null hypothesis. To bridge this gap, it suffices to find an open ball within the conditional independence null hypothesis containing the semiparametric null hypothesis (38). This allows us to reduce the conditional independence testing problem to a semiparametric testing problem, and therefore to leverage existing semiparametric optimality results (Appendix E).
Note that Theorem 3 gives the power against local alternatives of the GCM test with µ x and µ y estimated in sample. This complements Shah and Peters (2020, Theorem 8), where these authors compute the power of the GCM test against non-local alternatives by resorting to sample splitting, which is not required to show Type-I error control for the GCM test. This sample splitting is necessary under non-local alternatives to avoid Donsker conditions; using either sample splitting or Donsker conditions is also standard practice in the semiparametric literature. By contrast, we avoid sample splitting by exploiting the special structure of the conditional independence null and contiguity arguments to compute limiting power under local alternatives.
While the Type-I error control results in Section 4 are stated in the high-dimensional setting, Theorem 3 is stated only for fixed-dimensional covariate vectors Z. Indeed, semiparametric optimality theory is predominantly low-dimensional. A notable exception is the work of Janková and Van De Geer (2018), which provides a semiparametric theory of estimation in high dimensions. Extending this theory to hypothesis testing is nontrivial, and beyond the scope of the current work. Nevertheless, proving optimality statements for conditional independence testing in high dimensions is an interesting direction for future work. We note in passing that high-dimensional results for lasso-based estimators often assume exact sparsity of the coefficient vector, which poses a problem for condition (38) requiring the regularity class R to have interior points. Finally, we note that Theorem 3 gives the optimality of the GCM statistic against alternative models for Y in which X and Z do not interact. For alternatives where the conditional association between Y and X is modified by Z, the GCM test will no longer be optimal. Variants of the CRT (Zhong, Kuffner, and Lahiri, 2021; Sesia and Sun, 2022), model-X knockoffs (Li et al., 2021), and the GCM test (Lundborg et al., 2022) that are designed to improve power in the presence of effect modification are available, although their optimality properties are not described. Optimal tests developed specifically for detecting interaction effects between X and Z (rather than main effects) may be constructed based on Vansteelandt et al. (2008).
Finite-sample performance assessment
The results in the preceding sections are all asymptotic. In this section, we complement these results with a comprehensive simulation-based assessment of Type-I error and power in finite samples. Previous simulation-based assessments of the Type-I error of MX methods have come to differing conclusions: Sesia, Sabatti, and Candès (2019), Romano, Sesia, and Candès (2019), Sesia et al. (2020), and Liu et al. (2022) found broad robustness to misspecification of L n (X|Z), while Li and Liu (2022) found such misspecifications to cause marked Type-I error inflation. We show that differences in the level of marginal association between X and Y implied by the simulation design explain these discrepancies, and then use this insight to inform our own simulation design in Section 6.2. Then, we present the results of our numerical simulations in Section 6.3. Numerical simulation results and instructions to reproduce them are available at https://github.com/Katsevich-Lab/symcrt-manuscript-v1.
Revisiting prior simulations of robustness
The question of robustness of MX methods to the misspecification of L n (X|Z) has been investigated starting from the paper in which the model-X framework was originally proposed (Candès et al., 2018). In this paper, the joint distribution L n (X, Z) was estimated in sample via the graphical lasso, which is similar to estimating the conditional distribution L n (X|Z) via the ordinary lasso. These authors found that "Although the graphical Lasso is well suited for this problem since the covariates have a sparse precision matrix, its covariance estimate is still off by nearly 50%, and yet surprisingly the resulting power and FDR are nearly indistinguishable from when the exact covariance is used...the nominal level of 10% FDR is never violated, even for covariance estimates very far from the truth." Similar conclusions have been drawn from numerical simulations in subsequent papers as well (Sesia, Sabatti, and Candès, 2019; Romano, Sesia, and Candès, 2019; Liu et al., 2022), the latter studying the dCRT specifically. On the other hand, the numerical simulations of Li and Liu (2022) show that the dCRT can suffer significant Type-I error inflation when L n (X|Z) is inaccurately fit. These authors state that "for model-X inference, the dependence of X on Z is not adequately characterized and adjusted [for] due to the shrinkage bias of lasso." To resolve this apparent contradiction, we consider a common data-generating model used in the MX literature: (X, Z) ∼ N (0, Σ), Y | X, Z ∼ N (θX + Z T β, 1). (44) Often, (X, Z) are assumed to have a spatial structure (motivated by the GWAS application), with Σ = Σ(ρ) ∈ R (1+p)×(1+p) taken to be the AR(1) covariance matrix with autocorrelation parameter ρ ∈ (−1, 1). This covariance matrix roughly approximates linkage disequilibrium structure among genotypes, where correlations among variables are local with respect to the spatial structure. Conditional independence under this model (44) reduces to H 0 : θ = 0. Furthermore, the conditional distribution L n (X|Z) implied by the normal joint distribution is that of a linear model: X | Z ∼ N (Z T γ, σ²), with γ ∈ R p and σ² > 0 determined by Σ. (45) In the context of this model, the conditional independence testing problem is nontrivial to the extent that Z induces marginal association between X and Y even in the absence of conditional association. In a causal inference context, this spurious marginal association would be called a confounding effect of Z. This marginal association can be small or large, depending on the correlation structure of Z and the extent to which the supports of β and γ overlap. Properly adjusting for Z is important to the extent that Z induces marginal association between X and Y . We claim that the simulation studies in much of the original MX literature had relatively low levels of marginal association between X and Y , whereas the simulation studies in Li and Liu (2022) were done in a regime with much more marginal association. To illustrate this point, we quantify the level of marginal association in a given problem setup as the Type-I error of the GCM test with intercept-only models for L n (X|Z) and L n (Y |Z). This test is essentially a Pearson test of (marginal) independence between X and Y , and ignores the variables Z altogether. We compute this Type-I error for the data-generating models used to assess robustness by Candès et al. (2018), Liu et al. (2022), and Li and Liu (2022) (Appendix F.1). The former two papers are framed in the variable selection context, where several explanatory variables W j are considered, and the hypothesis H 0 : Y ⊥⊥ W j | W −j is tested for each j.
Therefore, X ≡ W j for each j. On the other hand, Li and Liu (2022) considered a conditional independence testing framework, where X was a single variable of interest.
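Concretely, the intercept-only GCM test used here to quantify marginal association reduces to a studentised sample covariance between X and Y, ignoring Z altogether:

import numpy as np
from scipy.stats import norm

def marginal_gcm_pvalue(X, Y):
    # GCM with intercept-only regressions: essentially a Pearson-type test
    R = (X - X.mean()) * (Y - Y.mean())
    T = np.sqrt(len(R)) * np.mean(R) / np.std(R)
    return 2 * norm.sf(np.abs(T))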
For the data-generating models used by Candès et al. (2018) and Liu et al. (2022), we evaluate the Type-I error of the marginal GCM test for each hypothesis H 0 : Y ⊥⊥ W j | W −j , plotting these as a function of j (Figure 2, top row). We superimpose onto these plots a blue horizontal line indicating the Type-I error of the marginal GCM test for the data-generating model used by Li and Liu (2022).
Simulation design
Data-generating model As discussed in the previous section, appropriately setting the marginal correlation between X and Y in a given data-generating model is crucial to properly evaluate the impact of inaccurate estimation of L n (X|Z) on the Type-I error control of a model-X method. Keeping this in mind, we propose the following data-generating model: Z ∼ N (0, Σ(ρ)), X | Z ∼ N (Z T β, 1), Y | X, Z ∼ N (θX + Z T β, 1), (46) where Σ(ρ) is the AR(1) covariance matrix. We set the first s coefficients of β to be equal to ν and the rest to zero. Therefore, the entire data-generating process is parameterized by the six parameters (n, p, s, ρ, θ, ν) (Table 1). For both null and alternative simulations, we vary each of the first four across five values each, setting the remaining three to the default value indicated in bold. The fifth parameter θ controls the signal strength and the sixth parameter ν controls the extent of marginal association between X and Y . For the null simulation, we set θ ≡ 0, and for each setting of (n, p, s, ρ), we choose five values of ν equally spaced between 0 (no marginal association) and ν max (computed so that the marginal GCM method has Type-I error 0.99). Note that ν max depends on the parameters (n, p, s, ρ), so not exactly the same values of ν were used across settings of these four parameters. For the alternative simulation, we kept ν fixed at ν max /2 while for each setting of (n, p, s, ρ), we choose five values of θ equally spaced between 0 (no signal) and θ max (computed so that the GCM method with oracle settings of µ n,x and µ n,y has power 0.99). Finally, we complement the linear regression data-generating model (46) with an analogous one based on logistic regression.
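For concreteness, one draw from this simulation model can be generated as follows; the AR(1) construction and the shared coefficient vector β reflect our reading of the display (46):

import numpy as np

def generate_data(n, p, s, rho, theta, nu, rng=None):
    rng = np.random.default_rng(rng)
    idx = np.arange(p)
    Sigma = rho ** np.abs(np.subtract.outer(idx, idx))      # AR(1) covariance
    Z = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    beta = np.zeros(p)
    beta[:s] = nu                                           # first s coefficients equal nu
    X = Z @ beta + rng.standard_normal(n)
    Y = theta * X + Z @ beta + rng.standard_normal(n)       # theta = 0 under the null
    return X, Y, Z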
Methodologies compared In Section 4, we found that the GCM test and the dCRT are equivalent when applied with the same estimation methods for µ n,x and µ n,y . Using this equivalence, we also showed that the dCRT is robust to errors in µ n,x if they are compensated for by accurate estimates µ n,y . In our simulation to assess Type-I error, we wish to probe the finite-sample Type-I error control of the GCM and the dCRT. We apply both of these methods with the lasso to estimate µ n,x and µ n,y , as this is the most common choice in the MX literature.
In addition to the GCM test and the dCRT, we apply the Maxway CRT (Li and Liu, 2022), designed specifically to improve the Type-I error control of the dCRT in the context when µ n,x must be estimated.
Table 1: The second and third tables denote the values of (θ, ν) used for the null and alternative simulations. Each combination of (n, p, s, ρ) was paired with each of the five values of (θ, ν) displayed for null and alternative simulations.
The Maxway CRT is inherently a semi-supervised method, assuming the existence of an auxiliary unlabeled dataset containing observations of X and Z but not of Y . The methodology (specifically, "Maxway in example 1") proceeds, roughly, by fitting L n (X|Z) on the unlabeled data via the post-lasso (i.e. selecting active variables via the lasso and then refitting via ordinary least squares, Belloni and Chernozhukov, 2013), fitting µ n,y (Z) on the labeled data via post-lasso, and then applying the dCRT on the labeled data based on these two models.
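A minimal sketch of the post-lasso regression step used inside the Maxway CRT and in our post-lasso variants; the number of cross-validation folds and the intercept-only fallback are our own choices:

import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def post_lasso_predict(Z, V):
    # Select active variables via cross-validated lasso, then refit by OLS
    # on the selected support (Belloni and Chernozhukov, 2013)
    lasso = LassoCV(cv=5).fit(Z, V)
    support = np.flatnonzero(lasso.coef_)
    if support.size == 0:
        return np.full(len(V), V.mean())       # nothing selected: intercept-only fit
    ols = LinearRegression().fit(Z[:, support], V)
    return ols.predict(Z[:, support])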
Since the primary focus of this paper is the setting when no auxiliary unlabeled data are available, we implement the Maxway CRT by randomly splitting the data into two equal pieces, using the first as the unlabeled data (in particular, ignoring the response data) and the second as the labeled data. This strategy is consistent with the real data analysis in Li and Liu (2022, Section 6). We also consider a bona-fide semi-supervised setup, in order to compare the GCM test and dCRT to the Maxway CRT in the setting originally considered by Li and Liu (2022). However, in the semi-supervised setting we use all of the available data on (X, Z) (i.e. both unlabeled and labeled data) to fit L n (X|Z). By contrast, Li and Liu (2022) used only the unlabeled data to learn L n (X|Z) in their implementation of the dCRT for semi-supervised data.
Finally, we noted in Section 4 that the dCRT already has a built-in doubly robust property. Therefore, we conjectured that the Type-I error inflation observed in the simulations of Li and Liu (2022) is attributable to poor estimation of E[X|Z] and/or E[Y |Z] and that the dCRT can achieve Type-I error control if used in conjunction with better estimators of these conditional means. Taking inspiration from Li and Liu (2022), we also considered versions of the dCRT and the GCM test based on the post-lasso in addition to those based on the usual lasso. In summary, we compared five methods: lasso and post-lasso based GCM, lasso and post-lasso based dCRT, and Maxway CRT (Table 2).
Table 2: The five methodologies compared, how they estimate µ n,x and µ n,y , and what data they use for each in the context of semi-supervised or fully supervised data. Note that in the fully supervised case, data is split in half to form "unlabeled" and labeled sets for Maxway CRT. In this case, the dCRT and GCM tests still use all of the data available for estimating µ n,x and µ n,y . Two additional tests were used for reference purposes: the GCM test with intercept-only models for µ n,x and µ n,y and the GCM test with µ n,x and µ n,y set to their ground truth values.
As a point of reference for the null simulation, we also included the GCM test with intercept-only models for µ n,x and µ n,y ; the Type-I error of this test quantifies the degree of marginal association in the data-generating model (Section 6.1). As a point of reference for the alternative simulation, we also included the GCM test with µ n,x and µ n,y set to their ground truth values; the power of this test is the maximum power achievable by any test and therefore quantifies the signal strength in the data-generating model.
Evaluation of power in the presence of Type-I error inflation
The methodologies compared control Type-I error to differing extents across the variety of simulation parameters in Table 1. This makes it challenging to compare power across methods, since some control Type-I error while others do not. To address this challenge, we chose to compare the power of the test statistics underlying the methods, each under oracle calibration to ensure Type-I error control. Given the composite null, exact oracle calibration is computationally intractable. Therefore, we instead calibrated each test with respect to the point null obtained from the model (46) by setting θ = 0 while keeping the remaining parameters fixed. This is the "closest" point in the null to the alternative (46).
Simulation results
We conducted simulations for Gaussian and binary models for the response Y , each within the supervised and semi-supervised settings. We present the Type-I error and power for Gaussian responses in the supervised setting in Figures 3 and 4, respectively, while deferring the other cases to Appendix F.3. Note also that for the sake of brevity Figures 3 and 4 only present three out of the five values for the four parameters n, p, s, ρ; the complete results are presented in Appendix F.3. Next we list the main conclusions regarding Type-I error based on the results in Figures 3 (Gaussian supervised), 8 (Gaussian semi-supervised), 10 (binary supervised), and 12 (binary semi-supervised):
• As one would expect, across all simulation settings, all methods have poorer Type-I error control as sample size n decreases, dimension p increases, number of nonzero coefficients s increases, autocorrelation ρ increases, or marginal association strength ν increases.
• For Gaussian responses, the dCRT and GCM methods based on the same test statistics have very similar Type-I error control, echoing the asymptotic equivalence of the two methods (Theorem 2). For binary responses, the lasso-based dCRT has somewhat lower Type-I error than the lasso-based GCM test ( Figure 10). The discreteness of binary responses likely slows down the convergence to normality of the GCM statistic, rendering the resampling-based null distribution of the dCRT a better approximation to the null distribution.
• Across all simulation settings, the dCRT and GCM methods based on the post-lasso have dramatically better Type-I error control than their lasso-based counterparts. This is because the post-lasso tends to more fully regress the confounders Z out of the response Y ; see also Appendix F.2.
• Across all simulation settings, Maxway CRT has better Type-I error control than the lasso-based dCRT (in line with the results of Li and Liu, 2022), but worse Type-I error control than the post-lasso-based dCRT. The latter is likely due to the fact that Maxway CRT uses only half of the available data on (X, Z) to fit L n (X|Z), and therefore does not adjust for Z as accurately.
Next, we list the main conclusions regarding power based on the results in Figures 4 (Gaussian supervised), 9 (Gaussian semi-supervised), 11 (binary supervised), and 13 (binary semi-supervised):

• Across all simulation settings, GCM-based methods have somewhat higher power than their dCRT-based counterparts. This may have to do with the stabilizing effect of the GCM normalization, compared to the unnormalized dCRT statistic. The difference between the two tends to vanish as sample size grows, reflecting the asymptotic equivalence of the two methods (Corollary 2).

• Across all simulation settings, the dCRT and GCM methods based on the lasso have higher power than their post-lasso-based counterparts. This is because the post-lasso introduces more variance into the estimation of µ n,y ; see also Appendix F.2.
• Across Gaussian and binary supervised simulation settings (Figures 7 and 11), the Maxway CRT has the lowest power among all methods compared. The reason for this is that the Maxway CRT relies on data splitting and therefore has half the effective sample size of the other methods. On the other hand, for semi-supervised settings (Figures 9 and 13), the Maxway CRT has power comparable to or better than that of the post-lasso-based methods, but still worse than the lasso-based methods. This is due to the additional variance introduced by the refitting step in the post-lasso.
In summary, the methods with the best Type-I error control across all simulation settings are the dCRT and the GCM test based on the post-lasso, although this improved robustness does come with a cost in terms of power when compared to the lasso-based methods. We investigate the associated trade-off in Appendix F.2.
Conclusion
We conclude by summarizing our main findings and highlighting directions for future work.
Model-X inference with L(X|Z) fit in sample can be doubly robust

Model-X inference (Candès et al., 2018) is presented as a mode of inference where the assumptions are transferred entirely from L(Y |Z) to L(X|Z); no restrictions are made on the former law (or the test statistic used, at least in the context of the CRT), while the latter law is assumed exactly known. In practice, however, the law L(X|Z) is often fit in sample.
In the context of the dCRT, we show that Type-I error control cannot be guaranteed without restrictions on L(Y |Z) or the test statistic used (Section 3). On the other hand, test statistics based on decent estimates of E[Y |Z] can compensate for errors in the estimation of L(X|Z) and restore Type-I error control (Corollary 3), a double robustness phenomenon. This result brings model-X inference more in line with double regression inferential methodologies: The conditional mean E[X|Z] is estimated in the context of in-sample approximation to the "model for X," and the conditional mean E[Y |Z] is estimated when computing the model-X test statistic. Relatedly, a double robustness property was noted for conditional model-X knockoffs (Huang and Janson, 2020). A doubly robust version of the dCRT has also been recently proposed (the Maxway CRT; Li and Liu, 2022), although we argue that the original dCRT is itself doubly robust.
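To illustrate this double regression structure, below is a minimal sketch of a dCRT with both conditional means fit in sample. It assumes a Gaussian working model for X | Z with homoscedastic residuals and uses cross-validated lasso estimators; these choices, and all names, are illustrative rather than prescribed by our theory.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def dcrt_pvalue(X, Y, Z, n_resamples=500, seed=0):
    """One-sided dCRT p-value with L(X|Z) estimated in sample.

    Both mu_x(Z) = E[X|Z] and mu_y(Z) = E[Y|Z] are fit by cross-validated
    lasso, so the procedure is effectively a double regression method."""
    rng = np.random.default_rng(seed)
    mu_x = LassoCV(cv=5).fit(Z, X).predict(Z)
    mu_y = LassoCV(cv=5).fit(Z, Y).predict(Z)
    sigma_x = np.std(X - mu_x)            # residual scale of the working model for X|Z
    r_y = Y - mu_y
    T_obs = np.sum((X - mu_x) * r_y)      # unnormalized dCRT statistic
    # Resample X-tilde from the fitted law L-hat(X|Z) and recompute the statistic.
    T_null = np.array([np.sum((rng.normal(mu_x, sigma_x) - mu_x) * r_y)
                       for _ in range(n_resamples)])
    return (1 + np.sum(T_null >= T_obs)) / (1 + n_resamples)
```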
The GCM test has broadly similar Type-I error and power to the dCRT, but requires no resampling

When fitting L(X|Z) in sample, the dCRT is essentially a double regression methodology. This prompts a comparison to the GCM test (Shah and Peters, 2020), another conditional independence test based on double regression. We established that the two tests are asymptotically equivalent under the null (Theorem 2) and under arbitrary local alternatives (Corollary 2). This suggests that the dCRT and the GCM test, when applied with the same estimators for E[X|Z] and E[Y |Z], should have similar Type-I error control and power. Our numerical simulations (Section 6) largely confirm this behavior in finite samples. A possible exception to this conclusion is the case when small samples or discreteness in the data slow down the convergence of the dCRT resampling distribution to normality (Theorem 1). In such cases, we observed that the dCRT can in fact have better Type-I error control than the GCM test based on the same estimators (Figure 10), presumably thanks to a better approximation to the null distribution in finite samples. Nevertheless, the broad similarity between the performances of the GCM test and the dCRT, and the fact that the former test requires no resampling, suggest that the GCM test may be preferable to the dCRT in practical problems with relatively large sample sizes.
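For reference, the GCM statistic is simple to compute once fitted conditional means are available; the sketch below takes the fitted values as inputs, so any estimators (lasso, post-lasso, or otherwise) can be plugged in. The interface is illustrative.

```python
import numpy as np
from scipy.stats import norm

def gcm_test(X, Y, mu_x_hat, mu_y_hat):
    """GCM test of conditional independence (Shah and Peters, 2020).

    mu_x_hat and mu_y_hat are fitted values of E[X|Z] and E[Y|Z]."""
    n = len(X)
    R = (X - mu_x_hat) * (Y - mu_y_hat)   # products of residuals
    T = np.sqrt(n) * R.mean() / R.std()   # self-normalized GCM statistic
    p_value = 2 * norm.sf(abs(T))         # compared to a standard normal
    return T, p_value
```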
The post-lasso yields much better Type-I error control than the lasso

Double robustness results for the GCM test and the dCRT apply only insofar as the estimation methods used in conjunction with these tests are accurate enough (SP1). The default estimation method for E[X|Z] and E[Y |Z] in many model-X applications is the lasso. As was demonstrated by Li and Liu (2022), the shrinkage bias of the lasso leads to inadequate adjustment of X and Y for Z, which in turn leads to inflated Type-I error. The same authors proposed the Maxway CRT, an extension of the dCRT involving the identification of coordinates of Z impacting X and Y via the lasso, followed by least squares refitting. Inspired by this work, we applied the original dCRT with post-lasso estimates for E[X|Z] and E[Y |Z]. We found vastly improved Type-I error control (Figure 6), compared not just to the lasso-based dCRT but also to the Maxway CRT itself. The decreased bias of the post-lasso helps adjust for Z more fully, although we found that the extra variance incurred by refitting does come at a cost in power. Nevertheless, our results suggest that applying the post-lasso in conjunction with model-X methodologies can lead to significant improvements in robustness.
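A minimal sketch of the post-lasso estimator follows; the cross-validation choice and the intercept-only fallback for an empty selected support are implementation details we are assuming, not dictated by the method.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def post_lasso_fitted_values(Z, W):
    """Post-lasso estimate of E[W|Z]: cross-validated lasso for support
    selection, then an unpenalized least squares refit on that support."""
    lasso = LassoCV(cv=5).fit(Z, W)
    support = np.flatnonzero(lasso.coef_)
    if support.size == 0:
        return np.full(len(W), W.mean())   # fall back to an intercept-only model
    ols = LinearRegression().fit(Z[:, support], W)
    return ols.predict(Z[:, support])      # refitting removes the lasso shrinkage bias
```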
The GCM test is the optimal conditional independence test against alternatives without interactions between X and Z

It is widely known in the semiparametric literature that the GCM test is the efficient score test for (generalized) partially linear models. The connection between the GCM test and semiparametric theory was noted briefly by Shah and Peters (2020), though not explored in depth, presumably because the GCM test is a conditional independence test rather than a test of a parameter in a semiparametric model. Nevertheless, we find that if the semiparametric null hypothesis can be embedded within the conditional independence null hypothesis (38), semiparametric optimality theory can be carried over fairly directly to conditional independence testing to establish optimality against semiparametric alternative distributions (Theorem 3). Thanks to this connection, we find that the GCM test has optimal asymptotic power among conditional independence tests against local generalized partially linear model alternatives (29). On the other hand, we leave open the question of optimality against alternatives where X and Z are allowed to interact. We also leave open whether our optimality result can be extended to the high-dimensional regime, where consistent estimation of the relevant nuisance functions is available under sparsity (Belloni, Chernozhukov, and Hansen, 2014; Chernozhukov et al., 2018). On the other hand, consistent estimates are typically not available in the regime when n, p, and s grow proportionally (Bayati and Montanari, 2011), causing a failure in traditional debiased estimates (Celentano and Montanari, 2021). An additional limitation of the current work is that we do not directly consider the variable selection problem. For example, application of the GCM test to each variable is much more computationally costly than applying model-X knockoffs. Therefore, the comparison between model-X and doubly robust methodologies for variable selection purposes requires more thought.
A The dCRT with GCM normalization
As an alternative to the dCRT, we consider the ndCRT. This procedure is based on a normalized statistic that coincides exactly with the GCM statistic; the only difference from the GCM test is that the critical value is given by conditional resampling rather than a normal quantile. Here, T dCRT n ( X, X, Y, Z) is as defined in equation (7).
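Concretely, the resulting decision rule can be sketched as follows, assuming the GCM-normalized observed statistic and its resampled copies have already been computed (names illustrative):

```python
import numpy as np

def ndcrt_reject(T_obs, T_resampled, alpha=0.05):
    """ndCRT decision rule: reject when the GCM-normalized observed statistic
    exceeds the (1 - alpha) quantile of its conditional resampling distribution.
    The GCM test instead compares T_obs to the normal quantile z_{1-alpha}."""
    return T_obs > np.quantile(T_resampled, 1 - alpha)
```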
Theorem 4. Let L n be a sequence of laws such that the nondegeneracy conditions (NDG1) and (NDG2) and the conditional Lyapunov condition (Lyap-1) hold. Then the conditional convergence (48) holds, and therefore the critical value converges (49) and the ndCRT and GCM tests are equivalent (50).

Corollary 5. Let R n be a sequence of regularity conditions such that for any sequence L n ∈ R n , we have the nondegeneracy conditions (NDG1) and (NDG2), the conditional Lyapunov condition (Lyap-1), and the assumptions (SP1) and (SP2). Then, the ndCRT has asymptotic Type-I error control over L 0 n ∩ R n in the sense of the definition (4).
Comparing Corollary 5 to Corollary 3, we see that the ndCRT controls Type-I error under weaker assumptions than those required for the dCRT; in particular, the variance of L n (X|Z) need not be estimated with any degree of accuracy.
B Conditional convergence results
The proofs of our theoretical results rely on the conditional counterparts of several standard convergence theorems. In this section, we state these conditional convergence theorems. We defer their proofs to Appendix G.
First we define a notion of conditional convergence in probability, analogous to our definition of conditional convergence in distribution (Definition 1).
Definition 3. For each n, let W n be a random variable and let F n be a σ-algebra. Then, we say W n converges in probability to a constant c conditionally on F n if W n converges in distribution to the delta mass at c conditionally on F n (recall Definition 1). We denote this convergence by W n | F n p,p −→ c. In symbols, W n | F n p,p −→ c if and only if P[|W n − c| > ε | F n ] p → 0 for every ε > 0.

Now we are ready to state the conditional convergence results.
B.1 Statements
For the sake of all results below, let F n be a sequence of σ-algebras.
Theorem 5 (Conditional Polya's theorem). Let W n be a sequence of random variables.
If W n | F n d,p −→ W for some random variable W with continuous CDF F , then sup t∈R |P[W n ≤ t | F n ] − F (t)| p → 0.

Theorem 6 (Conditional Slutsky's theorem). Let W n be a sequence of random variables. Suppose a n and b n are sequences of random variables such that a n p → 1 and b n p → 0. If W n | F n d,p −→ W for some random variable W with continuous CDF, then a n W n + b n | F n d,p −→ W .

Theorem 7 (Conditional law of large numbers). Let W in be a triangular array of random variables, such that the W in are independent conditionally on F n for each n. If for some δ > 0 the conditional moment condition (54) holds, then (1/n) ∑ n i=1 (W in − E[W in | F n ]) | F n p,p −→ 0. The condition (54) is satisfied when the stronger condition (56) holds.

As a corollary of Theorem 7, if we choose F n = {∅, Ω}, we are able to obtain the following version of the weak law of large numbers for triangular arrays.
Corollary 6 (Unconditional weak law of large numbers). Let W in be a triangular array of random variables, such that the W in are independent for each n. If for some δ > 0 the moment condition (57) holds, then (1/n) ∑ n i=1 (W in − E[W in ]) p → 0. The condition (57) is satisfied when the stronger condition (59) holds.

Theorem 8 (Conditional central limit theorem). Let W in be a triangular array of random variables, such that for each n, the W in are independent conditionally on F n . Define S 2 n ≡ ∑ n i=1 Var[W in | F n ] and assume 0 < Var[W in | F n ] < ∞ almost surely for all i = 1, . . . , n and for all n ∈ N. If for some δ > 0 the conditional Lyapunov condition (61) holds, then (1/S n ) ∑ n i=1 (W in − E[W in | F n ]) | F n d,p −→ N (0, 1).

Lemma 1 (Conditional convergence implies quantile convergence). Let W n be a sequence of random variables and α ∈ (0, 1). If W n | F n d,p −→ W for some random variable W whose CDF is continuous and strictly increasing at Q α [W ], then Q α [W n | F n ] p → Q α [W ].
B.2 Discussion
The above definitions and results on conditional convergence are not particularly surprising, and related results are present in the existing literature. Nevertheless, we have not found any of the above results stated in the literature in exactly this form. Here we discuss the relationships of our definitions and results with existing ones. Notions of conditional convergence in probability and in distribution have been explicitly defined by Nowak and Ziȩba (2005). However, these notions require a single conditioning σ-algebra as well as almost sure convergences of conditional probabilities, whereas in Definitions 1 and 3 we allow the conditioning σ-algebra to change with n and for the conditional probabilities to converge in probability. Our Definition 1 can be viewed as formalizing the notion of conditional convergence in distribution implicitly used by Wang and Janson (2022). Related notions of conditional convergence in distribution allowing for changing conditioning σ-algebra are present implicitly in the works of Dedecker and Merlevede (2002) and Bulinski (2017), though these are based on the convergence of conditional characteristic functions as opposed to conditional cumulative distribution functions.
Turning to the convergence results themselves, we were not able to find conditional Polya's theorem (Theorem 5) in the literature. Conditional Slutsky's theorem (Theorem 6) is a generalization of Wang and Janson (2022, Lemma 5) to the case when a n is not necessarily independent of F n and b n ≠ 0. Versions of the conditional law of large numbers are given by Majerek, Nowak, and Zieba (2005) and Prakasa Rao (2009), but these involve a single conditioning σ-algebra and do not allow for triangular arrays, unlike Theorem 7. Remarkably, we could not find even the unconditional triangular array law of large numbers (Corollary 6) in the literature; existing results either assume a second-moment condition or use truncation (Durrett, 2010, Theorems 2.2.4 and 2.2.6, respectively) instead of a 1 + δ moment condition, or are not applicable to triangular arrays (Shah and Peters, 2020, Lemma 19). As for central limit theorems, Grzenda and Zieba (2008), Prakasa Rao (2009), and Yuan, Wei, and Lei (2014) give non-triangular array versions of the conditional central limit theorem that require a single conditioning σ-algebra, unlike Theorem 8. Versions of the conditional central limit theorem appropriate for varying conditioning σ-algebras and triangular arrays are given by Dedecker and Merlevede (2002) and Bulinski (2017), though these involve different notions of conditional convergence in distribution. Results similar to Theorem 8 for δ = 1 are presented in a recent line of work on sample-splitting-based inference (Kim and Ramdas, 2020; Shekhar, Kim, and Ramdas, 2022b; Shekhar, Kim, and Ramdas, 2022a); these can be proved via the Berry-Esseen theorem. Finally, we note that our result that conditional convergence in distribution implies in-probability quantile convergence (Lemma 1) is a generalization of Wang and Janson (2022, Lemma 3) to general conditioning σ-algebras.
C Proofs for Section 2

C.1 Proofs of main results
Proofs of Theorem 1 and Corollary 1. We prove instead the stronger Theorem 9 and Corollary 7 below.
Theorem 9. Let L n be a sequence of laws and L̂ n be a sequence of estimates. Suppose either of the following two sets of assumptions is satisfied:

1. The nondegeneracy conditions (NDG1) and (NDG2) and the conditional Lyapunov condition (Lyap-1) hold.
2. The assumptions of Theorem 2 hold.
Then, the normalized T dCRT n ( X, X, Y, Z) converges conditionally to a normal distribution (14) and therefore the dCRT critical value C dCRT n (X, Y, Z) converges to z 1−α (15).
Proof. It suffices to prove the conditional convergence in distribution (14), as the convergence of the critical value (15) follows from Lemma 1 because the normal distribution has continuous and strictly increasing CDF. We prove the conditional convergence (14) for each of the two sets of assumptions.
Assumption 1. We proceed by applying the conditional CLT (Theorem 8) with the triangular array W in defined by the resampled residual products and F n ≡ σ(X, Y, Z). To verify the assumptions of the conditional CLT, note first that the W in are independent conditionally on F n by construction and satisfy 0 < Var[W in | F n ] < ∞ by the nondegeneracy assumption (NDG2). Next, recalling definition (11), the conditional Lyapunov ratio converges to zero in probability due to the nondegeneracy condition (NDG1) and the Lyapunov condition (Lyap-1). Hence, the conditional CLT gives the desired conditional convergence (14).

Assumption 2. We begin by decomposing T dCRT n ( X, X, Y, Z) into two terms I n and J n . We claim that J n p → 0. Indeed, this follows from E[J n | X, Y, Z] = 0 and the assumption E n,y p → 0: by Lemma 2 we have J 2 n p → 0, so that J n p → 0, as claimed. Next, we claim that an appropriately rescaled I n converges conditionally to N (0, 1). To this end, we apply the conditional CLT (Theorem 8) with a suitable triangular array W in and F n ≡ σ(X, Y, Z). To verify the assumptions of the conditional CLT, note first that the W in are independent conditionally on F n by construction and satisfy 0 < Var[W in | F n ] < ∞ by the nondegeneracy assumption (NDG2). Next, observe that the conditional Lyapunov ratio factors into two terms. Since the first factor is stochastically bounded (conclusion (98) from Lemma 7), it suffices to show that the second factor converges to zero in probability. To this end, by Lemma 2 it suffices to note that L n ∈ L 0 n and the Lyapunov assumption (Lyap-2) give the required conditional moment convergence. Therefore, we may apply the conditional CLT to obtain the conditional convergence of the rescaled I n . Furthermore, equation (97) from Lemma 7 gives S dCRT n / S dCRT n p → 1, so by conditional Slutsky's theorem (Theorem 6) we conclude the convergence (14), as desired.
Corollary 7. Under the assumptions of Theorem 9, if the non-accumulation condition (17) is satisfied then the dCRT is asymptotically equivalent to the MX(2) F -test.
Proof. The desired equivalence is given by Lemma 3 with T n (X, Y, Z) ≡ ( S dCRT n ) −1 T dCRT n and C n (X, Y, Z) ≡ C dCRT n (X, Y, Z), since φ 1 n and φ 2 n in the lemma statement reduce to dCRT and the MX(2) F -test, respectively. This lemma is applicable because the convergence of the critical value (75) is given by Theorem 9 and the non-accumulation condition (76) is assumed.
C.2 Auxiliary lemmas
Lemma 2. Let W n be a sequence of nonnegative random variables and let F n be a sequence of σ-algebras. If E[W n | F n ] p → 0, then W n p → 0.
Proof. For any ε > 0, we have P[W n ≥ ε] = E[P[W n ≥ ε | F n ]] ≤ E[min(1, ε −1 E[W n | F n ])] → 0, where the inequality uses conditional Markov's inequality and the last convergence is due to the bounded convergence theorem and the assumption E[W n | F n ] p → 0.

Lemma 3 (Asymptotic equivalence of tests). Consider two hypothesis tests φ 1 n and φ 2 n based on the same test statistic T n (X, Y, Z) but different critical values. If the critical value of the first converges in probability to that of the second (75), and the test statistic does not accumulate near the limiting critical value (76), then the two tests are asymptotically equivalent (77).

Proof. Note that for any δ > 0, we have the bound (78) on the probability that the tests disagree. To justify the last step in that bound, suppose without loss of generality that z 1−α ≤ C n . Then note that if z 1−α < T n ≤ C n and C n − z 1−α ≤ δ, then |T n − z 1−α | ≤ δ. Taking a limsup on both sides in equation (78) and using the assumed convergence (75), then letting δ → 0 and using our assumption (76), we arrive at the claimed asymptotic equivalence. This completes the proof.
D Proofs for Section 4 and Appendix A
For the sake of this section, we define
D.1 Proofs of main results
Proof of Theorem 2. To show the asymptotic equivalence of variance estimates (25), it suffices to show three convergence statements. The first of these statements is given by equation (89) in Lemma 6, the second follows from the proof of Theorem 6 in Shah and Peters (2020), and the third is a consequence of assumption (SP2) and conditional independence. Given the asymptotic equivalence of the variance estimates (25), we can show using Lemma 3 that the GCM test is asymptotically equivalent to the MX(2) F -test. Indeed, with the appropriate choices of test statistic and critical values, φ 1 n is the MX(2) F -test and φ 2 n is the GCM test. The asymptotic equivalence of the variance estimates (25) then implies the critical value convergence assumption (75) of Lemma 3. The non-accumulation assumption (76) is a consequence of the fact that, under the assumptions of Theorem 2, we have T GCM n d → N (0, 1) (Shah and Peters, 2020). On the other hand, by Corollary 7 (whose conclusion holds under the assumptions of Theorem 2), we also know that the dCRT is asymptotically equivalent to the MX(2) F -test. Hence, both the GCM test and the dCRT are asymptotically equivalent to the MX(2) F -test, so they are asymptotically equivalent to each other as well.
Proof of Proposition 1. We treat the two cases separately.
Case 1. A direct computation, whose third line follows from the convergences (86) and (83) from Lemma 6, shows the variance consistency property (23).
Case 2. We need to show the corresponding variance consistency statement. By the Cauchy-Schwarz inequality, it suffices to control two averages. Given the assumption that sup n E Ln [|Y − µ n,y (Z)| 2+δ ] < ∞ for some δ > 0, Jensen's inequality gives a uniform moment bound for the squared residuals; therefore, the weak law of large numbers (Corollary 6) controls the first average. Furthermore, for the second average, we know supp(µ n,x (Z i )) ⊆ Conv(supp(L n (X))) for every i and n, and by assumption µ n,x (Z) ∈ Conv(supp(L n (X))) almost surely. Together with the fact that f is Lipschitz (say with Lipschitz constant L) on ∪ ∞ n=1 Conv(supp(L n (X))), it follows that the second average is also controlled. Combining these bounds with equation (82) gives us the desired result.
Proof of Corollary 2. Define the event A n ≡ {φ dCRT n (X, Y, Z) ≠ φ GCM n (X, Y, Z)} on which the two tests disagree.
The conclusion (26) of Theorem 2 implies that P Ln [A n ] → 0, so by the assumed contiguity of L̄ n to L n we also have P L̄n [A n ] → 0. This shows the desired asymptotic equivalence of tests (27). To show the asymptotic equivalence of powers, note that the difference in powers is bounded by P L̄n [A n ] → 0. This completes the proof.
Proof of Corollary 3. Fix ε > 0, and for each n let L * n ∈ L 0 n ∩ R n be such that the Type-I error of the dCRT under L * n is within ε of its supremum over L 0 n ∩ R n . Applying Theorem 2 to the sequence L * n and using the asymptotic Type-I error control of the GCM test, we obtain that the limsup of the supremal Type-I error of the dCRT is at most α + ε. Sending ε → 0 gives the desired conclusion.
Proof of Theorem 4. We have The first factor converges to 1 in probability (Lemma 8), whereas the second factor converges conditionally on X, Y, Z to N (0, 1) (Theorem 1). Putting these two statements together with conditional Slutsky's theorem (Theorem 6), we arrive at the convergence (48).
Since the standard normal has continuous CDF we can use Lemma 1 to conclude the convergence of the critical value (49). The equivalence statement (50) follows from the convergence (49) and Lemma 3 applied with T n = T GCM n and C n = C ndCRT n .
Proof of Corollary 5. The proof of this corollary is directly analogous to that of Corollary 3, so we omit it for the sake of brevity.
D.2 Auxiliary lemmas
Lemma 4 (Conditional Jensen inequality, Davidson, 2003, Theorem 10.18). Let W be a random variable and let φ be a convex function, such that W and φ(W ) are integrable. For any σ-algebra F, we have the inequality Lemma 5. Let W n be a sequence of random variables and F n a sequence of σ-algebras.
Proof. Let ε > 0. Because of the assumed conditional convergence in probability, we have P[|W n − c| > ε | F n ] p → 0. By the bounded convergence theorem, it follows that P[|W n − c| > ε] = E[P[|W n − c| > ε | F n ]] → 0, from which the conclusion follows.
Lemma 6. Consider a sequence of laws L n ∈ L 0 n . Given assumption (SP2), we have the convergences (83), (84), and (85). Given additionally assumption (SP1), we also have the convergences (86) and (87). Given additionally the conditional Lyapunov condition (Lyap-2) and the variance consistency condition (23), we also have the convergence (88). Given additionally the assumption E n,y p → 0, we also have the convergence (89).

Proof. We prove the convergence statements in order.
Proofs of statements (83), (84), (85). These statements are consequences of the weak law of large numbers (Corollary 6). To verify the statement (83), we bound the relevant 1 + δ moment: the inequality in the third line of the bound follows from the conditional Jensen inequality (Lemma 4), the equality in the fourth line follows from the conditional independence assumption, and the inequality in the fifth line follows from the 2 + δ moment assumption (SP2). Hence we have verified the sufficient condition (59) for the WLLN, so the convergence (83) follows. Statements (84) and (85) can be verified with similar arguments.

Proofs of statements (86) and (87). We prove only the first, as the second will follow by symmetry. Given the convergence (84), the statement (86) will follow if we show the convergence (91). The convergence E n,x p → 0 is assumed (SP1). To verify the convergence of the second term, note first that the conditional Hölder inequality (Lemma 12) and the derivation (90) give a bound which, combined with the convergence (84), implies the required control. Therefore, by the Cauchy-Schwarz inequality we obtain the convergence (91), which in turn implies the claimed convergence (86).
Proof of statement (88). A direct decomposition, whose last line uses the variance consistency assumption (23) and the convergence result (83), shows that it suffices to establish the convergence (94). To this end, we apply the conditional WLLN (Theorem 7) with F n = σ(X, Z) and an appropriate triangular array. We check the required 1 + δ moment condition (54): the inequality in the third line of the bound follows from the conditional Jensen inequality (Lemma 4), the equality in the fourth line from the assumed conditional independence, and the convergence in the fifth line from the conditional Lyapunov assumption (Lyap-2). Therefore, the conditional WLLN gives a conditional convergence whose centering term simplifies by the assumed conditional independence. Since conditional convergence in probability implies unconditional convergence in probability (Lemma 5), this verifies the claimed convergence statement (94) and completes the proof of the statement (88).
Proof of statement (89). Given the convergence (88), it suffices to show one remaining convergence. Given the assumption that E n,y p → 0, its proof is analogous to that of statement (91), so we omit it for the sake of brevity. This completes the proof of the lemma.
Lemma 7. Under the assumptions of Theorem 2, the variance equivalence (97) and the stochastic boundedness from below (98) hold.

Proof. The equivalence of variances (97) follows from the convergences (88) and (89) (Lemma 6), as well as an observation combining conditional independence with the assumptions of Theorem 2. The stochastic boundedness from below (98) follows from the latter fact and the convergence (88).
Lemma 8. If the nondegeneracy conditions (NDG1) and (NDG2) and the conditional Lyapunov condition (Lyap-1) hold, then the variance estimates ( S GCM n ) 2 and ( S dCRT n ) 2 are equivalent under resampling (100).

Proof. First we claim that (1/n) ∑ n i=1 W in p → 0. We will use the conditional WLLN (Theorem 7) with F n ≡ σ(X, Y, Z). First note that E[W in | F n ] = 0 by construction. We also check the moment condition (54), where the required convergence is given by the assumption (Lyap-1). Hence (1/n) ∑ n i=1 W in p → 0. Next, we claim that (1/n) ∑ n i=1 W 2 in − ( S dCRT n ) 2 p → 0. We will again use the conditional WLLN with F n = σ(X, Y, Z), observing that E[(1/n) ∑ n i=1 W 2 in | F n ] = ( S dCRT n ) 2 . Next we verify the moment condition (54), where again the required convergence is given by the assumption (Lyap-1). Combining both of these results, we find the corresponding convergence; now, using the nondegeneracy condition (NDG1), we can conclude that (100) holds true, as desired.
E Proofs for Section 5
The goal of this section is to prove our main optimality result (Theorem 3) and Corollary 4. The idea of the proof of Theorem 3 is to reduce the problem to a semiparametric testing problem, and then to use existing semiparametric optimality theory. To this end, we first review the relevant semiparametric theory (Section E.1). Then we leverage this theory to prove Theorem 3 (Section E.2) and verify Corollary 4 (Section E.3). Finally, we carry out deferred semiparametric computations (Section E.4).
E.1 Semiparametric preliminaries
Consider a semiparametric model {L (β,g) : (β, g) ∈ R × H g }, where ν is a measure on R p and H g ⊆ L 2 (ν) is a linear subspace. First, we define a notion of local Type-I error control within the context of the semiparametric model.
Definition 4. Fix a point g 0 ∈ H g , and define θ 0 ≡ (0, g 0 ). A sequence of tests φ n of H 0 : β = 0 has asymptotic Type-I error control at θ 0 relative to the tangent space L̇ θ 0 if, for each submodel t → L (0,g t ) with score in L̇ θ 0 along which β is differentiable, the asymptotic rejection probability along the submodel is at most α. This definition is most similar to that of Choi, Hall, and Schick (1996), except the latter paper does not explicitly use the language of tangent spaces; our definition accommodates Type-I error control over more restricted sets of null distributions reflecting regularity conditions.

Next, we state a version of the classic semiparametric optimality result.

Theorem 10 (Theorem 1 in Choi, Hall, and Schick, 1996; Theorem 25.44 in Van Der Vaart, 1998; Theorem 18.12 in Kosorok, 2008). Consider a semiparametric model {L (β,g) : (β, g) ∈ R × H g } and a point θ 0 ≡ (0, g 0 ) for some g 0 ∈ H g . Suppose β is differentiable at L θ 0 relative to the tangent space L̇ θ 0 with efficient influence function S/I(θ 0 ), where S is the efficient score and I(θ 0 ) > 0 is the efficient information. For any sequence of tests φ n of H 0 : β = 0 with asymptotic Type-I error control at θ 0 relative to the tangent space L̇ θ 0 , and any differentiable submodel L t = L (th β ,g t ) with score in L̇ θ 0 , the asymptotic power bound (106) holds. This bound is achieved by the efficient score test φ opt n ; in other words, the conclusion (107) holds.

This result is like Choi, Hall, and Schick (1996, Theorem 1), except it explicitly deals with tangent spaces. On the other hand, the result is like Van Der Vaart (1998, Theorem 25.44) or Kosorok (2008, Theorem 18.12), except it is written in terms of semiparametric models and assumes Type-I error control in the sense of Definition 4 above. By comparison, Van Der Vaart (1998) and Kosorok (2008) assume Type-I error control at each point (0, g) for g ∈ H g . By inspection of the proof of Van Der Vaart (1998, Theorem 25.44), only local Type-I error control (Definition 4) is actually needed. In this sense, Theorem 10 can be verified using the same proof as that of Van Der Vaart (1998, Theorem 25.44), specializing to the case of semiparametric models.
E.2 Proof of Theorem 3
To apply the semiparametric theory from the previous section, the following lemma (proved in Section E.4) identifies the tangent space, the efficient score, and the efficient information at L θ 0 . These results are not novel or surprising; similar results are stated, for example, by Robins and Rotnitzky (2001) in the cases of linear, logistic, and Poisson regressions. Nevertheless, we state and prove Lemma 9 for a self-contained exposition and for precisely tracking the technical assumptions used.
Lemma 9. In the context of the semiparametric model (29), suppose that the assumptions (108), (109), and (110) hold. Then, for each h = (h β , h g ) ∈ R × H g , the parametric submodel t → L (th β ,g 0 +th g ) is differentiable in quadratic mean at t = 0 with score function (111) and satisfies the local asymptotic normality (112). Moreover, the parameter β is differentiable at L θ 0 relative to the tangent space L̇ θ 0 (113), with efficient score function (114), efficient information (115), and efficient influence function equal to the ratio of the efficient score and the efficient information.
Note that assumptions (109) and (110) of Lemma 9 are the same as assumptions (36) and (37) of Theorem 3 in the main text; they are restated here for the reader's convenience. Using Lemma 9 in conjunction with Theorem 10, we can prove Theorem 3.
Proof of Theorem 3. Let φ n be a level α test of H 0 as defined in equation (32), and fix g 0 ∈ S. By assumption (38), θ n (0, h g ) ∈ R for all h g ∈ H g for all sufficiently large n. Therefore, φ n also has asymptotic Type-I error control at θ 0 ≡ (0, g 0 ) relative to the tangent space L̇ θ 0 (113) in the sense of Definition 4. Indeed, it suffices to take submodels t → L (0,g t ) for g t = g 0 + th g and h g ∈ H g , so that L (0,g 1/√n ) = L θ n (0,h g ) . By Lemma 9 (applicable because its first assumption (108) is implied by assumption (35) of Theorem 3 and its last two assumptions are also assumed by Theorem 3), the assumptions of Theorem 10 are met with efficient score S (114) and efficient information s 2 (θ 0 ) (115), so taking submodels t → L (th β ,g 0 +th g ) we obtain the corresponding power bound. On the other hand, because L θ 0 ∈ R, a chain of three equalities identifies the asymptotic power of the GCM test with this bound: the first equality follows from the proof of Theorem 6 in Shah and Peters (2020), the second follows from the derivations of the efficient score (114) and efficient information (115) in Lemma 9, and the third from equation (106) in Theorem 10. From the local asymptotic normality (112) it follows that the product measures ∏ n i=1 L θ n (h) and ∏ n i=1 L θ 0 are contiguous by Le Cam's first lemma (Van Der Vaart, 1998, Example 6.5). We therefore find a chain of comparisons whose first inequality follows from the conclusion (107) of Theorem 10 and whose second equality follows from equation (117) and Le Cam's first lemma. Therefore, we have shown that for any h ∈ (0, ∞) × H g and any level α conditional independence test φ n , the asymptotic power of φ n against L θ n (h) is at most that of φ GCM n . This shows that φ GCM n is LAUMP(g 0 ) and verifies the claimed asymptotic power (39). Furthermore, since g 0 ∈ S was chosen arbitrarily, it follows that φ GCM n is also LAUMP(S). This completes the proof.
E.3 Proof of Corollary 4
It suffices to verify each of the four assumptions of Theorem 3.
Verification of assumption (35). Note that assumption (SP2) is satisfied because, by construction, X − µ x (Z) and Y − µ y (Z) are independent standard normal random variables for any L ∈ R. Next, let k(x, x′) = ∑ j λ j e j (x) e j (x′) be an eigendecomposition of the Sobolev kernel k with eigenfunctions e j orthonormal with respect to Unif[0, 1]. To verify assumption (SP1) given assumption (SP2), it suffices to prove three statements (Shah and Peters, 2020, Theorem 11 and Remark 12). The first two of these statements follow directly from the construction of R. The third follows from the eigendecomposition of the Sobolev kernel under the uniform measure on [0, 1] (Wainwright, 2019, Example 12.23), with λ j = (2/((2j − 1)π)) 2 .
E.4 Semiparametric computations

Proof of Lemma 9. Differentiability in quadratic mean of parametric submodels. Letting λ y be the base measure of the exponential family f η , we denote λ ≡ L x,z × λ y and dL (th β ,g 0 +th g ) (x, y, z)/dλ the density of the parametric model for (X, Y, Z) with respect to λ. According to Van Der Vaart (1998, Lemma 7.6), this submodel is differentiable in quadratic mean at t = 0 if the map t → (dL (th β ,g 0 +th g ) (x, y, z)/dλ) 1/2 is continuously differentiable at t = 0 for each (x, y, z) ∈ R 1+1+p and the elements of the Fisher information matrix are well-defined and continuous at t = 0. To show continuous differentiability of the square root density, we compute its t-derivative explicitly; the linearity of η t in t and the smoothness of ψ imply the continuous differentiability of this function in t.
Next, consider the information matrix I t . We must show that I t is well-defined and continuous at t = 0. By assumption (109), either ψ′′ ≡ K > 0 and E Lx,z [X 2 ] < ∞, or (X, Z) is compactly supported and H g ⊆ C(R p ). If ψ′′ ≡ K > 0 and E Lx,z [X 2 ] < ∞, then the quantity inside the expectation defining I 0 is integrable, since ‖h g ‖ L 2 (ν) < ∞. Hence, I 0 is well-defined. I t is also continuous at t = 0 because, in this case, it does not depend on t.
On the other hand, suppose (X, Z) is compactly supported and H g ⊆ C(R p ). The quantity inside the expectation defining I 0 is a bounded random variable, because the assumed continuity of h g implies that this quantity is a continuous function of a random vector (X, Z) with compact support. Hence, I 0 is well-defined because it is the expectation of a bounded random variable. To show continuity of I t , note that by the assumed continuity of h g and compact support of (X, Z), we have sup |t|≤1 |η t (X, Z)| ≤ B < ∞ almost surely. Therefore, for |t| ≤ 1, the difference |I t − I 0 | can be bounded using sup |b|≤B |ψ′′′(b)| < ∞ (which holds because the third derivative ψ′′′ is a continuous function) and the fact that Xh β + h g (Z) is almost surely bounded, as before. Therefore, we conclude that |I t − I 0 | → 0 as t → 0, so I t is indeed continuous at 0.
Hence, we conclude by Van Der Vaart (1998, Lemma 7.6) that the parametric submodel t → L (th β ,g 0 +thg) is differentiable in quadratic mean at t = 0 with score function as claimed (111).
Local asymptotic normality of parametric submodels. The local asymptotic normality of parametric submodels (112) follows from the previously established quadratic mean differentiability (Van Der Vaart, 1998, Theorem 7.2).
Efficient score and information. For (h β , h g ) ∈ R × H g , define the score operator A(h β , h g ); the tangent space L̇ θ 0 (113) can then be expressed as the range of A. As discussed in Van Der Vaart (1998, Section 25.4), the efficient score for β is the residual of the score A β after projecting out the nuisance directions, where Π β,g is the orthogonal projection onto the closure A g (H g ) of the nuisance tangent space A g (H g ) in L 2 (L θ 0 ). To compute this projection, we first claim that the extended operator A g : L 2 (ν) → L 2 (L θ 0 ) is continuous and that A * g A g is continuously invertible. To verify continuity of A g , note that for h g ∈ L 2 (ν), the norm of A g h g is controlled by that of h g . Next, we derive the adjoint operator A * g : L 2 (L θ 0 ) → L 2 (ν) by computing, for a random variable W ∈ L 2 (L θ 0 ), the defining inner products; from this we obtain an explicit expression for A * g A g . The assumption (109) implies that ψ′′(g 0 (Z)) ≥ c > 0 almost surely, since either ψ′′ is a nonzero constant or g 0 (Z) belongs to a compact set almost surely and therefore ψ′′(g 0 (Z)) belongs to the range of a positive continuous function applied to a compact set. From this it follows that A * g A g is continuously invertible. Because A g is a continuous linear operator with continuously invertible A * g A g , it follows that A g (L 2 (ν)) is closed and that A g (A * g A g ) −1 A * g is the orthogonal projection onto this space. Next, let us compute the orthogonal projection of the score A β onto A g (L 2 (ν)); a direct calculation identifies the minimizer over W ∈ A g (L 2 (ν)) of the squared distance to A β . Since A g (L 2 (ν)) is closed, it follows that A g (H g ) ⊆ A g (L 2 (ν)). Together with the assumption (110) that E L θ 0 [X | · ] ∈ H g and the definition of the efficient score as a projection onto A g (H g ) (124), we deduce that this projection coincides with the one computed above. Therefore, the efficient score is as claimed (114). From there we find that the efficient information is also as claimed (115).
Differentiability of β and efficient influence function. By Van Der Vaart (1998, Lemma 25.25), the differentiability of β at L θ 0 with respect to the tangent set L̇ θ 0 follows from the quadratic mean differentiability proved above and the assumption that I θ 0 = s 2 (θ 0 ) > 0 (108). The same lemma gives the efficient influence function as the ratio of the efficient score and the efficient information. This completes the proof.
F Additional material related to simulations.
In this section, we present details about existing robustness simulation setups (Section F.1), investigate the trade-off between using lasso and post-lasso (Section F.2) and present the complete simulation results (Section F.3).
F.1 Simulation setup in literature
Here we provide details about the simulation setups considered in Candès et al. (2018), Liu et al. (2022), and Li and Liu (2022).

Liu et al. (2022). In this paper, the authors consider the double high-dimensional linear model. Suppose {Z i , Y i } n i=1 consists of n = 800 iid observations with Z i ∼ N (0, Σ p ), where p = 800 and Σ p is chosen to be AR(1) with autocorrelation 0.5. They then consider Y i = Z i T β + ε i , where β is a vector of dimension p of which only s = 50 coordinates are set to be nonzero, with magnitude ν = 0.175 and random sign. They consider two ways to set the nonzero components of β: spacing these nonzero coefficients equally or choosing them to be the first 50 coefficients of β. The authors consider, rather than testing conditional independence, the false discovery rate (FDR) of variable selection.

Candès et al. (2018). In this paper, the authors consider a somewhat different setting where Y |Z is now a high-dimensional logistic model. The sample size is n = 800 and Z i ∼ N (0, Σ p ), where p = 1500 and Σ p is chosen to be AR(1) with autocorrelation 0.3. After sampling, the design matrix is centered and every column is normalized to have norm 1. Similarly, only a randomly chosen set of s = 50 coordinates of β are set to be nonzero with random signs, whereas the magnitude is set to ν = 20, and FDR control is again considered.
Li and Liu (2022). In this paper, a similar setting is considered: Z is a data matrix with n = 250 rows and p = 500 columns, where each row is sampled from N (0, Σ p ) and Σ p is an AR(1) matrix with autocorrelation 0.5. However, a crucial difference in this paper is the way E(X|Z) and E(Y |Z) are set. As for X, it is generated by a linear predictor X = Zγ + ε, where γ is a p-dimensional vector whose first s = 5 coordinates are nonzero (with randomly chosen signs and magnitude ν = 0.3) and whose remaining coordinates are zero, and ε follows the standard normal distribution. As for Y , we set β = γ and Y = Zβ + ξ, where ξ follows the standard normal distribution and is independent of ε. We can see that X and Y depend on the same subset of coordinates of Z, so that the marginal association between X and Y is much larger than in the first two simulation designs.
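For concreteness, the following sketch generates data in the style of the Li and Liu (2022) design just described; the function name and defaults are illustrative.

```python
import numpy as np

def li_liu_design(n=250, p=500, s=5, rho=0.5, nu=0.3, seed=0):
    """Sketch of the Li and Liu (2022) design: AR(1) covariates and sparse
    linear models for X|Z and Y|Z sharing the same support (beta = gamma)."""
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    Sigma = rho ** np.abs(np.subtract.outer(idx, idx))   # AR(1) covariance
    Z = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    gamma = np.zeros(p)
    gamma[:s] = nu * rng.choice([-1.0, 1.0], size=s)     # random signs, magnitude nu
    beta = gamma.copy()                                  # shared support
    X = Z @ gamma + rng.standard_normal(n)
    Y = Z @ beta + rng.standard_normal(n)
    return X, Y, Z
```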
F.2 Comparing the lasso and post-lasso estimation methods
Compared to the lasso estimation method for µ n,x and µ n,y , the post-lasso estimation method results in estimates with lower bias but higher variance. This impacts the Type-I error and power of the inferential methods in different ways. For Type-I error, it is only important to have good estimates of the conditional means on the shared active set (Katsevich and Ramdas, 2022). Therefore, we examine the mean-squared estimation error for Z T A β A and Z T A γ A in one of our null simulation settings, as well as the mean-squared estimation error for Z T β and Z T γ in one of our alternative simulation settings (Figure 5). We find that the post-lasso does a better job estimating the shared active coefficients in the null setting, so the reduced bias in estimating these shared coefficients outweighs the increased variance. On the other hand, the lasso does a better job estimating the entire set of coefficients, so in this case the increased variance outweighs the reduced bias. This explains why the post-lasso-based methods have improved Type-I error control but worse power than the lasso-based methods.
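The comparison in Figure 5 can be sketched as follows, assuming fitted and true coefficient vectors are available; restricting to a coordinate subset A mimics the null-setting comparison of Z T A β A and Z T A γ A .

```python
import numpy as np

def linear_predictor_mse(Z, coef_hat, coef_true, subset=None):
    """Mean-squared error of a fitted linear predictor against the truth,
    optionally restricted to a coordinate subset (e.g., the shared active set A)."""
    if subset is not None:
        Z = Z[:, subset]
        coef_hat = coef_hat[subset]
        coef_true = coef_true[subset]
    return float(np.mean((Z @ (coef_hat - coef_true)) ** 2))
```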
F.3 Additional simulation results
Figures 6-13 present the complete simulation results across the null and alternative, Gaussian and binary, and supervised and semi-supervised settings.
G Proofs of conditional convergence results
In this section, we present the proofs of the conditional convergence results from Appendix B. We proceed by stating and proving necessary lemmas in Section G.1 and then proving the convergence results themselves in Section G.2.
G.1 Auxiliary lemmas
First we state a few known results for the reader's convenience.
Lemma 10 (Durrett, 2010, Theorem 2.3.2). A sequence of random variables W n converges to a limit W in probability if and only if every subsequence of W n has a further subsequence that converges to W almost surely.
Lemma 11 (Conditional Markov inequality, Davidson, 2003, Theorem 10.17). Let W be a random variable and let F be a σ-algebra. If for some q > 0 we have E[|W | q ] < ∞, then for any ε > 0 we have P(|W | ≥ ε | F) ≤ E[|W | q | F]/ε q almost surely.
Lemma 12 (Conditional Hölder inequality). Let V and W be random variables and let F be a σ-algebra. For any p, q > 1 with 1/p + 1/q = 1, we have E[|V W | | F] ≤ E[|V | p | F] 1/p E[|W | q | F] 1/q almost surely.

Lemma 13 (Lehmann and Romano, 2005, Lemma 11.2.1). Suppose W n d → W . If for given α ∈ (0, 1) the CDF of W is continuous and strictly increasing at Q α [W ], then Q α [W n ] → Q α [W ].

Next, we establish that, without loss of generality, all random variables, σ-algebras, and conditional expectations in a triangular array may be viewed as being defined on a common probability space.
Lemma 14 (Embedding into a single probability space). Consider a sequence of probability spaces {(P n , Ω n , G n ), n ≥ 1}. For each n, let {W i,n } i≥1 be a collection of integrable random variables defined on (P n , Ω n , G n ) and let F n ⊆ G n be a σ-algebra. Then there exists a single probability space ( P, Ω, G), random variables { W i,n } i,n≥1 on ( P, Ω, G), and σ-fields F n ⊆ G for n ≥ 1, such that for each n, the joint distribution of ({W i,n } i≥1 , {E[W i,n | F n ]} i≥1 ) on (P n , Ω n , G n ) coincides with that of ({ W i,n } i≥1 , {E[ W i,n | F n ]} i≥1 ) on ( P, Ω, G).
Proof. Define the Cartesian product Ω̃ ≡ ∏ ∞ n=1 Ω n , the σ-algebra G̃ generated by measurable cylinders ∏ ∞ n=1 A n for A n ∈ G n with A n = Ω n for all but finitely many n, and the infinite product measure P̃ on the measurable space (Ω̃, G̃) (Saeki, 1996). On this probability space, define σ-algebras F̃ n ≡ {Ω 1 × · · · × Ω n−1 × A n × Ω n+1 × · · · : A n ∈ F n } (133) and random variables W̃ in (ω̃) ≡ W in (ω n ) for each i, n ≥ 1 (134). Next, we claim that the random variable ω̃ → E[W in | F n ](ω n ) (135) is in fact a version of the conditional expectation E P̃ [W̃ in | F̃ n ]. Indeed, it suffices to check that for each Ã ≡ Ω 1 × · · · × Ω n−1 × A n × Ω n+1 × · · · ∈ F̃ n , the integrals of the two candidates over Ã agree. From the ω-wise embeddings (134) and (135), it is easy to verify the claimed equality between the joint distributions on (P n , Ω n , G n ) and (P̃, Ω̃, G̃).
Finally, we state a conditional version of the truncated weak law of large numbers.

Lemma 15. For each n, let W in , 1 ≤ i ≤ n, be a set of random variables independent conditionally on F n . Let b n > 0 with b n → ∞, and let W̄ in ≡ W in 1(|W in | ≤ b n ). Suppose that, as n → ∞, we have ∑ n i=1 P[|W in | > b n | F n ] p → 0 and b −2 n ∑ n i=1 E[W̄ 2 in | F n ] p → 0. If we set S n ≡ ∑ n i=1 W in and a n ≡ ∑ n i=1 E[W̄ in | F n ], then (S n − a n )/b n | F n p,p −→ 0.
Proof. Let S̄ n ≡ ∑ n i=1 W̄ in . For any ε > 0, we first write P[|S n − a n |/b n > ε | F n ] ≤ P[S n ≠ S̄ n | F n ] + P[|S̄ n − a n |/b n > ε | F n ].
To estimate the first term, we note that P[S n ≠ S̄ n | F n ] ≤ ∑ n i=1 P[|W in | > b n | F n ] p → 0 by the first assumption. For the second term, since a n = E[S̄ n | F n ], conditional Markov's inequality (Lemma 11) implies that P[|S̄ n − a n |/b n > ε | F n ] ≤ ε −2 E[(S̄ n − a n ) 2 /b 2 n | F n ] ≤ ε −2 b −2 n ∑ n i=1 E[W̄ 2 in | F n ], where the convergence to zero in the last line is given by the second assumption. This completes the proof.
G.2 Proofs of conditional convergence results
Proof of Theorem 5. By assumption, the conditional CDFs P[W n ≤ t | F n ] converge in probability to F (t) for each t ∈ R. Since F is continuous, the pointwise convergence can be upgraded to uniform convergence over t ∈ R by the standard argument behind the classical Polya theorem, applied along almost surely convergent subsequences (Lemma 10). This completes the proof.
Proof of Theorem 6. Fix t ∈ R. Letting F (t′) ≡ P[W ≤ t′] be the CDF of W , Theorem 5 gives |P[W n ≤ (t − b n )/a n | F n ] − F ((t − b n )/a n )| ≤ sup t′∈R |P[W n ≤ t′ | F n ] − F (t′)| p → 0.
By the continuous mapping theorem, we have F ((t − b n )/a n ) p → F (t), so that P[W n ≤ (t − b n )/a n | F n ] p → F (t).
Noting that P[a n ≤ 0 | F n ] is a sequence of nonnegative random variables whose expectations converge to zero, it follows that P[a n ≤ 0 | F n ] p → 0, and so P[a n W n + b n ≤ t | F n ] = P[a n W n + b n ≤ t, a n > 0 | F n ] + o p (1) = P[W n ≤ (t − b n )/a n , a n > 0 | F n ] + o p (1) = P[W n ≤ (t − b n )/a n | F n ] + o p (1) p → F (t), as desired.
Proof of Theorem 7. We apply Lemma 15 with b n = n. We first verify the first assumption of Lemma 15 by conditional Markov's inequality (Lemma 11); the second condition follows by a direct computation. Therefore, Lemma 15 yields the conditional convergence of (S n − a n )/n to zero. By conditional Slutsky's theorem (Theorem 6), it now suffices to show that (a n − ∑ n i=1 E[W in | F n ])/n p → 0. To see this, applying conditional Markov's and Hölder's inequalities (Lemmas 11 and 12, respectively), we obtain a bound whose convergence to zero holds by assumption. Finally, we verify that the condition (56) is sufficient for the conditional WLLN assumption (54) by noting that it implies the required uniform conditional moment bound. This completes the proof.
Proof of Theorem 8. By Lemma 14, we may assume without loss of generality that all random variables are defined on a common probability space (P̃, Ω̃, G̃). Now, let {n k } k≥1 be a subsequence of N. By the conditional Lyapunov assumption (61) and Lemma 10, there is a further subsequence n k j along which the conditional Lyapunov ratios converge to zero almost surely. Hence, applying the usual Lyapunov CLT to the triangular array { W̃ in k j (ω)} i,n k j , we find that, for each t ∈ R, the conditional CDF of the normalized sum converges to Φ(t) for almost every ω ∈ Ω̃ (141). Using Klenke (2017, Theorem 8.38) again, it follows that, for each t ∈ R, the conditional CDFs converge to Φ(t) in probability along the original sequence, which is the claimed conditional convergence in distribution.
"year": 2022,
"sha1": "a91b82f992f64a2b4cd43b9c60e651c42e68ad31",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "658943809cdcbb1eeb747874d0928c90cb1df10b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Overview of infections as an etiologic factor and complication in patients with vasculitides
Vasculitides, a form of inflammatory autoimmune disease targeting the vessels, constitute an entity with significant morbidity and mortality. Infections have long been associated with vasculitides as a result of the immunosuppression that follows treatment induction and maintenance. Several microbial pathogens have been described as etiologic factors of infections in this patient population according to the type of vessels affected. Intense research has also recently been conducted into the interplay between vasculitides and certain viral infections, namely human immunodeficiency virus and severe acute respiratory syndrome coronavirus 2. Of note, a plethora of scientific evidence is available regarding the role of infections as triggering factors for vasculitides. Among the main mechanisms implicated in this direction are the activation of B and T cells, direct endothelial insult, immune complex-mediated vascular injury, and cell-mediated, type IV hypersensitivity vessel damage. Therefore, this review aims to summarize all the available evidence concerning this bidirectional interplay between infections and vasculitides.
Introduction
Vasculitides are a heterogeneous group of autoimmune diseases with significant morbidity and mortality, characterized by fibrinoid necrosis and the presence of inflammatory leukocytes, leading to damage of the vessel wall [1]. Inflammation in this patient population leads to narrowing or even obstruction of the vessels, resulting in reduced blood perfusion and ischemic damage to tissues and organs.
Both the size of the vessels and the organs affected differ between the different types of vasculitis and, therefore, their clinical manifestations vary [1]. Vasculitides may be predominantly classified into large-, medium-, and small-sized vessel vasculitides, as well as into primary or secondary vasculitides, the latter including infectious or paraneoplastic vasculitides. In this review article, we first provide the latest evidence regarding the association between infections and vasculitides in an effort to increase awareness of the infectious etiology of vasculitis.
Search strategy
We performed a literature search in PubMed, Google Scholar, and Web of Science from inception until 15 January 2022, using the following keywords: vasculitis, infection, bacterial infection, viral infection, coronavirus disease-19, COVID-19, SARS-CoV-2, human immunodeficiency virus, HIV. Additionally, articles were extracted from the reference lists of the retrieved articles if they were deemed relevant. Specifically, we focused our attention on papers published within the last 5 years.
Evidence for the relationship between infections and vasculitis
Historically, infectious diseases such as tuberculosis and syphilis (exemplified by syphilitic aortitis) have long been suspected to be triggering factors for many types of vasculitis [2].
Many different types of vasculitides may be associated with infections (Table 1). On the other hand, patients with vasculitis may develop infections, which sometimes mimic a relapse of the disease.
When an infection has been diagnosed in a patient with vasculitis, the subsequent question is whether the vasculitis is related to the infection and how to treat both conditions in the most appropriate and safest way [10,22,23]. So far, a causal relationship between infection and vasculitis has only been established in a few instances, and the pathophysiologic mechanisms remain hypothetical [2].
Type of vasculitis and different infectious agents involved
In small-vessel vasculitides, including ANCA-associated vasculitides (AAV) [24,25] and small-vessel, immune complex-mediated vasculitis [4,5,11], the possible role of infection in triggering de novo disease and relapse has been extensively investigated, and a clear association has been demonstrated. On the other hand, there is only indirect evidence to support an infectious etiology for Kawasaki disease; numerous organisms have been proposed but not proven [26].
In patients with granulomatosis with polyangiitis (GPA), S. aureus was the most commonly isolated organism in cultures from the upper airways and has been associated with an increased risk of relapse [24]. In line with an infectious etiology, a cyclical pattern of GPA occurrence has been described, with a maximum peak every 7.7 years [25].
Moreover, microscopic polyangiitis (MPA) is also associated with infections and environmental factors, and infection is suggested to be a major causal factor in the formation of ANCA. Importantly, glomerulonephritis due to infection and immune complex deposition can readily be differentiated from ANCA-associated vasculitis with a kidney biopsy [6]. Indeed, several case reports have described infective endocarditis caused by E. faecalis, Streptococcus, and Bartonella species in association with ANCA vasculitis and markedly elevated levels of PR3-ANCA [6].
In contrast, in the setting of eosinophilic granulomatosis with polyangiitis (EGPA), the majority of cases are idiopathic and associated with inhaled antigens rather than infections; vaccination and desensitization have also been reported as triggering factors [27]. However, in EGPA the epidemiological data may be confounded by the severity of preceding asthma [28].
In IgA vasculitis (Henoch-Schonlein purpura, HSP), it is suggested that infections can trigger immune complex formation [3]. In children with IgA vasculitis, specific inflammatory factors may have a lasting effect on immune competence [4]. Another point of importance is that the incidence of IgA vasculitis exhibits seasonal variation, with increases in the spring and decreases in the winter, which may be related to the seasonal frequency of upper respiratory tract infections. Interestingly, in the majority of patients with HSP there is a preceding infection of the upper respiratory tract, indicative of a potential microbial etiology of the disease. Among children aged less than 10 years with vasculitis, 99.5% of cases suffer from either IgA vasculitis or Kawasaki disease, both of which exhibit a seasonal pattern paralleling infections [29]. IgA vasculitis has also been associated with influenza infection [11]. Last but not least, leukocytoclastic vasculitis may also be a manifestation of bacterial infection [5]. Etiologically, the appearance of small-vessel cutaneous vasculitis is associated with drug reactions or certain viral or bacterial infections. Among patients with cutaneous vasculitis, beta-lactams, analgesics, and non-steroidal anti-inflammatory agents are the drugs most commonly associated with the disease, which usually has a good clinical outcome [30]. In leukocytoclastic vasculitis, upper respiratory tract infections shortly before the development of vasculitis were more common than in IgA vasculitis [5]. Cutaneous vasculitis has been described in patients with cystic fibrosis due to respiratory infections with Pseudomonas aeruginosa, S. aureus, Haemophilus influenzae, and the Burkholderia cepacia complex [8,31,32]. Furthermore, a cutaneous small-vessel vasculitis may occur in childhood after an infection caused by the atypical bacterial pathogen Mycoplasma pneumoniae [33].
Hepatitis B virus-associated polyarteritis nodosa
Polyarteritis nodosa (PAN), a medium-vessel vasculitis, frequently results from hepatitis B virus infection (HBV-PAN). HBV-PAN has become the most convincingly documented, immunologically mediated, infection-associated systemic vasculitis, and HBV infection is the single most important feature in confirming the diagnosis of PAN [13]. One of the highest reported rates comes from an area endemic for HBV infection, where 30% of PAN cases were HBV positive [34]. Control of the viral infection is mainly based on the use of antiviral drugs, with potent agents currently available. Other rare cases of PAN have been associated with nontuberculous mycobacterial infection [14], and cutaneous polyarteritis nodosa has also been described in the setting of Mycobacterium tuberculosis infection [35].
Hepatitis C virus-associated cryoglobulinemic vasculitis
Cryoglobulinemic vasculitis is the most frequent extrahepatic manifestation in patients infected with the hepatitis C virus (HCV) [10]. The causative relationship between hepatitis C and cryoglobulinemic vasculitis is well established, with HCV infection documented in more than 80% of patients [12]. Its incidence has decreased over the past few decades, although direct-acting antivirals are not always effective at eradicating or preventing the relapse of this complication, despite achieving a sustained viral response [36][37][38].
Human immunodeficiency Virus-associated vasculitides
Human immunodeficiency virus (HIV)-positive patients may also develop vasculitis, mediated either by immunological factors or by direct vascular injury, as well as by the restoration of protective pathogen-specific cellular immune responses owing to the immune reconstitution inflammatory syndrome (IRIS) induced by antiretroviral treatment [39]. Hypersensitivity reactions should be considered as a possible etiology of vasculitis in HIV-infected patients [40]. HIV-associated PAN-like disease and Kawasaki-like syndrome have also been described, with significant differences when compared to the classic manifestations of PAN and Kawasaki disease, respectively [39]. Regarding large-vessel vasculitis, it is more commonly encountered in HIV patients at an advanced disease stage, with multiple aneurysms and occlusions in less typical locations, sharing histologic similarities with Takayasu arteritis [41]. It should be noted that this large-vessel vasculitis occurs independently of CD4 count and viral load, and can also occur in patients after starting antiretroviral therapy, indicating IRIS [42]. Physicians should be aware that vasculitis may have a heterogeneous presentation and can occur in association with HIV infection [15].
Coronavirus disease-19-associated vasculitides
The role of the recent severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), characterized by an inflammatory cytokine storm and incident endotheliitis [43,44], in the pathogenesis of IgA-mediated vasculitis needs to be determined. Recently, the occurrence of Henoch-Schonlein purpura secondary to SARS-CoV-2 infection in children and adults has been reported in several case reports [45]. Since SARS-CoV-2 causes a mucosal infection, the incident IL-6 production might be responsible for the stimulation of poor glycosylation/galactosylation of IgA1, leading to the formation of galactose-deficient IgA1, thus contributing to IgA vasculitis development and progression [46]. Case reports of AAV following coronavirus disease-19 (COVID-19) have also been described, presenting with renal involvement and contrasting prognoses [47]. Moreover, COVID-19 has been linked with the manifestation of other forms of vasculitis, such as Kawasaki disease and leukocytoclastic vasculitis [48][49][50]. SARS-CoV-2 may also lead to a small-vessel vasculitis not involving the main coronaries [51]. Evidence of central nervous system vasculitis following COVID-19 was confirmed via biopsy despite a mild form of SARS-CoV-2 infection [52]. Gracia-Ramos et al. reviewed the available evidence on the incidence of autoimmune diseases in COVID-19 patients [53]. Among patients with incident vasculitis, they noted a male sex predisposition, with the majority of patients exhibiting a mild-to-moderate COVID-19 disease course [53]. Vasculitis onset was reported approximately one month after SARS-CoV-2 infection and completely resolved in the vast majority of cases [53]. In a recently reported positron emission tomography/computed tomography study of recovered adult COVID-19 patients with residual symptoms, a higher target-to-blood pool ratio was observed in the thoracic aorta, right iliac artery, and femoral artery of COVID-19 patients compared to controls, indicating persistent vascular inflammation that could be responsible for vasculitic manifestations in the long-term recovery period [54]. AAV may also be a complication of the long-COVID-19 syndrome, with deleterious outcomes [55].
The multisystem inflammatory syndrome in children (MIS-C) is a newly defined post-viral inflammatory vasculopathy with myocarditis following COVID-19 [56], which can be complicated by giant coronary aneurysm formation [57]. Importantly, this syndrome differs in several aspects (epidemiology, clinical, and immunological findings) from Kawasaki disease [58]. This syndrome can also be found in adult patients (MIS-A), usually four months after the acute phase of COVID-19, presenting with multiorgan dysfunction frequently requiring intensive care unit admission, while a minority of patients may present with Kawasaki disease [59]. It should be stressed, however, that prompt immunomodulatory treatment may lead to the regression of myocardial injury and the reversibility of left ventricular dysfunction [60].
Further research in this field is required, however, to better define the pathophysiologic basis of the incidence of autoimmune diseases in COVID-19 [61]. It should also be noted that vaccination against SARS-CoV-2 may be complicated by cutaneous vasculitic phenomena or even IgA vasculitis and AAV [62][63][64][65][66].
Other virus-associated vasculitides
Human T-cell lymphotropic virus type 1 (HTLV-1) has been shown to infect human endothelial cells, which may explain the CNS vasculitis associated with the progressive spastic paraplegia seen in HTLV-1 infections [67]. Other viruses, such as erythrovirus B19, cytomegalovirus, and varicella-zoster virus, have also been reported to be associated with or implicated in the development of vasculitides [10]. Additionally, norovirus infection can precipitate the worsening of an underlying HSP vasculitis and lead to acute clinical decompensation [68].
Fungal and parasitic disease-associated vasculitides
Fungal diseases can also cause vasculitides. Here, the proposed mechanism involves direct invasion of blood vessels or septic embolization, leading to the well-known feature of the 'mycotic aneurysm' [17]. It is noteworthy that various induction therapies in AAV can also result in opportunistic fungal infections [69].
Mechanisms of infection-associated vasculitis
Studies have provided valuable information on the mechanisms of inflammation and innate immunity (Fig. 1). The mechanisms of infection-related vasculitis involve direct and indirect pathogen invasion. Direct pathogen invasion is detected within the blood vessel walls and is characterized by smooth muscle cell (SMC) accumulation, endothelial dysfunction, production of reactive oxygen species (ROS), cytokines, chemokines, and cellular adhesion molecules, and damage of the vessel wall [71]. It is also suggested that the pathogenesis of vasculitis differs according to the pathogenic organism. The indirect mechanisms of infection-related vasculitis involve immune-mediated injury to the vessel walls through mechanisms such as immune complexes, molecular mimicry, ANCAs, cytokines, superantigens, autoantigen complementarity, and the T cell immune response [71].
Activation of T and B cells
Infections stimulate autoimmune responses through shared epitopes between pathogens and host, upregulated heat shock proteins, and stimulated lymphocytes [22]. Circulating toxins often act as superantigens, which are exotoxins produced by a small group of bacteria, primarily S. aureus and Streptococcus pyogenes [72]. These induce oligoclonal activation of T cells through an antigen-independent mechanism [73].
Superantigens also stimulate B cells to produce autoantibodies that may be involved in the pathogenesis of vasculitis (Fig. 2, Panel a) [22,23]. The 'classic neutrophil pathway' has been described as a cause of necrotizing vasculitis and has been studied and confirmed by several groups. An additional 'T-cell pathway' has also been proposed, mainly causing granulomatous inflammation and promoting necrotizing vasculitis [74]. Infections are the starting point of both pathways and trigger the priming of neutrophils [74].
Immune complex-mediated damage
Immune complex-mediated damage represents a type III hypersensitivity reaction in which the antigens are the infectious agents, or antigenic portions thereof, once the zone of equivalence is reached. The immune complexes precipitate and become trapped within vessel walls, stimulating an immune response (Fig. 2, Panel b) [75]. Type III hypersensitivity produces soluble immune complexes and, under certain conditions, may produce devastating tissue damage. A disease caused by soluble immune complexes may also be due to autoimmunity when the antigen involved is self-derived. Studies of circulating complement levels have shown a correlation with disease activity [75].
Defective handling of antigenic peptides, whether due to a high antigenic load or abnormally functioning immune regulatory mechanisms, in chronic viral infections such as hepatitis C or hepatitis B, might contribute to a state of persistent antigenemia, stimulation of an immune response with subsequent release of antigen-directed antibodies, and the formation of immune complexes. Subsequent complement activation generates the chemotactic factor C5a, which promotes the accumulation of circulating neutrophils and monocytes/macrophages [76]. It has been proposed that factors released by ANCA-activated neutrophils activate the alternative pathway of complement, resulting in enhanced recruitment and activation of neutrophils [77]. The alternative pathway is activated by a variety of cellular surfaces, including those of certain bacteria, viruses, fungi, and parasites. Sequential non-enzymatic protein-protein interactions among the terminal complement proteins, initiated by C5b, lead to the formation of the membrane attack complex (MAC). Because the biological activities of C5a and the MAC play a role in the pathogenesis of endothelial injury, this alternative pathway is now considered pathogenically important, and complement inhibitors (such as anti-C5a agents) may have efficacy in AAV, with C5a potentially being an important therapeutic target [78]. Immunosuppressive therapy with glucocorticoids underpins current treatment for AAV. Agents targeting the complement system, such as eculizumab, a monoclonal antibody that prevents the formation of C5a and C5b-9, and avacopan, a small-molecule inhibitor of C5aR, may represent an alternative to steroids [9,79].
Neutrophils are required for the development of bacteria-associated vasculitides, which can sometimes be attributed to an antigen excess that in turn causes a rise in circulating immune complexes and inflammatory-mediator deposition within the vessel walls, with subsequent complement activation and blood vessel wall damage [75].
Direct invasion of the endothelial cells
Vasculitides following bacterial infections may occur from direct invasion of the endothelial cells as a result of an extension of a localized focus of infection affecting blood vessels or due to blood-borne septic embolization (Fig. 2, Panel c) [80].
Direct necrotizing vessel wall destruction can be induced by Pseudomonas aeruginosa or Legionella pneumophila in the lungs, or by Fusobacterium necrophorum in the internal jugular veins after pharyngitis [80]. In addition, fungi, protozoa, and helminths usually cause vasculitis through direct invasion of the vessel walls. Several bacteria, including Pseudomonas aeruginosa, S. aureus, Burkholderia cepacia, L. monocytogenes, and Haemophilus influenzae, have been incriminated [6]. Direct endothelial cell invasion can also be the main pathogenic process in infections caused by CMV and herpes simplex virus [80]. A study reported that intracerebral varicella-zoster virus (VZV) antigens and DNA could be isolated from the walls of the temporal arteries of a large proportion of patients affected by giant cell arteritis or aortitis [16].
Cell-mediated hypersensitivity type IV
Antigenic exposure may attract lymphocytes which liberate cytokines causing tissue damage and further activation of macrophages and lymphocytes (Fig. 2, Panel d) [81]. The abnormal expression of adhesion molecules and cytokines in vascular endothelium in most vasculitis syndromes as a manifestation of endothelial dysfunction can be triggered by a variety of stimuli, including infectious agents, immune complexes, and anti-endothelial cell antibodies [2].
Infections in immunosuppressed patients with vasculitis
The interest in infection-related vasculitides has been boosted over the last 2 decades by the development of new molecular techniques and the proof of true associations between viral infections and systemic vasculitis [82]. The aggressive treatment of vasculitis incurs a variety of adverse effects, among which infection is the most common, prevailing as a major complication at every stage of vasculitis. In long-term follow-up studies, the incidence of infection was approximately 30% [7]. Most infections occurred during the first six months after vasculitis onset and carried high mortality [7]. Each type of vasculitis is perhaps associated with different infectious agents. Infections in vasculitides can be further divided into categories according to the infective agent: bacteria, fungi, and viruses [7].
Bacteria are the main infectious agents, accounting for 75.9% of all cases [7,22]. Gram-negative bacteria were the leading causative agents, and mixed infections were also notable. The distribution of infection sites primarily includes the lungs and the skin [7]. Opportunistic infections such as Pneumocystis jirovecii pneumonia (PJP) occur early in the course of AAV and are frequently associated with induction immunosuppression [83]. However, PJP is not limited to the first six months following AAV diagnosis, and late-onset infection can occur in the context of augmented immunotherapy, particularly with concurrent lymphopenia [83]. It should be noted that AAV treatment with rituximab has been associated with a high frequency of bacterial infections in a recently reported meta-analysis [84]. At the same time, prophylaxis with trimethoprim-sulfamethoxazole may be associated with a lower risk of bacterial infections in this subgroup of patients [85].
Risk factors for infections in vasculitides
In a recent study, risk factors for infection in patients with vasculitis at the time of diagnosis included age, smoking, kidney dysfunction, low CD4 counts, and cyclophosphamide (CYC) therapy [17]. Other factors associated with infection in patients with vasculitis included dialysis dependence, anemia, and hypoalbuminemia [7]. Interference with cell-mediated immunity, particularly with low CD4 counts, predisposes to infections [86]. Thus, close monitoring of lymphocyte counts could improve the prognosis of patients with vasculitis with regard to infection incidence [7]. It is now apparent that prophylaxis should also be considered for patients receiving intensive immunosuppressive therapy, especially if they are lymphopenic or have a low CD4 count [69].
With regard to factors predisposing to a worse prognosis in patients with systemic vasculitis and COVID-19, important insight was provided by the latest retrospective cohort study (1202 patients), in which 49.8% of patients required hospitalization [87]. Of the hospitalized patients, approximately 22% died. Poor COVID-19 outcomes were seen in older male subjects [87]. Moreover, the presence of an increasing number of comorbidities was related to an adverse COVID-19 prognosis, while at least moderate vasculitic disease activity and the intake of 10 mg/day of prednisolone led to higher odds of COVID-19 severity and mortality [87]. In addition, the presence of comorbid respiratory disease might represent a further risk factor, as shown by Rutherford et al. in a multicenter cohort of 65 patients with systemic vasculitis and COVID-19 [88].
Immunosuppressive agents and infections in vasculitides
The rate of severe infections and infection-related mortality in patients with severe vasculitis treated with immunosuppressive induction therapy is high. A recent meta-analysis including four studies and a total of 888 subjects estimated that, in randomized controlled trials of necrotizing vasculitis, GPA, MPA, EGPA, and PAN treated with CYC combined with high-dose glucocorticoids (GC) (mean cumulative doses of 2.7-50.4 g for CYC and 6-13 g for GC), the risk of severe infection was 2.2%, the risk of any infection was 5.6%, and infection-related death was estimated at 1.7% per year per gram of CYC [23]. Interestingly, age, serum creatinine, and cumulative GC dose were not significantly associated with the rate of severe infections [23]. It has also been suggested that oral CYC showed lower infection rates than intravenous CYC, despite a higher cumulative CYC dose [89].
As far as rituximab therapy is concerned, a considerable percentage of patients with AAV may be susceptible to severe infections, with increasing age, impaired renal function, and lower body mass index acting as risk factors [90,91]. Dosing regimens (four-dose and two-dose) do not differ in the rate of incident infections [92]. In a recently reported Bayesian network meta-analysis of randomized controlled trials, mycophenolate mofetil was likely the safest treatment option with regard to severe infections in patients with AAV, followed by CYC, azathioprine, rituximab, and methotrexate [93]. The differences, however, were marginal, and the only statistically significant comparison involved mycophenolate mofetil and methotrexate [93]. A combination of immunosuppressants (rituximab, CYC, GC) was not associated with a higher rate of severe infections in a cohort of patients with life-threatening AAV [94]. However, high-dose GC added to rituximab may be related to a greater incidence of severe infections, as shown in a randomized clinical trial of patients with AAV but without severe glomerulonephritis or alveolar hemorrhage [95]. At the same time, the low-dose GC regimen was non-inferior with regard to disease remission induction at six months [95].
COVID-19 in patients with vasculitides
The impact of SARS-CoV-2 infection and COVID-19 in patients with preexisting vasculitis also deserves particular mention. The presence of vasculitis was associated with 1.6-fold higher odds of hospitalization, while the use of disease-modifying anti-rheumatic drugs did not affect the need for hospitalization [96]. In the retrospective cohort study by Sattui et al., approximately 50% of patients required hospitalization and 15% died [97]. Among the identified risk factors were increased age, male sex, comorbidity burden, and high-dose glucocorticoids [97]. Similar risk factors were proposed by the results of the COVID-19 Global Rheumatology Alliance physician-reported registry of patients with rheumatic diseases and confirmed or suspected SARS-CoV-2 infection, in which the use of rituximab, sulfasalazine, and other immunosuppressants (azathioprine, cyclophosphamide, ciclosporin, mycophenolate, or tacrolimus) was additionally linked to increased death rates compared with methotrexate monotherapy [98]. Interestingly, the use of anti-tumor necrosis factor (TNF) agents was associated with decreased odds of hospitalization [96]. Additionally, the monoclonal antibody bamlanivimab proved efficacious in a small cohort of patients with COVID-19 and AAV on rituximab therapy [99], a subgroup characterized by an impaired immune response to SARS-CoV-2 vaccination [100]. However, a booster dose may be met with greater success in these patients and is, therefore, warranted [101]. It is also worth mentioning that vaccination against SARS-CoV-2 is well tolerated and generally safe in patients with inflammatory rheumatic diseases, including vasculitides, with low rates of breakthrough infections [102]. However, disease relapses have been reported in patients with AAV and HSP [103,104]. Finally, the impact of the pandemic on patients with vasculitides may be indirect, through the interruption of immunosuppressive treatment, subsequently leading to increased rates of disease flares [105].
Conclusions
The relationship between infection and vasculitis is complex. Infections have long been suspected to be triggering factors for many types of vasculitides, although a causal relationship has been firmly established in only a few instances using an epidemiological approach. Recognition of an infectious origin of vasculitis is of great importance because treatment strategies differ from those applied to non-infectious vasculitides. A high index of suspicion is required because clinical features are not always specific. Thus, an infection should always be excluded based on appropriate examinations and, if confirmed, early and aggressive treatment should be administered. This accurate etiological diagnosis is important since immunosuppressive treatment may lead to further deterioration if an infection is the cause of vasculitis. Moreover, vasculitides and the need for immunosuppression represent a major risk factor for the development of common and opportunistic infections, including the highly prevalent SARS-CoV-2, which may further worsen the prognosis of these patients. Therefore, prompt recognition and treatment of infections are of utmost importance in this patient population to prevent excess morbidity and mortality.
Author contributions Panagiotis Theofilis: Design, drafted the work. Aikaterini Vordoni: Design, drafted the work. Maria Koukoulaki: Design, critically revised the work. Georgios Vlachopanos: Design, critically revised the work. Rigas G Kalaitzidis: Concept, design, drafted and critically revised the work. All authors approved the final manuscript as submitted and agree to be accountable for all aspects of the work.
Funding This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Data availability Not applicable.
Conflict of interest
The authors declare that there is no conflict of interest relevant to the manuscript.
Research involving human participants and/or animals Not applicable.
Informed consent Not applicable. | 2022-02-15T14:49:18.957Z | 2022-02-14T00:00:00.000 | {
"year": 2022,
"sha1": "c1038d4da887008266454d706606d7105ea55969",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00296-022-05100-9.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "c1038d4da887008266454d706606d7105ea55969",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119363122 | pes2o/s2orc | v3-fos-license | Variability in the Milky Way: Contact binaries as diagnostic tools
We used the 50 cm Binocular Network (50BiN) telescope at Delingha Station (Qinghai Province) of Purple Mountain Observatory (Chinese Academy of Sciences) to obtain simultaneous $V$- and $R$-band observations of the old open cluster NGC 188. Our aim was a search for populations of variable stars. We derived light-curve solutions for six W Ursae Majoris (W UMa) eclipsing-binary systems and estimated their orbital parameters. The resulting distance to the W UMas is independent of the physical characteristics of the host cluster. We next determined the current best period--luminosity relations for contact binaries (CBs; scatter $\sigma<0.10$ mag). We conclude that CBs can be used as distance tracers with better than 5\% uncertainty. We apply our new relations to the 102 CBs in the Large Magellanic Cloud, which yields a distance modulus of $(m-M_V)_0=18.41\pm0.20$ mag.
Contact binary systems
Contact binaries (CBs) are binary systems where both stellar components overfill and transfer material through their Roche lobes. They are rather common among the Milky Way's field stellar population. In the solar neighborhood and the Galactic bulge, their population density is approximately 0.2%. In the Galactic disk it is somewhat lower, on average, ∼0.1% (Rucinski 2006). CBs can be classified as early- and late-type systems; late-type CBs are also known as W Ursae Majoris (W UMa) systems. Observational evidence suggests that both binary components have very similar temperatures although their masses differ, a conundrum known as Kuiper's paradox (Kuiper 1941). As a solution to this paradox, Lucy (1968) proposed convective common-envelope evolution as the key underlying physical scenario of CB theory. Our modern view is that CBs have most likely been formed through loss of angular momentum (Stȩpień 2006; Yıldız & Dogan 2013).
W Ursae Majoris systems
The late-type, low-mass W UMa variables are, in essence, 'overcontact' binary systems. Both of their binary components usually rotate rapidly, characterized by periods in the range from P = 0.2 days to P = 1.0 day. One can indeed easily obtain complete, high-quality W UMa light curves in just a few nights of observing time on relatively small telescopes. In this contribution, we present such observations of the six W UMa binary systems that reside in the old open cluster (OC) NGC 188.
As highlighted above, W UMa systems are common in both OCs and the Galactic field. This implies that they have great potential as distance indicators. Indeed, approximately 0.1% of the F-K-type Galactic field dwarfs are late-type CBs (Duerbeck 1984), while in OCs their frequency may be as high as ∼0.4% (Rucinski 1994). If we could establish a reliable (orbital) period-luminosity relation (PLR) for such W UMa systems, they might play a role similar in importance to that of the often-used Cepheids or RR Lyrae variables in measuring the distances to old structures in the Milky Way. We note that while distances to individual W UMa systems cannot be derived to the same level of accuracy as those resulting from Cepheid analysis, the high CB frequency in old stellar populations could potentially allow us to overcome this disadvantage.
NGC 188
NGC 188 is located at a distance of ∼2 kpc. It contains a significant number of late-type CBs. Of these, seven W UMas near the cluster's center were first found by Hoffmeister (1964) and Kaluzny & Shara (1987). Subsequently, Zhang et al. (2002, 2004) surveyed approximately 1 deg² around the center, yielding a CB haul of 16 W UMa systems. Branly et al. (1996) then used the Wilson-Devinney code to calculate light-curve solutions for five of the central W UMas and offered a discussion of the average W UMa distance compared with that to the cluster as a whole. Liu et al. (2011) and Zhu et al. (2014) published orbital solutions for EQ Cep, ER Cep, and V371 Cep, and for EP Cep, ES Cep, and V369 Cep, respectively. We observed NGC 188 over a continuous period of more than 2 months using the 50 cm Binocular Network telescope (50BiN; Deng et al. 2013) at the Delingha Station (Qinghai Province, China) of Purple Mountain Observatory (Chinese Academy of Sciences). We obtained simultaneous time-series light-curve observations based on an unprecedented number of 2700 frames, representing an effective observing time of 44 hr. Details of the observations are included in Chen et al. (2016a). The telescope's field of view, 20 × 20 arcmin², is adequate to cover the cluster's central region.
To select only genuine cluster members, we performed detailed radial-velocity and proper-motion analyses (Chen et al. 2016a). We eventually concluded that of our total sample of 914 stars, 532 stars are probable cluster members. The latter delineate an obvious cluster sequence in the color-magnitude diagram down to V = 18 mag. We used the Dartmouth stellar evolutionary isochrones (Dotter et al. 2008) to ascertain the nature of the cluster members, adopting an age of 6 Gyr and solar metallicity. We derived a distance modulus $(m-M_V)_0 = 11.35 \pm 0.10$ mag and a reddening of $E(V-R) = 0.062 \pm 0.002$ mag. Rucinski (2006) published a simple $M_V = M_V(\log P)$ calibration, i.e., $M_V = (-1.5 \pm 0.8) - (12.0 \pm 2.0)\log P$ ($\sigma = 0.29$ mag), based on his observations of 21 W UMa systems with good Hipparcos parallaxes and All Sky Automated Survey (ASAS) V-band photometry (maximum magnitudes). In Chen et al. (2016a), we established the equivalent relationship using our own (50BiN) V-band data, combined with the independently determined OC distance and the cluster's average extinction.
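To make the quoted calibration concrete, the following minimal Python sketch applies the central values of the Rucinski (2006) relation and converts the resulting extinction-corrected distance modulus into a linear distance. The example period, apparent magnitude, and extinction are hypothetical values chosen for illustration, not measurements from this work.

```python
import math

def rucinski_abs_mag(period_days):
    # Central values of the Rucinski (2006) calibration quoted above:
    # M_V = (-1.5 +/- 0.8) - (12.0 +/- 2.0) * log10(P)
    return -1.5 - 12.0 * math.log10(period_days)

def distance_pc(v_max, period_days, a_v=0.0):
    # Extinction-corrected distance modulus mu0 = (V_max - A_V) - M_V,
    # then d = 10^(mu0/5 + 1) pc.
    mu0 = (v_max - a_v) - rucinski_abs_mag(period_days)
    return 10.0 ** (mu0 / 5.0 + 1.0)

# Hypothetical W UMa system: P = 0.35 d, V_max = 15.2 mag, A_V = 0.2 mag
print(f"{distance_pc(15.2, 0.35, a_v=0.2):.0f} pc")  # ~1600 pc
```

Note that the quoted scatter of σ = 0.29 mag in the calibration translates into a distance uncertainty of roughly 14% for a single system, which is why ensemble averages over several cluster members are preferred.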
Distance determination
We obtained accurate light-curve solutions for six W UMas among the NGC 188 variables. We used these to estimate the CBs' physical parameters, including their mass ratios and the components' relative radii. We subsequently estimated the distance modulus to the W UMa systems as a whole, independently of the cluster distance. W UMas can be used to derive distance moduli with an accuracy often significantly better than 0.2 mag. For this aspect of our distance-modulus analysis, we excluded ER Cep given its low cluster-membership probability; in addition, its nature as a genuine contact system is suspect. For the remaining five W UMas (specifically, EP Cep, EQ Cep, ES Cep, V369 Cep, and V370 Cep) we obtained a combined distance modulus of $(m-M_V)_0 = 11.317 \pm 0.119$ mag. This value is indeed comparable to the result from our isochrone fits, $(m-M_V)_0 = 11.35 \pm 0.10$ mag, and also with previous results from the literature. The accuracy resulting from our new analysis is much better than that from application of the previously well-established empirical parametric approximation.
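For reference, the conversion from distance modulus to linear distance follows the standard relation (not restated in the text): $(m-M_V)_0 = 5\log_{10}(d/{\rm pc}) - 5$, so $(m-M_V)_0 = 11.317$ mag corresponds to $d = 10^{(11.317+5)/5}\ {\rm pc} \approx 1.83$ kpc, consistent with the ∼2 kpc distance quoted above for NGC 188.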
To carefully check our results for the cluster as a whole and the specific applicability of W UMas as distance tracers, we applied the method to the OC Berkeley 39. Based on four of the latter cluster's W UMas, we derived a distance modulus of $(m-M_V)_0 = 13.09 \pm 0.23$ mag. This is also in accordance with literature results. Thus, W UMas as distance tracers indeed have significant advantages for the most poorly studied clusters. Based on our initial analysis, we found that five of our NGC 188 W UMa systems obey the overall W UMa PLR. Armed with the latter, we were hopeful that W UMas could indeed play an important role in measuring distances and mapping Galactic structures on more ambitious scales than done to date.
Period-luminosity relations
Although CBs are of order seven magnitudes fainter than the often-used Cepheid variables, within the same distance range their number is three orders of magnitude larger. Cepheids trace young ($\lesssim$20 Myr-old) features; CBs are instead found in 0.5-10 Gyr-old stellar populations. Although RR Lyrae stars are also members of structures older than 1-2 Gyr, very few of the latter variables have been found in either OCs or the solar neighborhood.
Since Eggen's (1967) seminal work, these considerations and observations have triggered a number of attempts at using CB period-luminosity-color (PLC) relations to determine distances to such old structures. Rucinski & Duerbeck (1997) employed observations of 40 W UMa-type CBs characterized by Hipparcos parallaxes, with an accuracy in the corresponding distance moduli of $\epsilon_M < 0.5$ mag, to improve their PLC relation. Subsequently, Rucinski (2006) derived a luminosity function composed of CBs sourced from the ASAS data. He then explored the viability of a V-band PLR. However, his 'PLR' exhibited only a weak correlation and was affected by large uncertainties and significant scatter. Our updated relations imply typical distance uncertainties of better than 5%, and 95% of CBs have distance uncertainties of less than 10%. The remaining 5% may be CBs associated with poor-quality photometry, variables affected by high or complicated differential extinction, or objects that could have been misidentified as CBs, e.g., semidetached binaries and, for small amplitudes, RR Lyrae and ellipsoidal binaries. Graczyk et al. (2011) published a catalog of 26,121 eclipsing binaries in the Large Magellanic Cloud (LMC), which had been identified based on visual inspection of the Optical Gravitational Lensing Experiment III catalog. Their 1048 type-EC eclipsing binaries are CBs, although they only included CBs with long periods (log P > −0.2, with P in days). To select CBs that can be used as reliable distance tracers, we adopted our period-color selection and imposed period limits of −0.13 < log P < 0.2. Here, the upper limit is at the long-period end of the CB distribution and the lower limit corresponds to the magnitude limit used for detecting LMC CBs.
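The selection and ensemble averaging just described can be sketched in a few lines of Python. The catalogue rows below are invented for demonstration, and because the coefficients of the updated PLRs are not listed in this text, the older Rucinski (2006) calibration quoted earlier is used as a stand-in; a real analysis would also apply the period-color selection and reddening corrections.

```python
import math

# Invented (identifier, period [days], maximum V magnitude) rows
catalog = [
    ("CB-001", 0.95, 17.2),
    ("CB-002", 1.90, 17.5),  # log P = 0.28 -> outside the window, rejected
    ("CB-003", 0.80, 18.1),
]

# Period window adopted in the text: -0.13 < log P < 0.2 (P in days)
selected = [(p, v) for _, p, v in catalog if -0.13 < math.log10(p) < 0.2]

def distance_modulus(v_max, period):
    m_v = -1.5 - 12.0 * math.log10(period)  # stand-in PLR (Rucinski 2006)
    return v_max - m_v                      # extinction ignored in this sketch

moduli = [distance_modulus(v, p) for p, v in selected]
print(f"ensemble (m - M) = {sum(moduli) / len(moduli):.2f} mag")  # ~18.4
```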
Application to the Large Magellanic Cloud
This resulted in a total sample of 102 LMC CBs and a distance modulus of $(m - M_V)_0 = 18.41 \pm 0.20$ mag. This is the first distance to the LMC based on CBs. It is entirely consistent with the current best LMC distance modulus (de Grijs et al. 2014), $(m - M)_0 = 18.49 \pm 0.09$ mag. | 2016-11-25T10:41:14.000Z | 2016-11-25T00:00:00.000 | {
"year": 2016,
"sha1": "cd05871b49361304a565430e4f3d05fddd8dfc9d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "cd05871b49361304a565430e4f3d05fddd8dfc9d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
222256756 | pes2o/s2orc | v3-fos-license | openEHR Archetype Use and Reuse Within Multilingual Clinical Data Sets: Case Study
Background Despite electronic health records being in existence for over 50 years, our ability to exchange health data remains frustratingly limited. Commonly used clinical content standards, and the information models that underpin them, are primarily related to health data exchange, and so are usually document- or message-focused. In contrast, over the past 12 years, the Clinical Models program at openEHR International has gradually established a governed, coordinated, and coherent ecosystem of clinical information models, known as openEHR archetypes. Each archetype is designed as a maximal data set for a universal use-case, intended for reuse across various health data sets, known as openEHR templates. To date, only anecdotal evidence has been available to indicate if the hypothesis of archetype reuse across templates is feasible and scalable. As a response to the COVID-19 pandemic, between February and July 2020, 7 openEHR templates were independently created to represent COVID-19–related data sets for symptom screening, confirmed infection reporting, clinical decision support, and research. Each of the templates prioritized reuse of existing use-case agnostic archetypes found in openEHR International's online Clinical Knowledge Manager tool as much as possible. This study is the first opportunity to investigate archetype reuse within a range of diverse, multilingual openEHR templates. Objective This study aims to investigate the use and reuse of openEHR archetypes across the 7 openEHR templates as an initial investigation about the reuse of information models across data sets used for a variety of clinical purposes. Methods Analysis of both the number of occurrences of archetypes and patterns of occurrence within 7 discrete templates was carried out at the archetype or clinical concept level. Results Across all 7 templates collectively, 203 instances of 58 unique archetypes were used. The most frequently used archetype occurred 24 times across 4 of the 7 templates. Total data points per template ranged from 40 to 179. Archetype instances per template ranged from 10 to 62. Unique archetype occurrences ranged from 10 to 28. Existing archetype reuse of use-case agnostic archetypes ranged from 40% to 90%. Total reuse of use-case agnostic archetypes ranged from 40% to 100%. Conclusions Investigation of the amount of archetype reuse across the 7 openEHR templates in this initial study has demonstrated significant reuse of archetypes, even across unanticipated, novel modeling challenges and multilingual deployments. While the trigger for the development of each of these templates was the COVID-19 pandemic, the templates represented a variety of types of data sets: symptom screening, infection report, clinical decision support for diagnosis and treatment, and secondary use or research. The findings support the openEHR hypothesis that it is possible to create a shared, public library of standards-based, vendor-neutral clinical information models that can be reused across a diverse range of health data sets.
Background
Despite electronic health records being in existence for over 50 years, our ability to exchange health data remains frustratingly limited. Semantic interoperability, as defined by the Healthcare Information and Management Systems Society [1], "provides for common underlying models and codification of the data including the use of data elements with standardised definitions from publicly available value sets and coding vocabularies, providing shared understanding and meaning to the user." We have many long-established terminologies from which we can draw coded value sets, such as SNOMED-CT (SNOMED Clinical Terms) [2], LOINC (Logical Observation Identifiers Names and Codes) [3], or ICNP (International Classification for Nursing Practice) [4]. Commonly used clinical content standards, and the information models that underpin them, are primarily related to health data exchange, and so are usually document- or message-focused [5]. In contrast to this, there have been two primary efforts to develop standards-based atomic clinical information models: the HL7 [6] Clinical Information Modelling Initiative (CIMI) [7] and the openEHR International [8] Clinical Models program [9]. Each of these groups aims to establish an open and shared library of standards-based, vendor-neutral, and use-case agnostic information models representing clinical concepts.
The vision of creating a public library of information models that potentially hold the whole scope, breadth, depth, and range of the health care domain is, at the very least, rather daunting. It is effectively seeking to establish a governed, coordinated, and coherent ecosystem of health data definitions. The goal is to develop each information model once and reuse them across various health data sets, potentially including those for data exchange, health record persistence, data registries, population health, and research. Due to the novelty of this approach, there is only anecdotal evidence so far on its feasibility.
In response to the COVID-19 pandemic, between February and July 2020, 7 openEHR templates were independently created to represent COVID-19-related data sets for symptom screening, confirmed infection reporting, clinical decision support, and research. Each of the templates prioritized reuse of existing use-case agnostic archetypes found in openEHR International's online Clinical Knowledge Manager [10] (CKM) tool as much as possible.
This case study aims to investigate the use and reuse of openEHR archetypes across the 7 openEHR templates as an initial investigation on the reuse of information models across data sets used for a variety of clinical purposes.
The openEHR Approach: Archetypes and Templates
Since 2008, the openEHR Clinical Models program has developed a comprehensive and collaborative methodology to develop clinical information models known as openEHR archetypes [11]. It has gradually developed an extensive library of high-quality, multilingual, and use-case agnostic archetypes that can then be aggregated, constrained, and reused in implementable data sets known as openEHR templates.
An openEHR archetype is a computable specification for a single clinical concept, based on the ISO 13606-2 Archetype interchange specification [12]. The archetypes represent clinical knowledge in a consistent, formal, computable format, independent of any software application or technical implementation. Combined with terminology, they provide a standardized and consistent way to capture, store, display, exchange, aggregate, and analyze health data.
The openEHR approach is unique in that archetype design strategically aims for a notional maximal data set of relevant data elements with a use-case agnostic mindset, to support all possible uses: the universal use case. Achieving a complete maximal data set or inclusion of all use cases is impossible to determine, except perhaps with the benefit of hindsight; however, it is the philosophical avoidance of a minimum data set approach that is critical. Best practice in archetype design aims for each archetype to include all data elements useful to express all attributes of the clinical concept, associated metadata describing the concept, use and misuse, and translations from the original authoring language.
Templates represent a specific data set, comprising one or more archetypes that are constrained to accurately match the data set requirements for a particular clinical use case, health domain, profession, or geographical location. The number of archetypes used in a template reflects the required scope of content and level of detail. Some simple templates representing a laboratory test report may comprise only a single archetype. Theoretically, there is no upper limit to the number of archetypes included in a single template. In practice, a consultation note for a first antenatal visit could comprise data elements from 50 or more archetypes to embrace the diversity and detail of clinical information required for an initial pregnancy assessment.
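The archetype/template relationship described above can be sketched with plain Python data structures. This is purely illustrative (it is not openEHR tooling or ADL syntax, and the class and element names are hypothetical): an archetype carries the maximal element set for one concept, and a template may only select and constrain from that set, never extend it.

```python
from dataclasses import dataclass, field

@dataclass
class Archetype:
    concept: str
    elements: set  # maximal data set: every element relevant to the concept

@dataclass
class Template:
    name: str
    selections: dict = field(default_factory=dict)

    def include(self, archetype, wanted):
        # Two-level modeling rule: a template constrains (subsets) an
        # archetype for one use case; it cannot add elements beyond it.
        if not set(wanted) <= archetype.elements:
            raise ValueError("template cannot add elements beyond the archetype")
        self.selections[archetype.concept] = set(wanted)

pulse = Archetype("Pulse", {"rate", "regularity", "position", "device"})
screening = Template("COVID-19 symptom screening")
screening.include(pulse, {"rate", "regularity"})  # constrained reuse
```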
Principles of template development methodology strongly encourage reusing existing published openEHR archetypes where available, customizing existing archetypes to fit the clinical use case, and developing new archetypes only in situations where no previous archetype exists.
The two-level modeling approach described in the openEHR Archetype formalism [13], defining and standardizing archetypes first, followed by combining and constraining them to create clinical templates, is unique to the openEHR approach. The rigorously governed, published archetypes held in the CKM provide a robust clinical knowledge foundation. Simultaneously, the templates enable modelers to represent diverse and complex real-world clinical information in standardized data sets.
The underlying crowdsourcing approach highlights openness, transparency, and accountability to the openEHR community. The CKM is a critical enabler: an online hub providing a shared library of archetypes and templates; a collaboration portal receiving contributions of models and expertise from the international member community; and a governance tool to manage clinical content publication, language translation, and artifact versioning.
The CKM volunteer community has over 2500 registered users from 103 countries, comprising clinicians, informaticians, software engineers, terminologists, academics, students, and consumers. There are 535 governed archetypes in CKM: 115 of these archetypes have completed peer review and have been published; 26 are currently undergoing a peer-review process of the content; the remainder are candidates for future publication. With an average of 15 data points per archetype, the CKM library equates to more than 8000 data points.
English is the original language of the international CKM, but each archetype can be multilingual. Translation of archetypes is a significant activity by community volunteers. Currently, CKM contains archetypes in 29 languages, with the most common translations into Norwegian Bokmål, German, and Portuguese (Brazil).
Case Study
In response to the COVID-19 pandemic, several implementers within the openEHR community openly shared their COVID-19-related templates in CKM. It started with one vendor and grew organically into a grassroots, community-driven collaboration. Three phases of template development were identified.
Phase 1
In late February, a major Norwegian hospital vendor recognized the need to develop and deploy new software tools in their clinical system to equip clinicians to monitor and report on COVID-19 cases within their hospitals. Within 10 days, two openEHR templates were created and deployed in English and Norwegian Bokmål for use in clinical systems [14,15] in Norway, Slovenia, and the United Kingdom. The templates were uploaded to a CKM COVID-19 public incubator [16] in early March 2020 under a CC-BY license:
• Template 1: Suspected COVID-19 Risk Assessment [17], based on guidance from the World Health Organization (WHO) and public health authorities in the United Kingdom, Slovenia, New Zealand, and Norway.
Due to the rapid deployment priorities imposed by the pandemic, the primary author of templates 1 and 2 developed the templates by reusing as many existing archetypes as possible and opted to create new COVID-19-specific archetypes to represent the remaining data points (Ian McNicoll, MBChB, personal communication). It was a reasonable and pragmatic design decision, with well-understood consequences: effectively a compromise between strategic reuse design principles and speed of modeling.
Phase 2
Soon after templates 1 and 2 were uploaded, CKM editors reviewed the COVID-19-specific archetypes. The editors analyzed the requirements from phase 1 and identified the missing archetype concepts in the CKM library. In direct response to the gap analysis, 11 new content-equivalent, use-case agnostic archetypes were created, covering screening questionnaires, travel history, and infectious disease exposure. In addition, the modelers created 1:1 mappings [23,24] of all data points from template 1 to template 3 and from template 2 to template 4, with a 98% success rate, providing a future migration path should the clinical systems using templates 1 and 2 choose to upgrade to the revised templates and use-case agnostic archetypes.
Phase 3
Three more templates were developed in the weeks and months that followed, and these provided an opportunity to test if the new phase 2 archetypes were fit for purpose and able to represent the requirements for other COVID-19-related data sets.
A Chinese university developed an entirely new template, representing the official Chinese Diagnosis and Treatment Guideline for COVID-19 (7th Edition) [25]. This template was used as the basis for a decision support system implemented within a Chinese hospital system in Wuhan, China, deployed in Chinese [26]. It was uploaded to CKM in early April 2020:
• Template 5: COVID-19 Pneumonia Diagnosis and Treatment (7th Edition) [27]
An Italian health software vendor adapted template 3 for a COVID-19 risk screening application with a nephrology focus. First-time modelers developed it for deployment in Italian within Brotzu Hospital, Cagliari, Sardinia [28], and uploaded it to CKM in late April 2020:
• Template 6: Suspected COVID-19 Risk Assessment Nephrology [29]
The Fast Healthcare Interoperability Resources Implementation Guide (FHIR IG) for the German Corona Consensus Data Set (GECCO) [30], supporting COVID-19 research, was released in July 2020. In parallel, an openEHR template was developed to replicate the GECCO data set. It was uploaded to CKM in late July 2020:
• Template 7: German Corona Consensus Data Set (GECCO) [31]
Methods
Analysis of both the number of occurrences of archetypes and patterns of occurrence within 7 discrete openEHR templates was carried out at the archetype or clinical concept level.
Reuse per Template
The 7 templates were analyzed in terms of:
• Total data points: the total number of data elements or fields represented within a template, which gives an impression of the level of complexity or level of detail within the template;
• Archetype instances: the total number of archetypes used within a template, including reuse;
• Unique archetype occurrences: the total number of archetypes used within a template, excluding any reuse or repetition of archetypes;
• Existing archetype reuse %: the number of use-case agnostic archetypes that existed in CKM before phase 1 and were used in the template, expressed as a percentage of the total number of archetype instances in the template;
• New archetype reuse %: the number of use-case agnostic archetypes that were created during phase 2, used in the template, and uploaded to the CKM pool, expressed as a percentage of the total number of archetype instances in the template;
• COVID-specific archetype use %: the number of COVID-19-specific archetypes created during phase 1 that were used in the template, expressed as a percentage of the total number of archetype instances in the template.
(A minimal sketch of these calculations follows the list.)
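The three percentage measures reduce to simple ratios over the archetype-instance total; the illustrative counts below are hypothetical, not values taken from Table 1.

```python
def reuse_breakdown(existing, new, covid_specific):
    # Each category expressed as a percentage of all archetype instances
    total = existing + new + covid_specific
    return {
        "existing_reuse_pct": 100 * existing / total,
        "new_reuse_pct": 100 * new / total,
        "covid_specific_pct": 100 * covid_specific / total,
    }

# Hypothetical template with 20 archetype instances
print(reuse_breakdown(existing=14, new=4, covid_specific=2))
# {'existing_reuse_pct': 70.0, 'new_reuse_pct': 20.0, 'covid_specific_pct': 10.0}
```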
Reuse per Archetype
The 7 templates were analyzed in terms of:
• Archetype reuse: the number of times an archetype was used across all templates;
• Template count: the number of templates that contained at least one occurrence of an archetype.
(A short counting sketch follows.)
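Both counts can be derived by tallying archetype identifiers per template: total occurrences give the reuse count, while counting each archetype at most once per template gives the template count. A sketch with invented identifiers:

```python
from collections import Counter

# Archetype identifiers used in each template (invented for illustration)
templates = {
    "template_1": ["lab_test_result", "story_history", "lab_test_result"],
    "template_2": ["story_history", "symptom_screening"],
    "template_3": ["lab_test_result", "symptom_screening"],
}

reuse = Counter()           # total occurrences across all templates
template_count = Counter()  # number of templates containing the archetype
for used in templates.values():
    reuse.update(used)
    template_count.update(set(used))

print(reuse["lab_test_result"], template_count["lab_test_result"])  # 3 2
```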
Reuse per Template
The results in Table 1 focus on the overall archetype composition of each template. The three archetype categories in Table 1 are defined as:
• Existing archetypes: archetypes that had been authored before the COVID-19 templates were developed and were available in CKM;
• New archetypes: archetypes authored as use-case agnostic archetypes during phase 2;
• Phase 1 archetypes: COVID-19-specific archetypes authored during phase 1.
Total data points per template ranged from 40 to 179. The template with the largest number of data points was template 5, the Chinese COVID-19 guideline data set. Archetype instances per template ranged from 10 to 62. The template with the largest number of archetype instances was template 5. Unique archetype occurrences ranged from 10 to 28. The template with the largest number of unique archetypes was template 7, the German GECCO data set. Total reuse of use-case agnostic archetypes ranged from 40% to 100%. Existing archetype reuse of use-case agnostic archetypes ranged from 40% to 90%, and new archetype reuse of use-case agnostic archetypes ranged from 0% to 43%.
COVID-19-specific archetypes created for novel clinical concepts were used in templates 1 and 2. New use-case agnostic archetypes replacing the COVID-19-specific archetypes were used within templates 3, 4, 5, 6, and 7. Template 6 used a combination of ungoverned and new archetypes.
The number of data points per template can be considered a proxy for the level of detail in the template. The number of unique archetypes per template reflects the diversity of clinical content in the clinical requirements and may be considered a proxy for the level of complexity in the template.
Reuse per Archetype
The results in Table 2 focus on archetype concept reuse within templates by examining how many times each archetype occurs in each template. For example, the first archetype, "Laboratory test result", is a published archetype and was used twice in template 2, twice in template 4, 15 times in template 5, and 5 times in template 7, for a total of 24 instances of reuse across 4 templates. Only clinical archetype use within the templates was analyzed. There were 48 existing and new use-case agnostic archetypes, of which 26 had completed the peer-review process and had their content published; 1 was undergoing the peer-review process; the remaining 21 were draft candidates.
The "Laboratory test result" archetype was the most reused archetype; it occurred 24 times across 4 templates. Reuse across templates reflects the commonality of content, despite different design intents for the templates. The existing "Story/History" and the new "Symptom/sign screening questionnaire" archetypes were reused across the largest number of templates-within 5 out of the 7 templates each.
Many of the existing archetypes were only used once within the context of these 7 templates. These archetypes had been authored for use-case agnostic use before the development of the COVID-19 data sets, so any reuse within these COVID-19 templates demonstrates reusability across both COVID-19 and non-COVID-19 use cases.
Not all archetypes were used in each template, reflecting the diversity of content requirements across the 7 templates.
Principal Findings
Before February 2020, the focus of openEHR International's CKM was on creating a library of shared archetypes. Templates had been uploaded to CKM, most commonly to demonstrate modeling patterns or to provide exemplars for common types of data sets. Any estimates of reuse of archetypes across templates had been wholly anecdotal, communicated directly by experienced modelers, and ranged from 60% to 90% reuse.
The onset of the COVID-19 pandemic triggered a collaborative openEHR community effort to fast track both archetype and template development, with CKM used as a coordinating hub. The 7 templates uploaded during this time to CKM have provided the first opportunity for a formal analysis of reuse.
Public sharing of the initial templates, templates 1 and 2, included 8 COVID-19-specific archetypes that were necessitated by the novel content combined with rapid implementation deadlines, resulting in relatively low reuse (ie, 40% and 52%, respectively). This was a reasonable and pragmatic modeling decision in the circumstances but diverged from the recommended design philosophy aiming for use-case agnostic archetypes, which usually take more time to develop. Soon after, the CKM editors redesigned the 8 COVID-19-specific archetypes as 11 new use-case agnostic revisions (conceptually equivalent but intentionally designed to allow for broader reuse) and uploaded them as additions to the CKM library. The clinical concepts modeled in the new archetypes included a range of clinical screening question/answer pairs, as well as models for travel history and a risk assessment about exposure to infectious agents. Revised versions of the initial templates, with 100% reuse, were uploaded as templates 3 and 4, along with associated data mappings.
Modeling of questionnaire archetypes had been attempted in the past, but without success [32]. Driven by the new COVID-19 screening requirements, modelers revisited the challenge of questionnaire modeling. Subsequently, they developed a family of screening questionnaire archetypes that were use-case agnostic and based on an underlying shared pattern, covering screening for symptoms and signs, conditions, procedures, management and treatment, medication use, and exposure to agents. They were uploaded to a dedicated project in CKM [33] and made available for broader community reuse.
Phase 3 template development provided an opportunity to test and confirm the reuse potential for the new archetypes in additional clinical data sets.
Template 5 represented the official Chinese Clinical Guidance for COVID-19 Pneumonia Diagnosis and Treatment and was implemented as the foundation for a decision support application. This data set was the most extensive and most detailed in terms of both the number of data points and the number of archetype instances. Laboratory and imaging test results triggered system-generated advice about diagnosis and treatment, resulting in high reuse of the "Laboratory test result," "Laboratory analyte result," and "Specimen" archetypes. This template achieved 100% reuse of 17 unique archetypes drawn from the "existing" and "new" archetype pools. The archetypes were all translated into Chinese and uploaded to a Chinese equivalent of the CKM tool, known as the Healthcare Modelling Collaboration tool [34].
Template 6 represented Suspected COVID-19 Risk Assessment data within a nephrology context. It was based on template 3, including the screening questionnaire archetypes but reuse was reduced to 80% due to the inclusion of the "Fever" and "Social summary" archetypes intended to meet local data requirements.
Template 7 was created after communication with the authors of the FHIR IG for the GECCO. It was developed to investigate if the clinical content of a data set explicitly developed for implementation in FHIR could also be represented using openEHR archetypes. The resulting template contains the largest number of unique archetypes, which strongly suggests that this template was the most complex of the 7 templates. It was developed in 4 hours and resulted in 100% archetype reuse of 28 unique archetypes drawn from the "existing" and "new" archetype pools. Creation of the template first involved investigation and analysis of the FHIR IG to identify the clinical requirements and archetypes required, followed by aggregation and constraint of each archetype to match the precise requirements of the FHIR data set. Terminology value sets were not included in the modeling as it was assumed that the same value sets in the FHIR IG would be applicable in the openEHR template.
While there is considerable diversity in purpose or intent across the 7 templates, the level of archetype reuse is a clear indication of the level of commonality in the clinical concepts that underpin each data set. In addition, even though the focus and level of detail for each template varied, the shared data models underpinning each template ensured consistency of data across all of them.
It is also important to note the maturity of the archetypes used: 70% (26/37) of the "existing" archetypes have completed the content peer-review process and have been published, which may be considered a proxy for data quality. Further investigations of the qualitative and quantitative assessment of archetype quality should be undertaken, firstly to assess each archetype, but also as a proxy for broader data set quality.
In building a template for each new data set, the amount of reuse depends on the similarity of its clinical concepts with archetypes created for inclusion in prior data sets. It is not so much the purpose, level of detail, or complexity of the data set that influences reuse, but rather the commonality of the component clinical concepts that determine which archetypes are required. In practice, each new template developed leverages all prior work that has shaped each existing archetype in the CKM library and, as illustrated by the development of new archetypes for templates 3 and 4, often extends the library collection. The design approach of archetypes as maximal data sets and universal use case for each concept supports the representation of a variety of levels of detail required in data sets. New clinical requirements are added by extending existing archetypes or creating new archetypes for novel concepts. Over time we can expect the number of archetypes to continue to grow and archetype quality enhanced with increasing levels of detail and refinements from the peer-review process. In this context, it is not unreasonable to expect future archetype reuse to remain at similar rates to those demonstrated in this set of COVID-19-related templates.
The 11 new archetypes in phase 2 were strategically designed as draft candidates: aiming for an inclusive, maximal data set about a single clinical concept; intended for a universal use case; discrete in scope, without any overlap with other archetypes.
The current CKM archetype library comprises a range of archetypes used in prior work. Each new use-case agnostic archetype developed during the creation of the COVID-19-related templates and added to CKM will be available for reuse in future modeling efforts. In this way, the CKM library will continue to grow, underpinned by technical and editorial governance processes to ensure coordination and coherence of the archetype library.
In this study, we have observed how the collection of archetypes listed in Table 2, a subset of the CKM library, has provided a focused ecosystem of coordinated and coherent information models to underpin each of the 7 data sets. With the whole CKM comprising 500+ archetypes and 8000+ data elements, it becomes more plausible to imagine the potential for this more extensive library of standardized, coordinated, and coherent information models to be able to represent a broader and more diverse range of data sets. In addition, as in the case of the development of template 7, if reuse of archetypes enables the creation of a template comprising 124 data points within 4 hours, the potential time efficiencies gained through archetype reuse are also worthy of further investigation to determine whether this is more broadly applicable.
Conclusion
Investigation of the amount of archetype reuse across the 7 openEHR templates in this initial study has demonstrated significant reuse of archetypes, even across unanticipated, novel modeling challenges and multilingual deployments. While the trigger for the development of each of these templates was the COVID-19 pandemic, the templates represented a variety of types of data sets-symptom screening, infection report, clinical decision support for diagnosis and treatment, and secondary use or research.
The findings support the openEHR hypothesis that it is possible to create a shared, public library of standards-based, vendor-neutral clinical information models that can be reused across a diverse range of health data sets.
Further investigation is strongly recommended to evaluate:
• The realistic extent and scope of a shared library of information models, including the limitations and barriers. Is it plausible to create a single universal health language, or would it be more feasible to develop libraries for specific purposes?
• Clinical knowledge governance requirements for a library of shared information models;
• The measurement of the quality of individual information models;
• The impact on data set quality if based on a foundation of high-quality information models;
• Time and cost efficiencies of creating data sets from a shared library of information models;
• The impact on health data interoperability if shared information models are used as the basis of data exchange directly between clinical systems, in different contexts, and for various purposes;
• The impact on clinical safety when information models are shared and the need for data transformation or mapping is reduced or eliminated;
• The impact on secondary use of data and research if shared information models are used, supporting safe and accurate aggregation and analysis of health data. | 2020-08-13T10:08:09.681Z | 2020-08-10T00:00:00.000 | {
"year": 2020,
"sha1": "cb34da0da3151ccaa08f68cf1955c7262365fbdb",
"oa_license": "CCBY",
"oa_url": "https://www.jmir.org/2020/11/e23361/PDF",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "22ac78e4eef4b2001bb199775a6584710107623c",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
256092879 | pes2o/s2orc | v3-fos-license | DNA double-strand breaks in human induced pluripotent stem cell reprogramming and long-term in vitro culturing
Human induced pluripotent stem cells (hiPSCs) play roles in both disease modelling and regenerative medicine. It is critical that the genomic integrity of the cells remains intact and that the DNA repair systems are fully functional. In this article, we focused on the detection of DNA double-strand breaks (DSBs) by phosphorylated histone H2AX (known as γH2AX) and p53-binding protein 1 (53BP1) in three distinct lines of hiPSCs, their source cells, and one line of human embryonic stem cells (hESCs). We measured spontaneously occurring DSBs throughout the process of fibroblast reprogramming and during long-term in vitro culturing. To assess the variations in the functionality of the DNA repair system among the samples, the number of DSBs induced by γ-irradiation and the decrease over time was analysed. The foci number was detected by fluorescence microscopy separately for the G1 and S/G2 cell cycle phases. We demonstrated that fibroblasts contained a low number of non-replication-related DSBs, while this number increased after reprogramming into hiPSCs and then decreased again after long-term in vitro passaging. The artificial induction of DSBs revealed that the repair mechanisms function well in the source cells and hiPSCs at low passages, but fail to recognize a substantial proportion of DSBs at high passages. Our observations suggest that cellular reprogramming increases the DSB number but that the repair mechanism functions well. However, after prolonged in vitro culturing of hiPSCs, the repair capacity decreases.
Background
Human induced pluripotent stem cells (hiPSCs) hold great promise for clinical applications because of their potential to differentiate into all three embryonic germ layers [1][2][3]. To use hiPSCs in cell therapy or disease modelling [4], it is fundamental that they possess an intact genome. Much research has been performed in the field of genome maintenance in mouse and human embryonic stem cells (hESCs). However, less is known about the causes of genomic aberrations and the functionality of repair mechanisms in hiPSCs [5]. In general, the genomic instabilities in hiPSCs may be introduced: 1) by pre-existing mutations in source cells; 2) during reprogramming; and 3) during in vitro expansion of the hiPSCs. It has been reported that at least 50% of the single-nucleotide variations in hiPSCs pre-existed in the source cells [6]. The process of reprogramming itself represents a serious risk of mutation acquisition. Primarily, deletions of tumour suppressor genes were observed during reprogramming [7]. Using episomal vectors may lower the risk of reprogramming-associated genome changes [8]. The culturing of pluripotent stem cells (PSCs) in vitro is probably the main cause of the accumulation of genomic instabilities, resulting from the adaptation to culture conditions and clonal selection during passaging. The data reported by Taapken et al. [9] support this idea, indicating that the types and frequency of karyotypic abnormalities are similar between hESCs and hiPSCs. In contrast, the results of Laurent et al. [7] revealed slight differences in the distribution of subchromosomal variations between hESCs and hiPSCs. Interestingly, in their study [7], prolonged in vitro culturing was associated with oncogene duplication.
One of the key techniques for monitoring DNA integrity is the detection of DNA double-strand breaks (DSBs). DSBs are a severe type of DNA damage that may cause irreversible changes in the genomic content of the cell. They are induced by internal factors such as the by-products of cell metabolism or replication stress, or by external factors such as exposure to irradiation or chemical agents [10]. A damaged cell may arrest the cell cycle until the lesions are repaired. If the DNA damage is not successfully repaired, apoptosis is commonly induced to prevent the propagation of chromosomal aberrations. The repair of DSBs is executed either by fast non-homologous end-joining (NHEJ) or more precise homologous recombination (HR). Both mechanisms contribute to DSB repair in a cell cycle-specific manner. NHEJ occurs at all phases of the cell cycle but is primarily responsible for DSB repair in the G1 stage. HR occurs predominantly in the late S and G2 phases [11]. Published data suggest that pluripotent cells exert stronger genomic protection and can repair DNA lesions more efficiently than differentiated somatic cells [5,[12][13][14]. However, a strong DNA protective mechanism may cause the pluripotent cells to be more prone to apoptosis.
Various DNA damage-response proteins have been used as markers of DSBs, including phosphorylated histone H2AX (known as γH2AX) and p53-binding protein 1 (53BP1) [15]. The generation of γH2AX foci at the site of DNA lesions precedes the formation of 53BP1 foci [16][17][18]. Several studies have confirmed that 53BP1 functions exclusively in NHEJ and that it inhibits the 5′ end resection needed for HR [19][20][21]. In contrast, γH2AX influences both NHEJ and HR [10]. The foci formation of γH2AX is dependent on the cell cycle phase [22][23][24]. S/G2 phase cells exhibit more γH2AX foci than do cells in G1 phase because of replication-related DSBs. Cell-cycle dependency has not been observed for 53BP1 [25].
In the present study, we compared the genomic integrity of fibroblasts and pluripotent stem cells. We used fluorescence microscopy to visualize the DSBs recognized by γH2AX and 53BP1 in three hiPSC lines and one hESC line at low or high passage numbers and in one line of source cell fibroblasts. Each hiPSC line is unique and represents a different reprogramming approach, as described in the Methods. We also aimed to detect differences in the ability to recognize DSBs artificially induced by γ-irradiation and their decrease over time. The measurements were conducted with respect to cell cycle stage, and the data were analysed separately for the G1 and G2/S phases. Thus, we aimed to elucidate genomic stability during hiPSC generation and in vitro culturing.
hiPSC generation and cell culture
Human dermal fibroblasts (hDFs; kindly provided by the National Tissue Centre Inc., Brno, Czech Republic) and CD34+ haematopoietic progenitors (blood sample kindly provided by the Department of Internal Medicine, Haematology and Oncology, Masaryk University, and University Hospital Brno, Czech Republic) were used as source cells for the generation of hiPSCs as described in Šimara et al., 2014 [26, 27]. For this study, we used the hiPSC line CBIA-3 (CD34+ haematopoietic progenitors reprogrammed by the Sendai virus; CytoTune™-iPS Reprogramming Kit; Thermo Fisher Scientific, Waltham, MA, USA), the hiPSC line CBIA-5 (fibroblasts reprogrammed by the Sendai virus), and the hiPSC line CBIA-7 (fibroblasts reprogrammed by episomal vectors; Epi5™ Episomal iPSC Reprogramming Kit; Thermo Fisher Scientific). The CCTL-14 hESC line [28] was a kind gift from the Department of Histology and Embryology (Faculty of Medicine, Masaryk University, Brno, Czech Republic). All three hiPSC lines and the hESC line were maintained in the form of colonies on irradiated mouse embryonic fibroblast feeder cells (MEFs; 2.5 × 10⁵ cells per 3.5-cm dish) in DMEM/F12 (1:1) supplemented with 20% knock-out serum replacement, 2 mM L-glutamine, 100 μM non-essential amino acids, 1% penicillin/streptomycin, 0.1 mM 2-mercaptoethanol, and 10 ng/ml basic fibroblast growth factor (bFGF) (all from Thermo Fisher Scientific). The medium was changed daily. Markers of pluripotency (Oct-3/4, Sox2, Nanog, and SSEA4) were detected as described previously [26], and all three hiPSC lines were positive for all of these markers (Additional file 1: Figure S1). A teratoma formation assay confirmed that the hiPSCs could differentiate into all three germ layers (Additional file 2: Figure S2).
γ-Irradiation
Prior to irradiation, the hiPSCs and hESCs were feeder-depleted by culturing on a Geltrex® matrix for 3 days. Essential 8™ medium (Thermo Fisher Scientific) was changed daily. The cells were then irradiated with ionizing radiation (IR; 0.5 Gy/min; ¹³⁷Cs; 1 and 4 Gy) to induce DSBs and fixed in 4% paraformaldehyde at 0.5, 2, and 6 h after irradiation.
The dose of 1 Gy was selected for the experiments based on published results [12,29] and our DSB count measurement after 1 Gy or 4 Gy irradiation (data not shown). The peak value of the foci number was recorded 0.5 h after irradiation; therefore, this time point was selected for the study of the functionality of the repair system.
Immunocytochemistry
Immunocytochemical staining was used to visualize the DSBs and distinguish between the cell cycle stages G1 and S/G2. Four hours before fixation, a nucleoside analogue of thymidine, 5-ethynyl-2′-deoxyuridine (EdU; Thermo Fisher Scientific), was added at a final concentration of 10 μM. The cells were fixed in 4% paraformaldehyde and permeabilized in 0.2% Triton-X (both from Sigma-Aldrich, St. Louis, MO, USA). Overnight incubation with primary antibodies against γH2AX (Biolegend, San Diego, CA, USA) and 53BP1 (Santa Cruz Biotechnology, Dallas, TX, USA) was followed by 1 h of incubation with a secondary antibody conjugated with Alexa 555 (Cell Signaling Technology, Danvers, MA, USA). The samples were stained with the Click-iT® EdU Alexa Fluor® 488 Imaging Kit (Thermo Fisher Scientific) to visualize EdU according to the manufacturer's instructions. Finally, the nuclei were stained with Hoechst dye (BisBenzemide H33342; 1 μg/ml; Sigma-Aldrich).
Fluorescence microscopy and image analysis
Fluorescent signals were detected using the Zeiss Axiovert 200 M system (Carl Zeiss, Oberkochen, Germany). The images were captured using a CoolSNAP HQ2 CCD camera in the wide-field mode (Photometrics, Tucson, AZ) at -30°C. Thirty 3-μm slices were acquired in each field at a resolution of 1392 × 1040 pixels. The pixel size of the images was 124 × 124 nm. Between 500 and 1000 cells were analysed from each microscopic slide. Two slides, γH2AX and 53BP1, were prepared from each sample and each time point. Acquiarium software, developed by our group, was used to acquire and analyse the images [30]. Acquiarium is open source software available for download at our group's official website (http://cbia.fi.muni.cz/projects/acquiarium.html). During the analysis, individual cells in the field of view were first cropped manually. Next, the nucleus of each cell, stained with Hoechst dye, was recognized automatically using the "Find objects (hysteresis thresholding)" plugin. We used the Gaussian filter in the preprocessing phase (with sigma = 1), the threshold was calculated using the two-level Otsu method, and we defined the minimum size of an object to exclude the parts of adjacent cells. This plugin defined the area in which we counted γH2AX or 53BP1 foci. For this purpose, we employed the eMax algorithm described in [31] using the parameters sigma = 1, a spot height threshold of 80, and a maximum spot size of 800, which we set empirically. The EdU signal was quantified based on the total intensity calculated in the nucleus. The threshold for the separation of EdU-negative (G1) and EdU-positive (S/G2) cells was computed in MATLAB (Mathworks) using the Otsu method.
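For readers who wish to approximate this pipeline outside Acquiarium, the core steps (Gaussian pre-filtering, Otsu-based nucleus segmentation, and local-maximum foci detection) can be sketched with scikit-image. This is a simplified stand-in, not the eMax algorithm or the Acquiarium implementation, and all parameter values are placeholders rather than the empirically tuned settings described above.

# Simplified sketch of the foci-counting pipeline (not the actual
# Acquiarium/eMax implementation): segment nuclei from the Hoechst
# channel, then count bright foci inside each nucleus.
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import peak_local_max
from skimage.measure import label, regionprops

def count_foci(hoechst: np.ndarray, foci_channel: np.ndarray,
               min_nucleus_area: int = 500) -> list[int]:
    # 1) Segment nuclei: Gaussian pre-filter (sigma = 1) + Otsu threshold.
    smoothed = gaussian(hoechst, sigma=1, preserve_range=True)
    nuclei = label(smoothed > threshold_otsu(smoothed))
    smoothed_foci = gaussian(foci_channel, sigma=1, preserve_range=True)
    counts = []
    for region in regionprops(nuclei):
        if region.area < min_nucleus_area:  # exclude cell fragments
            continue
        mask = (nuclei == region.label).astype(int)
        # 2) Detect foci as local intensity maxima within the nucleus.
        peaks = peak_local_max(smoothed_foci, labels=mask,
                               threshold_abs=80, min_distance=3)
        counts.append(len(peaks))
    return counts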
Flow cytometry analysis
To assess early apoptosis, cells were stained with Annexin-V fluorescein isothiocyanate (FITC) and 7-amino-actinomycin D (7-AAD; BD Via-Probe) in Annexin-V binding buffer (Miltenyi Biotec, Bergisch Gladbach, Germany). From each sample, approximately 1 × 10⁵ cells were processed. All of the samples were measured using a BD FACS Canto II flow cytometer (Becton-Dickinson). BD FACSDiva (Becton-Dickinson) software was used for the data analysis.
Western blotting
For each time point, approximately 1 × 10⁶ cells were lysed in RIPA buffer. The total protein concentration was assessed using a Pierce™ BCA Protein Assay Kit (Thermo Fisher Scientific). Laemmli buffer was added, and the samples were separated by SDS-PAGE. The proteins were transferred onto polyvinylidene fluoride membranes, and the membranes were blocked with 5% milk in TBS-Tween for 1 h. The membranes were then incubated with a 1:1000 dilution of PARP and GAPDH primary antibodies (both from Cell Signaling Technology) in TBS-Tween with 5% milk at 4°C overnight. Subsequently, the membranes were incubated with the secondary antibody (1:5000 anti-rabbit HRP; Cell Signaling Technology) for 1 h at room temperature, and the blots were developed using the Clarity™ Western ECL Substrate (Bio-Rad Laboratories, Hercules, CA, USA) according to the manufacturer's instructions.
Statistical analysis
Comparison of two data sets was performed using Student's t test. Multi-group assays were analysed by one-way analysis of variance (ANOVA) in conjunction with Tukey's test. A level of P < 0.05 was considered to be statistically significant.
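The same workflow can be reproduced with standard Python libraries. The sketch below is illustrative; the foci counts are invented placeholders, not data from this study.

# Hedged sketch of the statistical workflow: Student's t test for two
# groups, then one-way ANOVA with Tukey's test for multi-group assays.
# All foci counts below are invented placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

ctrl = np.array([1.0, 1.3, 0.9, 1.2, 1.1])
treated = np.array([5.4, 5.9, 6.1, 5.2, 5.8])
t, p = stats.ttest_ind(ctrl, treated)
print(f"t test: t={t:.2f}, p={p:.4f}")  # significant if p < 0.05

groups = {"hDF": ctrl, "low_passage": treated,
          "high_passage": np.array([2.5, 3.1, 2.8, 2.6, 3.0])}
f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f:.2f}, p={p:.4f}")
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels))  # pairwise group comparisons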
Discrimination between the cell cycle phases using EdU increases the accuracy of analysing DNA lesions
The overall goal of our study was to use the numbers of γH2AX and 53BP1 foci as a measure of DNA repair in hiPSCs and in their somatic founders. However, as described above, it has been previously shown that the numbers of γH2AX foci are influenced by the cell cycle phase, with more foci being present in the S/G2 nuclei than in the G1 nuclei [22][23][24]. Obviously, different types of cells (somatic versus pluripotent) as well as cells in different states of culture (early versus late) most likely differ in the lengths of the individual phases of their cell cycle. Therefore, we first determined to what extent the numbers of foci are influenced by cell cycle speed and may thus distort the overall picture obtained by the foci analysis. To do so, we labelled newly synthesized DNA with EdU, visualized the accumulation of γH2AX and 53BP1 proteins on chromatin (foci), and then used an automated analysis. This approach is shown in Fig. 1a. Figure 1b and c exemplify the situation when an EdU-positive cell (nucleus) contains a larger number of γH2AX foci compared to EdU-negative cells (nuclei). Before we counted the numbers of γH2AX and 53BP1 foci, we analysed the EdU signal distribution among the cell samples and separated the EdU-negative (G1 phase) and EdU-positive (S/G2 phase) nuclei. The EdU signal strength in particular cells in each sample was then expressed as a histogram (with a calculated threshold for EdU negativity) for maximum clarity and reproducibility in separating G1 and S/G2 cells. Histograms of all analysed samples are shown in Additional file 3 (Figure S3). Our data revealed a statistically significant difference in cell cycle phase distribution between hDFs, representing a somatic cell type, and all pluripotent stem cells, irrespective of their type and passage number (Fig. 2). The high proportion (87.2%) of EdU-negative cells in the hDF sample suggests that the vast majority of these cells remain in G1 phase. By contrast, only between 49.5 and 57.0% of the pluripotent cells were EdU negative, confirming their high proliferation activity and short cell cycle.
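The separation step itself reduces to a one-dimensional Otsu threshold over the per-nucleus EdU intensities. A minimal sketch with simulated intensities follows; the two intensity modes and their proportions are invented for illustration.

# Minimal sketch of the G1 versus S/G2 separation: Otsu's threshold
# applied to per-nucleus total EdU intensities (values are simulated).
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
edu = np.concatenate([rng.normal(100, 20, 870),     # EdU-negative (G1-like)
                      rng.normal(1000, 200, 130)])  # EdU-positive (S/G2-like)

thr = threshold_otsu(edu)
g1 = edu < thr
print(f"threshold = {thr:.0f}: {g1.mean():.1%} EdU-negative (G1), "
      f"{(~g1).mean():.1%} EdU-positive (S/G2)")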
Taken together, this series of experiments demonstrates the robustness of the approach that we have developed to visually discriminate between G1 and S/G2 cells in situ. Our data show that, using this technique, we can identify changes in cell cycle progression. In the context of cell cycle-associated differences in numbers of γH2AX and 53BP1 foci, this approach is extremely useful and was employed for all the following analyses in this study. The Acquiarium software also represents an extremely valuable tool for complex and automated microscope image analysis.

Reprogramming is accompanied by increased numbers of γH2AX and 53BP1 foci, but this trend is reversed with prolonged in vitro culturing

First, we wanted to determine whether reprogramming to pluripotency influences the numbers of DSBs as revealed by the presence of γH2AX and 53BP1 foci. To do so, we counted these foci in the parent fibroblasts (hDFs) and in cells of three independent hiPSC lines (CBIA-3, CBIA-5, and CBIA-7) at an early stage after their establishment (up to passage 27; further referred to as low-passage hiPSCs). As shown in Fig. 3a and b, the numbers of both types of foci in EdU-negative hiPSCs were higher than those observed in EdU-negative hDFs. Specifically, in hDFs, the average number of foci per cell was only 1.1 for γH2AX and 1.5 for 53BP1. In hiPSCs, however, these numbers ranged from 5.6 to 5.9 for γH2AX and from 2.1 to 4.0 for 53BP1. It needs to be stressed that the CBIA-5 and CBIA-7 cells produced about the same numbers of γH2AX foci (5.69 and 5.89, respectively) despite the different reprogramming method used to generate these cells (Sendai virus versus episomal vectors). The next question was whether prolonged passaging of hiPSCs may further affect the number of DSBs. To obtain this information, we evaluated foci in hiPSCs (all three lines as above) that were cultured for a minimum of 65 passages (further referred to as high-passage cells). In these high-passage hiPSCs, the numbers of foci decreased (compared to low-passage cells), reaching levels of only 2.6 to 4.4 foci per cell for γH2AX and 1.5 to 1.6 foci per cell for 53BP1. As described in the previous section, EdU-positive (S/G2) cells are characterized by many more DSBs than EdU-negative (G1 phase) cells, possibly as a result of replicative stress-associated amplification of DNA lesions during the progression of the cell cycle. Accordingly, the numbers of both γH2AX and 53BP1 foci were increased in EdU-positive cells compared to EdU-negative cells in all cell lines and passage categories (low and high) analysed in this experiment (Fig. 4). Interestingly, this S/G2-linked increase was the highest in hDFs, with the mean numbers of foci per EdU-positive cell being 29.9 for γH2AX and 19.6 for 53BP1, probably reflecting their highly effective "healthy" repair machinery. In the low-passage hiPSCs, the respective mean numbers were slightly lower than in hDFs, 23.0-25.1 for γH2AX and 8.8-15.3 for 53BP1, while in high-passage hiPSCs these numbers dropped down to 11.0-17.9 for γH2AX and 4.7-6.5 for 53BP1. It is also of note that the mean numbers of γH2AX foci were always (in all cell lines as well as passage categories) higher than those of 53BP1 foci (Fig. 4a).
Since we hypothesized that increased DSBs are due to reprogramming rather than being associated with pluripotency, we thought that hESCs would have a rather low basal level of DSBs, possibly about the same as in hDFs. To address this issue, we also analysed a reference line of hESCs (CCTL-14) that we have shown in our previous work to conform in all aspects to hESC standards [28]. Contrary to our expectation, the numbers of DSB-associated foci in new, low-passage hESCs were much closer to those in hiPSCs than in hDFs. This held true for both EdU-negative and EdU-positive cells. Specifically, in EdU-negative cells the numbers averaged 4.5 for γH2AX and 2.7 for 53BP1 foci, and in EdU-positive cells they averaged 12.8 for γH2AX and 10.3 for 53BP1 foci. Clearly, the numbers of foci in hESCs follow the same trend as in hDFs and hiPSCs, being dramatically increased in S/G2 cells compared to the cells in G1 phase. Additionally, as in hDFs and hiPSCs, the numbers of γH2AX foci in hESCs were always higher than those of 53BP1 foci. Surprisingly, however, in hESCs the numbers of γH2AX- and 53BP1-associated foci further increased with their prolonged culturing, which was in strict contrast to what we observed in hiPSCs (see above). Specifically, the numbers of foci per cell in high-passage hESCs were as follows: in the EdU-negative cells, 6.7 for γH2AX and 4.8 for 53BP1 foci; in the EdU-positive cells, 19.3 for γH2AX and 14.3 for 53BP1 foci. The complete set of foci numbers is shown in Table 1.
hiPSCs lose their DNA repair capacity after prolonged maintenance in vitro
The above experiments demonstrated that, under normal culture conditions, hDFs, hiPSCs, and hESCs all have characteristic numbers of γH2AX and 53BP1 foci. However, based on these measurements we cannot resolve whether this is due to differences in the level of "spontaneous" DNA damage, DNA repair capability (recognition of DNA lesions), or both. It is understood that the amount of DSBs in cultured cells caused by γ-irradiation is about the same for the same dose of irradiation, regardless of the type of cell. With this holding true, the numbers of γH2AX and 53BP1 foci detected in cells irradiated by the same dose of γ-rays should then reflect the capability of the DNA repair machinery to recognize DSBs rather than the level of DNA damage. In the following series of experiments, we built on this presumption to study the DNA repair efficiency of human pluripotent stem cells. We irradiated the respective cells (hDFs, hiPSCs, and hESCs) with the same dose of γ-rays (1 Gy) and then determined the number of γH2AX and 53BP1 foci at three different time points after irradiation (0.5, 2, and 6 h). It has previously been shown that the levels of γH2AX and 53BP1 loaded onto chromatin usually reach a maximum at approximately 15-30 min after ionizing irradiation [32][33][34]. Based on these data, we used 30 min as the starting point. Two additional time points (2 and 6 h) then provided information on how DNA repair is sustained. Figure 3c and d show the numbers of γH2AX and 53BP1 foci 30 min after γ-irradiation in EdU-negative cells. As expected for normal cells, hDFs exhibited a dramatic increase to 19.9 and 15.6, respectively. This represents an 18-fold (for γH2AX) and 10-fold (for 53BP1) increase over their numbers in non-irradiated controls, which indeed mirrors a massive initiation of DNA repair pathways (Fig. 4b). Surprisingly, although the numbers of both types of foci were higher in non-irradiated hiPSCs (irrespective of their passage number) than in hDFs (see the previous section), this was not the case for irradiated hiPSCs. Specifically, at 30 min after irradiation, low-passage hiPSCs produced 21.9 to 28.6 γH2AX foci and 17.8 to 20.1 53BP1 foci, thus always exceeding the corresponding numbers observed in hDFs. In contrast, high-passage hiPSCs produced only 8.0 to 12.4 γH2AX foci and 8.0 to 16.9 53BP1 foci. In other words, in hiPSCs, prolonged passaging dramatically diminished the numbers of γH2AX and 53BP1 foci induced by γ-rays to levels below or similar to those observed in hDFs.
As described in the previous section, the numbers of "spontaneous" γH2AX and 53BP1 foci were, for all cell types and categories studied here, always higher in S/G2 (EdU-positive) than in G1 (EdU-negative) cells. In hiPSCs, the fold-increase ranged from three-times in high-passage CBIA-5 cells (53BP1) to 4.6-times in high-passage CBIA-7 cells (γH2AX), and the changes were consistently statistically significant (Fig. 4a). This overall trend was also retained in γ-irradiated cells (at 30 min after irradiation); however, the actual fold-increase (S/G2 versus G1) was much lower, in four cases showing either no changes or statistically insignificant changes (for both γH2AX and 53BP1 foci in high-passage CBIA-3 and CBIA-7 hiPSCs) (Fig. 4a). Specifically, for hiPSCs, the fold-increase ranged from none to 2.1-times (24.6/11.6) for γH2AX foci in high-passage CBIA-5 cells. The percent increase of foci (both γH2AX and 53BP1) after treatment with 1 Gy was higher in the cells in G1 phase than in those in S/G2 phase (Fig. 4b). Taken together, this set of data reveals that a high level of spontaneous DNA damage (replicative stress occurring in S/G2 phase) dramatically distorts the outcome of γ-irradiation as measured by the numbers of γH2AX and 53BP1 foci.

Fig. 4 Cell cycle-dependent changes in the γH2AX and p53-binding protein 1 (53BP1) foci number. A comparison of the foci number between the G1 phase (EdU negative; black column) and S/G2 phase (EdU positive; grey column) was performed in one fibroblast line (human dermal fibroblast; hDF), three hiPSC lines (CBIA-3, CBIA-5, and CBIA-7), and one hESC line (CCTL-14) at high or low passage number. (a) The number of foci per cell is shown in non-irradiated control cells (Ctrl) and irradiated cells (1 Gy; 0.5 h after irradiation). (b) Percentage of increase in γH2AX and 53BP1 foci after 1 Gy treatment. The mean ± SEM is shown. *P < 0.05 by t test.
As detailed above, we have found that irradiated high-passage hiPSCs load their DNA with much lower amounts of γH2AX and 53BP1 than irradiated hDFs and low-passage hiPSCs, suggesting that high-passage hiPSCs are somewhat less proficient at initiating DNA repair. To further examine this issue, we also determined the numbers of γH2AX and 53BP1 foci at 2 and 6 h after γ-irradiation and then analysed the shapes of the resulting time-course curves. The steepness of these curves, shown in Fig. 5a and b, confirms our initial notion. The curves representing hDFs and low-passage hiPSCs decline more steeply, indicating a faster decrease in DSBs, while the curves representing high-passage hiPSCs decline more gradually, indicating slower recovery from DSBs.
We also analysed hESCs in parallel to hiPSCs to determine whether the studied phenomena are associated with de-differentiation to pluripotency or with the pluripotent state itself (Fig. 4a). It is of note that in high-passage hESCs (both EdU-negative and EdU-positive) the numbers of 53BP1 foci (but not of γH2AX foci) even increased compared to those typical for low-passage hESCs. Finally, the steepness of the time-course curves indicated that the decrease in foci numbers in hESCs was more similar to that in hDFs than in hiPSCs (Fig. 5c and d).
To test possible differences in the sensitivity of particular cell types to apoptotic signals, we investigated the cleavage of PARP, an indirect marker of DNA damage, and early apoptosis using the Annexin-V/7-AAD assay. A Western blotting analysis of PARP in hiPSC lines demonstrated that the highest cleavage occurred at 2 h after γ-irradiation (Fig. 6a and b). No difference was observed between low and high passages. The PARP cleavage was later accompanied by a decrease in cell viability at the 6-h time-point in all hiPSC lines with the exception of high-passage CBIA-5 cells. Interestingly, hDFs and hESCs did not display as much sensitivity to 1 Gy γ-irradiation as hiPSCs (Fig. 6c).
Discussion
A DNA molecule is unstable and subject to internal and external harmful factors. Correct functioning of the DNA repair mechanisms is, therefore, essential for the maintenance of genomic integrity. In the field of hiPSC research, only cells with an intact genome can be used for clinical application. Unfortunately, the generation and expansion of hiPSCs in vitro causes genomic instability. In our research, we monitored the amount of DSBs, either spontaneous or irradiation-induced, in three lines of hiPSCs (CBIA-3, CBIA-5, and CBIA-7) at low or high passage numbers, as well as in original source cell fibroblasts (hDFs). One hESC line (CCTL-14) was also examined. Our goal was to shed light on the reaction of the cells to reprogramming and on the prolonged in vitro culturing of pluripotent cells. Here, we focused specifically on the kinetics of DSB generation and repair, cell cycle speed changes, triggering of apoptosis, and cell viability. Special attention was paid to the cell cycle phase of individual cells.
We selected two markers of DSBs-the phosphorylated histone variant γH2AX and its binding partner, the DNA repair mediator 53BP1. Fluorescence microscopy was chosen to detect these proteins because it offers two main advantages over other methods such as Western blotting. First, the expression of 53BP1 does not change; only its localization at DNA damage sites is affected. Second, analysis at the single-cell level assures a higher sensitivity and allows for the discrimination between cells at various cell cycle phases. We employed EdU staining, which discriminates between the G1 and S/G2 phases of the cell cycle. By incorporating a nucleoside analogue of thymidine (EdU) into the DNA during replication, only cells in the S or G2 stage are labelled positive [35]. The images were analysed using Acquiarium software. This software allows us to reliably determine the foci number together with the intensity of the EdU signal for each individual cell and to analyse the data from hundreds of cells per sample on a large scale. Our method for separating EdU-negative (G1 phase) cells from EdU-positive (S/G2 phase) cells is based on plotting the EdU intensity levels in histograms and using the Otsu method to find the threshold. Using this method, we revealed a longer cell cycle in somatic cells compared to pluripotent cells, which is in accordance with previously published data [36][37][38][39] and justifies the use of this method for cell cycle discrimination on the single-cell level. This approach also assures consistency among samples.
While counting the numbers of γH2AX and 53BP1 foci, it is of utmost importance to know exactly which phase of the cell cycle each individual cell is in at the moment. Our data show that cells in S/G2 phase contain more γH2AX and 53BP1 foci than cells in G1 phase and that this difference is more pronounced in non-irradiated controls. These foci emerge due to replication stress during S phase [12,22,23,40]. The replication-related foci play a critical role in the comparison of DSB numbers, especially between different cell types. As long as the cells have a similar cell-cycle length (e.g. pluripotent cells at a similar passage number), the number of DSBs could be compared relatively well without using cell cycle discrimination. However, the following factors influence the cell cycle speed and should be considered: 1) pluripotent cells have been reported to have a shorter cell cycle than differentiated somatic cells [36][37][38][39]; 2) pluripotent cells at high passages may have an increased rate of proliferation [39,41,42]; and 3) irradiation induces cell cycle arrest through checkpoints [43][44][45]. We analysed the foci number separately for the EdU-negative and EdU-positive groups to eliminate the replication stress bias. Our data indicate a higher percent increase of foci upon γ-irradiation of cells in G1 phase, which are not burdened by replication-related foci. The cell-cycle dependency was confirmed for both γH2AX and 53BP1 markers. In general, fewer foci were detected for 53BP1 than for γH2AX, suggesting that 53BP1 is a less sensitive DSB marker with a lower capacity to recognize DSBs than γH2AX. It is known that the HR pathway plays a pivotal role during hiPSC generation [46], and 53BP1 promotes the NHEJ repair pathway while inhibiting the HR pathway [19][20][21]. In contrast, γH2AX influences both the NHEJ and HR pathways, and 53BP1 does not bind to all of the γH2AX foci [10,11,18].

Fig. 6 PARP expression and early apoptosis. Cells were exposed to 1 Gy radiation and analysed by Western blotting and flow cytometry after 0.5, 2, and 6 h. (a) Western blotting of PARP, cleaved PARP (c-PARP), and GAPDH. (b) Densitometric analysis shows the ratio of cleaved PARP to uncleaved PARP. (c) Annexin-V and 7-AAD were measured to assess the level of early apoptosis by flow cytometry. The Annexin-V- and 7-AAD-negative cell population is shown in the graph (± SEM). *P < 0.05 versus control (Ctrl) within each group by one-way ANOVA and Tukey's multiple comparison test. hDF human dermal fibroblast.
Similar research was performed by Suchánková et al. [25], who measured the formation of γH2AX- and 53BP1-positive nuclear bodies in relation to cell cycle stages. They used genetically modified HeLa-Fucci cells, which are able to express RFP-Cdt1 in the G1 phase and GFP-geminin in the S/G2/M phases, to discriminate among the cell cycle phases. They observed a higher number of γH2AX-positive repair foci in the G2 phase than in the G1 phase for both non-irradiated and γ-irradiated (5 Gy) HeLa cells. In contrast to our work, they did not observe such a difference for 53BP1. It is of note that different cell types as well as a different radiation dose (1 Gy) were used in our study compared to Suchankova et al., and it has been previously published that foci formation upon ionizing radiation may vary between cell types and is radiation-dose dependent [32,34,47,48].
In our study, we worked with three unique hiPSC lines that were derived from two independent cell types (dermal fibroblasts and blood cells) and reprogrammed either by the Sendai virus or episomal vectors. This selection of samples enables us to generalize our conclusions for hiPSCs to a certain extent. To avoid bias caused by replication-related foci, we further analysed γH2AX and 53BP1 foci numbers only in the G1 (EdU-negative) subgroup. Our results indicate that spontaneously occurring DSBs are best recognized by both markers in hiPSCs at low passage, while fewer foci were observed in hiPSCs at high passage and in source fibroblasts. A low foci number in fibroblasts, therefore, is increased significantly after reprogramming into hiPSCs (either by Sendai virus or episomal vectors) and then decreases again after long-term in vitro passaging. Our results are consistent with recently reported data showing that H2AX plays a critical role in iPSC generation. Gonzáles et al. reported an increase in γH2AX during the cellular reprogramming of mouse embryonic fibroblasts independent of viral integration [46]. The HR pathway was confirmed to be essential for the error-free repair of DSBs in both genome-integrating and non-integrating reprogramming. The importance of H2AX at the early stage of reprogramming was also suggested by Wu et al. [49]. Our observations markedly resemble the results of copy number variation (CNV) measurements by Hussein et al. [50]. They concluded that most de novo-formed CNVs are present in early-passage hiPSCs, while fewer CNVs are found in late-passage hiPSCs and fibroblasts. There is a strong connection between CNVs and DSBs because deletions in subtelomeric regions have been shown to be highly sensitive to DSBs and are the major cause of chromosomal instability [51,52]. Similar results were published by Laurent et al. [7] who reported a higher frequency of CNVs in pluripotent samples than in non-pluripotent samples and noticed that some of the deletions receded from the population over long-term passaging. Taken together, their data suggest that genomic instability is highest in low-passage hiPSCs, and CNVs vanish during multiple clonal-based passages because most of the mutations do not provide any advantage. However, certain growth-advantageous mutations-for example, defects in genes controlling the cell cycle-may be fixed in the population [5].
The abovementioned findings imply that more DSBs at low passages are detected as a consequence of reprogramming stress and disappear as the hiPSCs adapt to in vitro conditions and are clonally selected. However, the irradiation experiments revealed that the high-passage hiPSCs cannot recognize DSBs as effectively as the hDF source cells, particularly by γH2AX. The lack of ability to recognize the irradiation-induced DSBs was also obvious in all three high-passage hiPSC lines in the time-course study. These data suggest that hiPSCs lose their repair capacity over multiple passages in vitro. Similar results were published by Zhang et al. on one mouse iPSC line [29]. They confirmed the compromised DNA damage repair capacity of iPSCs compared with the respective source cells after γ-irradiation treatment but did not focus on the length of the in vitro culturing of iPSCs. For potential clinical applications, the length of in vitro culturing should be kept as short as possible. However, a certain amount of time in vitro is unavoidable because of the reprogramming process itself, cell expansion, and clearance of the remaining reprogramming factors (viral particles or vectors).
Of note, low- and high-passage hiPSCs displayed similar apoptotic responses upon γ-irradiation. PARP cleavage peaked 2 h after irradiation, which led to an increase in early apoptosis after an additional 4 h in most of the hiPSC lines. These data suggest that, despite differences in DSB recognition, both low- and high-passage hiPSCs exert DNA protection mechanisms that trigger apoptosis in reaction to γ-irradiation. Increased apoptosis was not observed in somatic hDFs or in the hESC line CCTL-14, suggesting their lower sensitivity to DNA damage. | 2023-01-23T14:19:29.610Z | 2017-03-21T00:00:00.000 | {
"year": 2017,
"sha1": "c7ed0af302b185432f5b68ac64b60a98f6100469",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13287-017-0522-5",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "c7ed0af302b185432f5b68ac64b60a98f6100469",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
208086577 | pes2o/s2orc | v3-fos-license | Time-Course Changes in Urine Metabolic Profiles of Rats Following 90-Day Exposure to Propoxur
As a major kind of carbamate insecticide, propoxur plays an important role in agriculture, veterinary medicine, and public health. The acute toxicity of propoxur is mainly neurotoxicity due to the inhibition of cholinesterase. However, little is known regarding the toxicity of propoxur upon long-term exposure at low dose. In this study, Wistar rats were orally administered a low dose (4.25 mg/kg body weight/day) for 90 consecutive days. The urine samples from rats treated with propoxur for 30, 60, and 90 days were collected and analyzed by employing a 1H NMR-based metabolomics approach. We found that propoxur caused significant changes in the urine metabolites, including taurine, creatinine, citrate, succinate, dimethylamine, and trimethylamine-N-oxide. The alterations in these metabolites became more pronounced relative to the control as the exposure time was extended. The present study not only indicated that the changed metabolites could be used as biomarkers of propoxur-induced toxicity but also suggested that the time-course alteration of the urine metabolomic profiles could reflect the progressive development of toxicity following propoxur exposure.
Results
Changes in body weight and organ weight. No obvious toxic signs were observed in the propoxur-treated rats. However, the body weight of the propoxur-treated rats decreased significantly compared with the control after the 90-day subchronic exposure (Fig. 1). The organ weights were measured and the data are shown in Table 1. We found that only the liver weight decreased significantly after the 90-day exposure; no obvious difference in any other organ weight was observed between the propoxur-treated rats and the control rats (Table 1).
Effect of propoxur on clinical biochemistry and histopathology of liver and kidneys.
The changes in the biochemical parameters in the serum of the rats after the 90-day exposure to propoxur are shown in Table 2. We found that serum cholinesterase (ChE) activity was inhibited by 34% in the propoxur-treated rats compared with the vehicle-treated rats (Table 2). Pathological sections of liver and kidney tissues of the rats exposed to propoxur are shown in Fig. 2. Microscopy examination found that liver parenchymal cells from the propoxur-treated rats changed, with prominent swollen hyperchromatic nuclei (Fig. 2B), suggesting that mild liver damage was induced by the 90-day exposure to propoxur. However, no obvious histopathological change associated with kidney damage was observed in the kidney sections (Fig. 2D), which may suggest that propoxur at the dose used in this study did not induce kidney damage.
NMR spectroscopy and pattern recognition analysis of urine. Figure 3 shows the results of the PCA analysis of urine spectrometry from the rats. Each point in the score plot represents an individual sample from each rat. It was found that the dots from control rats at different time points were all in the upper half of the circle and mainly grouped in the upper right area, except all those from the 90-day time point and some from the 60-day time point, which were in the upper left area. However, the dots from the treated rats were all in the lower half of the circle and mainly in the lower right area, while only some dots at the 30-day time point were in the lower left area, except those at the 0-day point, which were all in the upper right area and almost overlapped with those from the control rats at the same time point. From this figure we found that at the very beginning of the experiment, there was no obvious difference in metabolomic profiles between the control rats and the treated rats. However, in the propoxur-treated rats, after longer exposure to propoxur, the metabolomic profiles changed greatly and the dots of the treated groups separated gradually from those of the control groups, although the metabolomic profiles also changed slightly in the control rats, especially after 60 days, which may reflect normal biochemical changes due to the physiological development of the test animals. Nevertheless, the dots from the treated rats at the four sampling time points seldom overlapped (Fig. 3A).

Figure 1. Change of the rat body weight following exposure to propoxur. The adult rats were orally dosed daily with propoxur (4.25 mg/kg body weight/day) for 90 consecutive days and the body weight of each rat was recorded daily. Data are expressed as mean ± SE with n = 5. *P < 0.05, compared with the control at the same time point.
The corresponding loading plot shows which metabolites contributed most to the separation of samples in the score plots. In the PCA loading plot, most dots, which represent the metabolites, are clustered around the zero point, while only a few are distributed apart from it; these determined the difference between the treatment groups (Fig. 3B). The time-course alteration of the metabolomic profiles of the treated rats reflects the development of propoxur toxicity. Thus, the PCA score plot revealed that propoxur time-dependently affected the urinary metabolite profiles. Figure 4 shows the typical NMR spectra of each treatment group. There was no obvious difference in the metabolomic profiles between the treated and control rats at the first sampling time point after the beginning of propoxur administration (the 30th day after the first dosing), except for citrate (2.54, 2.66 ppm), succinate (2.42 ppm), and 2-oxoglutarate (2-OG; 2.46, 3.02 ppm), which were slightly lower in the propoxur group than in the control (Fig. 4a,b). However, at the second sampling time point (the 60th day after the first dosing), alterations of the profiles in the urine were obvious. Propoxur caused an increase in taurine (3.26, 3.42 ppm), creatinine (3.06, 3.04 ppm), trimethylamine-N-oxide (TMAO, 3.27 ppm), and dimethylglycine (DMG, 2.94 ppm), while causing a decrease in citrate (2.54, 2.66 ppm), phenylacetylglutamine (PAG, 3.76 ppm), succinate (2.42 ppm), and 2-OG (2.46, 3.02 ppm) (Fig. 4a,c). All these altered metabolites showed a more drastic change at the third sampling time point (the 90th day) than at the second (the 60th day) (Fig. 4a,d). The insecticide-induced perturbations in the urine metabolites are summarized in Table 3.
Discussion
In this study, we employed an NMR-based metabolomics approach to reveal that the metabolomic profiles changed along with the treatment time of propoxur. We found that propoxur caused changes in the metabolomic profiles of rat urine. Loading plots and spectra revealed metabolomic changes including the elevation of creatinine and taurine. The elevation of urine taurine and creatinine has been found to be a biomarker for liver damage 27,28. The liver histopathological examination showed that rat liver parenchymal cells had prominent swollen hyperchromatic nuclei, while kidney tissues displayed normal structure after the 90-day exposure. It was noticed in our previous studies that liver damage with hepatocellular necrosis and vacuolation occurred after the rats were dosed with propoxur for 28 consecutive days at a higher dose (1/10 LD50), with slight histopathological changes at a lower dose (1/25 LD50) 29,30; however, kidney histopathological examination showed that no pathological change was found in the rats exposed to propoxur for 28 days even at a high dose (1/5 LD50) 29. We found that tricarboxylic acid (TCA) cycle intermediates or TCA-related metabolites, including citrate, succinate, 2-OG, and acetate, decreased in urine after propoxur treatment (Table 3), suggesting a reduced or slower catabolism in hepatocytes 31,32. Thus, propoxur might affect the energy metabolism in the liver.
Moreover, the decrease of citrate in urine is considered a biomarker for renal tubular acidosis caused by renal tubular dysfunction 33, while an increase of dimethylglycine (DMG) in urine usually suggests that the renal papillae are damaged 18. Thus, propoxur could cause damage not only in the liver but also in the kidneys, although no abnormal structure was observed in the histological sections of the kidneys by regular microscopic examination at that time point. In fact, the metabolomic profile analysis of the urine samples not only revealed the toxicity of propoxur in these organs but also provided information on the mechanism of its toxicity.
Our previous study reported that, compared with histopathology and clinical chemistry, 1H NMR-based metabolomics was much more sensitive in detecting organ toxicity 25. Consistently, in this study, the 1H NMR analysis was able to detect changes in the metabolites as biomarkers, and with extended exposure time, more metabolites were found to change and larger alterations in the levels of the metabolites were observed, which could predict the toxicities (e.g. hepatotoxicity and nephrotoxicity) of the insecticide at different stages of the exposure.
In sum, the present study revealed that propoxur caused prominent changes in the urine metabolome of rats. In addition, we identified urine biomarkers of propoxur exposure. This time course-based urine metabolomic profiling is a useful non-invasive in vivo assay for the toxicity of pesticides following long-term exposure.
Animals and treatment.
Male Wistar rats with average weight of 200 ± 20 g were purchased from Weitong Lihua Laboratory Animal Technology Company (Beijing, China) and were housed individually in stainless steel cages. Animals were acclimatized for at least 1 week before the commencement of the study. During the experiment, the environmentally controlled conditions (room temperature: at 22 ± 2 °C and 50-60% humidity) and a light/dark cycle of 12 h were maintained. Animals had free access to water and commercially prepared laboratory animal diet. All animal procedures were performed strictly in accordance with current China legislation and approved by the Institute of Zoology Animal and Medical Ethics Committee.
Ten rats were randomly divided into 2 groups with 5 animals in each group. Previous studies showed that the acute oral half-lethal dose (LD50) of propoxur was 85.1 mg/kg for male rats 34. In this study, we chose 1/20 LD50 (4.25 mg/kg body weight/day) as the dose for the pesticide treatment group.
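As a quick check of the arithmetic behind the chosen dose (the stated value reflects rounding):

$$\frac{\mathrm{LD}_{50}}{20} = \frac{85.1\ \mathrm{mg/kg}}{20} = 4.255\ \mathrm{mg/kg} \approx 4.25\ \mathrm{mg/kg\ body\ weight/day}.$$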
The pesticide was dissolved in corn oil (1 ml/kg body weight for rats) and administered via oral gavage. The rats in the control group received an equivalent volume of corn oil. Rats were given the pesticide daily for 90 consecutive days. The body weight of each rat was recorded daily throughout the experimental period. Behavior and survival were monitored daily after dosing.
Sample preparation. During the experiment, at the time points of 30 days, 60 days, and 90 days from the first administration, 24-hour urine samples of each rat were collected into an ice-cold vessel containing 1% sodium azide (0.1 ml) to prevent bacterial contamination. The urine samples were then centrifuged at 3000 × g for 10 min to remove any particulate matter, after which an aliquot was taken from each sample and stored at −80 °C until NMR analysis.
Twenty-four hours after the final administration, all rats were anesthetized and decapitated. Blood samples were collected and then centrifuged. The serum was collected for clinical biochemistry assays. Organ weights were immediately measured.
Histopathology. Liver and kidney samples were fixed in 10% formalin overnight. The tissues were dehydrated in a graded series of alcohol, cleared in xylene, and then embedded in paraffin. Sections were cut at 4-µm thickness, then rehydrated and stained with hematoxylin and eosin. The slides were observed with a microscope (Olympus, Tokyo, Japan).
Serum clinical biochemistry. Biochemical parameters of serum samples were analyzed on an
Autolab-PM4000 Automatic Analyzer (AMS Co., Rome, Italy). The values of the parameters were expressed as mean ± SD. Each 1H NMR spectrum was automatically phased, baseline corrected, and segmented over the spectral region from 0 to 10 parts per million (ppm), with each spectrum divided into bins of δ 0.04 ppm. For the urine data, the water and urea region (δ 4.2-6.0) was removed prior to PR analysis, and the spectra were normalized to the total urine volume collected at each time point to correct for variation in concentration.
Pattern recognition of the 1H NMR spectra. The reduced data described above were scaled to the total integral of each spectrum before PR analysis. Statistical analysis was processed with the soft independent modeling of class analogy (SIMCA) software package (Version 11.5, Umetrics AB, Umeå, Sweden). The unsupervised pattern recognition method principal component analysis (PCA) was performed to examine the dominant intrinsic variation in the dataset 35,36.
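To illustrate the binning, water-region exclusion, normalization, and score computation described above, the following sketch runs the same steps on synthetic spectra. SIMCA itself is proprietary, so scikit-learn's PCA is used here as a stand-in, and all spectral values are simulated rather than measured.

# Sketch of the 1H NMR preprocessing and PCA (synthetic spectra;
# scikit-learn's PCA stands in for the proprietary SIMCA package).
import numpy as np
from sklearn.decomposition import PCA

ppm = np.arange(0, 10, 0.001)              # chemical-shift axis, 0-10 ppm
spectra = np.abs(np.random.default_rng(1).normal(size=(10, ppm.size)))

# Bin each spectrum into 0.04-ppm buckets, as in the data reduction step.
n_bins = int(10 / 0.04)                    # 250 bins
binned = spectra.reshape(len(spectra), n_bins, -1).sum(axis=2)
bin_ppm = np.arange(n_bins) * 0.04         # left edge of each bin

# Remove the water/urea region (4.2-6.0 ppm) before pattern recognition.
keep = (bin_ppm < 4.2) | (bin_ppm >= 6.0)
binned = binned[:, keep]

# Scale each spectrum to its total integral, then compute PCA scores.
binned /= binned.sum(axis=1, keepdims=True)
scores = PCA(n_components=2).fit_transform(binned)
print(scores.shape)                        # (10, 2): one point per sample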
Statistical analysis. One-way ANOVA and Student's t-test were used to assess the statistical significance of differences in measured parameters between the two groups. P < 0.05 was considered statistically significant. | 2019-11-18T15:10:58.968Z | 2019-10-24T00:00:00.000 | {
"year": 2019,
"sha1": "db189406c18cfb09b422a374d9cc5018df8d1f6d",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-52787-1.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "db189406c18cfb09b422a374d9cc5018df8d1f6d",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253665932 | pes2o/s2orc | v3-fos-license | Output Characterization of 220 nm Broadband 1250 nm Wavelength-Swept Laser for Dynamic Optical Fiber Sensors
Broadband wavelength-swept lasers (WSLs) are widely used as light sources in biophotonics and optical fiber sensors. Herein, we present a polygonal mirror scanning wavelength filter (PMSWF)-based broadband WSL using two semiconductor optical amplifiers (SOAs) with different center wavelengths as the gain medium. The 10-dB bandwidth of the wavelength scanning range at a 3.6 kHz scanning frequency was approximately 223 nm, from 1129 nm to 1352 nm. When the scanning frequency of the WSL was increased, the intensity and bandwidth decreased. The main reason for this is that the laser oscillation time becomes insufficient as the scanning frequency increases. We analyzed the decrease in intensity and bandwidth with increasing scanning frequency through the concept of the saturation limit frequency. In addition, optical alignment is important for realizing broadband WSLs. The optimal condition can be determined by analyzing the beam alignment according to the positions of the diffraction grating and the lenses in the PMSWF. This broadband WSL is especially expected to be used as a light source in broadband distributed dynamic FBG fiber-optic sensors.
The rapid scanning speed of WSLs is related to the sensitivity of dynamic fiber-optic sensors and to real-time OCT image acquisition, and the narrow linewidth is related to the penetration depth of the sample in the OCT system. High-speed WSLs have been implemented in various ways [1][2][3][4][5][6][7][8][9][19][20][21][22][23][24]. Recent studies on high-speed WSLs have mainly focused on increasing the repetition rate of the scanning frequency by using a Fabry-Pérot wavelength tunable filter (FPTF) or a MEMS-based mechanical filter, driven at a scanning frequency of several hundred kilohertz, or by using an optical interleaving method. On the other hand, a WSL with a wide scanning range can increase the measurement range in optical fiber sensors and the resolution of OCT images [6,7,9,[19][20][21][22][23][24]. In particular, WSLs with a broadband scanning range of 150 nm or more are useful in fiber-optic sensors. According to the results of recent studies, the scanning range of a wideband WSL implemented using a single semiconductor optical amplifier (SOA) is limited to within 150 nm [7,9,13,[19][20][21]. WSLs with a wider wavelength scanning range can be implemented by combining two SOAs with different center wavelengths, and have been described in several recent studies [16,[22][23][24].
WSLs are implemented using several types of wavelength-scanning filters or by tuning the dispersion of the laser resonator [1,3,4,6,19,25,26]. The FPTF-based WSL has the advantage of being easily implemented without the need for optical alignment, because all devices are pigtailed with optical fibers. However, the FPTF is thermally unstable, and it is cumbersome to replace the filter when the free spectral range (FSR) is changed. Although a polygonal mirror scanning wavelength filter (PMSWF)-based WSL has mechanical limitations and difficulties in beam alignment in free space, the output characteristics of the WSL can be changed by appropriately adjusting the optical parameters of the diffraction grating and lenses.
In this study, we present the characteristics of a broadband WSL with a scanning bandwidth of over 220 nm, implemented by connecting two SOAs in parallel based on a PMSWF. As the WSL scanning speed increases, the output intensity and bandwidth decrease. We analyzed this phenomenon using the concept of the saturation limit frequency, the sweep frequency at which the laser building up from the amplified spontaneous emission (ASE) background just reaches the saturation power, and determined experimentally that it is partially responsible. To increase the scanning range and output intensity of the WSL, fine alignment of the polygonal mirrors and accurate optical alignment between the two lenses and the beam incident on the diffraction grating are important. We report the optimal scanning bandwidth obtained by analyzing the output dependence of the WSL on the alignment of the optical axes of the diffraction grating and lenses.
Saturation Limit Frequency
Most WSLs exhibit a decrease in power and bandwidth as the sweeping speed increases. This is due to several factors, such as the gain and loss of the laser, polarization, and nonlinear effects. However, the main cause is that the laser oscillation time becomes insufficient as the sweeping speed increases. The WSL requires sufficient oscillation time to reach the saturation power; if this condition is not satisfied, the laser power decreases exponentially. In a WSL, the oscillation of a specific wavelength that meets the filtering conditions of the tunable wavelength filter is repeated and performed sequentially. Therefore, a single wavelength is characterized by periodic oscillation followed by extinction. As the sweep speed increases, the oscillation period becomes shorter; therefore, the oscillation time becomes insufficient, and the laser power decreases exponentially. In a WSL, the maximum frequency at which the laser can still build up from the ASE background to reach the saturation power is the saturation limit frequency [3]. In other words, it is related to the maximum sweep speed at which the WSL maintains the saturation output. The output of a wavelength component filtered by the tunable filter is related to the number of roundtrips of the resonator that match the filtering condition. The number n of roundtrips required to reach the saturation output is

n = ln(P_sat/P_ASE)/ln β, (1)

where P_sat is the saturation power, P_ASE is the power of the ASE, and β is the round-trip net gain, which is calculated as the difference between the small-signal gain of the laser medium and the loss per round trip [3]. The time per cavity round trip, τ_roundtrip, is

τ_roundtrip = L·n_ref/c, (2)

where L is the cavity length, n_ref is the refractive index of the laser cavity, and c is the speed of light. The build-up time to reach the saturation output power, using Equations (1) and (2), is as follows:

T_build = n·τ_roundtrip = [ln(P_sat/P_ASE)/ln β]·(L·n_ref/c). (3)
The maximum frequency (saturation limit frequency) from the build-up time can be estimated as follows:

f_sat = (∆λ/∆λ_tuningrange)·(1/T_build), (4)

where ∆λ is the linewidth of the tunable filter, and ∆λ_tuningrange is the total tuning bandwidth [3]. For the FPTF, the wavelength sweep is not performed over the full FSR, which is the maximum tunable range achieved through electrical signal control. The PMSWF, however, has an FSR equal to ∆λ_tuningrange. The saturation limit frequency is the maximum sweep frequency at which the saturation output is maintained. It is one of several factors that decrease the output power with increasing sweep frequency, and it is difficult to calculate an exact value. However, because the saturation limit frequency is a major factor in the operation of the WSL, various pieces of information can be obtained from the output analysis according to the sweep frequency. First, it is possible to determine the relationship between the operating variables of the WSL and the output characteristics as the sweep frequency increases. Increasing the saturation limit frequency by, for example, increasing β and ∆λ or decreasing ∆λ_tuningrange and L, mitigates the decrease in optical output power as the sweep frequency increases. Second, the cause of the decrease in scanning bandwidth with an increase in sweep frequency can be identified: the saturation limit frequency is related not only to the optical output power but also to the scanning bandwidth and its dependence on the sweep frequency.
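A minimal numerical sketch of Equations (1)-(4) is given below. The cavity length and refractive index are those quoted later for the SOA 1 path, but the net gain, the saturation-to-ASE power ratio, and the filter linewidth are illustrative placeholders, not the measured values of Table 1:

```python
import numpy as np

# Sketch of Equations (1)-(4); gain/loss figures are assumed placeholders.
c = 3e8                    # speed of light, m/s
L = 16.35                  # cavity length (SOA 1 path), m
n_ref = 1.46               # effective refractive index of the cavity
P_sat_over_P_ase = 1e4     # assumed saturation-to-ASE power ratio
beta = 6.3                 # assumed round-trip net gain (linear)
dlam_filter = 1.0          # assumed filter linewidth, nm
dlam_tuning = 330.0        # total tuning range (FSR), nm

n_roundtrips = np.log(P_sat_over_P_ase) / np.log(beta)   # Equation (1)
tau = L * n_ref / c                                      # Equation (2)
t_build = n_roundtrips * tau                             # Equation (3)
f_sat = (dlam_filter / dlam_tuning) / t_build            # Equation (4)
print(f"saturation limit frequency ≈ {f_sat/1e3:.2f} kHz")  # ≈ 7.6 kHz here
```

With these placeholder values the estimate lands in the few-kilohertz range reported later for the mid-band wavelengths, illustrating how β, ∆λ, ∆λ_tuningrange, and L trade off against one another.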
Experiments

Figure 1 shows a schematic diagram of the experimental setup for the broadband WSL. Two SOAs with different center wavelengths are combined by connecting them in parallel in a Mach-Zehnder interferometer configuration. The experimental setup consists of two 3 dB fiber couplers, two SOAs, four polarization controllers, an optical circulator, and a PMSWF. The PMSWF is composed of a telescope with two achromatic doublet lenses, a blazed diffraction grating, and a 72-facet polygonal scanner mirror. As the two SOAs are polarization dependent, the polarization controllers in front of and behind each SOA are adjusted appropriately to match the intensities over a wide wavelength band. Even if the two arms of the Mach-Zehnder interferometer are of unequal length, this does not significantly affect the output of the WSL. The optical path lengths of the laser resonator for the SOA 1 and SOA 2 paths, including free space, are 16.35 m and 18.61 m, respectively. The output from the WSL was monitored using an oscilloscope and an optical spectrum analyzer through an optical isolator. Two SOAs with center wavelengths of 1190 and 1285 nm are used, as shown in Figure 2a. The red line of SOA 1 (Innolume Inc., Dortmund, Germany) has an ASE center wavelength of 1190 nm, and the 10-dB bandwidth is 49 nm (from 1131 nm to 1180 nm). In the blue line of SOA 2 (O-Band Booster Optical Amplifier; Thorlabs, Newton, NJ, USA), the center wavelength of the ASE is 1285 nm and the 10-dB bandwidth is 75 nm (from 1249 nm to 1324 nm). The black line represents the combined spectra of the two SOAs.

Figure 2b shows the optical spectra of the broadband WSL. The red and blue lines show the optical spectra when only SOA 1 or SOA 2 is connected to the Mach-Zehnder interferometer, respectively. At a scan rate of 3.6 kHz when only SOA 1 is connected, the 10-dB bandwidth and average output power are ~115 nm (from 1129 nm to 1244 nm) and 4.1 mW, respectively. When only SOA 2 is connected, the 10-dB bandwidth and average output power are ~115 nm and 5.81 mW, respectively. When both SOAs are connected at a scanning speed of 3.6 kHz, the optical spectrum is represented by the black line. The 10-dB bandwidth and average optical output power are 223 nm (from 1129 nm to 1352 nm) and 10.1 mW, respectively. Figure 3a,b show the outputs in the spectral and temporal domains of the broadband WSL, respectively. The shapes are similar and show a one-to-one correspondence with each other. This characteristic enables real-time detection by measuring the response signal of the pulse in the temporal domain, instead of the response in the spectral domain, in a dynamic optical fiber sensor.

For broadband WSLs, the net gain is very important, as it relates to optical alignment. To increase the scanning range and optical output power of the WSL, fine alignment of the polygonal scanning mirror, perpendicularity between the optical path and the lenses, and precise optical alignment between the beam incident on the diffraction grating and the two lenses are important. Failure to achieve precise optical alignment increases the loss of the laser cavity and limits the scanning range to less than 200 nm. Therefore, we investigated the output dependence of the WSL on the optical axis alignment of the diffraction grating and lenses.
Let the axis of the beam traveling direction be denoted by the z-axis, and let the lens surface be spanned by the x- and y-axes. Figure 4 shows that the incident beam propagates to the diffraction grating such that the first-order diffracted beam is directed toward the lens. For an accurate analysis, it would be necessary to draw and analyze a complex 3D diagram including the angle of the incident beam and the alignment angle of the diffraction grating. Figure 4a shows the diffracted beam propagating to the lens with ideal alignment of the diffraction grating. In this case, the diffracted beam is incident in alignment with the x-axis of the lens surface. When the diffraction grating rotates slightly around the z-axis, the direction of the grating grooves makes an angle Ψ with respect to the x-axis, and the aligned beam deviates from the x-axis center of the lens surface. However, if the groove direction of the diffraction grating in Figure 4b rotates slightly around the x-axis, the beam deviates completely from the x-axis of the lens surface, as shown in Figure 4c.
The well-known diffraction grating equation is

p(sin α + sin β) = mλ, (5)

where α and β are the incident and diffracted angles, respectively, with respect to the normal axis of the diffraction grating; m is the order of the diffracted beam; λ is the optical wavelength; and p is the pitch of the grating [1,27]. When the groove direction of the diffraction grating and the x-axis are at an angle of Ψ, the aligned beam deviates from the x-axis of the lens. The diffraction grating equations are then given by

p(sin x_m + sin x_i) = mλ sin Ψ, (6)

p(sin y_m + sin y_i) = mλ cos Ψ, (7)

where Ψ is the angle between the direction of the grating grooves and the x-axis; x_m and x_i are the diffracted and incident angles with respect to the x-axis of the grating, respectively; and y_m and y_i are the diffracted and incident angles with respect to the y-axis of the grating, respectively [27]. If there is only a slight rotation about the x-axis of the grating, then only the change occurring in the y-direction, as expressed by Equation (7), needs to be considered. As the difference in y_m is very small, Equation (7) becomes

∆y_m ≈ (m·∆λ/p)·cos Ψ, (8)

where ∆y_m is the difference between the diffracted angles when the shortest and longest wavelengths in the scanning wavelength range are aligned along the x-axis. We analyzed the output of the WSL for three cases of diffraction grating alignment.

Figure 5a schematically shows the diffracted beam incident on the lens surface with near-proper alignment of the diffraction grating; the directions of the grooves of the diffraction grating are slightly out of ideal alignment with respect to the x-, y-, and z-axes (Case 1). When the alignment of the diffraction grating is adjusted by −0.018° by turning the knob in the x-axis direction, the position of the diffracted beam entering the lens surface changes, as shown in Figure 5b (Case 2). When the knob in the x-axis direction is instead adjusted to 0.0045°, the position of the diffracted beam entering the lens surface changes, as shown in Figure 5c (Case 3). Figure 6a shows the optical output spectra on a linear scale for each case in Figure 5, where the 10-dB bandwidth of the wavelength scanning for Case 1 is ~209 nm, from 1131 nm to 1340 nm. Figure 6b shows the differences of Cases 2 and 3 relative to Case 1. As the diffraction grating is slightly shifted with respect to the x-axis, the intensity changes at the short and long wavelengths differ across the wavelength scanning range.

By substituting the values of the variables used in the experiment into Equation (8), with m = 1, ∆λ = 210 nm, p = 1/600 mm, and ∆y_m = 0.0225°, the deviation of the groove direction from Ψ = 90° is calculated to be approximately 0.18°. Such a small rotation of the diffraction grating, well within 1°, is difficult to measure directly. When Ψ = 90° in Equation (7), both the 0th- and 1st-order diffracted beams are mirror-reflected; therefore, y_0 = y_1, that is, there is no difference in the angle along the y-axis [27]. However, if the direction of the grooves of the diffraction grating deviates slightly from Ψ = 90° with respect to the x-axis, then the diffracted beam is incident on the lens surface as shown in Figure 5. By properly turning the knob to align the diffraction grating, the 0th- and 1st-order diffracted beams can be aligned at the same point. Figure 7 shows the results of reducing the error in ∆y_m. It is possible to reduce the intensity difference across the scanning wavelength range shown in Figure 6b. Through this optimization of the alignment, the wavelength scanning range could be further increased. The 10-dB bandwidth of the wavelength scanning obtained through alignment optimization is ~223 nm (from 1129 nm to 1352 nm), as shown in Figure 2b, which is greater by approximately 14 nm than the case in Figure 6.
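Using Equation (8) as reconstructed above, the groove misalignment can be estimated from the measured knob-angle difference; the short sketch below (variable names are ours, not from the study) reproduces the ~0.18° figure quoted above:

```python
import numpy as np

# Hedged sketch of the alignment estimate from Equation (8).
m = 1                      # diffraction order
p = 1e6 / 600              # grating pitch in nm (1/600 mm)
dlam = 210.0               # scanned wavelength range, nm
dy_m = np.deg2rad(0.0225)  # measured knob-angle difference between Cases 2 and 3

# Equation (8): dy_m ≈ (m * dlam / p) * cos(psi); solve for the groove
# direction psi, which is ideally 90 deg (grooves perpendicular to x).
cos_psi = dy_m * p / (m * dlam)
deviation = 90.0 - np.rad2deg(np.arccos(cos_psi))
print(f"groove misalignment ≈ {deviation:.2f} deg")  # ≈ 0.18 deg
```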
In PMSWF-based WSLs, the FSR is related to the focal lengths of the lenses used in the filter and the spacing of the grooves of the diffraction grating, as described in Refs. [1,6,16]. In our experiments, the grating pitch p was 1/600 mm; β₀ = 0.1 rad, which is the angle between the optical axis of the lenses and the normal axis of the diffraction grating; the focal length of the first lens F1 = 4.5 cm; the focal length of the second lens F2 = 10.0 cm; and the face-to-face polar angle of the polygonal mirror θ = 2π/72 rad. Therefore, the theoretically calculated (∆λ)_FSR is 321.6 nm. Figure 8 shows the WSL outputs in the temporal domain, where the FSR can be obtained using the one-to-one correspondence between the spectral and temporal domains [16]. The wavelength scanning range of the WSL is 223 nm in the spectral domain and 186 µs in the time domain, so one period in the time domain of 275 µs corresponds to ~329.7 nm in the spectral domain. Therefore, the measured (∆λ)_FSR in the temporal domain is ~329.7 nm. The relative error between the measured and theoretical values is approximately 2.5%. This can be attributed to the uncertainty of the signal measurement in the temporal domain.

The output of the WSL decreases in intensity and bandwidth as the scanning frequency increases [16]. Figure 9 shows a graph of the output characteristics of the wideband WSL according to the scanning frequency, up to 11.9 kHz. The average intensity gradually decreases from 11.1 mW to 5.3 mW with an increase in the scanning frequency, as shown in Figure 9a. However, the bandwidth gradually decreases from 226 nm to 223 nm until the scanning frequency is 10 kHz, and then shows a dramatic decrease above 10 kHz.
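For reference, the following sketch evaluates both FSR estimates. It assumes the standard polygon-filter relation (∆λ)_FSR = p·cos β₀·θ·F2/F1 from Refs. [1,6,16], which reproduces the 321.6 nm value quoted above, together with the temporal mapping described in the text:

```python
import numpy as np

# Two FSR estimates for the PMSWF; the closed-form relation is assumed
# from Refs. [1,6,16] (it matches the quoted 321.6 nm figure).
p = 1e6 / 600            # grating pitch, nm
beta0 = 0.1              # lens axis vs. grating normal, rad
theta = 2 * np.pi / 72   # face-to-face polar angle of the polygon mirror, rad
F1, F2 = 4.5, 10.0       # telescope focal lengths, cm

fsr_theory = p * np.cos(beta0) * theta * F2 / F1
print(f"theoretical FSR ≈ {fsr_theory:.1f} nm")   # ≈ 321.6 nm

# Temporal-domain estimate: 223 nm sweeps in 186 us; one period is 275 us.
fsr_measured = 223.0 * 275.0 / 186.0
print(f"measured FSR ≈ {fsr_measured:.1f} nm")    # ≈ 329.7 nm
```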
The decrease in the bandwidth and average intensity as the scanning frequency increases can be quantitatively explained by the saturation limit frequency described in Section 2. We analyzed the saturation limit frequencies for several wavelengths. Figure 10 shows the variations in the spectral bandwidth of the wideband WSL according to the scanning frequency. Table 1 lists the small-signal gain and loss per cycle at each SOA wavelength for the calculation of β. The gain and loss at 1136, 1182, 1258, 1291, and 1356 nm in the spectral range of Figure 10 were measured, as listed in Table 1. By substituting the values in Table 1 and the variables for each wavelength into Equation (4), the saturation limit frequencies were obtained, as listed in Table 1. Here, the effective refractive index (n_ref) and the total tuning range (∆λ)_FSR were 1.46 and 330 nm, respectively. The wavelength of 1356 nm is located at the edge of the scanning range, and 1182 nm is a wavelength with a large intensity change with increasing scanning speed. In addition, the wavelength of 1182 nm corresponds to the vicinity of the valley in the combined ASE spectrum of the two SOAs, as shown in Figure 2a. As shown in Table 1, the saturation limit frequencies at 1182 nm and 1356 nm are calculated to be 4.92 kHz and 0.41 kHz, respectively. However, for the other three wavelengths (1136, 1258, and 1291 nm), the saturation limit frequencies are calculated to be 9.94, 10.20, and 8.73 kHz, respectively. In the spectra of Figure 10, the intensity decreases faster near 1200 nm as the scanning frequency increases. This can be traced to the ASE spectrum of SOA 1 in Figure 2a, which shows a weak relative intensity near 1200 nm. This can be seen as the result of insufficient gain recovery as the scanning frequency increases.
Figure 11 shows the relative intensity variations of each wavelength component with increasing scanning frequency. The relative wavelength intensities were measured while increasing the scanning frequency, with the intensity at 0.36 kHz normalized to 1.0. The relative intensity at 1356 nm, located at the edge of the scanning range, decreases rapidly as the scanning frequency increases. The saturation limit frequency of this wavelength is less than 1 kHz, and as the scanning frequency increases, the scanning bandwidth decreases, as shown in Figure 9b. This is consistent with the fact that lasing is no longer possible above a certain frequency. Therefore, it can be seen that the decrease in the relative intensity with increasing scanning frequency is partly related to the saturation limit frequency. In addition, because the SOA is polarization dependent, an intensity difference according to polarization occurs over the entire scanning wavelength range and, thus, a relative decrease in intensity appears partially.
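As a small illustration of that normalization step (not code from the study), each wavelength's intensity-versus-frequency trace is simply divided by its value at the 0.36 kHz reference:

```python
import numpy as np

# Placeholder data: rows = wavelength components, columns = sweep frequencies.
freqs_khz = np.array([0.36, 1.8, 3.6, 7.2, 10.0, 11.9])
intensity = np.random.default_rng(1).uniform(0.2, 1.0, (5, freqs_khz.size))

# Relative intensity: divide every trace by its value at 0.36 kHz,
# so all curves start at 1.0 at the lowest sweep frequency.
relative = intensity / intensity[:, [0]]
print(relative[:, 0])  # all ones by construction
```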
Conclusions
If a wavelength-swept laser (WSL) is implemented based on a single semiconductor optical amplifier (SOA), it is difficult to obtain a scanning range of 150 nm or more. However, if the WSL is implemented by combining two SOAs with different center wavelengths, it can have a scanning range of 200 nm or more. We successfully implemented a broadband WSL using a PMSWF by connecting two SOAs in parallel in the form of a Mach-Zehnder interferometer. The center wavelengths of each SOA were 1190 nm and 1285 nm, respectively, and the 10-dB scanning bandwidth of the WSL was 223 nm, from 1129 nm to 1352 nm. When the WSL scanning speed increased, the output intensity and bandwidth decreased. We analyzed these phenomena in terms of the concept of the saturation frequency limit that accumulates in the amplified spontaneous emission (ASE) background and reaches saturation power, and we experimentally determined that they are related in part. To increase the scanning range and output intensity in the WSL, a fine alignment of the polygonal mirrors and precise optical alignment between the two lenses and the beam incident on the diffraction grating are important. We were able to obtain the optimal optical output by analyzing the output dependence of the WSL according to the alignment of the optical axis of the diffraction grating and lenses. This broadband WSL is expected to be utilized as a light source for dynamic optical fiber sensors with a wide measurement range. | 2022-11-19T16:15:13.678Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "7865299d25438fac9e5871efbe8dec60c0960bd4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/22/22/8867/pdf?version=1668595584",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2dacdbe8eb380fbe807abf6e5ab7b6e6b42933d6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
253362088 | pes2o/s2orc | v3-fos-license | Grain Quality Characterization of Hybrid Rice Restorer Lines with Resilience to Suboptimal Temperatures during Filling Stage
Rice (Oryza sativa L.) is a staple food that is consumed worldwide, and hybrid rice has been widely employed in many countries to greatly increase yield. However, the frequency of extreme temperature events is increasing, presenting a serious challenge to rice grain quality. Improving hybrid rice grain quality has become crucial for ensuring consumer acceptance. This study compared the differences in milling quality, appearance quality, and physical and chemical starch properties of rice grains of five restorer lines (the male parent of hybrid rice) when they encountered naturally unfavorable temperatures during the filling period under field conditions. High temperatures (HTs) and low temperatures (LTs) had opposite effects on grain quality, and the effect was correlated with rice variety. Notably, R751, R313, and Yuewangsimiao (YWSM) were shown to be superior restorer lines with good resistance to both HT and LT according to traits such as head rice rate, chalkiness degree, chalky rice rate, amylose content, alkali spreading value, and pasting properties. However, Huazhan and 8XR274 were susceptible to sub-optimal temperatures at the grain-filling stage. Breeding hybrid rice with adverse-temperature-tolerant restorer lines can not only ensure high yield via heterosis but also produce superior grain quality. This could ensure the quantity and taste of rice as a staple food in the future, when extreme temperatures will occur increasingly frequently.
Introduction
Rice (Oryza sativa L.) is a staple food for more than half of the world's population [1]. Improving rice grain quality has become crucial for ensuring consumer acceptance due to the growing demand for high-quality rice [2]. However, grain quality is influenced significantly by the field environment, and in particular, by air temperature [3][4][5][6][7][8][9][10]. Extreme temperature events (including high and low temperatures) will likely increase in frequency and intensity under the conditions of global warming, presenting a serious challenge for future rice yield and grain quality [11]. Hence, cultivation of high-quality rice with superior temperature resilience is a priority for breeders.
Rice grain quality covers the physicochemical properties of rice kernels that influence the milling, appearance, cooking, taste, and nutritional properties [2]. Milling quality, along with yield, determines the edible yield and economic value [12]. The appearance of rice grains is primarily evaluated based on transparency. Chalkiness is one of the major issues with appearance. More importantly, it negatively affects milling quality and results in poorer eating and cooking quality (ECQ) [13]. The grain quality relies on the microstructure and physicochemical properties of the starch that accounts for over 80% of the grain weight. Starch is a mixture of two homopolymers, amylose (linear α-1,4-polyglucans) and amylopectin (α-1,6-branched polyglucans) [14,15]. The amylose content (AC), chain length distribution (CLD) of amylose and amylopectin, gel consistency (GC), and gelatinization temperature (GT) are among the starch properties that shape these qualities.

Materials and Methods

The experiments were conducted in a randomized block pattern with three replications. Twenty plants were planted in each plot, with a row spacing of 20 cm × 20 cm. Weeds, pests, and diseased plants were intensively controlled. After ripening, 2 kg of rice grains were collected and dried to about 13% moisture. The rice was stored at room temperature for three months for the determination of rice quality and RVA characteristics of the starch.
Determination of Milling Quality
Rice kernel samples (20-25 g) (m0) were weighed (accurate to 0.01 g), and the germinated grains were picked out. The germinated grains were dehusked separately and weighed to record the mass of brown rice of the germinated grains (m1). The remaining samples were shelled with a huller, and the mass of brown rice was measured and recorded (m2). The incomplete brown grains were picked out by sensory inspection, and the mass of incomplete brown grains was measured (m3). The brown rice rate was calculated according to Formula (1):

Brown Rice Rate (%) = (m1 + m2)/m0 × 100 (1)

The mass of the samples obtained in the previous step was measured (m3), and the samples were poured into the milling chamber, with the milling time adjusted so that the milling accuracy reached the third level of the national standard. The milled rice grains were passed through a sieve with a diameter of 1.5 mm to remove the embryo and bran. After cooling to room temperature, the mass of milled rice (m4) was measured (accurate to 0.01 g). The milled rice rate was calculated according to Formula (2):

Milled Rice Rate (%) = m4/m3 × Brown Rice Rate (2)

An appearance detection analyzer (MRS-9600TFU2L, MICROTEK, Shanghai, China) was used to collect the sample images. The image analysis system automatically analyzed and judged the image information of the sample. The mass fraction of head rice in milled rice grains (w) was then calculated. The head rice rate was calculated according to Formula (3):

Head Rice Rate (%) = w × Milled Rice Rate (3)
Chalkiness Degree and Chalky Rice Rate Determination
One hundred grains were chosen randomly from the milled rice sample and placed on a chalkiness observation instrument (SC-E, Wseen, Hangzhou, China). Grains with chalkiness (termed grains with a white belly, white center, white back, or a combination of these) were counted. The chalky grain rate was calculated as the percentage of grains with chalkiness. Ten intact milled rice grains were chosen randomly from the grains with chalkiness. The average percentage of chalky area in the plane projection of the whole rice grain was measured by visual observation. The chalkiness degree was calculated according to Formula (4):

Chalkiness degree (%) = chalky grain rate (%) × average chalky area (%)/100 (4)
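Taken together, Formulas (1)-(4) reduce to a few mass and percentage ratios. The sketch below uses illustrative values (not measurements from this study), with Formula (1) following the reconstruction given above:

```python
# Minimal sketch of Formulas (1)-(4); all input values are placeholders.
m0, m1, m2 = 22.00, 0.35, 17.25   # paddy sample, germinated brown rice, brown rice, g
m3, m4 = 17.10, 15.20             # brown rice fed to the mill, milled rice after sieving, g
w = 0.62                          # mass fraction of head rice in milled rice

brown_rice_rate = (m1 + m2) / m0 * 100            # Formula (1), %
milled_rice_rate = m4 / m3 * brown_rice_rate      # Formula (2), %
head_rice_rate = w * milled_rice_rate             # Formula (3), %

chalky_grain_rate = 12.0   # % of grains showing chalkiness
avg_chalky_area = 18.0     # % of grain projection that is chalky
chalkiness_degree = chalky_grain_rate * avg_chalky_area / 100   # Formula (4)

print(f"{brown_rice_rate:.1f}% brown, {milled_rice_rate:.1f}% milled, "
      f"{head_rice_rate:.1f}% head rice, chalkiness degree {chalkiness_degree:.2f}")
```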
Starch Isolation
Milled rice samples were stored in sealed bags under refrigeration (4 °C) until analysis. The polished rice was ground into flour in a mill (FOSS 1093 Cyclotec Sample Mill, Hoganas, Sweden) with a 0.5 mm screen, and starch was isolated as described previously [9].
Scanning Electron Microscopy (SEM)
Brown rice grains were cracked with a razor blade, and the transverse section surface was coated with gold for 90 s using a vaporizer and observed under a scanning electron microscope (SU8010, Hitachi, Tokyo, Japan). Observation conditions were as follows: acceleration voltage, 2000 V; magnification, 10,000.
Starch Size Distribution and Chain Length Distribution (CLD) Analysis
For starch size distribution analysis, the samples were removed after water balance, then ground and dispersed with a mortar. They were then passed through a 200-mesh sample sieve. A sample of 100-200 mg of starch was placed in a clean Eppendorf tube, and then 1 mL of 75% alcohol was added to the tube and mixed well. The samples were measured by laser diffraction (Mastersizer 3000, Malvern Panalytical, Malvern, UK). All tests were performed in triplicate.
Determination of the Amylose Content (AC), Gel Consistency (GC), and the Alkali Spreading Value (ASV)
The AC of the flour was determined following a modification of the iodine colorimetric method described by Man et al. [31]. After water balance, 0.1000 ± 0.0002 g of the rice flour sample and of the standard samples were fully wetted with ethanol (1.0 mL) and mixed with sodium hydroxide solution (1.0 M, 9 mL) before incubation overnight (37 °C). After adding iodine-potassium iodide solution (0.2%) for color development, the OD620 value was measured. Based on the standard curve constructed from the AC of the standard samples, the AC of the tested sample was calculated.
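The standard-curve step is a simple linear calibration of AC against OD620; the following is a hypothetical sketch in which both the standard AC values and the OD readings are placeholders, not data from this study:

```python
import numpy as np

# Hypothetical iodine-colorimetry standard curve; all values are placeholders.
ac_std = np.array([0.0, 10.4, 17.2, 26.5])    # amylose content of standards, %
od_std = np.array([0.05, 0.21, 0.34, 0.52])   # measured OD620 of the standards

slope, intercept = np.polyfit(od_std, ac_std, 1)  # linear standard curve

od_sample = 0.30                                  # OD620 of a test sample
ac_sample = slope * od_sample + intercept
print(f"estimated amylose content ≈ {ac_sample:.1f}%")
```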
GC was measured according to the method described by Tan et al. with minor revision [32]. The crushed milled rice was passed through a 100-mesh sieve, and then an 87.8-88.2 mg dry sample was carefully put into a round-bottom test tube (inner diameter: 11 mm; length: 100 mm). After adding phenol blue indicator (0.2 mL) and potassium hydroxide solution (0.20 M, 2.0 mL), the test tube was covered with a glass ball and subjected to a boiling water bath for 8 min, room temperature for 5 min, and an ice water bath for 20 min, and then incubated flat for 1 h (temperature: 25 ± 2 °C; humidity: 60 ± 5%). The length measured from the bottom of the tube to the front edge of the rice gel was taken as the GC value. ASV was measured visually by soaking the milled rice grains in 1.4% KOH solution for 24 h at 30 °C, as described by Mariotti et al. [33]. The milled rice grains were without breakage or cracks and of uniform size and maturity. The endosperm appearance and the degree of digestion and diffusion were visually inspected. Each rice grain was graded individually, mainly based on its degree of decomposition.
Water Solubility Index and Swelling Power Assay
Swelling power and water solubility were determined according to a previous method [9]. Starch samples (m0) were mixed with water (2%, w/v), placed in a 2 mL centrifuge tube (m1), and heated in a water bath at 95 °C for 30 min. The tubes were gently shaken for 1 min. The samples were cooled to room temperature and centrifuged at 8000× g for 10 min, and the supernatant was discarded. The colloid remaining in the centrifuge tube was weighed (m2), and the sediments were dried to constant weight (m3) at 60 °C. The swelling power and solubility were calculated as follows: swelling power = (m2 − m1)/(m3 − m1) (g/g); solubility (%) = (m0 + m1 − m3)/m0 × 100%.
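A minimal sketch of these two formulas, using illustrative gram values rather than measurements from this study:

```python
# Swelling power and solubility from the mass bookkeeping described above.
m0 = 0.040   # dry starch sample, g
m1 = 1.020   # empty centrifuge tube, g
m2 = 1.560   # tube + swollen colloid after decanting the supernatant, g
m3 = 1.050   # tube + sediment dried to constant weight, g

swelling_power = (m2 - m1) / (m3 - m1)   # g of swollen colloid per g of sediment
solubility = (m0 + m1 - m3) / m0 * 100   # % of starch lost to the supernatant
print(f"swelling power = {swelling_power:.1f} g/g, solubility = {solubility:.1f}%")
```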
Statistical Analysis
All parameters shown in the tables and figures in this article represent the mean values of experimental data obtained from triplicate tests for all varieties sown during the three planting periods. All data were analyzed using SPSS 16.0 statistical software. Two-way analysis of variance and Tukey's tests were used to determine whether statistically significant differences (p < 0.05) existed between the means.
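The same two-way ANOVA plus Tukey workflow can be reproduced outside SPSS; the hedged Python sketch below uses statsmodels, with a hypothetical input file and column names:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# 'grain_quality.csv' and its column names are placeholders: string columns
# 'variety' and 'temperature', plus a measured trait such as 'head_rice_rate'.
df = pd.read_csv("grain_quality.csv")

# Two-way ANOVA with the variety x temperature interaction.
model = ols("head_rice_rate ~ C(variety) * C(temperature)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's HSD across the variety-by-temperature groups (alpha = 0.05).
groups = df["variety"] + "_" + df["temperature"]
print(pairwise_tukeyhsd(df["head_rice_rate"], groups, alpha=0.05))
```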
Milling Quality
Milling quality is usually evaluated by indicators such as brown rice rate, milled rice rate, and head rice rate. To examine the effect of improper temperatures at the grain-filling stage on the milling quality of grains from the restorer lines, we measured and compared the three indicators of the grains harvested after the different temperature treatments. As shown in Figure 1A, the brown rice rate of the restorer line R751 was affected neither by HT nor by LT, and YWSM, 8XR274, and Huazhan exhibited unchanged brown rice rates upon HT treatment at the grain-filling stage. However, LT decreased the brown rice rates of the latter three restorer lines by 2.30%, 2.04%, and 1.77%, respectively (Figure 1A). R313 was shown to be the most vulnerable variety among those tested, as its brown rice rate was reduced significantly by LT (3.63%) and increased by HT (2.69%). It was shown that LT during the grain-filling stage had a greater negative effect than HT on the brown rice rate of most varieties, except for the resistant line R751.

Figure 1. Brown rice rates (A), milled rice rates (B), and head rice rates (C) of different rice varieties affected by improper temperature at the grain-filling stage. LT represents low temperature; HT represents high temperature; NT represents normal temperature control. Data are shown as mean ± standard error of triplicate measurements. Different letters are marked above standard deviations to express significant differences (p < 0.05).
HT increased the milled rice rates of R751, R313, YWSM, and Huazhan significantly by 2.02%, 3.11%, 1.15%, and 0.98%, respectively, but it did not change that of 8XR274 ( Figure 1B). The milled rice rate of all five varieties was reduced by more than 2.15% by LT ( Figure 1B). The results suggested that LT during the grain-filling stage led to a decline, and HT resulted in an increase in the milled rice rate; however, LT had a greater impact for most tested varieties.
Head rice is conventionally defined as intact grains that retain at least 3/4 of the original kernel length after complete milling [13]. Head rice yield (HRY) is the gold standard used by rice millers to quantify milling quality [13]. High temperature decreased the head rice rates of 8XR274 and Huazhan by 31.3% and 26.67%, respectively (Figure 1C). Although high temperature also reduced the head rice rates of YWSM (by 6.63%) and R313 (by 1.87%), their degrees of reduction were much lower than those of 8XR274 and Huazhan (Figure 1C). Likewise, compared with the significant negative impact of LT on 8XR274 (by 38.77%) and Huazhan (by 11.39%), the head rice rates of R751, R313, and YWSM exhibited only slight decreases (by 3.14%, 3.24%, and 4.95%, respectively) with LT (Figure 1C). Interestingly, differently from the opposite effects of high and low temperatures on the milled rice rate, both HT and LT reduced the head rice rate (Figure 1C). Our results demonstrated that, considering HRY, the varieties R751, R313, and YWSM are more resistant to HT and LT, whereas 8XR274 and Huazhan are more susceptible to improper temperatures. The results suggested that the loss of HRY caused by improper temperature would be smaller for R751, R313, and YWSM than for 8XR274 and Huazhan.
Appearance Quality
Chalk, as the inverse indicator of the appearance quality, is usually induced by elevated temperature during grain filling [20,25,26,34], and it negatively affects milling quality and ECQ [2]. However, whether LT affects chalkiness formation is not particularly clear. To investigate the effects of HT and LT on chalk and identify superior restorer lines with stable good appearance, we examined the appearance of grains from five restorer lines experiencing low and high temperatures at grain-filling stages, with quantitative measurements of chalky grain rate and chalkiness degree. As shown in Figure 2A, the grains of R751, R313, and YWSM were more translucent than those of 8XR274 and Huazhan under normal temperature (NT). Indeed, the chalkiness degrees of R751, R313, and YWSM were about 1.1-2.1%, and the chalkiness degrees of 8XR274 and Huazhan were as high as 23.7% and 9.9%, respectively ( Figure 2C). The chalky grain rates of R751, R313, and YWSM were only 5.6-7.7%, whereas those of 8XR274 and Huazhan were as high as 83.6% and 21.4%, respectively ( Figure 2B). R751, R313, and YWSM demonstrated better appearance quality than 8XR274 and Huazhan under NT.
Furthermore, under LT at the grain-filling stage, the chalkiness degree and chalky grain rate of 8XR274 were decreased by 29.55% and 21.52%, respectively (Figure 2B,C). LT decreased the chalkiness degree and chalky grain rate of Huazhan by 32.24% and 65.66%, respectively (Figure 2B,C). Our results demonstrated that LT caused a reduction of chalkiness, indicating a positive effect of LT on the appearance quality of sensitive varieties. Previous studies reported that unusual starch degradation, rather than starch synthesis, was involved in the occurrence of chalky grains in rice [35]. LT might reduce the activity of enzymes involved in starch degradation in the varieties sensitive to LT, thereby weakening the starch degradation process and leading to the reduction of chalkiness.
Moreover, under HT at the grain-filling stage, the chalkiness degree and chalky grain rate of 8XR274 were increased by 84.39% and 14.83%, respectively ( Figure 2B,C). HT increased the chalkiness degree and chalky grain rate of Huazhan dramatically by 106.06% and 136.45%, respectively ( Figure 2B,C). The results showed that HT further caused grain chalkiness increases in varieties with poor appearance quality, which is consistent with previous research on conventional rice or hybrid rice combination varieties [9,[20][21][22][23]. However, our research further identified three restorer lines of hybrid rice whose appearance quality was unaffected by HT or LT. The chalkiness degree or chalky grain rates of R751, R313, and YWSM under HT and LT showed no significant differences compared with those under NT ( Figure 2). Overall, R751, R313, and YWSM possessed better appearance quality than the other two restorer lines, and these desirable characteristics were unlikely to be affected by temperature stress.
Figure 2. Grain appearance (A), chalky grain rates (B), and chalkiness degrees (C) of different rice varieties affected by improper temperature at the grain-filling stage. LT represents low temperature, HT represents high temperature, and NT represents normal temperature controls. Data are shown as mean ± standard error of triplicate measurements. Different letters are marked above standard deviation to express significant differences (p < 0.05).
Starch Granule Morphology and Granule Size Distribution
The formation of chalk usually accompanies abnormal starch granule morphology, including incompletely filled starch granules with many air spaces in between that are visible as opaque spots along the translucent grain [36]. In order to intuitively observe the morphological characteristics, including the packing compactness of starch granules in rice grains, we examined transverse sections of grains via scanning electron microscopy instead of starch isolated after grinding the rice grains into powder, as reported in a previous study [4]. In the grains of R751, R313, and YWSM treated with LT, NT, or HT at the filling stage, the starch granules were semi-crystalline and tightly packed, and the shapes of the starch granules were polygonal with sharp edges (Figure 3). However, in the grains of 8XR274 and Huazhan, the starch granules were loosely packed with many air spaces and were nearly round in shape, lacking edges and corners, for each temperature treatment that the grains experienced during endosperm development (Figure 3). More air spaces between starch granules and many small spherical granules existed in 8XR274 grains under HT. Dense pits were frequently observed on the surfaces of starch granules in Huazhan grains grown under HT (Figure 3), which is consistent with previous studies concerning the effect of high temperatures on starch granules [26,35]. However, the influence of LT on the starch granule morphology of 8XR274 and Huazhan was not as strong as that of HT (Figure 3). This is consistent with our earlier observation that the effects of LT on the chalkiness of 8XR274 and Huazhan were smaller than those of HT to some extent (Figure 2): HT and LT changed the chalkiness degree of 8XR274 by 84.39% and 27.42%, respectively, and that of Huazhan by 106.06% and 65.66%, respectively (Figure 2). As a qualitative technique, SEM analysis made it difficult to resolve the effect of LT on the morphology of the starch particles (Figure 3). Overall, the fine starch granule morphologies of R751, R313, and YWSM were unaffected by either HT or LT.

Figure 3. Scanning electron microscopy images of transverse sections of grains of different rice varieties. LT represents low temperature, HT represents high temperature, and NT represents the normal temperature control. Red arrows indicate pits on the surfaces of starch granules. Yellow arrows indicate air spaces in between granules. Bar = 5 μm.
To further characterize the effect of temperature on the size distribution of starch particles from the various restorer lines, we compared the differences in the starch volume distributions (Figure S1). Upon HT treatment, the curve peak value of the 8XR274 particle volume distribution increased by 39.19% with a narrower curve range, whereas LT did not show a significant effect on this parameter (Figure S1). Moreover, the curve peak shifted slightly to the left (from 6.61-8.78 µm to 5.75-7.59 µm) under HT, suggesting that HT led to the formation of more small starch granules in 8XR274. These results are consistent with the SEM analysis (Figure 3). It was demonstrated that HT, rather than LT, exerted an effect on the starch size distribution of 8XR274.
Nevertheless, the curves of R751, YWSM, and Huazhan under HT nearly coincided with their counterparts under NT, indicating that the starch particle sizes of these three restorer lines may be unaffected by HT (Figure S1). In R313, although the peak value under HT was 11.37% higher than that under NT, the curve was not shifted, indicating that HT had limited influence on the starch size of R313. However, for LT, we found that the curve ranges were wider and the peak values were decreased by 31.61%, 21.43%, and 26.79% for R751, R313, and YWSM upon LT treatment, respectively (Figure S1). The results suggested that LT, rather than HT, affected the starch granule size distributions of R751, R313, and YWSM. Notably, Huazhan was the only variety with a starch particle size distribution unaffected by either HT or LT. However, the peak value of Huazhan was clearly lower than those of the other four varieties, and this may have contributed to its widely different morphological features (Figure 3). These results demonstrated that the effect of improper temperature on the size distribution of starch particles is correlated with variety.
Amylose Content
The proportions and properties of amylose are usually tested for estimating the cooking, eating, textural, and nutritional qualities of rice [2]. Therefore, we measured the amylose content (AC) to directly reflect the proportion of amylose. Only 8XR274 (29.01% under NT) could be considered to contain an intermediate amount of amylose, and the other four restorer lines, especially Huazhan (10.42% under NT), were varieties with low amylose content according to the AC classification [37] (Figure 4A). Previous studies have reported that HT-ripened grains contained decreased levels of amylose [9,20,22]. Our results for R751 and 8XR274 confirmed the negative effect of HT on the amylose content, even though the extent of the decline in R751 (decreased by 3.82%) was much lower than that in 8XR274 (decreased by 8.62%) (Figure 4A). The ACs of R313 and YWSM were shown to be unaffected by HT, while HT resulted in a substantial increase in the AC of Huazhan (increased by 21.15%) (Figure 4A). Our data identified the ACs of R313, YWSM, and R751 as being less affected by HT. HT caused considerable changes in the ACs of 8XR274 and Huazhan.
As reported previously, indoor artificially controlled LT increased the amylose content [8,28]. However, few studies have considered how naturally occurring LT in the field during grain filling affects the AC of rice grains. In this study, it was shown that AC was reduced by LT in R751 (decreased by 9.55%), R313 (decreased by 8.33%), YWSM (decreased by 7.27%), and 8XR274 (decreased by 3.79%) but increased by LT in Huazhan (increased by 5.77%) (Figure 4A). Our results demonstrated that naturally occurring LT in the field resulted in a reduction in AC in varieties with intermediate AC but led to an increase in AC in varieties with low AC, such as Huazhan, differently from previous studies [8,28]. Rice grains with low AC tend to have a soft texture and hence a favorable ECQ for East Asians [38]. In this study, it was found that both HT and LT at the grain-filling stage affected AC and thus ECQ.
Alkali Spreading Value and Gel Consistency
In addition to the amylose content, gelatinization temperature (GT) is the main determinant of rice cooking time. Rice grains with a low GT require a relatively shorter time to cook, leading to a softer texture [12]. GT is an amylopectin property. The alkali spreading value (ASV) is widely used as an inverse indicator of the GT of milled rice starch granules [2,33]. To investigate the effect of improper temperature on ASV, we compared the ASVs of grain powder of the five restorer lines experiencing LT, NT, or HT. It was shown that neither HT nor LT exerted a significant effect on the ASVs of 8XR274 and Huazhan (Figure 4B). LT exerted no significant effect on the ASVs of R751, R313, or YWSM (Figure 4B). However, the ASVs of R751, R313, and YWSM were decreased by 15.71%, 9.23%, and 14.29%, respectively, upon HT at the filling stage compared with those under NT (Figure 4B). This suggested that HT resulted in a reduction of the ASV and an increase in GT, likely leading to an elongated cooking time and an unacceptable texture for grains of R751, R313, and YWSM. Overall, the ASVs of R751, R313, YWSM, and 8XR274 were higher than that of Huazhan under each temperature during grain filling, indicating that these varieties have shorter cooking times and a softer rice texture than Huazhan.

Figure 4. Amylose contents (A), alkali spreading values (B), and gel consistencies (C) of different rice varieties affected by improper temperature at the grain-filling stage. Data are shown as mean ± standard error of triplicate measurements. Different letters are added above standard deviation to express significant difference (p < 0.05).
Gel consistency (GC) is a routinely used indicator of amylopectin properties, reflecting the range of cooked rice textures. GC is ranked as hard (27-40 mm), medium (41-60 mm), or soft (61-100 mm), depending on the horizontal migration of cold rice paste after cooking, cooling, and incubation in test tubes [2]. After grain filling under NT, the GCs of R313 and YWSM grains were hard, while the GCs of R751, 8XR274, and Huazhan grains were medium (Figure 4C). Rice grains with softer GC produce tender cooked grains that remain soft after cooling [2]. The results suggested that cooked rice of R751, 8XR274, and Huazhan would be softer than that of R313 and YWSM. Furthermore, the GCs of grains experiencing HT at the filling stage increased significantly in R751, R313, YWSM, 8XR274, and Huazhan, by 37.50%, 52.94%, 72.50%, 38.74%, and 29.70%, respectively (Figure 4C). A previous study reported that japonica rice cultivars had about 3.57-4.74% higher GC when grown under hot conditions (average field temperature of ~30 °C during grain filling in 2013) than under an average field temperature of ~28 °C during grain filling in 2014 [4]. The GC increase in our study was clearly greater, which may reflect differences between indica and japonica varieties and between the experimental designs. Our data confirmed that HT in the field at the filling stage resulted in a substantial increase in GC that may lead to a softer texture of cooked rice. LT at the filling stage did not affect the GC of the tested rice varieties except for a reduction in R313 (by 11.76%) (Figure 4C). These results indicated that LT affects GC in the opposite direction to HT and may lead to a hard texture of cooked rice in some varieties.
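A minimal sketch of the GC ranking just described is given below; the migration ranges come directly from the text, while the 34 mm baseline in the example is hypothetical.

```python
def rank_gc(migration_mm):
    """Rank a gel-consistency (GC) measurement by the horizontal migration
    length (mm) of cold rice paste, using the ranges quoted in the text."""
    if 27 <= migration_mm <= 40:
        return "hard"
    if 41 <= migration_mm <= 60:
        return "medium"
    if 61 <= migration_mm <= 100:
        return "soft"
    return "outside the standard 27-100 mm range"

baseline_mm = 34.0                      # hypothetical NT measurement
after_ht_mm = baseline_mm * 1.5294      # a 52.94% increase, as seen for R313
print(rank_gc(baseline_mm), "->", rank_gc(after_ht_mm))  # hard -> medium
```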
Amylopectin Chain Length Distribution (CLD)
Amylopectin consists of branched starch chains. The starch chain-length distribution (CLD, the distribution of the number of monomer units in chains) is a major determinant of rice grain quality [2]. One commonly held conclusion is that HT easily results in a decrease in short-chain amylopectin but more intermediate- and long-chain amylopectin than under normal conditions, although the range of DP (degree of polymerization) values characterizing chain length differed slightly among previous studies [4,6,24]. Our results based on naturally occurring HT in the field showed that R751, R313, 8XR274, and Huazhan grains contained amylopectin with reduced amounts of short chains (DP 6-12) and intermediate chains (DP 13-24), whereas these varieties were enriched in long chains (DP ≥ 37) (Table 2). This may have contributed to the higher GCs of these four varieties (Figure 4C), according to a previous study showing that rice grains have softer GC due to a higher proportion of short-chain amylopectin [16]. YWSM was the only variety showing a different response to HT, as its short chains of amylopectin were increased and its long chains reduced (Table 2). These results reflect the genetic differences among varieties and the complexity of the mechanism by which HT regulates amylopectin CLD. Considering that few studies have investigated the effect of LT on the CLD, we examined the CLD of the five restorer lines under naturally occurring LT conditions. The results demonstrated that LT led to increases in short chains (DP 6-12) and intermediate chains (DP 13-24), whereas LT decreased medium-long chains (DP 25-36) and long chains (DP ≥ 37) in R751, YWSM, 8XR274, and Huazhan (Table 2). R313 grains were less sensitive to LT, showing only a slight increase in intermediate chains (DP 13-24) (Table 2). Our results demonstrated for the first time that naturally occurring LT in the field during grain filling played an opposite role to HT in the CLD of amylopectin. The proportion of short amylopectin chains is negatively correlated with the GT, and this affects the cooked rice texture and ECQ [39]. Thus, based on the CLD analyses, the results suggested that HT possibly led to a higher GT, a harder texture, and a lower level of stickiness, whereas LT led to a lower GT, a softer texture, and a higher level of stickiness in most tested varieties in this study.
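The sketch below illustrates, under stated assumptions, how a measured CLD can be summed into the DP classes used above; `fake_cld` is a random stand-in for an instrument signal, not real data.

```python
import numpy as np

def cld_fractions(cld):
    """Sum a normalized chain-length distribution into the DP classes used in
    the text. cld[i] is assumed to be the fraction of chains with DP = i."""
    dp = np.arange(len(cld))
    return {
        "short (DP 6-12)":         cld[(dp >= 6) & (dp <= 12)].sum(),
        "intermediate (DP 13-24)": cld[(dp >= 13) & (dp <= 24)].sum(),
        "medium-long (DP 25-36)":  cld[(dp >= 25) & (dp <= 36)].sum(),
        "long (DP >= 37)":         cld[dp >= 37].sum(),
    }

rng = np.random.default_rng(0)
fake_cld = rng.random(80)       # hypothetical signal, e.g. from chromatography
fake_cld /= fake_cld.sum()      # normalize to fractions summing to 1
print({k: round(v, 3) for k, v in cld_fractions(fake_cld).items()})
```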
Starch Pasting Properties as Determined by RVA Analysis
The viscosity of starch pastes was analyzed through the RVA method, which mimics the cooking process of rice grains for determining physicochemical parameters as indirect indicators of ECQ [16,17]. For example, a low setback (SB) value is associated with softness after cooking, and a high viscosity breakdown (BK) value is related to good palatability [16,17,40].
In our study, the SB for 8XR274 increased due to HT (by 120.89%) and decreased due to LT (by 59.07%) (Figure 5A). This effect of HT on the SB of indica rice contrasts with previous work on the japonica rice cultivar Koshihikari [23], indicating that the mechanisms by which HT regulates the SB in indica and japonica rice are different and complex. Although the SB for Huazhan was not significantly affected by HT, it decreased under LT conditions (by 92.82%) (Figure 5A). These results revealed opposite effects of LT and HT on the SB. Notably, the SB values for R751, R313, and YWSM were not affected by either HT or LT (Figure 5A); thus, R751, R313, and YWSM were restorer lines with relatively stable SB values. Neither HT nor LT affected the BK values of R313, YWSM, and Huazhan (Figure 5B). However, the BK for R751 increased under HT conditions (by 80.50%), with no significant difference under LT conditions (Figure 5B). Both HT and LT reduced the BK for 8XR274, by 55.56% and 46.70%, respectively (Figure 5B). In contrast, the BK for Koshihikari was reported to increase under HT conditions [23]; the effect of HT on the BK is thus variety-dependent. These results indicated that R313, YWSM, and Huazhan exhibited stable BK values, and that R751 had a stable BK value under LT conditions. Furthermore, the recovery values (RVs) for R751, R313, and YWSM were unaffected by either HT or LT, whereas the RV for 8XR274 decreased by 60.41% under LT and the RV for Huazhan increased by 18.56% under HT (Figure 5C). The peak times (PTs) for R751, R313, and YWSM were not affected by either HT or LT. Interestingly, both HT and LT resulted in 12.04% and 9.98% longer PTs for 8XR274, respectively (Figure 5D). Conversely, both HT and LT led to 11.93% and 10.93% shorter PTs, respectively, for Huazhan (Figure 5D). The different, even opposite, effects of improper temperature on the PT among varieties remain unexplained. Nevertheless, R751, R313, and YWSM clearly had stable RVs and PTs that were not easily affected by temperature. Overall, in terms of the SB, BK, RV, and PT, the restorer lines R751, R313, and YWSM were hardly affected by the ambient temperature. (Figure 5: the unit of starch viscosity is cp, with 1 cp = 1/12 RVU (rapid viscosity units); data are shown as mean ± standard error of triplicate measurements, and different letters indicate significant differences (p < 0.05).)
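As a hedged illustration of how such parameters are derived from a pasting curve, the sketch below computes peak viscosity, BK, SB, and PT from a hypothetical curve; definitions vary between laboratories (setback, for instance, is sometimes computed against peak rather than trough viscosity), so this is an assumed convention, not the authors' protocol. The cp-to-RVU conversion follows the 1 cp = 1/12 RVU relation noted above.

```python
CP_PER_RVU = 12.0  # from "1 cp = 1/12 RVU", i.e. 1 RVU corresponds to 12 cp

def rva_parameters(viscosity_cp, times_min, cool_start_i):
    """Peak, trough, breakdown (BK), setback (SB) and peak time (PT) from one
    pasting curve; `cool_start_i` marks where the cooling phase begins."""
    heating = viscosity_cp[:cool_start_i]
    peak_i = max(range(len(heating)), key=heating.__getitem__)
    peak = heating[peak_i]
    trough = min(viscosity_cp[peak_i:cool_start_i])   # hot-paste viscosity
    final = viscosity_cp[-1]                          # cold-paste viscosity
    return {
        "peak_cp": peak,
        "BK_cp": peak - trough,          # breakdown = peak - trough
        "SB_cp": final - trough,         # setback (one common convention)
        "PT_min": times_min[peak_i],     # peak time
        "peak_RVU": peak / CP_PER_RVU,
    }

# Hypothetical 12-point curve: heating/holding (indices 0-7), cooling (8-11)
visc = [60, 400, 1500, 2400, 2600, 2450, 2100, 1900, 2200, 2600, 2900, 3050]
time = [i * 1.0 for i in range(len(visc))]
print(rva_parameters(visc, time, cool_start_i=8))
```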
Water Solubility Index and Swelling Power of Starch Particles
The interaction between starch and water is significant for the processing quality of rice food products. To characterize the influence of improper temperature on the starch-water interaction, we compared the water solubility index and swelling power of the starch particles from the five restorer lines. As shown in Figure 6, HT increased the water solubility of R751 and R313 but decreased that of Huazhan. LT resulted in a reduction in the water solubility of R751, R313, YWSM, and Huazhan (Figure 6). However, the extent of change induced by HT was smaller than that induced by LT. The water solubility of the 8XR274 line did not change upon HT or LT treatment. Regarding swelling power, all varieties exhibited unchanged or only slightly changed values under improper temperatures compared with normal conditions (Figure 6). The results suggested that LT decreased the water solubility of most restorer lines, except 8XR274, and that the improper temperature conditions used in this study only slightly affected the swelling power. (Figure 6: water solubility index and swelling power of starch from the five restorer lines under NT, HT, and LT during the grain-filling stage; data are shown as mean ± standard error of triplicate measurements, and different letters indicate significant differences (p < 0.05).)
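A minimal sketch of the gravimetric definitions commonly used for these two indices is shown below; the function names, inputs, and numbers are illustrative assumptions, since the authors' exact protocol (heating temperature, centrifugation conditions) is not reproduced here.

```python
def wsi_percent(dried_supernatant_g, dry_sample_g):
    """Water solubility index (%): fraction of the starch sample dissolved
    after heating and centrifugation, recovered by drying the supernatant."""
    return dried_supernatant_g / dry_sample_g * 100.0

def swelling_power(sediment_g, dry_sample_g, wsi):
    """Swelling power (g/g): mass of swollen sediment per gram of
    insoluble starch remaining after dissolution."""
    return sediment_g / (dry_sample_g * (1.0 - wsi / 100.0))

wsi = wsi_percent(dried_supernatant_g=0.012, dry_sample_g=0.100)  # hypothetical
sp = swelling_power(sediment_g=0.850, dry_sample_g=0.100, wsi=wsi)
print(f"WSI = {wsi:.1f}%, swelling power = {sp:.2f} g/g")
```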
Conclusions
This study systematically analyzed and compared the effects of naturally occurring HT and LT on five male parent lines of hybrid rice from the following four aspects. Firstly, for the milling quality, LT decreased but HT increased the brown rice rates and milled rice rates of sensitive varieties, whereas both decreased their head rice rates. Secondly, for the appearance quality, HT further increased the chalkiness of varieties with high chalkiness, accompanied by abnormal starch granule morphology, while LT reduced chalkiness. Thirdly, the ACs of most tested varieties were decreased under HT but increased under LT, as reported previously [8,9,20,22,28]; however, in the susceptible variety Huazhan the effects of HT and LT on AC ran in the opposite direction, indicating the complexity of the mechanism by which temperature affects AC. Fourthly, for the starch pasting properties, in contrast to the reported effect of HT on a japonica rice cultivar [23], HT increased the SB and reduced the BK of 8XR274. Moreover, the effects of improper temperature on the ASV, GC, CLD of amylopectin, water solubility index, and swelling power of starch particles were shown to be correlated with variety.
Based on this analysis, we concluded that HT and LT usually played opposite roles in the alteration of grain quality of the susceptible varieties. Notably, R751, R313, and YWSM are superior restorer lines whose head rice rate, chalkiness degree, chalky rice rate, amylose content, alkali spreading value, and pasting properties were largely unaffected by either HT or LT. Breeding hybrid rice varieties with these restorer lines is likely to support rice quality stability and mitigate the effects of climate change.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/foods11213513/s1. Figure S1. Size distribution of the starch granules. The curves represent the starch granule volume distributions for five restorer lines. Blue, green, and red lines indicate the distribution patterns of samples from low, normal, and high temperature treatments, respectively.
"year": 2022,
"sha1": "81ea9618826f6c54b5eaf341b0fd2c08a32061ce",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/11/21/3513/pdf?version=1667555369",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eea10acdc01a36bfd6f1cfed927da37be504fc8c",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Neutrophil Gelatinase Associated Lipocalin: Is not an Early Marker Inductor for Diabetic Nephropathy in Qatari Population
Background: The WHO Global Report on Diabetes (2016) showed that the number of diabetic patients quadrupled between 1980 and 2016, while causing the death of 1.5 million people. While the global prevalence of diabetes is 9%, the prevalence of diabetes in Qatar is between 17-20%, 45% of whom developed diabetic nephropathy. Diabetic nephropathy (DN) is the largest cause of End Stage Renal Disease, and it develops in 20% of diabetic patients. Currently, DN is diagnosed by the detection of microalbumin in urine samples. However, nephropathy can be present even in the absence of albuminuria, and the levels of microalbumin in urine do not correlate with the degree of nephropathic damage. Early detection can prevent total renal failure. Studies have shown that neutrophil gelatinase-associated lipocalin (NGAL) was highly expressed even before the appearance of pathological microalbuminuria in both type 1 and type 2 diabetic patients. The levels of NGAL in urine also correlate with the degree of nephropathic damage. However, no information currently exists about the presence of NGAL in diabetic patients of the Qatari population.
Introduction
According to the 2017 report of the International Diabetes Federation, about 16.5% of the adult population of Qatar is expected to be diabetic, while 513 deaths are estimated to be due to diabetes [1]. A 2012 survey carried out by the Supreme Council of Health showed that about 17-20% of the Qatari population was diabetic, while 45% of them developed Diabetic Nephropathy (DN) [2].
DN is a microvascular complication of diabetes, and it is clinically characterized by persistent albuminuria, decline in glomerular filtration and function, and a high risk of cardiovascular mortality and morbidity. DN is the largest cause of End Stage Renal Disease (ESRD) [3]. A kidney biopsy is the most accurate way of diagnosing DN, but since it is an invasive procedure, DN is currently diagnosed mainly by testing the concentration of urine microalbumin. However, normoalbuminuria is often present even when glomerular damage is observed under the microscope. In some laboratories, the serum creatinine and blood urea nitrogen [4], as well as the estimated glomerular filtration rate, are also used as diagnostic tools. However, these parameters are affected by various other physiological and pathological factors, such as pregnancy, exercise, dehydration, cardiovascular disease and inflammation, to name a few.
Moreover, the above-mentioned criteria often manifest only after the disease has progressed. Since early detection is the key to manage nephropathy patients, it is important to determine a suitable biomarker that is detectable in the earliest stages of DN.
Neutrophil gelatinase associated lipocalin (NGAL) is one such biomarker [5]. It is secreted when there is damage to tubular cells, which precedes glomerular damage, and it is therefore seen in urine only in case of injury, and much earlier than the glomerular markers currently used. It also correlates with the degree of nephropathic damage [6]. Therefore, it is a promising biomarker for detecting DN in the early stages to facilitate efficient patient management [7]. Based on previous publications, it is hypothesized that NGAL is an early diagnostic biomarker of diabetic nephropathy. The aim of this study was to investigate the relationship between the concentration of neutrophil gelatinase associated lipocalin and kidney function.
Epidemiology of Diabetes
Diabetes mellitus (DM) is a leading cause of mortality and morbidity [8]. It is a common disease in Qatar. According to Hamad Medical Corporation [HMC] (2017), the prevalence of diabetes is 17%, and 11-23% of the population are at risk of developing diabetes [2]. Moreover, Qatar is considered one of the countries with an increased rate of glucose intolerance, affecting 17% of its population [9]. Furthermore, in 2012, the prevalence of type 2 diabetes mellitus (T2DM) in Qatari patients was reported to be 16.7%, and by 2050 it is predicted to reach 24%, with most T2DM patients aged 18 to 64 years. Additionally, the prevalences of physical inactivity, obesity and active smoking were 45.9%, 41.4% and 16.4%, respectively [9]. Obesity is considered a key factor, contributing to two-thirds of incident T2DM cases in Qatari patients. Therefore, projecting the future burden of T2DM in Qatar is crucial for controlling the prevalence of the disease through new preventive methods, early detection and therapeutic interventions [2]. Moreover, DM is a worldwide metabolic disease whose prevalence is rising, with more than one million new cases annually in the USA [8].
T2DM incidence differs significantly from one geographical area to another according to the variation of environmental, lifestyle and genetic risk factors [10]. The prevalence of T2DM in adult patients is expected to rise in the next decades and to increase greatly in developing countries [11].
Neutrophil Gelatinase-Associated Lipocalin (NGAL)
It has been observed that the renal tubules, and especially the proximal tubules, play an important role in the development of DN, and this led to the finding that several tubular factors can be detected in the urine (tubular proteinuria) even before the onset of glomerular damage or the appearance of microalbuminuria [12].
Neutrophil gelatinase-associated lipocalin (NGAL), a member of the lipocalin family, has generated great interest as a novel marker for the detection of diabetic nephropathy in its early stages [13].
It was initially identified by Allen and Venge in 1989 from human neutrophils [14]. Unlike many other endogenous biomarkers, it is not produced by just one cell type, as many different pathologies can provoke the production and release of NGAL, such as inflammation, cardiovascular diseases and others [15,16]. It is also called lipocalin-2 [17], and it has a barrel-shaped tertiary structure that binds small lipophilic molecules [18]. Siderophores, small iron-binding molecules, are the major ligands for NGAL. It is identified as a 25-kDa protein that exists in monomeric, dimeric and heteromeric forms. It covalently associates with human matrix metalloproteinase 9 (MMP-9), which is stored mainly in the specific granules of neutrophils [19], protecting it from degradation.
Functions of NGAL
NGAL is expressed at very minimal levels in several human tissues, such as the kidney, trachea, lungs, stomach, and colon. It possesses diverse functions such as transportation, activation of MMP-9, induction of apoptosis, tumorigenesis, and regulation of immune responses. NGAL also plays a renoprotective role by enhancing tubule cell proliferation in kidney injury, especially in ischemia-reperfusion injury. In fact, it is one of the most robustly expressed proteins in ischemic or nephrotoxic injury of the kidney [20,21]. Physiologically, recent research has shown that NGAL can trigger nephrogenesis by stimulating the conversion of mesenchymal cells into kidney epithelia [22]. This is because it can work as an iron-transporting protein to deliver iron, which is crucial for cell growth and development, by forming a complex with iron-binding siderophores [23]. Pathologically, accumulating evidence suggests that NGAL is tightly related to a series of renal dysfunctions.
NGAL, Diabetes and Diabetic Nephropathy
Studies have shown that the messenger RNA (mRNA) expression of NGAL is significantly higher in diabetic or obese humans and is closely associated with insulin resistance and hyperglycemia, which implies that NGAL may play an important role in type 2 diabetes [12]. It is speculated that NGAL in DN is produced principally by the injured tubule cells to protect the kidney from early injury, with its mRNA upregulated within a few hours after harmful stimuli to the kidney tubules. It belongs to the stress-induced renal biomarkers involved in the pathophysiology of diabetic nephropathy. Firstly, since injury of the renal tubule is unavoidable in the process of DN, repair mechanisms must be started by the body. As an iron-transporting protein, NGAL may be expressed by the damaged tubule to induce regeneration, since iron is necessary for re-epithelialization. Besides, the NGAL/siderophore/Fe2+ complex can up-regulate heme-oxygenase 1 (HO-1), which could limit oxidant-mediated apoptosis of renal tubule cells by limiting iron-driven oxidant stress [23]. Secondly, it is well known that the pathological changes of DN involve accumulation of extracellular matrix, which is degraded mainly by matrix metalloproteinases (MMPs).
The activities of MMPs depend on metal ions and are limited by tissue inhibitor of metalloproteinase-1 (TIMP-1), and NGAL is a universal activator of the MMP family [21]. Studies have found that the metabolic disorder in the process of DN usually destroys the MMPs/TIMP balance: the degradation ability of MMP-9 declines and the expression of TIMP-1 is up-regulated, and therefore extracellular matrix accumulates [12]. NGAL is capable of protecting MMP-9 from degradation [1]. Moreover, NGAL can activate the MMP-9 precursor directly and counteract the inhibiting effect of TIMP-1 [12]. Thus, it is presumed, based on a body of evidence, that NGAL is activated to delay the progression of renal fibrosis in DN through preservation of the enzymatic activity of MMP-9 [24]. Thirdly, an expanding body of data now strongly suggests that inflammation contributes to diabetes mellitus and DN [14,25]. The disorder of glycometabolism stimulates the expression of inflammatory factors, which can not only damage the kidney tissue directly but also provoke the secretion of type IV collagen, fibrin and others, thereby accelerating kidney sclerosis. It is suggested that NGAL acts as an immuno-modulator by binding to lipophilic inflammatory mediators, such as the neutrophil tripeptide chemoattractant, and clearing them [12]. It has also been proposed that NGAL plays a role in cell apoptosis via autocrine and paracrine pathways [26]. Thus, it contributes to the protection of the diabetic kidney by restraining inflammatory reactions and activating apoptosis of affected cells in the tubules and interstitium.
Advantages of Using NGAL to Diagnose Diabetic Nephropathy
It is known that plasma NGAL is filtered by the glomerulus and largely reabsorbed by the proximal tubules via an efficient megalin-dependent endocytosis mechanism. Thus, the excretion of NGAL in the urine happens only when reabsorption by the proximal renal tubule is blocked due to an injury that stimulates the synthesis of NGAL. The size of NGAL must also be taken into consideration: it is smaller than albumin, which explains its rapid filtration by the glomeruli and its appearance in the urine of normoalbuminuric diabetic patients [26,27]. It is detected within 2-4 hours of renal injury, even before the appearance of albumin in urine [28].
Its concentration correlates with the severity of renal impairment, varying according to the degree of chronic renal failure. Urine NGAL has been found to correspond positively with major parameters currently used in evaluating DN and renal impairment (autosomal polycystic kidney disease and glomerulonephritis), such as cystatin C, blood urea nitrogen and serum creatinine [29].
In fact, Liu et al. [30] observed that when diabetes was induced in rats, urine NGAL levels increased significantly earlier than other biomarkers following the development of lesions in the kidney tubules [30]. Serum NGAL increases in the very early stage of diabetic nephropathy and declines as the disease develops. In other words, serum NGAL is inversely related to the amount of albuminuria.
Serum NGAL has been shown to change in this way in a variety of renal dysfunctions, such as ischemia-reperfusion injury, drug-induced acute interstitial nephritis, kidney transplantation and so on. However, urine NGAL is directly related to the amount of albumin in urine. Therefore, as the disease progresses and kidney function deteriorates, the absorptive function of the renal tubules decreases, urinary NGAL excretion increases steadily, and urine NGAL reaches the highest levels in macroalbuminuric patients [31]. The amount of NGAL secreted into the urine is also correlated with the degree of kidney damage [6]. This is mainly because NGAL acts as a repair protein and is secreted into the serum, as explained earlier. As the injury worsens, there will be leakage into the urine, which increases with disease progression. Therefore, NGAL is a promising biomarker for detecting DN in the earliest stages, as it is secreted within a few hours of nephropathic damage, even before glomerular involvement. It is also indicative of the degree of nephropathic damage and is easily detected by non-invasive procedures.
Samples and Study Population
This research was designed as a cross-sectional study.
ELISA for Measuring NGAL
Enzyme-linked immunosorbent assay (ELISA) was performed using anti-human lipocalin 2 antibody (R&D Systems, Minnesota, USA) to detect NGAL in the urine samples of both diabetic and healthy patients [32].
Sample Preparation
Centrifuged urine samples were allowed to warm to room temperature, and then used in the assay.
Assay Procedure
100 µL of assay diluent (provided with the kit) was added to each well. 50 µL of standard, control or sample was then added to each well; the plate was covered with an adhesive strip and incubated for 2 hours at 2 °C. After incubation, an automatic washer and the prepared wash buffer were used to wash the wells four times, after which the plates were inverted and blotted against paper towels.
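To illustrate how such raw ELISA readings are typically converted to concentrations, the sketch below fits a four-parameter logistic (4PL) standard curve and inverts it for a sample; this is a common convention rather than the kit's documented procedure, and all standard concentrations and optical densities shown are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL model: a = response at zero dose, d = response at saturation,
    c = inflection point (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([0.16, 0.31, 0.63, 1.25, 2.5, 5.0, 10.0])  # hypothetical ng/mL
std_od = np.array([0.05, 0.09, 0.17, 0.33, 0.61, 1.05, 1.60])  # hypothetical OD

popt, _ = curve_fit(four_pl, std_conc, std_od,
                    p0=[0.05, 1.0, 3.0, 2.5], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    """Invert the fitted 4PL to estimate a sample concentration from its OD."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(round(float(od_to_conc(0.45, *popt)), 2), "ng/mL (hypothetical sample)")
```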
General Study Characteristics
Since prolonged hyperglycemia precedes DN, HbA1c was considered as a factor to predict probable damage to the kidney. Therefore, the samples were divided into three groups: non-diabetic healthy patients, diabetic patients with HbA1c > 6%, and diabetic patients with HbA1c < 6%. As shown in Figure 3, out of 123 samples, 38 were non-diabetic, 14 were diabetic with HbA1c < 6%, and 71 were diabetic with HbA1c > 6%.
Phenotypic Characteristics
This sample comprised 123 subjects with a mean age of 48.24 years, an average BMI of 30.57 kg/m² and an average weight of 84.16 kg.
An analysis was made between the three groups using ANOVA. As shown in Table 1, the data demonstrated a statistically significant difference between the mean HbA1c levels of the three groups (p < 0.001). Significant differences were also observed in the mean glucose levels (p < 0.001) and age of the patients (p < 0.001) in each group. However, no statistically significant difference was observed between the mean levels of the other investigated parameters: NGAL, BMI, weight, serum total protein and albumin, eGFR, BUN, SCr and c-peptide of insulin (p > 0.05). Note: * represents statistical significance (p < 0.05); the data shown represent the mean and standard deviation.
Microalbuminuria
Of the 30 samples selected from the group of diabetic patients having HbA1c> 6 g% and tested for albuminuria, 5 patients (17%) tested positive for micro/macroalbuminuria, while the rest (83%) were normoalbuminuric, as demonstrated in Figure 4.
Microalbuminuria was defined as the presence of more than 30 mg of albumin per g of creatinine in urine.
Figure 4: Distribution of patients based on albuminuria.
Note: * 30 patients were chosen from the group of diabetic patients with HbA1c>6%
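As a minimal sketch of the categorization used above, the snippet below classifies a spot-urine albumin-to-creatinine ratio (ACR); the 30 mg/g cut-off comes from the text, while the 300 mg/g micro/macro boundary is the conventional one and is assumed here.

```python
def classify_albuminuria(albumin_mg, creatinine_g):
    """Classify a spot-urine sample by its albumin-to-creatinine ratio (mg/g)."""
    acr = albumin_mg / creatinine_g
    if acr < 30:
        return "normoalbuminuria"
    if acr <= 300:  # assumed conventional upper bound for microalbuminuria
        return "microalbuminuria"
    return "macroalbuminuria"

print(classify_albuminuria(albumin_mg=45.0, creatinine_g=1.0))  # microalbuminuria
```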
Correlational Analysis
To study the correlation between uNGAL and kidney function, Pearson's correlation coefficient was calculated, with the uNGAL concentration as the dependent factor, and the results are tabulated in Table 2. In the non-diabetic group, no significant correlations were found between uNGAL concentrations and any of the investigated parameters (p > 0.05), except serum albumin (p < 0.05). Similarly, in the group of diabetic patients with HbA1c < 6%, no significant correlations were found between uNGAL concentrations and any of the investigated parameters (p > 0.05) (Figures 5-7). In the group of diabetic patients with HbA1c > 6%, there were significant correlations between uNGAL concentrations and glucose (p = 0.023), HbA1c (p = 0.026) and serum albumin (p = 0.001). Since significant correlations were discovered, the R² value was also calculated to determine the degree of correlation, and the results are demonstrated in Figures 8 & 9. The degree of correlation was observed to be weak (R² values closer to 0). Note: * represents statistical significance (p < 0.05).
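A short sketch of this correlational analysis is shown below, using randomly generated stand-ins for uNGAL and HbA1c rather than study data; the point is simply how r, p, and R² relate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ungal = rng.normal(50, 15, 71)                   # hypothetical uNGAL, n = 71
hba1c = 7 + 0.02 * ungal + rng.normal(0, 1, 71)  # hypothetical, weakly related

r, p = stats.pearsonr(ungal, hba1c)              # Pearson's correlation
print(f"r = {r:.3f}, p = {p:.3f}, R^2 = {r**2:.3f}")  # R^2 near 0 => weak
```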
Discussion
Diabetes is a growing global epidemic. In the group of diabetic patients with HbA1c > 6%, uNGAL correlated significantly with glucose and HbA1c, although the correlation was weak and positive (R² = 0.06 and 0.07, respectively). This implies that uNGAL is secreted in larger amounts following prolonged hyperglycemia, which may be related to the pathogenesis of DN.
Moreover, a significant weak negative correlation was observed between uNGAL and serum albumin concentrations (p < 0.05 and R² = 0.0007), which means that as serum albumin decreases, the uNGAL concentration increases. A decrease in serum albumin is often due to its increased loss in urine. Therefore, it can be inferred that there might be a positive correlation between uNGAL and urine albumin concentrations, which has been reported by Papadopoulou-Marketou and colleagues (2017), among others [34]. The discrepant results could be attributed to the small sample size in this study and the variation in age, ethnicity and stage of the disease, and no conclusion can be drawn from these results. Moreover, it should be taken into consideration that the complications of diabetes and prolonged hyperglycemia can include other microvascular complications, such as retinopathy and neuropathy, or macrovascular complications, such as atherosclerosis, cardiovascular disease and coronary heart disease, among others. Since sufficient clinical information about the patients was not available, it was difficult to confirm the status of the other patients in group 3 (diabetic; HbA1c > 6%). Since this study tries to correlate disease development with structural involvement through the secretion of a protein, an observational or prospective design would be more beneficial than the cross-sectional format used. Moreover, it would prove useful to test the hypothesis on a larger sample size.
"year": 2019,
"sha1": "639ef06c959cc802cfcbf71cd8b0f364210ec53c",
"oa_license": "CCBY",
"oa_url": "https://biomedres.us/pdfs/BJSTR.MS.ID.003692.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "800726d739dfacf231100df702d409657d0afb60",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Implementation of a telephone-based secondary preventive intervention after acute coronary syndrome (ACS): participation rate, reasons for non-participation and 1-year survival
Background Acute coronary syndrome (ACS) is a major cause of death from a non-communicable disease. Secondary prevention is effective for reducing morbidity and mortality, but evidence-based targets are seldom reached and new interventional methods are needed. The present study is a feasibility study of a telephone-based secondary preventive programme in an unselected ACS cohort. Methods The NAILED (Nurse-based Age-independent Intervention to Limit Evolution of Disease) ACS trial is a prospective randomized controlled trial. All eligible patients admitted for ACS were randomized to usual follow-up by a general practitioner or telephone follow-up by study nurses. The intervention consisted of continuous telephone contact, with counseling on healthy living and titration of medicines to reach target values for blood pressure and blood lipids. Exclusion criteria were limited to physical inability to follow the study design or participation in another study. Results A total of 907 patients were assessed for inclusion. Of these, 661 (72.9 %) were included and randomized, 100 (11 %) declined participation, and 146 (16.1 %) were excluded. The main reasons for exclusion were participation in another trial, dementia, and advanced disease. "Excluded" and "declining" patients were significantly older, had more co-morbidity and decreased functional status, and had less often received education above compulsory school level than "included" patients. Non-participants had a higher 1-year mortality than participants. Conclusions Nurse-led telephone-based follow-up after ACS can be applied to a large proportion of patients in an unselected clinical setting. Reasons for non-participation, which were associated with increased mortality, include older age, multiple co-morbidities, decreased functional status and low level of education. Trial registration International Standard Randomized Controlled Trial Number (ISRCTN): ISRCTN96595458 (archived by WebCite at http://www.webcitation.org/6RlyhYTYK). Application date: 10 July 2011.
Background
Survival has improved remarkably for acute cardiovascular disease over the last few decades due to improved treatment, evidence-based guidelines and effective medications for reducing secondary morbidity. However, cardiovascular disease is still the major cause of death from a noncommunicable disease, accounting for 17.3 million deaths annually worldwide, four million in Europe alone. The total cost for society is estimated to be US$863 billion worldwide and US$190 billion in Europe annually [1].
The risk of death, recurrent acute coronary syndrome (ACS) or stroke/transient ischemic attack (i.e., cerebrovascular lesion, CVL) is markedly increased after an initial cardiac ischemic event [2]. The most important means of reducing mortality after ACS is secondary prevention [3,4]. To address this, evidence-based guidelines on secondary intervention after cardiovascular disease have been issued both nationally and internationally [5]. Medical treatment alone has been shown to reduce the risk of re-infarction and death by 20-30 % [6]. However, multiple studies have shown that compliance with guidelines is surprisingly low [7,8], though increasing slightly [4]. In Sweden in 2013, only 12 % of ACS patients in the SWEDEHEART registry reached targets for the four most important preventable risk factors at follow-up (i.e., smoking, blood pressure, low-density lipoprotein (LDL) and physical exercise) [9]. In the EUROASPIRE IV survey, only one fifth of those on lipid-lowering medication reached the target for LDL cholesterol (<1.8 mmol/L), and less than one third achieved targets for blood pressure, despite a high prevalence of modern medication use [10].
Thus, a more effective method of achieving secondary prevention targets is needed [11]. One method, which has proven to be cost-effective, is follow-up and medical titration via telemedical methods [12][13][14]. The NAILED (Nurse-based Age-independent Intervention to Limit Evolution of Disease) study is an on-going randomized controlled study on secondary preventive measures after ACS and CVL. The aim of the NAILED study was to evaluate whether nurse-based follow-up via telephone is a more effective method of reaching set target values for blood lipids and blood pressure than ordinary care by a general practitioner (GP) [15]. The aim of this present study was to examine the feasibility of the NAILED protocol in ACS patients on a population basis. We examined the rate of participation, rate and reasons for non-participation, differences between the groups and the 1-year mortality rate.
Methods
The overall NAILED study was designed as a non-blinded prospective randomized controlled trial. Participants in the present study comprised all who were considered for inclusion in the NAILED ACS risk factor trial between 1 January 2010 and 31 January 2013. The exclusion criteria were limited to an inability to adhere to a telephone intervention and participation in another trial. The trial was conducted at Östersund County hospital, Jämtland, Sweden, the only hospital in the county. The hospital has a rural catchment area of approximately 126,000 inhabitants, and is where all patients with suspected ACS in the county are sent. The ACS diagnosis was defined as myocardial infarction type 1 [16], comprising ST-elevation myocardial infarction (STEMI) and non-ST-elevation infarction (NSTEMI) or unstable angina (UA) based on symptoms of myocardial ischemia together with electrocardiographic changes (ST depression or T wave changes) indicative of myocardial ischemia.
During in-hospital care, all patients diagnosed with ACS were evaluated based on baseline data in interview and medical records (i.e., clinical status, risk factors and co-morbidities) by specially trained study nurses. Patients eligible for inclusion at discharge were randomized to follow-up as usual by a GP (control) or by a study nurse (case). An extensive description of the study design for the randomized trial is available in the study protocol [15]. Briefly, all cases were contacted by telephone 1 month after discharge and then annually for 3 years with prior measurements of standardized blood pressure and appropriate blood specimens. The cases were counseled on compliance, smoking cessation and exercise, and their medication titrated to reach set targets for blood pressure and lipids. A study physician made decisions about interventions. Follow-up occurred a month after every intervention, and a new titration was made if necessary. Unscheduled appointments could be made at the patient's request. Controls had their standardized blood pressure and the same blood samples taken at the same interval, and the results were reported to their GP and the study nurses. The aim of the overall study is to evaluate the hypothesis that a nurse-conducted telephone-based approach to secondary prevention will lead to a significantly larger proportion of patients reaching set targets for blood pressure and LDL cholesterol compared to usual practice.
The following reasons for exclusion were documented: participation in another on-going medical trial or not being able to participate according to the study design (i.e., not being physically or cognitively able to handle a telephone, or being unable to commute to have blood samples drawn). A 1-year follow-up of mortality was performed in all groups for assessment. Causes of mortality were classified based on the Swedish National Cause of Death Register or, if insufficient, medical records. Kidney function was measured as the estimated glomerular filtration rate (eGFR) based on the creatinine value at admission using the CKD-EPI formula. Every patient's functional status was assessed according to the modified Rankin scale (mRS) by the study nurses.
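For illustration, a hedged sketch of the 2009 CKD-EPI creatinine equation referenced here is given below (without the race coefficient that some implementations include); serum creatinine is in mg/dL, and the example values are hypothetical.

```python
def ckd_epi_egfr(scr_mg_dl, age, female):
    """Estimated GFR (mL/min/1.73 m^2) from the 2009 CKD-EPI creatinine formula."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    return egfr * 1.018 if female else egfr

print(round(ckd_epi_egfr(scr_mg_dl=1.1, age=72, female=False), 1))  # hypothetical
```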
Statistical analyses were performed with IBM SPSS statistics software v 22.
All patients, or their next of kin, gave verbal informed consent to the collection of baseline data, and the study was approved by the Regional Ethical Review Board, Umeå University, Umeå, Sweden.
Statistical analysis
Patients were subdivided into the following categories: "included" (eligible for inclusion and willing to participate), "declined" (eligible for inclusion but not willing to participate) and "excluded" (not eligible for inclusion). We compared basic characteristics and variables of interest between the three groups with two-sided chi-square tests, Fisher's exact test or the independent samples t test, as appropriate. A p value <0.05 was considered significant.
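A minimal sketch of these three comparisons on made-up data is shown below; the counts and distributions are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

table = np.array([[320, 241],   # hypothetical included: male / female
                  [45, 55]])    # hypothetical declined: male / female

chi2, p_chi2, dof, _ = stats.chi2_contingency(table)   # two-sided chi-square
_, p_fisher = stats.fisher_exact(table)                # exact test for small cells

rng = np.random.default_rng(2)
age_included = rng.normal(70, 10, 561)                 # hypothetical ages
age_declined = rng.normal(76, 9, 100)
t, p_t = stats.ttest_ind(age_included, age_declined)   # independent samples t test

print(f"chi2 p = {p_chi2:.3g}, Fisher p = {p_fisher:.3g}, t-test p = {p_t:.3g}")
```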
To identify independent predictors of the decision to not participate, we set up a multivariate logistic regression model of variables with an alpha level <0.1 in the univariate analyses between the "included" and "declined" groups. We then performed manual stepwise exclusion based on the level of significance. To evaluate independent baseline characteristics important for exclusion, we set up a second multivariate model between included and excluded patients in the same manner. Sex and age were included in both regression models regardless of statistical significance. The results are presented as odds ratios (ORs) with 95 % confidence intervals (CIs). We categorized continuous variables in the multivariate models.
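The sketch below illustrates this kind of multivariate logistic model on simulated data, with ORs and 95% CIs obtained by exponentiating the coefficients; the predictor names echo the study's variables, but every number is simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 661
age85 = rng.binomial(1, 0.15, n)     # simulated: age >= 85 years
female = rng.binomial(1, 0.35, n)    # simulated: female sex
mrs_gt3 = rng.binomial(1, 0.10, n)   # simulated: mRS > 3
logit = -2.5 + 1.0 * age85 + 0.5 * female + 1.2 * mrs_gt3
declined = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # simulated outcome

X = sm.add_constant(np.column_stack([age85, female, mrs_gt3]))
fit = sm.Logit(declined, X).fit(disp=0)

ors = np.exp(fit.params)                     # odds ratios
ci = np.exp(fit.conf_int())                  # 95% confidence intervals
for name, o, bounds in zip(["const", "age>=85", "female", "mRS>3"], ors, ci):
    print(f"{name}: OR {o:.2f} (95% CI {bounds[0]:.2f}-{bounds[1]:.2f})")
```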
To assess 1-year cumulative survival we made Kaplan-Meier estimations with group comparisons using the log rank test. To calculate ORs for mortality, we used univariate logistic regression.
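For the survival comparison, a sketch using the `lifelines` package on simulated follow-up data is shown below; the event times are random stand-ins chosen only to mimic the pattern of higher early mortality among non-participants.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(4)
t_inc = np.minimum(rng.exponential(5000, 661), 365)  # days, censored at 1 year
t_dec = np.minimum(rng.exponential(1800, 100), 365)
d_inc = (t_inc < 365).astype(int)                    # 1 = died, 0 = censored
d_dec = (t_dec < 365).astype(int)

kmf = KaplanMeierFitter()
kmf.fit(t_inc, event_observed=d_inc, label="included")
print(kmf.survival_function_.tail(1))                # cumulative survival at 1 year

res = logrank_test(t_inc, t_dec, event_observed_A=d_inc, event_observed_B=d_dec)
print(f"log-rank p = {res.p_value:.4f}")
```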
Baseline patient characteristics are presented in Table 1. In a univariate comparison between the "included" and "excluded" subgroups, we found that the "excluded" group consisted of significantly older patients and a larger proportion with an mRS score >3. The "excluded" group had a larger proportion of women and a lower proportion of patients with an education above compulsory school level. Excluded patients had a lower body mass index (BMI), decreased kidney function, and a larger proportion with previous ischemic heart disease (IHD) and congestive heart failure (CHF). The "included" group also had fewer patients with atrial fibrillation, prior hypertension or diabetes. No variables were missing more than 3 % of data except "known hyperlipidemia" (missing 25.6 % of data), which was therefore excluded from all further analyses.
Patients who declined participation had significantly different baseline characteristics than "included" patients. Declining patients were generally older, with a larger proportion of women and of patients with mRS >3, and a larger proportion lacked education above compulsory school level. Also, the eGFR was lower in the "declining" group. Regarding established risk factors, a larger proportion of patients in the group that declined participation had previous cerebrovascular disease, CHF and atrial fibrillation. Patients who declined participation differed from excluded patients only in regard to a lower proportion with an mRS score >3 or education above compulsory school level, although these differences were not significant.
In the first multivariate analysis, female sex, mRS >3, education limited to compulsory school level and age 85 years or older were significantly associated with a decision to decline participation (Fig. 2). In the second multivariate model, age 85 years or older, mRS >3 and known CHF were associated with exclusion (Fig. 2). During the first year after discharge, 88 (9.7 %) patients died: 43 in the "included" group (6.5 %), 29 in the "excluded" group (19.9 %) and 16 in the "declining" group (16 %). Regarding the 1-year mortality prognosis, we found a significant difference between the "included" group and the "excluded" and "declining" groups (p <0.001, Fig. 3). The cumulative survival was not significantly different (p = 0.21) between the "excluded" and "declining" groups. However, the Kaplan-Meier curves revealed increased mortality during the first few months for the "excluded" group. Cardiovascular causes of death were non-significantly more common (52.1 % versus 47.9 %) than non-cardiovascular causes in the overall population.
Discussion
This study was a non-participation feasibility study of a telemedical method to improve adherence to secondary preventive measures after a coronary ischemic event. Almost three quarters of admitted patients were eligible for inclusion in our community-based rural cohort. The main reasons for non-participation were declining by one's own will, participation in another medical trial, dementia and severe cardiovascular disease. In univariate analyses, factors indicating a larger burden of co-morbidity (decreased eGFR and BMI and previous heart disease or CVL) were significant for non-participation, as was female sex. We can only speculate about the result for women, but our subanalyses showed a significantly lower level of education among women compared to men (data not shown). Low education is considered a measure of socioeconomic status (SES) and is known to increase cardiovascular risk [17].
Our study showed a significant increase in 1-year mortality for non-participants. Since the groups are selected, as shown in Table 1, the mortality analysis is only descriptive and illustrates the high early mortality in non-participants. Subjective reasons for declining participation are multifaceted. We tried to objectify these reasons using a multivariate regression model of quantifiable surrogate variables. This analysis showed that the main variables behind the decision to decline were older age, increased disability as measured by mRS, and education limited to compulsory school level. Predictors of exclusion in the second multivariate regression model were older age, reduced autonomy on the mRS and CHF. Notably, the "excluded" and "declined" groups only differed in terms of autonomy (mRS >3) in the univariate comparison.
Reducing the burden of modifiable risk factors is an important goal for both individuals and society. Therefore, a standardized and cost-effective programme that can be applied to a large proportion of patients in both urban and rural settings is needed. We designed the NAILED study without predefined exclusion criteria except participation in another trial or inability to adhere to the concept of a telephone intervention. We intended to mimic a natural cohort and clinical setting as much as possible. We found that a simple follow-up method made it possible to include a large proportion of an unselected population-based ACS cohort.
To the best of our knowledge, this is the first study of the implementation of a multifactorial secondary preventive programme at the population level. Other studies on telephone-based secondary prevention programmes are restricted in terms of more extensive inclusion or exclusion criteria, age or small selected population samples [12,18]. Consequently, direct comparisons regarding rates of participation and mortality are difficult.
A dilemma with existing secondary preventive programmes is that they tend to be more accessible to patients with higher income and education. The PURE study concluded that, even though high-income countries have the highest prevalence of cardiovascular risk factors, mortality is lower than in low-income countries due to less effective healthcare in the latter [19,20]. In a study by Bergström et al., prognosis after AMI worsened with lower SES despite a statefunded healthcare system with a strong egalitarian tradition and after adjusting for traditional risk factors [21]. Jelinek et al. showed that individual coaching via telephone reduced inequalities in secondary preventive target fulfillment due to social class [22]. Eighteen months after the intervention, the effects were sustained [23].
The present study was conducted in a rural setting in Sweden. However, the present secondary prevention programme can be implemented in both urban and rural settings in large parts of the world as much of the intervention was by telephone with little demand for travel. Intervening health workers need only basic training and access to a consulting physician.
It is important that a secondary prevention strategy is designed so that all inhabitants can take part, regardless of age and co-morbidities. The NAILED protocol was constructed to be as inclusive and comprehensive as possible. Because low education and reduced autonomy were independent factors for a patient's unwillingness to participate, we speculate that there is an increased need for education at discharge in these groups. Older patients with advanced disease are more challenging, but studies indicate few adverse effects from secondary preventive interventions and potential benefits on morbidity, with varying effects on mortality [24,25]. Further studies need to be conducted to elucidate how we can reach our oldest patients with multi-morbidity. It would also be important to identify settings in which preventive efforts are futile.
Strengths and limitations
A strength of this study is that the cohort consists of an unselected cardiovascular population. Only one hospital is in the catchment area, which gives us a good overview of the cohort and local treatment traditions. During an initial 3-month control period, no missed cases were found. A weakness is that this is a single center study and general applicability may be questioned. We also lack variables to extensively evaluate SES. Additionally, patients not referred to the cardiology department, probably due to clinicians regarding interventions as more harmful than useful in the specific patient's circumstances, could have been missed despite the study nurses surveying all departments outside the cardiac care unit.
Conclusion
Nurse-led telephone-based follow-up of secondary prevention after coronary ischemic events can be applied to a large proportion of patients in an unselected clinical setting. Increased mortality is seen in those who do not participate. Reasons for non-participation include older age, multiple co-morbidities, decreased functional status and education limited to compulsory school level.
"year": 2016,
"sha1": "f5397b0cd55adc2448e6cfc37d23837bc92fa3a9",
"oa_license": "CCBY",
"oa_url": "https://trialsjournal.biomedcentral.com/track/pdf/10.1186/s13063-016-1203-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f5397b0cd55adc2448e6cfc37d23837bc92fa3a9",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Emergent Universe Scenario, Bouncing and Cyclic Universes in Degenerate Massive Gravity
We consider alternative inflationary cosmologies in massive gravity with degenerate reference metrics and study the feasibility of the emergent universe scenario and of bouncing and cyclic universes. We focus on the construction of the Einstein static universe and on classes of exact solutions of bouncing and cyclic universes in degenerate massive gravity. We further study the stability of the Einstein static universe against both homogeneous and inhomogeneous scalar perturbations and give the parameter region for a stable Einstein static universe.
Introduction
General relativity (GR), as a classical theory describing the non-linear gravitational interaction of massless spin-2 fields, is widely accepted in the low energy limit. Nevertheless, there are still several motivations to modify GR, based on both theoretical considerations (e.g. [1,2]) and observations (e.g. [3,4]). One proposal, initiated by Fierz and Pauli [2], is to assume that the mass of the graviton is nonzero. Unfortunately, the interactions of massive spin-2 fields in Fierz-Pauli massive gravity were long thought to give rise to ghost instabilities [5]. Recently, this problem was resolved by de Rham, Gabadadze, and Tolley (dRGT) [6], and dRGT massive gravity has attracted great attention and has been studied in various areas such as cosmology [7][8][9][10] and black holes [11,12]. We refer to e.g. [13][14][15] and references therein for a comprehensive introduction to massive gravity.
There are several extensions of dRGT massive gravity with different physical motivations, such as bi-gravity [16], multi-gravity [17], minimal massive gravity [18], mass-varying massive gravity [19], degenerate massive gravity [20] and so on [21]. Among these, degenerate massive gravity was initially proposed by Vegh [20] to study holographically a class of strongly interacting quantum field theories with broken translational symmetry. Later this theory has been studied widely in the holographic framework [22][23][24][25] and in black hole physics [26][27][28][29][30]. However, the cosmological applications of this theory are few. Recently, together with suitable cubic Einstein-Riemann gravities and some other matter fields, degenerate massive gravity was used to construct exact cosmological time crystals [31] with two jumping points, which provides a new mechanism of spontaneous time translational symmetry breaking to realize bouncing and cyclic universes that avoid the initial spacetime singularity. It is worth noting that it is higher derivative gravity, not massive gravity, that is indispensable for the realization of cosmological time crystals, which involve a discontinuity in the time derivative of the cosmological scale factor at the turning points. On the other hand, one can also consider smooth bouncing universes and cyclic models. In the framework of Einstein gravity, such models necessarily violate the energy conditions. In this paper, we consider degenerate massive gravity to study these models.
Actually, it is valuable to investigate alternative inflationary cosmological models within the standard big bang framework, because traditional inflationary cosmology [32][33][34][35] suffers from both the initial singularity problem [36] and the trans-Planckian problem [37]. By introducing a mechanism for a bounce in the cosmological evolution, both the trans-Planckian problem and the initial singularity can be avoided. The bouncing scenario can be constructed via many approaches, such as the matter bounce scenario [38], the pre-big-bang model [39], the ekpyrotic model [40], string gas cosmology [41], cosmological time crystals [31] and so on [42][43][44].
The cyclic universe, e.g. [45], can be viewed as an extension of the bouncing universe, and it brings new insight into the original observable Universe [46]. Another direct resolution of the initial singularity, proposed by Ellis et al. [47,48], is the emergent universe scenario, in which the universe inflates from a static beginning, i.e., the Einstein static universe, and reheats in the usual way. In this scenario, the initial universe has a finite size and is past-eternal, and then evolves to an inflationary era in the standard way. Both the horizon problem and the initial singularity are absent due to the initial static state. These alternative inflationary cosmologies have been studied in different classes of massive gravity: bouncing and cyclic universes have been studied in mass-varying massive gravity [49], and the emergent scenario has been studied in dRGT massive gravity [50,51] and bi-gravity [52,53]. To our knowledge, these alternative inflationary models have not been studied in degenerate massive gravity. For this purpose, we study the feasibility of the emergent universe, bouncing universes, and cyclic universes in massive gravity with degenerate reference metrics.
The remaining part of this paper is organized as follows. In Sec. 2, we give a brief review of massive gravity and its equations of motion. In Sec. 3, we study the emergent universe in degenerate massive gravity with a perfect fluid: we first obtain exact Einstein static universe solutions in several cases, then give the linearized equations of motion and discuss the stability against both homogeneous and inhomogeneous scalar perturbations.
We give the parameter regions of stable Einstein static universes. In Sec. 4, we construct exact solutions of bouncing and cyclic universes in degenerate massive gravity with a cosmological constant and axions. We conclude the paper in Sec. 5.
Massive gravity
In this section, following e.g. [6], we briefly review massive gravity. The four-dimensional action S of massive gravity is given by

$$S = \frac{M_{pl}^2}{2}\int d^4x\,\sqrt{-g}\,\Big(R + m^2\sum_i c_i\,\mathcal{U}_i\Big) + S_m\,, \tag{2.1}$$

where $M_{pl}$ is the Planck mass (we set $M_{pl}^2/2 = 1$ in the rest of the discussion), $S_m$ is the action of the matter fields, $R$ is the Ricci scalar, $g$ represents the determinant of $g_{\mu\nu}$, $m$ represents the mass of the graviton, the $c_i$ are free parameters, and the $\mathcal{U}_i$ are interaction potentials, which can be expressed as follows:

$$\mathcal{U}_2 = [K]^2 - [K^2]\,,$$
$$\mathcal{U}_3 = [K]^3 - 3[K][K^2] + 2[K^3]\,,$$
$$\mathcal{U}_4 = [K]^4 - 6[K]^2[K^2] + 3[K^2]^2 + 8[K][K^3] - 6[K^4]\,,$$

where the square brackets denote traces, such as $[K] = \mathrm{Tr}\,K = K^\mu{}_\mu$. The tensor $K^\mu{}_\nu$ is given by

$$K^\mu{}_\nu = \delta^\mu{}_\nu - \big(\sqrt{g^{-1}f}\,\big)^\mu{}_\nu$$

and obeys

$$\big(\sqrt{g^{-1}f}\,\big)^\mu{}_\lambda\,\big(\sqrt{g^{-1}f}\,\big)^\lambda{}_\nu = g^{\mu\lambda} f_{\lambda\nu}\,,$$

where $f$ is a fixed symmetric tensor called the reference metric, given by

$$f_{\mu\nu} = \eta_{ab}\,\partial_\mu\phi^a\,\partial_\nu\phi^b\,, \tag{2.5}$$

where $\eta_{ab} = \mathrm{diag}(-1, 1, 1, 1)$ is the Minkowski background and the $\phi^a$ are the Stückelberg fields introduced to restore diffeomorphism invariance [54]. In the limit $m \to 0$, massive gravity reduces to GR. The equations of motion take the form

$$G_{\mu\nu} + m^2 X_{\mu\nu} = T_{\mu\nu}\,,$$

where $X_{\mu\nu}$ arises from the variation of the interaction potentials and the energy-momentum tensor is $T_{\mu\nu} = -\frac{1}{\sqrt{-g}}\frac{\partial S_m}{\partial g^{\mu\nu}}$. We refer to e.g. [13][14][15] and references therein for more details of massive gravity.
Generally, all the Stückelberg fields φ^a are nonzero in massive gravity and the matrix f in (2.5) has full rank, i.e., rank(f) = 4. In Ref. [20], there are only two nonzero spatial Stückelberg fields, which break the general covariance of massive gravity; the matrix f then has rank 2 and is thus degenerate. Massive gravity with degenerate reference metrics is called degenerate massive gravity. For our purpose, we set only the temporal Stückelberg field to zero, so that the massive gravity considered in this paper has a degenerate reference metric of rank 3. The unitary gauge of the corresponding Stückelberg fields is defined simply by φ^a = x^μ δ^a_μ, so the φ^a are given by [31]

$$\phi^a = a_m\,(0, x, y, z)\,, \tag{2.9}$$

in the basis (t, x, y, z), where a_m is a positive constant.
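To make the degenerate structure concrete, the following short numerical sketch (Python with NumPy/SciPy; the values of a and a_m are arbitrary placeholders, and a single flat-FLRW snapshot g = diag(−1, a², a², a²) is used for illustration) builds the rank-3 reference metric implied by (2.9) and evaluates K^μ_ν and a trace combination entering the potentials:

```python
import numpy as np
from scipy.linalg import sqrtm

# Snapshot of a spatially flat FLRW background: g = diag(-1, a^2, a^2, a^2)
a, a_m = 0.5, 1.0                        # placeholder values with a < a_m
g = np.diag([-1.0, a**2, a**2, a**2])

# Degenerate reference metric from phi^a = a_m*(0, x, y, z): rank 3
f = a_m**2 * np.diag([0.0, 1.0, 1.0, 1.0])

ginv_f = np.linalg.inv(g) @ f            # (g^{-1} f)^mu_nu
K = np.eye(4) - np.real(sqrtm(ginv_f))   # K = 1 - sqrt(g^{-1} f)

tr = lambda M: np.trace(M)
U2 = tr(K)**2 - tr(K @ K)                # [K]^2 - [K^2]
print("rank(f) =", np.linalg.matrix_rank(f))   # -> 3, the degenerate case
print("[K] =", tr(K))                          # = 1 + 3*(1 - a_m/a) here
print("U2 =", U2)
```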
Emergent universe scenario
In this section, we consider the realization of the emergent universe scenario in the context of degenerate massive gravity. We consider only a spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) metric, because the Stückelberg fields in degenerate massive gravity are chosen in a spatially flat basis; moreover, according to the latest astronomical observations [55,56], the Universe is in good agreement with the standard spatially flat case. In the following discussion, we assume that the matter content is a perfect fluid. First, we construct the Einstein static universe in several cases; then we study its stability against both homogeneous and inhomogeneous scalar perturbations.
Einstein static universe
The spatially flat FLRW metric is given by

$$ds^2 = -dt^2 + a(t)^2\left(dx^2 + dy^2 + dz^2\right), \tag{3.1}$$

and the energy-momentum tensor of the perfect fluid is

$$T_{\mu\nu} = (\rho + P)\,u_\mu u_\nu + P\,g_{\mu\nu}\,, \qquad P = w\rho\,,$$

where ρ and P represent the energy density and pressure, respectively, w is the constant equation-of-state (EOS) parameter, and the velocity 4-vector is given by u^μ = (1, 0, 0, 0), satisfying u_μ u^μ = −1.
The Friedmann and acceleration equations (3.4)-(3.5) then follow from the equations of motion, where the dot denotes the derivative with respect to time. To obtain the Einstein static universe, we set the scale factor a(t) = a₀ = const. ≠ 0 and ȧ = ä = 0, and we require a₀ < a_m [10] to avoid the ghost excitation of massive gravity. The energy density ρ can be solved from the Friedmann equation (3.4), as in (3.6)-(3.7). Substituting Eqs. (3.6) and (3.7) into (3.5), the final independent equation is

$$e_3 n^3 + e_2 n^2 + e_1 n + e_0 = 0\,, \tag{3.8}$$

where n ≡ a₀/a_m and

$$e_0 = -3(c_3 + 4c_4)\,w\,, \qquad e_1 = 7 + 3w + 6(c_3 + 2c_4)(1 + 3w)\,, \tag{3.9}$$

with e₂ and e₃ determined analogously. The Einstein static universe solution is then a₀ = n a_m. Because Eq. (3.8) contains several free parameters, we discuss its roots case by case. When the cubic reduces to a quadratic, the Einstein static universe (3.10) can exist in two branches, n = n₊ and n = n₋; the existence of each branch requires two parameter regions, subject to the reality conditions (3.6) and (3.7), and the special choice c₄ = (5 − 9c₃)/12 yields additional solutions of the form (3.21). In the remaining case, Eq. (3.8) can be rewritten in the depressed form

$$\hat n^3 + \hat e_1\,\hat n + \hat e_0 = 0\,. \tag{3.23}$$

For 4ê₁³ + 27ê₀² ≤ 0 and ê₁ < 0, there are three real solutions (3.24); for 4ê₁³ + 27ê₀² > 0 and ê₁ < 0, there is one real solution (3.25); for ê₁ > 0, there is one real solution (3.26); and for ê₁ = 0, there is the single real solution n̂ = (−ê₀)^{1/3} (3.27). Substituting these solutions into Eqs. (3.23) and (3.6) gives the corresponding static radii and energy densities.
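Once all four coefficients of the cubic (3.8) are assembled for a given (c₃, c₄, w), the physical static radii follow from its real roots with 0 < n < 1 (so that a₀ < a_m). A minimal sketch in Python: e₀ and e₁ are taken from Eq. (3.9), while the numerical values used for e₂ and e₃ are hypothetical placeholders, since their full expressions are not reproduced above.

```python
import numpy as np

def static_radii(e3, e2, e1, e0, a_m=1.0):
    """Real roots n of e3*n^3 + e2*n^2 + e1*n + e0 = 0 with 0 < n < 1,
    returning the Einstein static radii a0 = n*a_m (a0 < a_m avoids the
    ghost excitation of massive gravity)."""
    roots = np.roots([e3, e2, e1, e0])
    real = roots[np.abs(roots.imag) < 1e-10].real
    return sorted(n * a_m for n in real if 0.0 < n < 1.0)

c3, c4, w = 1.0, -0.25, 0.1
e0 = -3.0*(c3 + 4.0*c4)*w                            # from Eq. (3.9)
e1 = 7.0 + 3.0*w + 6.0*(c3 + 2.0*c4)*(1.0 + 3.0*w)   # from Eq. (3.9)
e2, e3 = -12.0, 4.0                                  # placeholders for the omitted coefficients
print(static_radii(e3, e2, e1, e0))
```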
Linearized Massive Gravity
Now we study the linearized massive gravity with degenerate reference metrics, using a bar and a tilde to denote the background and perturbation components, respectively. First, we obtain the linearized equations of motion. The perturbed metric can be written as

$$g_{\mu\nu} = \bar g_{\mu\nu} + h_{\mu\nu}\,,$$

where ḡ_{μν} is the background metric, given by Eq. (3.1) with a = a₀, and h_{μν} is a small perturbation. For our purpose, we consider scalar perturbations in the Newtonian gauge, in which h^μ_ν is parametrized by two potentials Ψ and Φ, both functions of (t, x, y, z); for scalar perturbations it is useful to perform a harmonic decomposition [99]. From here on, indices are lowered and raised by the background metric unless otherwise stated. Using the relation g^{μν} g_{νλ} = δ^μ_λ, the inverse metric is perturbed as g̃^{μν} = −h^{μν}, so the perturbed M can also be written in terms of its time ("0") and space ("i, j") components, where a repeated index does not imply the Einstein summation convention. For the perfect fluid, the perturbations of the energy density and pressure are ρ̃ and P̃ = w ρ̃, respectively, and the perturbation of the velocity is parametrized by U, with ρ̃ and U also functions of (t, x, y, z); the perturbed energy-momentum tensor then follows, with the background velocity given above. It is useful to perform a harmonic decomposition of (Ψ, Φ, ρ̃, U) into modes (Ψ_k, Φ_k, ρ̃_k, U_k)(t) (3.42), where summation over the co-moving wavenumber k is implied and the harmonic functions satisfy ΔY = −κ² Y, with Δ the Laplacian operator and κ the separation constant. For the spatially flat universe, we have κ² = ℓ², where the modes are discrete (ℓ = 0, 1, 2, …) [60,70]. Substituting Eqs. (3.30) and (3.42) into (3.37), after some algebra we find

$$\xi_k(t) = \frac{2\kappa^2 n + a_m^2\Big[12 c_4 m^2 (1-n)^2 (5-2n) + 3 c_3 m^2 \big(5 - 24n + 27n^2 - 8n^3\big) + n\big(-3 m^2 (4 - 9n + 4n^2) + 2 n^2 \rho\big)\Big]}{n^3\, a_m^2\, \rho_0}\;\Psi_k(t)\,,$$

where Ψ_k(t) satisfies a second-order ordinary differential equation of oscillator type,

$$\ddot\Psi_k(t) + Z\,\Psi_k(t) = 0\,, \tag{3.45}$$

with

$$Z = \frac{2\kappa^2 n w + a_m^2\Big[4 n^3 \rho w - m^2\Big(12(1 + 2c_3 + 2c_4)(w - 1)n^3 + \big(14 + 18c_3 + 24c_4 - 27w - 81c_3 w - 108c_4 w\big)n^2 + 12(1 + 6c_3 + 12c_4)\,w\,n - 15(c_3 + 4c_4)\,w\Big)\Big]}{2 n^3 a_m^2}\,. \tag{3.46}$$
To analyze the stability of the Einstein static universe in massive gravity with a degenerate reference metric, we require the existence of an oscillating solution of Eq. (3.45), i.e.

$$Z > 0\,. \tag{3.47}$$

In the following discussion, we study the parameter regions satisfying the reality conditions (3.6) and (3.7) together with (3.47). The stability of the Einstein static universe (3.11) requires w > 0 and κ² > 0, together with conditions relating w to the combination −2(3c₃ − 1)/[9(c₃ + 1)]; for the solution (3.21), the stability conditions involve the special choice c₄ = (5 − 9c₃)/12. In each case, we study the parameter (c₃, c₄, w) regions of the stability conditions numerically. However, in order to obtain an Einstein static universe stable against inhomogeneous scalar perturbations, we should consider all modes of the perturbations, i.e. κ² = 1, 4, 9, …, ∞, which is not easy to analyze numerically, so we study the stability in some special cases. According to the stability condition (3.47), κ² does not affect the condition when w = 0, and the case w = 0 represents a universe filled with ordinary matter, which is important and has received great interest. We find that stable Einstein static flat universes filled with ordinary matter (w = 0) do exist, and we plot the parameter (c₃, c₄) regions of the stable solutions for the w = 0 case in Fig. 2. It is worth noting that, to our knowledge, our construction is the first Einstein static universe with flat spatial geometry, in the presence of ordinary matter, that is stable against both homogeneous and inhomogeneous scalar perturbations in modified gravities.
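The stability criterion can be scanned directly from the transcription of Eq. (3.46). The sketch below (Python) implements Z as written above and tests Z > 0; the background values n and ρ are placeholders (in practice they must solve the static-universe equations), and the w = 0 case illustrates that κ² then drops out, so one evaluation covers all inhomogeneous modes:

```python
def Z(n, rho, c3, c4, w, kappa2, m=1.0, a_m=1.0):
    """Coefficient of the oscillator equation (3.45)-(3.46); Z > 0 is the
    stability (oscillation) condition (3.47)."""
    poly = (12.0*(1 + 2*c3 + 2*c4)*(w - 1.0)*n**3
            + (14 + 18*c3 + 24*c4 - 27*w - 81*c3*w - 108*c4*w)*n**2
            + 12.0*(1 + 6*c3 + 12*c4)*w*n
            - 15.0*(c3 + 4*c4)*w)
    return (2.0*kappa2*n*w + a_m**2*(4.0*n**3*rho*w - m**2*poly)) / (2.0*n**3*a_m**2)

n, rho = 0.5, 1.0                        # placeholder background values
for c3 in (0.5, 1.0, 2.0):
    for c4 in (-1.0, 0.0, 1.0):
        z0 = Z(n, rho, c3, c4, w=0.0, kappa2=1.0)
        # For w = 0, Z is independent of kappa2: check against another mode
        assert abs(z0 - Z(n, rho, c3, c4, w=0.0, kappa2=9.0)) < 1e-12
        print(c3, c4, "stable" if z0 > 0 else "unstable")
```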
Bouncing and cyclic universes
In Ref. [31], a cosmological time crystal with two jumping points was constructed to realize bouncing and cyclic universes in degenerate massive gravity together with Einstein-Riemann cubic gravities and certain matter fields. Those cosmological time-crystal solutions are characterized by a discontinuity of ȧ at the turning points. In this section, we turn off the higher-order derivative terms and construct smooth bouncing and cyclic models in degenerate massive gravity. To be specific, we focus on the construction of classes of exact solutions of bouncing and cyclic universes. The total action S is given by Eq. (2.1); the gravitational part is still degenerate massive gravity, while the matter action S_m now contains a bare cosmological constant Λ₀ together with three axion fields ϕ_i with a positive constant α. These axions preserve the homogeneity and isotropy of the background cosmological metric, but can have nontrivial perturbative effects [100].
The corresponding Hamiltonian constraint, which can be viewed as the effective equation of motion, can be rewritten as a differential equation for ȧ.
Bouncing universe
For a bouncing model, we take a cosh-type ansatz for the scale factor,

$$a(t) = A_1 + A_2\cosh(A_3 t)\,,$$

where A₁, A₂, and A₃ are constants obeying suitable reality conditions. For the FLRW metric (3.1), the corresponding solutions of the model parameters follow, and the existence of the cosh-type bouncing solution requires certain parameter conditions. We study this case in the following subsection.
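A quick symbolic check (Python/SymPy) makes the bouncing character of the cosh-type ansatz written above explicit, verifying ȧ(0) = 0 and ä(0) > 0 at the turning point; the positivity assumptions on A₁, A₂, A₃ are illustrative:

```python
import sympy as sp

t = sp.symbols('t', real=True)
A1, A2, A3 = sp.symbols('A1 A2 A3', positive=True)
a = A1 + A2*sp.cosh(A3*t)                # cosh-type bouncing ansatz

adot = sp.diff(a, t)
addot = sp.diff(a, t, 2)
print(adot.subs(t, 0))                   # 0: the bounce occurs at t = 0
print(addot.subs(t, 0))                  # A2*A3**2 > 0: expansion after the bounce
print(sp.limit(adot/a, t, sp.oo))        # A3: late-time exponential-like expansion
```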
Cyclic universe
For cyclic or oscillating universes, we take a sin-type ansatz,

$$a(t) = B_1 + B_2\sin(B_3 t)\,, \tag{4.12}$$

where B₁, B₂, and B₃ are constants obeying suitable reality conditions. For the FLRW metric (3.1), the corresponding sin-type solutions follow (4.14). The existence of the sin-type cyclic solution requires certain parameter conditions; in particular, the cyclic model can also exist without the axions. It is worth commenting that the bare cosmological constant Λ₀ must be negative for these cyclic solutions, whilst it must be positive for the bouncing solutions studied in the previous subsection. The reason is that massive gravity by itself can provide a sufficiently large repulsion to overcome the attraction of the negative bare cosmological constant and make the universe bounce, while a sufficiently attractive force from the bare cosmological constant is needed for the Universe to contract again at a later time, so that the Universe becomes cyclic.
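Similarly, a short numerical check (Python) confirms that the sin-type ansatz (4.12) describes a non-singular, periodically oscillating scale factor whenever B₁ > |B₂| > 0; the parameter values are placeholders:

```python
import numpy as np

B1, B2, B3 = 2.0, 0.5, 1.0               # placeholders with B1 > |B2| > 0
t = np.linspace(0.0, 4.0*np.pi/B3, 2001)
a = B1 + B2*np.sin(B3*t)                 # sin-type cyclic ansatz, Eq. (4.12)

print(a.min(), a.max())                  # oscillates between B1-|B2| and B1+|B2|
assert a.min() > 0.0                     # the scale factor never reaches zero
```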
Cyclic universe as linear perturbation
In the previous subsections, we considered c₄ = −c₃/4, for which exact solutions of bouncing and cyclic universes could be obtained. We now consider the more general case c₄ ≠ −c₃/4 and construct cyclic universes whose scale factor can be viewed as a small perturbation of Minkowski spacetime, so that an exact solution can be obtained for the linearized metric. In other words, we consider the Universe (4.12) oscillating in a range that is small compared with the lowest value of the scale factor, i.e. B₁ ≫ |B₂|. The cyclic or oscillating ansatz can then be rewritten as

$$a(t) = C_1 + \epsilon\,C_2(t)\,, \tag{4.16}$$

where the constant C₁ is the zeroth-order solution, describing Minkowski spacetime, C₂(t) is the first-order solution, and ε is a small quantity. According to the Euler-Lagrange equations, the existence of the zeroth-order solution constrains C₁ (4.18). We substitute Eqs. (4.16) and (4.18) into the effective Lagrangian (4.3) and Hamiltonian (4.5), and then perform a series expansion of the effective Lagrangian and Hamiltonian to second order. We find a constant term V₀. The vanishing of V₀ would imply ghost instabilities, so we set it equal to a second-order small quantity,

$$V_0 = \epsilon^2\,\lambda\,.$$

Note that λ here can be of any sign and any finite magnitude, as long as the perturbation is sufficiently small. From the above relations we obtain an equation for Ċ₂, which can be rewritten in a convenient first-order form. Taking the sin-type oscillating ansatz for C₂(t), the solution satisfies Eq. (4.26), and its existence requires conditions of the form

$$-\frac{6 C_1^2\big[\alpha^2(2 a_m + C_1) + a_m^2\big(C_1(\Lambda_0 - m^2) + a_m m^2\big)\big]}{(a_m - C_1)^2} < \lambda < 0\,.$$

Conclusions

In this paper, we investigated massive gravity with degenerate reference metrics, focusing on the feasibility of alternative inflationary models such as the emergent universe scenario, bouncing universes, and cyclic universes.
We first studied the feasibility of the emergent universe scenario. We constructed the Einstein static flat universe in degenerate massive gravity filled with perfect fluids, then derived the linearized equations of motion in this background and studied the stability against both homogeneous and inhomogeneous scalar perturbations. We found that such a stable universe filled with ordinary matter (w = 0) can exist. Our construction is, to our knowledge, the first Einstein static universe with flat spatial geometry in the presence of ordinary matter that is stable against both homogeneous and inhomogeneous scalar perturbations in modified gravities. The results show that the Einstein static flat universe can safely enter an inflationary epoch. Our conclusion is significant, since a universe with flat geometry appears to be favored by the latest astronomical observations [55,56].
We also constructed classes of exact solutions of bouncing and cyclic flat universes in degenerate massive gravity by including a bare cosmological constant and three free axion fields. It turns out that the cosmological constant is necessary in the construction, while the axions are optional. For appropriate parameters, we found that cyclic universes can also emerge as linear perturbations of flat Minkowski spacetime. In our solutions, the bare cosmological constant must be positive for the bouncing universes, whilst it must be negative for the cyclic universes; in the latter case, the attractive force from the negative bare cosmological constant is necessary to overcome the repulsion from massive gravity and provide a contracting point, so that the Universe becomes cyclic. Our results demonstrate that bouncing and cyclic universes can emerge in massive gravity coupled to a bare cosmological constant. The simplicity of the theory and the existence of such simple exact solutions open a new avenue for studying alternative inflationary cosmology.
Our initial investigation of alternative cosmological models in degenerate massive gravity has shown a new possibility for addressing cosmological problems. However, much work remains. All perturbations, including vector and tensor perturbations, should be analyzed in the study of the stability of the Einstein static universe, and the stability of the bouncing and cyclic solutions should also be investigated. We leave these to future work. | 2019-03-10T06:51:23.000Z | 2019-03-10T00:00:00.000 | {
"year": 2019,
"sha1": "2883f248118e4091078de391aeffa12b7e87315c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1903.03940",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fb614ab22c4d5c54237c0d29c56da3dc2718dfd6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
257831598 | pes2o/s2orc | v3-fos-license | Mg2+ and Cu2+ Charging Agents Improving Electrophoretic Deposition Efficiency and Coating Adhesion of Nano-TbF3 on Sintered Nd-Fe-B Magnets
In order to prepare high-quality nano-TbF3 coatings on the surface of Nd-Fe-B magnets by electrophoretic deposition (EPD) more efficiently, Mg2+ and Cu2+ charging agents were introduced into the electrophoretic suspension and their influence on the electrophoretic deposition was systematically investigated. The results show that the addition of Mg2+ and Cu2+ charging agents can improve the electrophoretic deposition efficiency and coating adhesion of nano-TbF3 powders on sintered Nd-Fe-B magnets. The EPD efficiency increases by 116% at a relative Mg2+ content of 3%, and by 109% at a relative Cu2+ content of 5%. Combining the Hamaker equation and the diffusion electric double layer theory, the addition of Mg2+ and Cu2+ changes the zeta potential of the charged particles, resulting in the improvement of the EPD efficiency. A relative content of Mg2+ below 3% or of Cu2+ below 5% increases the thickness of the diffusion electric double layer, whereas excessive addition of a charging agent compresses it; a thicker diffusion layer corresponds to a higher zeta potential. Furthermore, the addition of Mg2+ and Cu2+ charging agents greatly improves the coating adhesion, with the critical load for cracking of the coating increasing from 17.9 mN to 146.4 mN and 40.2 mN, respectively.
Introduction
With their excellent magnetic performance, Nd-Fe-B magnets have been widely used in consumer electronics, automobiles, wind power generation, etc. [1,2]. However, the low Curie temperature of sintered Nd-Fe-B magnets limits their application at high temperatures. To obtain high-coercivity Nd-Fe-B magnets, the heavy rare earth element (HRE) Tb or Dy is added to the magnets, because the (Nd, HRE)2Fe14B phase has a higher anisotropy field H_A than the Nd2Fe14B phase [3,4]. Grain boundary diffusion (GBD) is a commonly used technique to infiltrate heavy rare earths along the grain boundaries from the surface into the interior, which consumes only a very small amount of HREs [5,6].
To prepare the diffusion source on the magnet surface, different coating methods can be used, such as dipping [7], magnetron sputtering [8], plasma spraying [9], vapor deposition [10] and electrophoretic deposition (EPD) [11]. In EPD, charged particles in a suspension are deposited electrokinetically by the electric field applied between two oppositely charged electrodes. EPD has the advantage of a lower cost than magnetron sputtering; compared with coating and dipping, it offers good shape freedom, a uniform film with controllable thickness, and relatively good adhesion [12]. The properties of the suspension affect the EPD process and the quality of the final deposited coatings. Water-based suspensions cause problems in electrophoretic forming, such as gas generated by water electrolysis and Joule heating of the suspension, which reduce the stability of the suspension [13]. Therefore, at present, HRE-compound diffusion sources such as TbF3, TbH3, DyF3 and DyH3 are mainly deposited electrophoretically from organic suspensions [14][15][16][17]. However, organic liquids usually have a low dielectric constant, which limits the charge attached to the particles and can lead to flocculation of the charged particles [13].
The dissociation or ionization of surface groups on the particles and the adsorption of surfactants are the most important mechanisms for obtaining a stable nonaqueous EPD suspension and a good EPD process [18]. Charging agents, including acids (H+), bases (OH−), adsorbed metal ions or adsorbed poly-electrolytes, are used to achieve electrosteric stabilization of the particle surfaces for an effective EPD process [19]. Das et al. [20] prepared a stable suspension of yttria-stabilized zirconia (YSZ) nanoparticles by using phosphate ester (PE) as a charging agent and revealed that the most stable suspension of the YSZ nano-powder is obtained at a PE concentration of 0.01 g/100 mL, because of the high zeta potential. Zarbov et al. [21] added polyethyleneimine (PEI) to a BaTiO3 EPD suspension as a charging agent and found that PEI maintains its very strong cationic charge by protonation of the amine groups from the surrounding medium. Guan et al. [22] prepared a Dy2O3 film on the magnet surface by EPD and used polyethyleneimine (PEI) to improve the coating adhesion and EPD efficiency in an isopropanol suspension. Wang et al. [23] added 5 wt.% MgCl2 to a TbF3 EPD suspension to improve the EPD efficiency and the coating adhesion. Cu2+ has also acted as a charging agent in the co-deposition of copper-graphene composite films [24]. These studies focused on improving the EPD process or the magnet properties by using charging agents, but the effect of different relative contents of charging agents on the deposition has not been studied; moreover, it is also important to explain the effect of metal ions on the EPD efficiency from the perspective of deposition kinetics. In order to improve the EPD efficiency and the coating adhesion of nano-TbF3 powders, in preparation for the grain boundary diffusion of sintered magnets, the effects of different relative contents of Mg2+ and Cu2+ charging agents on the EPD efficiency and coating adhesion were systematically studied, and the mechanism of action of the Mg2+ and Cu2+ charging agents was analyzed with the EPD and diffusion double layer theories from the perspective of deposition kinetics.
Experimental
A commercial sintered Nd-Fe-B magnet provided by GRIREM Co., Ltd. was selected as the initial magnet. The magnet was wire-cut into cuboids of 8 × 8 × 7 mm (c-axis), and then polished with 600-mesh, 1000-mesh and 2000-mesh sandpaper until the sample surface was flat and smooth. Subsequently, the surface of the polished sample was cleaned by alkali washing, acid washing and ultrasonic cleaning in anhydrous ethanol, and dried for later use. Nano-TbF3 (900 nm) powder provided by GRIREM Advanced Materials Co., Ltd. (Beijing, China) was used as the diffusion source. The TbF3 powder and anhydrous ethanol were mixed into an 8 g/L suspension, MgCl2 or CuCl2 with a mass of 1-10% relative to TbF3 was added, and mechanical stirring was applied until the MgCl2 or CuCl2 was completely dissolved; ultrasonication was then applied for 2 min until the suspension was uniform. A copper plate served as the anode and the magnet sample as the cathode; both were immersed in the suspension, and the TbF3 coating was obtained by EPD at a voltage of 60 V for 1-5 min. The deposited samples were then diffused under vacuum at 900 °C for 7 h, followed by tempering at 500 °C for 2 h.
The magnetic properties of the samples were measured by a high-temperature permanent magnet measuring instrument (NIM-500C, Beijing, China), and the microstructure of the coating was characterized by scanning electron microscopy (SEM, Tescan Vega II, Brno, Czech Republic). The coating elements were analyzed by ICP. The phases of the coating powders stripped from the magnet surface were measured by XRD. The viscosity of the suspension was measured by a rotary rheometer (Anton Paar MCR302, Graz, Austria). The zeta potential of the suspension was tested by a zeta potential tester (Brookhaven 90plus, New York, NY, USA). The acidity and alkalinity of the suspension were checked with precision pH test paper. Coating adhesion was tested by a micro-nano mechanical testing system (STEP500-NHT3-MCT3, Anton Paar, Graz, Austria).
EPD Efficiency
Figure 1 shows the EPD rate of nano-TbF3 with different relative contents of Mg2+ (a) and Cu2+ (b). It was found that with the increase in the relative content of Mg2+, the EPD efficiency first increased and then decreased, reaching its highest value at a relative Mg2+ content of 3%, with the efficiency improving 116%, from 3.1 mg/(cm²·min) to 6.7 mg/(cm²·min). When too much Mg2+ was added, the applied voltage was consumed by ion migration, so that EPD was essentially unable to occur. For Cu2+, the EPD efficiency also first increased and then decreased with increasing relative content of Cu2+. Unlike Mg2+, the EPD efficiency reached its highest value at a relative Cu2+ content of 5%, with the efficiency improving 109%, from 3.1 mg/(cm²·min) to 6.47 mg/(cm²·min). EPD can continue to occur if Cu2+ is increased further, but the EPD efficiency then declines rapidly; the reasons are discussed below.

The EPD behavior at different times was studied with the optimal relative contents of Mg2+ and Cu2+. Figure 2 shows the deposited TbF3 amount (a) and the EPD rate (b) of nano-TbF3 at different times. It can be seen in Figure 2a that the deposited amounts all increased linearly with time from the beginning to three minutes, consistent with the proportionality between deposited amount and time in EPD theory. It is also found in Figure 2b that the EPD rate decreased when the EPD time reached 4-5 min, because the deposited coating increases the resistance. However, the EPD rate with Cu2+ decreased more than the others, which indicates that some change occurs that reduces the EPD efficiency when Cu2+ is added to the EPD suspension; by contrast, adding Mg2+ does not affect the EPD process in this way.

Figure 3 shows the cross-sectional SEM morphologies of the magnets coated by EPD for 90 s. It can be seen in Figure 3a that the thickness of the coating was about 40 µm, whereas Figure 3b,c shows that the thickness increased to about 80 µm with 3% Mg2+ added and to about 60 µm with 5% Cu2+ added.
This also proves that the addition of Mg2+ and Cu2+ improves the EPD efficiency. In order to further study the effect of Mg2+ and Cu2+ on the EPD efficiency from the perspective of kinetics, we first introduce the EPD Hamaker equation [25]:
$$m = \frac{2}{3}\,C\,\varepsilon_0\,\varepsilon_r\,\zeta\,\frac{1}{\eta}\,\frac{E}{L}\,t \tag{1}$$
In the equation, m is the deposited amount, ε0 is the dielectric constant of vacuum, εr is the dielectric constant of the solution, η is the viscosity of the EPD suspension, C is the particle concentration in the EPD suspension, ζ is the zeta potential of the charged particles, E is the applied electric field strength, L is the electrode spacing, and t is the EPD time. In this paper, both the applied electric field strength and the distance between the electrodes were constant. The EPD efficiency was calculated by dividing the deposited amount by the EPD time; therefore, at a given EPD time, the EPD efficiency is directly proportional to ζ and inversely proportional to η.
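To make the scaling explicit, the sketch below (Python) evaluates the Hamaker relation in the form written above; all numerical parameter values are arbitrary placeholders for illustration rather than measured quantities from this work:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def hamaker_mass(C, eps_r, zeta, eta, E, L, t):
    """Deposited amount per unit area, m = (2/3)*C*eps0*eps_r*zeta*(E/L)*t/eta."""
    return (2.0/3.0) * C * EPS0 * eps_r * zeta * (E / L) * t / eta

# Placeholder values: 8 kg/m^3 (8 g/L) suspension in an ethanol-like medium
m1 = hamaker_mass(C=8.0, eps_r=24.3, zeta=0.030, eta=1.2e-3, E=60.0, L=0.02, t=90.0)
m2 = hamaker_mass(C=8.0, eps_r=24.3, zeta=0.060, eta=1.2e-3, E=60.0, L=0.02, t=90.0)
print(m2 / m1)  # -> 2.0: doubling zeta doubles the deposit at fixed viscosity
```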
First, the effect of Mg2+ and Cu2+ on the viscosity of the EPD suspension was tested in order to study the kinetic origin of the improved EPD efficiency. The rotary rheometer measures the viscosity of a suspension through the shear generated by its rotating motion, so we tested the viscosity over a shear-rate range of 0-1000 s⁻¹ to enhance the accuracy of the test. Figure 4 shows the viscosity of the EPD suspension with different Mg2+ and Cu2+ relative contents. It can be found that the viscosity of the EPD suspension increased with increasing shear rate (γ) because of the viscous resistance of the particles in the suspension, but the viscosity curves for the different Mg2+ and Cu2+ relative contents were essentially coincident. Since the average viscosity at the various concentrations was essentially the same in all the plots, it is clear that the addition of Mg2+ and Cu2+ had little impact on the viscosity of the EPD suspension.
Subsequently, the effect of Mg2+ and Cu2+ addition on the zeta potential of the EPD suspension was investigated. Figure 5 shows the dependence of the zeta potential of the EPD suspension on the relative contents of Mg2+ (a) and Cu2+ (b). It can be seen that the zeta potential initially increased and then decreased with the change in Mg2+ and Cu2+ relative content, which is consistent with the trend of the EPD efficiency in Figure 1. Similarly, the zeta potential reached its peak value when about 3% Mg2+ or 5% Cu2+ was added, matching Figure 1. According to the EPD Hamaker equation, the EPD efficiency is proportional to the ζ potential, and the effect of Mg2+ and Cu2+ on the ζ potential was highly consistent with that on the EPD efficiency. Therefore, we can confirm that the addition of Mg2+ and Cu2+ affected the EPD efficiency by changing the zeta potential of the EPD suspension.
In order to explain the lower EPD rate at later times when Cu2+ is added, that is, why a higher relative content of Cu2+ (such as 10%) gives a higher zeta potential than without Cu2+ but a lower EPD rate than without Cu2+, we analyzed the coating elements. Table 1 shows
the ICP elemental analysis results of the coating; it was found that the coating deposited with Cu2+ added contained 0.71 wt.% Cu, while that deposited with Mg2+ added contained none. Figure 6 shows the XRD results of the EPD coating with the Cu2+ charging agent; it can be seen that the coating has a peak of copper and no peak of CuCl2, which indicates that Cu2+ in the EPD suspension is reduced to Cu under the electric field. In addition, Mg2+ can hardly be reduced to the metal, because it discharges after H+, while Cu2+ discharges before H+. The corresponding electrode reactions are

Anode: Cu − 2e− = Cu2+
Cathode: Cu2+ + 2e− = Cu

This process reduces the EPD rate by consuming Cu2+ and reducing the positive-charge content of the EPD suspension. Additionally, the formation of Cu is not conducive to the deposition of TbF3 on the magnet surface, which is also the reason for the uneven coating surface in Figure 3c.
According to the diffusion double layer theory [26], Mg2+ and Cu2+ affect the zeta potential of the EPD suspension through adsorption. Figure 7 shows a schematic of the diffusion electric double layer region of a TbF3 particle. It can be seen in Figure 7a that when no charging agents are added, the particle surface mainly adsorbs the positive and negative ions produced by dissociation in the suspension. It can be seen in Figure 7b that when the relative content of Mg2+ and Cu2+ was low, the adsorbed charge on the particle surface gradually approached saturation: the diffusion double layer gradually became thicker with the increase in Mg2+ and Cu2+ on the surface, the zeta potential became correspondingly higher, and the EPD efficiency gradually increased. When the EPD efficiency reached its highest value, the surface charge of the particles was saturated; as Figure 7c shows, further addition of charging agents then compressed the diffusion electric double layer, so the ζ potential decreased with the increase in Cl− in the suspension.
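The compression of the diffusion layer by excess electrolyte can be illustrated with the standard Debye screening length; this estimate is not part of the paper's analysis, and the ethanol-like parameters are assumed placeholders (Python):

```python
import math

EPS0, KB = 8.854e-12, 1.380649e-23         # F/m, J/K
E_CH, NA = 1.602176634e-19, 6.02214076e23  # C, 1/mol

def debye_length(I, eps_r=24.3, T=298.0):
    """Debye screening length (m) for ionic strength I in mol/m^3; a larger I
    (e.g. excess MgCl2 or CuCl2 releasing Cl-) gives a thinner diffuse layer."""
    return math.sqrt(EPS0*eps_r*KB*T / (2.0*NA*E_CH**2*I))

for I in (0.1, 1.0, 10.0):                 # placeholder electrolyte levels, mol/m^3
    print(I, debye_length(I))              # the diffuse layer shrinks as I grows
```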
Coating Adhesion
Good coating adhesion helps ensure the GBD result after EPD. To test it, we applied a linearly increasing normal load while traversing the coating, and determined the coating adhesion from the load at which the coating cracked. Figure 8 shows the scratch test results of the EPD coatings. It was found that the critical load for cracking of the coating without a charging agent was 17.9 mN, but it increased to 146.4 mN and 40.2 mN when Mg2+ and Cu2+ were added, respectively, which indicates that the addition of Mg2+ and Cu2+ charging agents greatly improved the coating adhesion. Figure 9 shows the surface SEM morphologies of the magnets coated by EPD for 90 s. In Figure 9(a1,a2,b1,b2), the coating surface with Mg2+ added shows a regular gully morphology with distributed micro-cracks compared with the coating without a charging agent, which is beneficial for the release of stress [24]. As seen in Figure 9(c1,c2), the coating surface with Cu2+ added exhibits aggregates and a finer gully morphology, which may be why its coating adhesion lies between that without charging agents and that with the Mg2+ charging agent.
Magnetic Performance
The addition of Mg2+ and Cu2+ can improve the EPD efficiency and the adhesion of the coating, but it must not come at the expense of deteriorating the magnetic properties after heat treatment. A high-temperature permanent magnet measuring instrument (NIM-500C) was used to measure the magnetic properties of the permanent magnets at room temperature and at high temperatures, respectively. An external field of ~3 T was initially applied to magnetize the samples to full saturation. The remanence (Br), coercivity (Hcj) and maximum magnetic energy product (BH)max of the magnets were obtained from the demagnetization curves. Figure 10 shows the demagnetization curves of the magnets, where it can be seen that the coercivity (Hcj) of the original magnet was 19.3 kOe. After the first heat treatment at 900 °C for 7 h and the second heat treatment at 500 °C for 2 h, the coercivity of the undeposited reference sample decreased to 18.1 kOe, while the coercivities of the EPD sample and the EPD samples with Mg2+ and Cu2+ increased to 22.3 kOe, 22.2 kOe and 22.6 kOe, respectively. The remanence of all samples did not decrease. This indicates that the addition of Mg2+ and Cu2+ has no adverse effect on the GBD process with the TbF3 coating. As an aside, the preparation of a copper-containing TbF3 coating has some development prospects, because Cu can promote GBD by widening the grain boundary [27].
Conclusions
Nano-TbF3 coatings were prepared on the magnet surface with the addition of Mg2+ and Cu2+ to the EPD suspension, and the effects of Mg2+ and Cu2+ on the EPD efficiency and coating adhesion were studied. The conclusions are as follows: (1) The addition of Mg2+ and Cu2+ can improve the EPD efficiency. With a 3% relative content of Mg2+ added, the efficiency improved 116%, from 3.1 mg/(cm²·min) to 6.7 mg/(cm²·min); with a 5% relative content of Cu2+ added, the efficiency improved 109%, from 3.1 mg/(cm²·min) to 6.47 mg/(cm²·min). (2) The effect of Mg2+ and Cu2+ on the EPD efficiency was analyzed from the perspective of kinetics with the Hamaker equation and the diffusion double layer theory; it was found that Mg2+ and Cu2+ influence the EPD efficiency by changing the zeta potential of the charged particles, not the viscosity of the suspension. In addition, the diffusion electric double layer adsorbs Mg2+ or Cu2+ and increases in thickness, which corresponds to a higher zeta potential of the TbF3 particles when the relative content of Mg2+ or Cu2+ is low. When the relative content reaches 3% and 5%, respectively, the diffusion electric double layer reaches saturation, and further addition of a charging agent compresses the double layer and reduces the zeta potential. Furthermore, the reduction reaction of Cu2+ is the reason for the lower EPD rate at later times when the Cu2+ charging agent is added. (3) The addition of Mg2+ and Cu2+ charging agents greatly improves the coating adhesion; the critical load for cracking of the coating increased from 17.9 mN to 146.4 mN and 40.2 mN, respectively. Furthermore, the addition of Mg2+ and Cu2+ has no adverse effect on the TbF3-coating GBD of the magnet. | 2023-03-30T15:07:47.238Z | 2023-03-28T00:00:00.000 | {
"year": 2023,
"sha1": "81cc372f02c68dc50040caade5922450db79b9c7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/16/7/2682/pdf?version=1679985525",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "00b21f651cc99576f78cfd6338e9e02315e5ecdc",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
3459117 | pes2o/s2orc | v3-fos-license | Effect of Age on Blood Glucose and Plasma Insulin, Glucagon, Ghrelin, CCK, GIP, and GLP-1 Responses to Whey Protein Ingestion
Protein-rich supplements are used widely to prevent and manage undernutrition in older people. We have previously shown that healthy older, compared to younger, adults have less suppression of energy intake by whey protein—although the effects of age on appetite-related gut hormones are largely unknown. The aim of this study was to determine and compare the acute effects of whey protein loads on blood glucose and plasma gut hormone concentrations in older and younger adults. Sixteen healthy older (eight men, eight women; mean ± SEM: age: 72 ± 1 years; body mass index: 25 ± 1 kg/m2) and 16 younger (eight men, eight women; 24 ± 1 years; 23 ± 0.4 kg/m2) adults were studied on three occasions in which they ingested 30 g (120 kcal) or 70 g (280 kcal) whey protein, or a flavored-water control drink (~2 kcal). At regular intervals over 180 min, blood glucose and plasma insulin, glucagon, ghrelin, cholecystokinin (CCK), gastric inhibitory peptide (GIP), and glucagon-like peptide-1 (GLP-1) concentrations were measured. Plasma ghrelin was dose-dependently suppressed and insulin, glucagon, CCK, GIP, and GLP-1 concentrations were dose-dependently increased by the whey protein ingestion, while blood glucose concentrations were comparable during all study days. The stimulation of plasma CCK and GIP concentrations was greater in older than younger adults. In conclusion, orally ingested whey protein resulted in load-dependent gut hormone responses, which were greater for plasma CCK and GIP in older compared to younger adults.
Introduction
Despite the well-recognized adverse impact of nutritional impairment on the health of the elderly, including ageing-related muscle loss [1], and the related increase in the use of high-energy drinks, usually rich in whey protein, few nutritional studies have involved older people. We have recently reported that healthy older, compared to younger, adults have less suppression of energy intake by whey protein, either ingested orally [2] or infused directly into the proximal small intestine [3].
Appetite, energy intake, and blood glucose regulation are likely to depend on gastrointestinal mechanisms triggered by the interaction with the ingested nutrients. Mechanisms that reduce energy intake in younger adults include the stimulation of gut hormone secretion, e.g., cholecystokinin (CCK) and glucagon-like peptide-1 (GLP-1), and the suppression of ghrelin. The incretin hormones gastric inhibitory polypeptide (GIP) and GLP-1 play major roles in the control of plasma insulin, glucagon, and blood glucose concentrations in response to nutrient ingestion [4]. We, and others, have reported that age affects gut hormone responses; healthy older, compared to younger, adults had higher CCK concentrations after overnight fasting, after mixed nutrient intake [5,6], and during intraduodenal glucose and lipid infusions [7], in addition to higher insulin in response to intraduodenal glucose infusion [8], higher GIP after glucose ingestion [9,10], and higher GLP-1 after an overnight fast [9][11][12] as well as after glucose [9] and mixed macronutrient intakes [11], while the reported effects of age on fasting and postprandial ghrelin after mixed macronutrient intakes are inconsistent [6][12][13][14][15].
The aims of the study were to further determine the effects of oral whey protein loads on blood glucose and plasma insulin, glucagon, ghrelin, CCK, GIP, and GLP-1 concentrations in older as well as younger adults. We hypothesized that orally administered whey protein would result in load-related responses of glucose and gut hormones, and that these responses to whey protein would be greater in older than younger subjects.
Protocol
The protocol was identical to that of our previous studies comparing younger and older men [2], and older men and women [16]; the results for blood glucose and plasma gut hormone concentrations in the healthy older women compared with the men have been published [16]. The study had a randomized (using the method of randomly permuted blocks; www.randomization.com; 16 subjects randomized in one block with random permutations), double-blind, cross-over design including three study days, separated by three to 14 days.
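For illustration, the permuted-block assignment of treatment orders can be sketched as follows (Python); the single block of 16 subjects matches the description above, but the seed and the even spreading of the six possible orders are illustrative assumptions, not details taken from the study:

```python
import itertools
import random

random.seed(1)  # illustrative seed only

treatments = ["control", "30 g whey", "70 g whey"]
orders = list(itertools.permutations(treatments))  # 6 possible crossover orders

# One block of 16 subjects: repeat the 6 orders as evenly as possible, then shuffle
block = (orders * 3)[:16]
random.shuffle(block)

for subject, order in enumerate(block, start=1):
    print(f"subject {subject:2d}: " + " -> ".join(order))
```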
Subjects consumed a standardized evening meal (beef lasagna (McCain Foods Pty Ltd., Wendouree, VIC, Australia); ~591 kcal) before the study days at ~19.00 h. They were instructed to fast overnight from solids and liquids thereafter and to refrain from strenuous physical activity. On the study day, subjects attended the laboratory at ~08.30 h and were seated in an upright position [2,16].
Subjects ingested drinks containing 30 g (120 kcal) or 70 g (280 kcal) whey protein or a control drink (~2 kcal) [2,16]. The drinks were prepared by a research assistant who was not involved in the data analysis of the study results, flavored with diet lime cordial (Bickford's Australia Pty Ltd., Salisbury South, SA, Australia), and served in a covered cup.
Data and Statistical Analysis
Sixteen subjects per age group would allow detection of differences between groups in the area under the curve (AUC) of the primary outcomes of 25,920 pg/mL·min for ghrelin, 198 pmol/L·min for CCK, and 1080 pmol/L·min for GLP-1, with power equal to 0.8 and alpha equal to 0.05. Statistical analyses were performed using SPSS software (version 22; IBM, Armonk, NY, USA). The effects of age and protein load and their interaction were determined using a repeated-measures mixed-effect model, including baseline values as a covariate and Bonferroni's post hoc correction. The AUC was calculated from baseline to 180 min using the trapezoidal rule, and the peak/nadir as the largest change from baseline. Statistical significance was accepted at p < 0.05. All data are presented as means ± SEMs.
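As a concrete illustration of the summary measures, the sketch below (Python/NumPy) computes the AUC from baseline to 180 min by the trapezoidal rule and the peak as the largest change from baseline; the sampling times and hormone values are invented placeholders, not study data:

```python
import numpy as np

t = np.array([0, 15, 30, 45, 60, 90, 120, 150, 180], dtype=float)  # min
cck = np.array([2.0, 4.5, 6.0, 5.5, 5.0, 4.0, 3.0, 2.5, 2.2])      # pmol/L (invented)

# Trapezoidal rule over 0-180 min
auc = float(np.sum(0.5 * (cck[1:] + cck[:-1]) * np.diff(t)))       # pmol/L*min

baseline = cck[0]
peak = float(np.max(np.abs(cck - baseline)))  # largest change from baseline

print(f"AUC = {auc:.0f} pmol/L*min, peak change = {peak:.1f} pmol/L")
```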
Discussion
This study examined the influence of age on the acute effects of orally ingested whey protein on blood glucose and plasma gut hormone concentrations in healthy adults. Plasma ghrelin was dose-dependently suppressed, while insulin, glucagon, CCK, GIP, and GLP-1 concentrations were dose-dependently increased by whey protein ingestion. Our observations extend previously reported data on the acute effects of orally ingested whey protein on plasma insulin, glucagon, ghrelin, CCK, GIP, and GLP-1 concentrations in young adults [18,19]. The protein load effects were particularly evident after ~60 min, when the majority of the 30 g whey protein dose had emptied from the stomach [16]; plasma concentrations returned to baseline after 30 g, while they remained at their maximal increase/decrease after the 70 g whey protein intake.
Our findings confirmed earlier reports that older, when compared to younger, adults have higher plasma CCK [5-7] and GLP-1 [9,11] concentrations after an overnight fast, while fasting insulin concentrations were reduced in the healthy older adults in our study. Age also affected CCK and GIP, but not insulin, responses following whey protein ingestion: postprandial concentrations were greater in older adults, as previously reported for CCK after mixed macronutrient ingestion [5,6], and for GIP [9,10], glucose, and insulin [8,20] after oral, but not intraduodenally infused [20], glucose. The higher plasma CCK and GIP concentrations in older than in younger adults may be related to differences in the small intestinal transit of the whey protein, and to clearance, including GIP inactivation by dipeptidyl peptidase IV (DPP-IV) and renal processes [9]. The higher incretin hormone GIP response following whey protein ingestion in older compared to younger adults is likely to be beneficial for glycemic control in older people.
The causes of the age-related reduction in the suppression of energy intake by nutrients observed in this and other studies must include altered responses to the presence of nutrients in the small intestine, because the reduced suppression is observed after intraduodenal [3] as well as oral nutrient administration [2,6,21]. CCK is an anorexigenic hormone and acts to suppress hunger and food intake [22]. We have reported previously that older, when compared to younger, age in healthy subjects is associated with at least preserved, and possibly even increased, sensitivity to the satiating effects of exogenously administered CCK [23]. Because plasma fasting and post-protein CCK concentrations were higher in older compared to young subjects in the present and previous studies, it is perhaps surprising that these higher concentrations were associated with reduced, not increased, protein-induced suppression of energy intake in the healthy older compared to young adult subjects [2,3]. It is possible that the test meal, given 3 hours after the protein load, was too late to capture the full effect of CCK changes, as plasma concentrations had returned to baseline by then after all but the highest whey protein load. Nevertheless, these findings are consistent with our previous finding that under-nourished older people have higher fasting and post-nutrient CCK concentrations, but reduced nutrient-induced suppression of food intake, compared to well-nourished older people [6]. Together, these findings suggest that age-related changes in CCK (circulating concentrations and/or action) are unlikely to contribute much, if anything, to the age-related reduction in food intake after the ingestion of protein and other nutrients.
The findings of this study do not exclude a role for GLP-1 or GIP in the lesser suppression of food intake by whey protein in healthy older subjects. Baseline circulating concentrations of the anorexigenic hormone GLP-1 were significantly higher in older compared to younger subjects, with no difference between age groups in the subsequent whey protein-induced rise, consistent with responses during intraduodenal infusions of lipid and glucose [7]. The higher baseline GLP-1 levels may have blunted any further suppression of appetite, and thus of food intake, after whey protein ingestion. GLP-1 is mainly secreted more distally in the gastrointestinal tract (i.e., ileum and colon) than CCK and GIP (expressed mainly in the duodenum and jejunum), and GLP-1 concentrations following whey protein ingestion increased more slowly than CCK and GIP concentrations. The emptying of food content from the stomach is slowed by intestinal feedback mechanisms including the release of CCK and GLP-1 [24,25]; indeed, gastric emptying of the whey protein was slower in the older compared to younger adults [2]. Although the effect of GIP on human appetite and food intake, if any, is not clear, there is limited animal evidence to suggest it may act to stimulate food intake; GIP receptor-deficient mice are resistant to diet-induced obesity [26]. The greater increase in circulating GIP concentrations after whey protein in healthy older compared to younger subjects might therefore act to reduce the protein-induced suppression of food intake. More studies will be required to investigate the role of these hormones in age-related feeding changes. Psychological factors, including increased dietary restraint, particularly in women [27], may also affect short-term energy intake regulation in older adults.
Healthy older and younger adults had comparable plasma ghrelin concentrations following whey protein ingestion, consistent with responses to mixed-nutrient intake in some [12,15] but not all previous studies [6,13,14]. It has been suggested that aging-related changes in body composition (i.e., a decrease in lean mass and an increase in fat mass) may act to decrease fasting [28] and postprandial [6] ghrelin concentrations, as body fat is negatively correlated with ghrelin concentrations [29] and tends to increase with older age. Other studies, however, have found higher fasting and postprandial ghrelin concentrations in older than in younger adults, and impaired suppression of ghrelin after the consumption of a mixed-nutrient meal in older compared to younger subjects [13,14].
This study has several limitations, including the relatively small subject numbers. Total, rather than active, ghrelin was measured, which is less than optimal; however, the results appeared clear-cut, with significant dose-dependent suppressive effects of the protein loads on ghrelin in the direction expected.
Conclusions
The finding that plasma gut hormone responses to whey protein are not blunted in healthy older compared to younger men is likely to have implications for the composition of dietary supplements for older people, and warrants further research into their relation to food intake and glycemic control in older people. | 2018-02-23T06:16:01.784Z | 2017-12-21T00:00:00.000 | {
"year": 2017,
"sha1": "e902642cabee4144d00daeca51c63c43f2e4500f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/10/1/2/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c54e2ab785ffa07f2c25d79b36fe3f0ccb9d7647",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3242331 | pes2o/s2orc | v3-fos-license | Ability of serum C-reactive protein and white blood cell count in predicting acute ischemic stroke outcome. A short-term follow-up study
Background: Stroke is one of the leading causes of mortality and long-term morbidity. The aim of the present study was to determine the ability of baseline serum C-reactive protein (CRP) and white blood cell count (WBC) values in predicting the outcome of acute ischemic stroke (AIS). Methods: This study consisted of patients with first AIS referred to Poursina Hospital, Rasht, Iran. Severity of stroke was determined according to the National Institute of Health (NIH) Stroke Scale at the time of admission. Serum CRP levels and WBC count were measured at the time of admission. All patients were followed-up for 90 days after discharge and the severity of stroke was assessed using the modified Rankin Scale. Receiver operating characteristic curve analysis was used for calculating the most appropriate cutoff points of CRP and WBC count for differentiating patients with and without poor outcome at the end of the study period. Results: A total of 53 out of 102 patients (52%) had poor outcome. The most appropriate cutoff value for CRP in differentiating patients with and without poor outcome was 8.5 mg/L (sensitivity: 73.1%, specificity: 69.4%), whereas for WBC the difference did not reach a significant level. The cutoff point of CRP > 10.5 mg/L predicted mortality with a sensitivity of 75% and specificity of 63.8%, whereas the predictive ability of WBC for mortality was at a borderline level. Conclusion: These findings indicate that a high level of serum CRP in AIS at the time of admission is associated with poor prognosis. However, this study found no ability for WBC in predicting AIS outcome.
Stroke is one of the major health issues in developing countries and is one of the leading etiologies of mortality and long-term morbidity (1). Unfortunately, the crude numbers of people who suffer a stroke annually, of related deaths, and of disability-adjusted life years (DALYs) lost are still increasing (2). Therefore, it is important to prevent acute ischemic stroke by determining and modifying the risk factors. On the other hand, earlier initiation of effective reperfusion in patients with acute ischemic stroke is critical (3). Thus, preventive strategies and treatment approaches for stroke are of particular importance. Over the past decade, many laboratory studies have found evidence of inflammation in the pathophysiology of cerebrovascular disease (4). Increases in several inflammatory mediators, such as C-reactive protein (CRP), IL-6, and IL-1, have been found to contribute to the pathogenesis of ischemic brain injury and worse neurological outcome (5,6). CRP, the most important inflammatory biomarker, may play a role in the progression of cerebrovascular pathologies (7,8). There is conflicting evidence regarding the exact role of CRP as a prognostic biomarker in ischemic stroke outcome (9,10). Similarly, the white blood cell count (WBC) has also been shown to predict the risk of first-time myocardial infarction and ischemic stroke (11,12).
It is well known that the prediction of outcome after ischemic stroke is important in clinical settings. However, identification of an independent prognostic marker in patients with stroke is still a matter of controversy. To our knowledge, data regarding the predictive ability of serum CRP and WBC counts in patients with stroke are scarce. Thus, the aim of the present study was to determine the ability of serum CRP and WBC values assessed at the time of admission in predicting the outcome of acute ischemic stroke.
Patient selection and data collection:
Patients with first-ever acute ischemic stroke who were referred to Poursina Hospital, Rasht, Iran, over a one-year period (2013-2014) were consecutively recruited into this cross-sectional study. The inclusion criteria were: onset of symptoms less than 24 hours previously and evidence of ischemic stroke on computed tomography (CT). Patients with a history of previous cerebrovascular accidents (CVA), evidence of hemorrhagic stroke on CT, coexisting malignancy, end-stage liver or renal disease, active infectious diseases, or use of anti-inflammatory agents were excluded from the study.
Demographic data and clinical findings including ischemic heart disease (self-reported or use of cardiovascular drugs), hypertension (self-reported or use of antihypertensive agents), dyslipidemia (self-reported or use of anti-dyslipidemic agents), diabetes (self-reported or use of anti-diabetic agents) were recorded at the time of admission. Severity of stroke was determined by a neurologist using the National Institute of Health Stroke Scale (NIHSS) at the time of admission.
The severity of stroke was categorized into three groups based on the NIHSS score (0-4 mild, 5-15 moderate, and ≥16 severe). Serum inflammatory markers including WBC and CRP were measured at the time of admission. CRP was measured quantitatively using a BIONIK kit (made in Iran; normal range: 4-6 mg/L).
Follow-up:
All patients were followed-up for 90 days after discharge from the hospital. The severity of stroke was assessed using the modified Rankin Scale (mRS). Patients with an mRS score lower than 4 were considered as having a good outcome, and those with an mRS score of 4 or higher as having a poor outcome. Statistical Analysis: Statistical analysis was done using SPSS Version 16 (IBM Corp., Chicago, USA). The Kolmogorov-Smirnov test was used for testing normality in quantitative data. The Chi-square test and Fisher's exact test were used for comparison of qualitative data. Multivariate logistic regression analysis was used to determine the predictors of poor outcome 90 days after the onset of stroke. Receiver operating characteristic (ROC) curve analysis was used for calculating the most appropriate cutoff points of CRP and WBC count to differentiate patients with and without poor outcome and mortality after 90 days.
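As a rough illustration of the cutoff derivation, the sketch below finds the threshold maximizing Youden's J (sensitivity + specificity − 1) on an ROC curve. The paper does not state which optimality criterion was used, so Youden's index, scikit-learn, and the synthetic data are all assumptions here:

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_cutoff(y_true, marker):
    """Return the marker threshold maximizing Youden's J = sens + spec - 1."""
    fpr, tpr, thresholds = roc_curve(y_true, marker)
    j = tpr - fpr                      # sensitivity - (1 - specificity)
    best = np.argmax(j)
    return thresholds[best], tpr[best], 1 - fpr[best]

# Synthetic example: CRP (mg/L) in 102 patients, 1 = poor outcome at 90 days.
rng = np.random.default_rng(0)
outcome = np.r_[np.ones(53), np.zeros(49)].astype(int)
crp = np.r_[rng.normal(12, 4, 53), rng.normal(7, 3, 49)].clip(min=0)

cutoff, sens, spec = youden_cutoff(outcome, crp)
print(f"cutoff={cutoff:.1f} mg/L  sensitivity={sens:.2f}  specificity={spec:.2f}")
```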
Results
A total of 102 patients were recruited in this study. There were 43 (42.2%) males and 59 (57.8%) females, and the mean age of patients was 69.471 ± 12.125 years (range: 36-88). Table 1 shows the baseline characteristics of the patients. Ninety days after admission, 53 (52%) patients had a poor outcome and 49 (48%) patients had a good outcome.
Multivariate logistic regression analysis was performed to control for the potential confounding effects of age, chronic diseases, GCS, and NIHSS at admission. CRP, but not WBC count, remained an independent predictor of poor outcome in patients with acute ischemic stroke (Table 2).
The most appropriate cutoff value for CRP in differentiating the two outcome groups was 8.5 mg/L (sensitivity: 73.1%, specificity: 69.4%) and for WBC was 8.25 × 10³/μL (sensitivity: 61.5%, specificity: 51%). However, the predictive ability of WBC did not reach a significant level. Serum CRP > 10.5 mg/L predicted mortality with a sensitivity of 75% and specificity of 63.8%, whereas WBC exhibited only borderline predictive ability for mortality (Figures 1, 2).
Discussion
According to the findings of the current study, higher CRP levels, but not WBC count, on admission in acute ischemic stroke patients are associated with poor outcome. Ischemic brain injury triggers a complex cascade that results in a systemic inflammatory response after both ischemic and hemorrhagic stroke (13). Different cytokines are involved in various aspects of stroke (14). Several studies have reported that higher levels of inflammatory markers such as CRP are associated with worse outcome after ischemic stroke (3,15,16). According to the pathophysiologic mechanisms of stroke, these findings may indicate different patterns of immuno-inflammatory activation (17). For example, a recent study indicated that CRP levels can predict the risk of recurrent strokes among lacunar stroke patients (18). It has also been documented that the CRP level is a moderate prognostic factor for identifying carotid stenosis (19). The results of this study are in agreement with previous studies (20-22), which demonstrated that increased levels of serum CRP at admission were associated with worse outcome in patients with acute ischemic stroke. Some data suggest that the immune response following stroke occurs in a time-dependent manner, with the innate immune response occurring in the first 24 hours following ischemic injury; it has therefore been theorized that CRP is not sensitive enough for prediction beyond 24 hours and thus may not represent inflammatory status (23). Results from a population-based cohort on the prediction of 90-day subacute recurrent stroke revealed only a weak significant association for C-reactive protein (24). We also report appropriate cutoffs of CRP for adverse consequences of stroke, including poor outcome and mortality. There are limited studies calculating an appropriate cutoff based on ROC curve analysis. Ghabaee et al. first reported a CRP value of 2.2 mg/L as the optimal cutoff for the prediction of mortality within 7 days (sensitivity: 0.81, specificity: 0.80) (25), while in another similar study, a CRP cutoff of 1.5 mg/dL was determined to give the optimum sensitivity and specificity for adverse clinical outcome (26). Our study revealed a higher CRP value as the appropriate cutoff point, with acceptable sensitivity and specificity. The difference between our measure and those of previous studies may be attributed to the longer follow-up duration (90 days). On the other hand, there are conflicting views against considering CRP a prognostic biomarker for ischemic stroke outcome, because a large body of literature is based on studies examining the relationship between CRP and ischemic stroke outcome with mortality as the only outcome measure. It is well known that more than 50 percent of stroke patients have moderate to severe functional impairments (23); therefore, we tried to utilize the NIHSS score alongside the mortality outcome. Another factor suggested as a prognostic marker for outcome among patients with myocardial infarction and ischemic stroke is the WBC count. An increased number of leukocytes could be a predisposing factor for ischemic stroke in high-risk patients (27,28). The findings of the present study regarding the predictive ability of WBC are in contrast with the study by Sahan et al. (29). Nerdi et al. investigated the association of an elevated WBC count at the early stage (72 hours) of cerebral ischemia and found it a significant independent predictor of poor clinical outcome and discharge disability (30).
Although blood biomarkers may provide valuable information for predicting outcome in acute ischemic stroke, the abilities of other acute-phase reactants differ, and this issue requires further prospective studies. In conclusion, this study indicates that a high serum CRP at the time of admission for acute stroke is predictive of poor outcome, and that serum levels greater than 10.5 mg/L are predictive of mortality. | 2018-04-03T04:44:06.396Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "73ce80bcb241dfe0c4e39aac6287eb03735fca17",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "73ce80bcb241dfe0c4e39aac6287eb03735fca17",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253399833 | pes2o/s2orc | v3-fos-license | Depression was associated with younger age, female sex, obesity, smoking, and physical inactivity, in 1027 patients with newly diagnosed type 2 diabetes: a Swedish multicentre cross-sectional study
Background Depression is a risk factor for type 2 diabetes (T2D) and cardiovascular disease (CVD). The aims were to explore the prevalence of depression, anxiety, antidepressant use, obesity, Hemoglobin A1c > 64 mmol/mol, life-style factors, pre-existing CVD, in patients with newly diagnosed T2D; to explore associations with depression; and to compare with Swedish general population data. Methods Multicentre, cross-sectional study. Inclusion criteria: adults with serologically verified newly diagnosed T2D. Included variables: age, sex, current depression and anxiety (Hospital Anxiety and Depression Scale), previous depression, antidepressant use, obesity (BMI ≥ 30 and ≥ 40 kg/m2), Hemoglobin A1c, pre-existing CVD. Logistic regression analyses were performed. Results In 1027 T2D patients, aged 18–94 years, depression was associated with age (per year) (inversely) (odds ratio (OR) 0.97), anxiety (OR 12.2), previous depression (OR 7.1), antidepressant use (OR 4.2), BMI ≥ 30 kg/m2 (OR 1.7), BMI ≥ 40 kg/m2 (OR 2.3), smoking (OR 1.9), physical inactivity (OR 1.8), and women (OR 1.6) (all p ≤ 0.013). Younger women (n = 113), ≤ 59 years, compared to younger men (n = 217) had higher prevalence of current depression (31% vs 12%), previous depression (43 vs 19%), anxiety (42% vs 25%), antidepressant use (37% vs 12%), BMI ≥ 30 kg/m2 (73% vs 60%) and BMI ≥ 40 kg/m2) (18% vs 9%), and smoking (26% vs 16%) (all p ≤ 0.029). Older women (n = 297), ≥ 60 years, compared to older men (n = 400) had higher prevalence of previous depression (45% vs 12%), anxiety (18% vs 10%), antidepressant use (20% vs 8%), BMI ≥ 30 kg/m2 (55% vs 47%), BMI ≥ 40 kg/m2 (7% vs 3%) (all p ≤ 0.048), but not of current depression (both 9%). Compared to the Swedish general population (depression (women 11.2%, men 12.3%) and antidepressant use (women 9.8%, men 5.3%)), the younger women had higher prevalence of current depression, and all patients had higher prevalence of antidepressant use. Conclusions In patients with newly diagnosed T2D, the younger women had the highest prevalence of depression, anxiety, and obesity. The prevalence of depression in young women and antidepressant use in all patients were higher than in the Swedish general population. Three risk factors for CVD, obesity, smoking, and physical inactivity, were associated with depression.
Background
The comorbidity of depression and diabetes mellitus is associated with increased cardiovascular disease (CVD) and all-cause mortality [1]. A bidirectional link between depression and type 2 diabetes (T2D) has been demonstrated, but depression seems to be a stronger risk factor for T2D than the other way round [2,3]. The presence of anxiety in depressed people seems to further increase the risk of developing T2D [4]. Depression is, however, a heterogeneous disorder characterized by dysphoria, anhedonia and/or lack of interest, accompanied by cognitive symptoms, increased or decreased appetite, weight gain or weight loss, hypersomnia or insomnia, psychomotor retardation or activation, and fatigue, leading to functional deterioration [5]. Atypical depression, which is more common in women than in men, is characterized by increased appetite, weight gain, fatigue, and hypersomnia, subsequently increasing the risk for obesity with secondary metabolic disturbances [6,7]. In a Swedish general population survey with 7618 respondents from a random sample of 16,000 people, the prevalence of self-reported depression was higher in men (12.3%) than in women (11.2%) according to the Hospital Anxiety and Depression Scale (HADS), and the prevalence of antidepressant use was higher in women (9.8%) than in men (5.3%) [8].
T2D is characterized by hyperglycemia caused by resistance to insulin action and an inadequate compensatory insulin secretory response [9,10]. The incidence of T2D was 399/100,000 inhabitants in Sweden in 2013 [11], and the prevalence of T2D was 6.8% [12]. The onset of T2D is typically slow, with a long pre-detection period of 3-7 years [10]. The risk of newly diagnosed T2D increases with age [9], but younger-onset T2D is particularly harmful, with increased mortality [13]. Obesity [9,10,14], physical inactivity [9,10], and smoking [15] are other risk factors for incident T2D. The prevalence of depression in patients with newly diagnosed T2D in Sweden has, to our knowledge, not been previously explored. We hypothesized that the prevalence of depression was high in patients with newly diagnosed T2D, and that depression may be associated with age, sex, obesity, Hemoglobin A1c (HbA1c), lifestyle factors, and/or pre-existing CVD. Our aims were, first, to explore the prevalence of current and previous depression, anxiety, antidepressant use, obesity, high HbA1c (> 64 mmol/mol), smoking, physical inactivity, and pre-existing myocardial infarction (MI) and stroke, while exploring sex- and age-related differences in adult patients with newly diagnosed T2D. Second, we explored associations between current depression and all included variables in the T2D patients. Third, we performed comparisons between the T2D patients and Swedish general population data regarding the prevalence of depression, antidepressant use, obesity and smoking.
Participants and study design
Multicentre, cross-sectional design. Inclusion criteria were adults (≥ 18 years) with newly diagnosed, serologically verified T2D and completion of the Swedish version of HADS (see Fig. 1). Exclusion criteria were a confirmed diagnosis of type 1 diabetes (T1D) and gestational diabetes. The participants were recruited from all 5 hospitals and 54 primary health care units in Region Kronoberg (193,000 inhabitants) and Region Kalmar (240,000 inhabitants) in South Eastern Sweden. The primary care units served both urban and rural areas. The recruitment period ranged from 1st January 2016 until 31st December 2017 in Region Kronoberg, and from 1st March 2016 until 28th February 2018 in Region Kalmar. The instructions to the health care units were that all patients with newly diagnosed T2D should be informed about and offered participation in the study by their physician or diabetes nurse when they received their T2D diagnoses, either directly at the health care unit or later by a telephone call or a letter. If that had not been completed, they should be offered participation at the follow-up visit, which routinely takes place within 2-3 weeks after diagnosis. Out of the 1248 patients who provided consent for participation, 114 patients were excluded due to incomplete HADS testing, and 107 patients were excluded as their clinical diagnoses were not serologically verified (Fig. 1).
Finally, 1027 participants were included, constituting 29% of the 3592 patients who were diagnosed with clinical T2D in the two regions during the recruitment periods, 429 out of 1288 (33%) in Region Kronoberg and 595 out of 2304 (26%) in Region Kalmar. Interviews, anthropometrics, biochemical analyses, and data collection from electronic health records (EHRs) were performed.
Newly diagnosed type 2 diabetes and serological confirmation
The diagnostic criteria for diabetes mellitus were either fasting plasma glucose ≥ 7.0 mmol/L twice within two weeks, a 2-h 75 g post oral glucose tolerance test (OGTT) value ≥ 11.1 mmol/L, a random venous glucose ≥ 11.1 mmol/L or capillary glucose ≥ 12.2 mmol/L in a patient with symptoms of hyperglycemia, or Hemoglobin A1c (HbA1c) ≥ 48 mmol/mol (≥ 6.5%) [9,10]. Newly diagnosed and serologically confirmed T2D was defined as fulfilling the diagnostic criteria for diabetes mellitus without a previous history of a diabetes diagnosis or treatment, with serological confirmation by glutamic acid decarboxylase (GAD) antibodies < 10 units/mL and C-peptide levels ≥ 0.25 nmol/L [16]. GAD antibodies were analysed by enzyme-linked immunosorbent assay (ELISA) from RSR® (article nr Rs-GDE/96; RSR Ltd, Cardiff, UK), and C-peptide was analysed by a commercial ELISA (Mercodia® (article nr 10-1136-01), Uppsala, Sweden), at the Diabetes Laboratory, Lund University, Lund, Sweden, for the purpose of this study.
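The thresholds above can be summarized as a simple decision rule. The sketch below is only a restatement of the listed criteria; the function name and inputs are illustrative, and applying the symptom requirement to both the random venous and capillary criteria is one reading of the text (an assumption):

```python
def meets_diabetes_criteria(fasting_twice=None, ogtt_2h=None,
                            random_venous=None, capillary=None,
                            symptomatic=False, hba1c_ifcc=None):
    """True if any diagnostic criterion listed in the text is met.
    Glucose values in mmol/L; HbA1c in mmol/mol (IFCC)."""
    checks = [
        fasting_twice is not None and min(fasting_twice) >= 7.0,
        ogtt_2h is not None and ogtt_2h >= 11.1,
        symptomatic and random_venous is not None and random_venous >= 11.1,
        symptomatic and capillary is not None and capillary >= 12.2,
        hba1c_ifcc is not None and hba1c_ifcc >= 48,
    ]
    return any(checks)

# Serological confirmation of T2D additionally required GAD antibodies
# < 10 units/mL and C-peptide >= 0.25 nmol/L, per the text above.
print(meets_diabetes_criteria(fasting_twice=(7.2, 7.5)))   # True
print(meets_diabetes_criteria(hba1c_ifcc=50))              # True
```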
High Hemoglobin A1c
HbA1c analyses were performed routinely at the time of diagnosis by Olympus automated clinical chemistry analysers with high specificity (Olympus AU®, Tokyo, Japan). The HbA1c values were collected from the EHRs (Cambio Cosmic®), which were used by all hospitals and primary care units in the two regions. The intra-assay coefficient of variation was < 1.2%. High HbA1c was defined as > 64 mmol/mol (IFCC) (> 8% NGSP); this cut-off level corresponded to the 75th percentile.
The participants were asked whether they had been depressed previously, and whether they used antidepressants. In both cases there were two response options, yes or no. Previous depression was defined as answering yes to the first of these two questions.
Anthropometrics
Weight and height were measured by a nurse according to standard procedures. Obesity was defined as Body Mass Index (BMI) ≥ 30 kg/m2 [21], and severe obesity as BMI ≥ 40 kg/m2 [22].
Smoking and physical activity
The patients reported smoking habits as never, previous, non-daily, or daily smokers, which were dichotomized into current smokers (daily and non-daily smokers), and non-smokers (never and previous smokers).
Physical activity was reported as ≥ 30 min of moderate activities performed never, less than once a week, 1-2 times/week, 3-5 times/week, or daily, corresponding to the registration in the Swedish National Diabetes Register (S-NDR) [23]. The levels of physical activity were dichotomized into physical inactivity (less than once a week) and physical activity (all other levels). Both leisure time physical activity and work-related physical activity were taken into account.
Cardiovascular disease
Patients were interviewed about previous MI and stroke. Data was complemented from the EHRs.
Total number of newly diagnosed T2D patients in the two regions
The total number of patients with newly diagnosed clinical T2D was collected from the EHRs in Region Kronoberg and Region Kalmar during the recruitment periods. The clinical T2D diagnoses were not systematically serologically confirmed.
Statistical analysis
As age was not normally distributed, the analyses were performed with the Mann-Whitney U test, and the results are presented as median (q1, q3). Pearson's Chi-squared test, Linear-by-Linear Association, or Fisher's Exact Test (all two-tailed) were used to analyse categorical data, which are presented as n (%). Odds ratios (OR) were calculated using simple (univariable) logistic regression analyses with current depression as the dependent variable. 95% confidence intervals (CI) were used. P < 0.05 was considered statistically significant. SPSS® version 27 (IBM, Chicago, Il, USA) was used.
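A minimal sketch of how an odds ratio and its 95% CI can be obtained from a simple logistic regression, as in the analysis above. The study used SPSS; statsmodels and the synthetic age/depression data below are stand-in assumptions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1027
age = rng.uniform(18, 94, n)
# Synthetic depression outcome, more likely at younger ages.
depressed = (rng.random(n) < 1 / (1 + np.exp(0.05 * (age - 40)))).astype(int)

X = sm.add_constant(age)                 # intercept + age (per year)
fit = sm.Logit(depressed, X).fit(disp=0)
or_age = np.exp(fit.params[1])           # OR per year of age
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR per year of age: {or_age:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```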
Results
In this study of depression in patients with newly diagnosed T2D, 1027 patients aged 18 to 94 years were included (women 40%, born in Sweden 88%). C-peptide levels ranged from 0.25 to 5.58 nmol/L. All patients were GAD antibody negative.
The total prevalence of current depression and obesity (BMI ≥ 30 kg/m2) and their distribution within seven age-groups are presented for all participants and for each sex in Fig. 2.
The prevalence of current depression was 12% for all 1027 participants, 15% for the 410 women, and 10% for the 617 men. The highest prevalence of current depression was found in the age-group 18-29 years (all patients 40%, women 60%, men 20%), and it declined successively until the age-group 60-69 years (all patients 8%) (p < 0.001). The prevalence of obesity was 55% for all patients, 60% for women, and 51% for men. The highest prevalence of obesity was found in the age-group 18-29 years (all patients 80%), and it declined successively until the age-group ≥ 80 years (all patients 26%) (p < 0.001).
In Table 1, baseline characteristics are presented for all participants, and age and sex specified.
In Table 2, characteristics are compared between currently depressed and non-depressed participants, and for the following variables anxiety, antidepressant use, obesity, smoking, and physical inactivity the comparisons are also presented by sex.
Among the younger participants (< 60 years), the 60 currently depressed compared to 270 non-depressed participants were more often women and had higher prevalence of previous depression, anxiety, and antidepressant use (all p < 0.001); obesity (BMI ≥ 30 and ≥ 40 kg/m2) (p = 0.003 and 0.018, respectively); and physical inactivity (p = 0.014). The highest prevalence of severe obesity (BMI ≥ 40 kg/m2) (29%) was found in the young depressed women. Among the older participants (≥ 60 years), the 61 currently depressed compared to 636 non-depressed participants were older (p = 0.045) and had higher prevalence of previous depression and anxiety (both p < 0.001) and antidepressant use (p = 0.035).
In Table 3, associations with current depression are presented.
Discussion
In this Swedish multicentre study of depression in 1027 participants with newly diagnosed T2D, younger age, female sex, previous depression, anxiety, antidepressant use, obesity (both BMI ≥ 30 and ≥ 40 kg/m2), smoking, and physical inactivity were associated with current depression. There were distinct differences between younger and older participants. Younger participants (< 60 years) had significantly higher prevalence of current and previous depression, anxiety, antidepressant use, obesity (both BMI ≥ 30 and ≥ 40 kg/m2), high HbA1c (> 64 mmol/mol), smoking, and physical inactivity than older participants. There were also sex differences, with higher prevalence of both current and previous depression, anxiety, antidepressant use, obesity (both BMI ≥ 30 and ≥ 40 kg/m2), and smoking in the younger women, but higher prevalence of HbA1c > 64 mmol/mol (> 8%) in the younger men. In the younger women, previous depression, anxiety, antidepressant use, and severe obesity were associated with current depression. In the younger men, physical inactivity, in addition to previous depression, anxiety, and antidepressant use, was associated with depression. In the older participants, the prevalence of current depression did not differ between women and men, but the older women had significantly higher prevalence of previous depression, antidepressant use, and obesity (both BMI ≥ 30 and ≥ 40 kg/m2) than the older men.
To our knowledge, the prevalence of depression at the time of diagnosis of T2D has not previously been explored in Sweden. According to previous research, depression is a risk factor for incident T2D [2,3] and for cardiovascular and all-cause mortality [1]. Depression as a risk factor for incident T2D seems to be as important as smoking and physical inactivity [3]. The presence of anxiety in depressed patients seems to further increase the risk of incident T2D [4]. In this study, the association between depression and anxiety was very robust in all subgroups. The prevalence of current depression (12%) in all participants was the same as the prevalence of depression in the Swedish population study (11.7%), where depression was defined as in the present study (HADS-D ≥ 8) [8]. However, the prevalence of current depression in the younger women (< 60 years) (31%) with newly diagnosed T2D was 2.6 times higher than for younger women in the Swedish population study (Swedish women: 18-84 years 11.2%, 18-64 years 11.9%, ≥ 64 years 9.4%) [8], while the younger men with newly diagnosed T2D had the same prevalence of depression (12%) as younger men in the Swedish population study (Swedish men: 18-84 years 12.3%, 18-64 years 12.2%, ≥ 64 years 13.2%) [8]. The prevalence of current depression in the older participants (≥ 60 years) (both men and women 9%) was approximately the same as for older women in the Swedish population study, but lower for the older men with T2D than in the Swedish population [8]. On the other hand, a large proportion of the older participants, particularly the women, reported having been depressed previously. In another Swedish study, the prevalence of depression (HADS-D ≥ 8) was 13% in patients hospitalized due to coronary heart disease and 4% in healthy controls [18]. Thus, the younger women with newly diagnosed T2D had 2.4 times higher prevalence of depression (31%) than patients with coronary heart disease (13%), and 7.8 times higher prevalence than the healthy controls (4%). In a Swedish study of depression (HADS-D ≥ 8) in patients with T1D aged 18-59 years, the prevalence of depression was 11% in the women and 10% in the men [24], rendering a depression prevalence 2.8 times higher in the younger women and 1.2 times higher in the younger men with newly diagnosed T2D in this study.
Obesity is another major contributor both to incident T2D [9,10,14,26] and to CVD and mortality [26]. The total prevalence of obesity (BMI ≥ 30 kg/m2) in the Swedish general population during the period 2016-2017 was 16.6% (women 14.5%, men 18.1%) [21]. In our study, the prevalence of obesity (BMI ≥ 30 kg/m2) was higher in both depressed and non-depressed participants compared to the Swedish general population. The highest prevalence of obesity was demonstrated in the depressed younger women (85%), 5.9 times higher than for women in the Swedish population (14.5%) [21]. The demonstrated association between obesity and depression in this study of T2D differs from findings in patients with T1D, where no association between depression and obesity was found [27].
Furthermore, smoking is a major risk factor both for incident T2D [15] and for CVD and mortality [27]. In 2016, the prevalence of smoking was 9% in the Swedish population [28], which can be compared to 13% in all participants, 20% in the depressed and 12% in the non-depressed participants, with the highest prevalence in the younger depressed participants (29%).
Physical inactivity is another important contributor both to incident T2D [9,10] and to cardiovascular and all-cause mortality [29]. The prevalence of physical inactivity (less than 30 min of moderate physical activity once a week) was high in the participants (31%) compared to Swedish T1D patients (13%) [24]. Physical inactivity was particularly common in the young depressed participants in our study (52%). We have no data from the Swedish general population on the prevalence of physical inactivity defined as in our study.
As T2D usually has a long pre-detection period of 3-7 years [10], and as we do not know for how long the participants in our study had experienced depressive symptoms or used antidepressants, we cannot determine whether depression preceded or succeeded the onset of T2D. Due to the cross-sectional design, no causality can be drawn from our results.
Clinically, as well as in further research, it is probably important to detect depression and explore potential underlying conditions in patients with newly diagnosed T2D, particularly in young persons, in order to provide optimal treatment. Underlying conditions could be either somatic, psychological and/or social. Increased cortisol secretion has been demonstrated in both depressed and obese patients [6,7,30,31], and cortisol secreting tumours may induce obesity and T2D [32]. Weight stigma and discrimination are linked to both depression and to binge eating disorder [33], which has a high lifetime risk for developing T2D [34]. Post-traumatic stress disorder (PTSD) has also been linked to depression [35], obesity [36], and T2D [37]. Attention deficit/hyperactivity disorder (ADHD) is another disorder previously associated with depression [38], obesity [39], and T2D [40]. Additionally, in further research, exploration of shared endocrine and inflammatory disturbances in patients with depression, obesity and T2D would be of interest. We intend to explore the food habits of these participants in a separate article.
One strength of our study is the multicentre recruitment from a large number of health care units in both urban and rural areas in two separate Swedish regions during two years. The clinical diagnoses of T2D were confirmed as all included participants had remaining insulin secretion without immunological signs of autoimmunity [16]. By including questions about previous depression and the use of antidepressants, we could show that depression was not just a reaction to the T2D diagnosis. Since previous research indicated increasing prevalence of severe obesity [22], two levels of obesity were reported. Relevant variables were included as depression previously has been linked to weight gain and obesity, particularly in atypical depression [6,7], incident T2D [2][3][4], high HbA1c levels [24], physical inactivity [41], smoking [42], coronary heart disease [1,17], stroke [43], and all-cause mortality [1]. Age is relevant as incident T2D increases with age [9], but younger-onset T2D is particularly harmful with increased mortality [13]. The increased prevalence of depression, obesity, smoking, and physical inactivity, may all be explanatory factors to the increased mortality previously demonstrated in younger-onset T2D [13]. Age divided into 7 age-groups made it possible to study the details of the distribution of depression and obesity, and also facilitated determining a suitable cut-off for further analyses. The prevalence of depression declined until the age-group 60-69 years, which indicated that the age of 60 years was a suitable cut-off level for the age analysis.
A major limitation of our study was the high number of non-respondents to the question "Do you take antidepressant medication?" Other limitations were that there was no information regarding the type of antidepressants or the duration of use. Also, current depression was not confirmed by a structured interview. Yet, HADS-D has shown high validity for assessing depressive symptoms both at an individual and a collective level [20]. HADS does not include symptoms that could be signs of a somatic disease accompanied by weight changes [19], and is extensively used in research [8, 17-20, 24, 25]. Patients with inadequate knowledge of Swedish were excluded, as we could not guarantee the quality of the translation of HADS into other languages. This exclusion criterion probably contributed to under-representation of immigrants. According to recent research, first-generation immigrants constitute about 21% of newly diagnosed T2D in Sweden [44], compared to 12% in this study. This is important as nationwide Swedish studies have shown increased risk of depression in immigrants [45]. Though the total number of included patients was quite high, it is still a limitation that just 29% of patients with newly diagnosed clinical T2D were included; however, the percentage might have been higher if only serologically verified T2D had been included in the total number. The reasons for non-participation were not registered systematically, but several health care units reported that they lacked time to include all new patients due to staff shortage. Other reasons for non-participation were that patients did not wish to participate, their T2D diagnoses were not serologically verified, or their HADS questionnaires were not adequately completed.
Conclusions
The younger women had the highest prevalence of depression, anxiety, and severe obesity. The prevalence of depression in young women and antidepressant use in all patients were higher than in the Swedish general population. Three risk factors for CVD, obesity, smoking, and physical inactivity, were associated with depression. | 2022-11-09T14:45:26.393Z | 2022-11-09T00:00:00.000 | {
"year": 2022,
"sha1": "114b1567ec5aab5e54353be97ded596cb14dd51c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "c5471c2619e2205146a8f0463a735ddf6fdc85f3",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266723253 | pes2o/s2orc | v3-fos-license | Effect of saponin on spermatogenesis and testicular structure in streptozotocin-induced diabetic mice
About a third of human infertility is related to male factors; of these, idiopathic infertility is not curable. Diabetes mellitus is a metabolic disorder affecting male potency and fertility through increased production of free radicals and oxidative stress. Saponin, a glycosidic compound found in many plants, improves sperm parameters. The present study investigated the effect of saponin on sperm oxidative stress and testicular structure in streptozotocin (STZ)-induced diabetic mice. Diabetes was induced by the administration of 150 mg kg-1 STZ via a single intra-peritoneal injection. The experimental mice were allocated to the following groups: a control group, a diabetic control group, a diabetic group administered 100 mg kg-1 saponin daily, and a healthy group administered saponin daily, for 56 days. At the end of the treatment period, serum levels of insulin, glucose, and oxidative stress markers were measured, and a histological evaluation of the testicles was performed. Treatment of diabetic mice with saponin ameliorated testicular tissue damage as well as serum glucose and insulin concentrations. Furthermore, in the diabetic group, the serum concentration of malondialdehyde was increased, while the activity of the superoxide dismutase and glutathione peroxidase enzymes was reduced. The mean Johnsen's score and the diameter and thickness of seminiferous tubules were lower in the diabetic mice than in control ones; however, these parameters were higher in the saponin-treated diabetic mice than in the diabetic controls. Overall, saponin administration rectified all examined parameters. The anti-oxidant role of saponin improves sperm parameters and diabetes-induced testicular oxidative damage.
Introduction
According to the report of the World Health Organization, infertility is a disorder occurring in 10.00-15.00% of couples, of which 30.00-40.00% are associated with the male factor.1,2 Abnormal semen parameters due to factors other than idiopathic causes can be improved, while treatment for poor idiopathic semen quality is not promising.3 Diabetes, or diabetes mellitus, is a chronic endocrine disease causing numerous concerns worldwide. Diabetes mellitus is a heterogeneous metabolic disorder caused by the lack of insulin production in the body or by insulin resistance, impairing male sexual ability and fertility.4,5 Testicular dysfunction decreases the testicular weight along with sperm count and motility and changes the morphology of the seminiferous tubules. Testosterone levels are also reduced.6 Diabetes increases the apoptosis rate (up-regulation of pro-apoptotic genes such as Bax) in germ cells and also interrupts the spermatogenesis process.5 In about 90.00% of diabetic patients, defects in sexual activity are seen as decreased libido and reduced fertility.7 Although the exact mechanism of diabetes mellitus is not well understood, the increased production of free radicals and increased oxidative stress are its major proposed damaging mechanisms.7,5 The presence of anti-oxidants such as vitamins or flavonoids in the diet can exert protective effects in diabetic patients.8 Reactive oxygen species (ROS) overproduction damages the mitochondrial membrane, causing cytochrome C release and resulting in apoptosis induction in testicular tissue cells.7 Saponins are glycosidic chemical compounds abundant in many plants. Saponin is involved in protecting the plant against germs and fungi. Although high doses of this substance are very toxic, several reports have indicated that saponin increases sperm motility, viability, and hormone levels.9,10 This study aimed to investigate the effects of saponin on spermatogenesis, testicular tissue damage, and blood biochemical and hormonal parameters in diabetic mice.
Materials and Methods
Sixty-four male mice weighing 25.00 to 30.00 g were obtained from the Animal House of Tabriz University of Medical Sciences, Tabriz, Iran, and kept for 2 weeks in standard conditions with 12 hr of light and adequate humidity. All procedures performed in studies involving animals were in accordance with the ethical standards of Tabriz University, Tabriz, Iran (Ethical code: 1398.027).
The animals were randomly divided into 4 groups of 16 and treated as follows: Group 1: the control group (no injections); Group 2: the diabetic control group, receiving a single intraperitoneal injection of 150 mg kg-1 streptozotocin (STZ);11 Group 3: the treatment group, receiving 150 mg kg-1 STZ (one injection) and 100 mg kg-1 per day saponin intraperitoneally for 8 weeks; Group 4: the healthy control group, receiving 100 mg kg-1 per day saponin via intra-peritoneal injection for 8 weeks.2
At first, the glucose levels of all mice in both experimental and control groups were determined by a glucometer (Easy-Gluco 2657A; Complete Medical Supplies Inc., New York, USA). Then, to induce diabetes, 150 mg kg-1 STZ was administered intraperitoneally to groups 2 and 3. After 72 hr, blood glucose levels were measured again. After confirming that the mice were diabetic (blood glucose levels above 250 mg dL-1), they received 100 mg kg-1 of saponin via intra-peritoneal injection once a day for 56 days.12 At the end of the treatment period, all mice were anesthetized with a combination of 50.00 mg kg-1 ketamine (Panpharma, Luitré-Dompierre, France) and 10.00 mg kg-1 xylazine (Alfasan, Woerden, The Netherlands). Then, 2.00 to 3.00 mL blood samples were taken from the hearts of the animals for biochemical assays.
In order to isolate sera, immediately after sampling, blood samples were centrifuged at 3,000 rpm for 10 min, and the harvested sera were stored at -80.00 ˚C until used. The glucose concentration was measured by a commercial kit (Iran Pars Azmoon, Tehran, Iran). Serum concentrations of insulin were measured by enzyme-linked immunosorbent assay (ELISA) using a standard commercial kit for mice (Mercodia Inc, Uppsala, Sweden) and reported as μg L-1.
The lower abdominal area was incised under sterile conditions, and both testicles and epididymides were bilaterally removed and weighed. For histological examination, the right testicle was fixed in Bouin's fixative for 72 hr. Then, 5.00 μm sections were prepared13,5 and stained with the hematoxylin and eosin method. About 50 round seminiferous tubules were randomly examined by a light microscope (CX22; Olympus, Tokyo, Japan) at 400× magnification to determine the seminiferous tubule diameter, germinal epithelium height, and spermatogenesis alterations.
Serum testosterone concentration was measured using a commercial ELISA kit (Demeditec Diagnostics, Kiel, Germany). Briefly, serum samples (25.00 μL) were incubated with 200 μL enzyme conjugate in pre-coated wells for 60 min at room temperature. Then, the wells were washed three times with 300 μL diluted irrigation solution and incubated with 200 μL substrate solution for 15 min at room temperature. The enzymatic reaction was ended by adding 100 μL stop solution, and the optical density of the solution in each well was recorded at 450 nm. The testosterone concentration was calculated using six standard concentrations and four-parameter logistic curve fitting. The final testosterone concentration was obtained from each set of duplicates and expressed as ng mL-1.
The superoxide dismutase (SOD) activity of serum samples was measured using a commercial kit (Ransod; Randox Laboratories Ltd., Crumlin, UK) according to Arthur and Boyne.14 In summary, this method is based on the generation of superoxide radicals by adding xanthine and xanthine oxidase to the sample and their reaction with 2-(4-iodophenyl)-3-(4-nitrophenol)-5-phenyltetrazolium chloride to form a red formazan dye. The SOD activity is then measured by the degree of inhibition of this reaction and expressed as U of SOD per 10.00 mg of protein. Protein was measured using a spectrophotometer (Thermo Fisher Scientific, Waltham, USA) according to the method described by Bradford.15 Glutathione peroxidase (GPx) activity was measured by a diagnostic kit (Randox) according to Paglia and Valentine.16 In this method, the oxidation of glutathione (GSH) is catalyzed by cumene hydroperoxide. The oxidized GSH is immediately converted into the reduced form, with concomitant oxidation of nicotinamide adenine dinucleotide phosphate (NADPH) to NADP+ (the oxidized form of NADPH), in the presence of glutathione reductase. The decline in absorbance at 340 nm is then measured in a spectrophotometer (Thermo Fisher Scientific) and expressed as U L-1.
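The kinetic calculation behind this kind of GPx assay can be sketched as follows. The NADPH millimolar extinction coefficient of 6.22 /mM/cm and the volume factors are typical for such assays but are assumptions here, as the kit's exact factors are not given in the text:

```python
def gpx_activity_u_per_l(delta_a340_per_min, total_vol_ml=1.0,
                         sample_vol_ml=0.05, path_cm=1.0,
                         epsilon_mM=6.22):
    """GPx activity (U/L) from the rate of NADPH oxidation at 340 nm.
    One U oxidizes 1 umol of NADPH per minute; all factors are assumed."""
    rate_mM_per_min = delta_a340_per_min / (epsilon_mM * path_cm)
    # Convert cuvette rate to sample activity: dilution factor, then
    # mmol/L/min -> umol/L/min (= U/L).
    return rate_mM_per_min * (total_vol_ml / sample_vol_ml) * 1000

print(f"{gpx_activity_u_per_l(0.030):.0f} U/L")  # ~96 U/L for dA/min = 0.030
```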
To measure serum malondialdehyde (MDA) levels, first, 0.20 mL of serum was added to a microtube containing 3.00 mL of glacial acetic acid, following which 1.00% thiobarbituric acid (in 2.00% NaOH) was added to the microtube. The tube was then placed in boiling water for 15 min. After cooling, the absorbance of the resulting pink solution was read in a spectrophotometer (Thermo Fisher Scientific) at 532 nm.17
Statistical analysis
All statistical analyses were carried out using SPSS software (version 19.0; IBM Corp., Armonk, USA). After ensuring the normal distribution of the variables, they were compared using one-way analysis of variance. Tukey's post hoc test was applied to determine the differences between groups. The results were expressed as mean ± standard deviation. For all data, p < 0.05 was considered statistically significant.
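A minimal sketch of the one-way ANOVA followed by Tukey's post hoc test described above; scipy/statsmodels stand in for SPSS, and the group values are synthetic assumptions:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
# Synthetic serum glucose values (mg/dL) for the four groups (n = 16 each).
groups = {
    "control":          rng.normal(110, 10, 16),
    "diabetic":         rng.normal(320, 30, 16),
    "diabetic+saponin": rng.normal(180, 25, 16),
    "healthy+saponin":  rng.normal(112, 10, 16),
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F={f_stat:.1f}, p={p_value:.3g}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 16)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```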
Results
A significant increase was found in serum glucose levels in group 2 compared to group 1 at the end of the study (p < 0.05). Additionally, a significant decrease was observed in serum glucose levels in group 3 in contrast to group 2 at the same time (p < 0.05; Table 1). This was true for one week before and one week after diabetes induction. The administration of saponin to healthy mice (group 4) did not significantly alter glucose concentrations at any time of sampling.
The results of the histological evaluation showed that the mean Johnsen's score (MJS) was decreased (p < 0.05) in group 2 compared to group 1 (Table 1 and Fig. 2).
On the other hand, the MJS was higher (p < 0.05) in groups 3 and 4 than in group 2. Histopathological examination showed that the diameter of seminiferous tubules was decreased (p < 0.05) in group 2 compared to group 1. Similarly, the thickness of seminiferous tubules was decreased (p < 0.05) in group 2 compared to group 1.
In addition, the diameter of seminiferous tubules was increased (p < 0.05) in groups 3 and 4 compared to group 2. In the same manner, the thickness of the seminiferous tubules was higher (p < 0.05) in groups 3 and 4 than in group 2 (Table 1).
Fig. 1. The serum concentrations of A) insulin and B) testosterone in group 1 (control), group 2 (diabetic control), group 3 (diabetic treated with 100 mg kg-1 saponin) and group 4 (healthy saponin-treated mice). abc Indicate significant differences between control (group 1) and other groups (p < 0.05).
As shown in Table 1, a substantial increase in MDA levels was observed in the testes of group 2 compared to group 1 (p < 0.05). Groups 3 and 4 showed a dramatic decline in serum MDA levels compared to group 2 (p < 0.05). The SOD activity was decreased in group 2 compared to group 1 (p < 0.05). The treatment of the diabetic group with saponin (group 3) elevated the activity of the SOD enzyme in comparison with group 2 (p < 0.05). The SOD activity was also increased in group 4 compared to group 2. The activity of the GPx enzyme was also decreased in group 2 compared to group 1 (p < 0.05). Furthermore, groups 3 and 4 showed higher GPx enzyme activity compared to group 2 (p < 0.05).
Discussion
The present study examined the ameliorative effect of saponin on diabetes-induced injuries to the male mouse reproductive system. The findings of the present study showed that saponin declined blood glucose and oxidative stress markers in the testes of diabetic mice. Diabetes produces testicular dysfunctions, and reportedly, treatment with saponin improves these functional deficiencies via its anti-oxidant and anti-diabetic properties.9,10 Accordingly, some studies have reported that treatment of STZ-induced diabetic mice with saponin reduces blood glucose levels and increases tissue sensitivity to insulin.18,19 In another study, the saponin-containing fraction of the charantia plant stimulated insulin secretion in an in vitro static incubation assay.20 The hypoglycemic effect of saponin is related to its ability to increase the sensitivity of tissues to insulin.19,21 In diabetic patients, in addition to an enhanced amount of blood glucose, the balance between the generation and resolution of free radicals is also disturbed. As a result, free radical levels increase and cause oxidative stress.7,12 Oxidative stress results in cell injury via mechanisms such as lipid peroxidation and oxidative damage to DNA and proteins.22 The results of the present study showed that diabetes remarkably increased the MDA (a lipid peroxidation marker) levels in the testicular tissue of diabetic mice, indicating elevated lipid peroxidation. This finding corresponds to the results of previous research on the effects of oxidative stress on the testes of diabetic mice.5,7 Several studies in this context have reported an increase in lipid peroxidation and MDA level in diabetic patients.23 Other studies have reported that saponin scavenges the free radicals generated during lipid peroxidation.24 Hence, the decline in testis MDA concentrations in the saponin-treated group may be related to the anti-oxidant effects of saponin. Akbarizare et al.25 have shown that saponin decreases the MDA level, probably due to its anti-oxidant properties.
The activity of SOD dramatically declined in the diabetic mice in this study. These results confirm the findings of previous studies. SOD is known as one of the most important enzymes of the anti-oxidant system. It mainly catalyzes the conversion of superoxide anion radicals to H2O2. Through this procedure, the toxicity of superoxide is decreased and no free radicals are produced from superoxide.22 The activity of SOD was remarkably enhanced in the serum of diabetic mice treated with saponin in contrast to the diabetic control group in the present study. This is in line with the related literature. Hu et al.26 have shown that saponin increases serum SOD levels and protects against cisplatin-evoked intestinal injury via multiple ROS-mediated mechanisms.
In the present research, GPx enzyme activity was markedly reduced in the diabetic mice compared to the control group; however, it was notably increased in the saponin-treated group compared to the diabetic control group. GPx, an anti-oxidant enzyme, is another enzyme with detoxification effects against free radicals. 27 The decline in GPx activity in this study may be due to increased H2O2 generation arising from glucose autoxidation and non-enzymatic protein glycation, which cause the production of oxygen free radicals. 28 It is well known that anti-oxidant therapy increases GPx activity. 29 In the present study, STZ-induced diabetes in mice resulted in alterations in the histological indices of testicular tissue. Treatment of the diabetic mice with saponin ameliorated most of the diabetes-induced deficits as well as spermatogenesis; these alleviating effects in the treated animals were almost similar to those of the healthy control group (group 4).
The reduction or absence of insulin can also decrease testosterone concentrations, causing testicular atrophy. Insulin itself is known as an anti-apoptotic factor that can control testicular apoptosis and the reproductive malfunction resulting from diabetes. 30 In line with the findings of this study, previous studies have indicated that medicinal plants containing flavonoids can improve sperm quality and testosterone levels. 5,7,31 In a similar study, the increased rate of testicular germ cell death through apoptosis in STZ-induced diabetic rats was prevented by a Dracaena arborea aqueous extract containing saponins. 32 Feasible mechanisms involved in the recovery from testicular oxidative stress by saponin in diabetic mice include its anti-oxidant property, decreased blood glucose and enhanced insulin secretion. 33 Treatment of diabetic mice with saponin ameliorated diabetes-induced histological alterations in the seminiferous tubules. In this regard, the MJS and the diameter and thickness of the seminiferous tubules were decreased in the diabetic mice in the present study. These alterations are often important indicators of spermatogenic dysfunction, alongside decreases in sperm production. 34 All these alterations could be due to the toxic effect of STZ on the male reproductive system via a decrease in testosterone concentrations and a consequent interruption of testicular function. 35 This late event could result in the reduction and death of germ cells. 36 Free radicals induced by oxidative stress have been proposed to explain the etiology and pathophysiology of the biological effects of diabetes mellitus; in this regard, the free radicals generated by STZ metabolism can damage DNA and chromosomes, resulting in cell death via apoptosis or necrosis. 35 Moreover, serum testosterone levels were decreased in the diabetic mice, which may be related to testicular tissue damage and Leydig cell injury. However, treatment with saponin was able to ameliorate these damages. In this regard, Shoorei et al. 7 reported that diabetes induces testicular tissue damage and decreases testosterone levels in diabetic rats.
In conclusion, diabetes exerts a negative effect on the testis and sperm quality through oxidative stress. Saponin has a potent effect on anti-oxidant system activation, reducing the oxidative stress induced by diabetes. However, further detailed research is required to confirm these results. | 2024-01-03T05:06:19.864Z | 2023-11-15T00:00:00.000 | {
"year": 2023,
"sha1": "bd61a1c6e32cd372e3882c06f34152be2d3ff6c7",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "bd61a1c6e32cd372e3882c06f34152be2d3ff6c7",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246046551 | pes2o/s2orc | v3-fos-license | Genome-Wide Methylation Profiling of lncRNAs Reveals a Novel Progression-Related and Prognostic Marker for Colorectal Cancer
Sporadic colorectal cancer (CRC) develops principally through the adenoma-carcinoma sequence. Previous studies revealed that DNA methylation alterations play a significant role in colorectal neoplastic transformation. On the other hand, long noncoding RNAs (lncRNAs) have been identified to be associated with some critical tumorigenic processes of CRC. Accumulating evidence indicates more intricate regulatory relationships between DNA methylation and lncRNAs in CRC. Nevertheless, the methylation alterations of lncRNAs at different stages of colorectal carcinogenesis based on a genome-wide scale remain elusive. Therefore, in this study, we first used an Illumina MethylationEPIC BeadChip (850K array) to identify the methylation status of lncRNAs in 12 pairs of colorectal cancerous and adjacent normal tissues from cohort I, followed by cross-validation with The Cancer Genome Atlas (TCGA) database and the Gene Expression Omnibus (GEO) database. Then, the abnormal hypermethylation of candidate genes in colorectal lesions was successfully confirmed by MassARRAY EpiTYPER in cohort II including 48 CRC patients, and cohort III including 286 CRC patients, 81 advanced adenoma (AA) patients and 81 nonadvanced adenoma (NAA) patients. DLX6-AS1 hypermethylation was detected at all stages of colorectal neoplasms and occurred as early as the NAA stage during colorectal neoplastic progression. The methylation levels were significantly higher in the comparisons of CRC vs. NAA (P < 0.001) and AA vs. NAA (P = 0.004). Moreover, the hypermethylation of DLX6-AS1 promoter was also found in cell-free DNA samples collected from CRC patients as compared to healthy controls (P adj = 0.003). Multivariate Cox proportional hazards regression analysis revealed DLX6-AS1 promoter hypermethylation was independently associated with poorer disease-specific survival (HR = 2.52, 95% CI: 1.35-4.69, P = 0.004) and overall survival (HR = 1.64, 95% CI: 1.02-2.64, P = 0.042) in CRC patients. Finally, a nomogram was constructed and verified by a calibration curve to predict the survival probability of individual CRC patients (C-index: 0.789). Our findings indicate DLX6-AS1 hypermethylation might be an early event during colorectal carcinogenesis and has the potential to be a novel biomarker for CRC progression and prognosis.
INTRODUCTION
Colorectal cancer (CRC) is the third most commonly diagnosed cancer and the second leading cause of cancer-related death worldwide, with an estimated 1.9 million new cases and 935,000 deaths in 2020 (1). The majority of CRC cases are sporadic and develop principally through the adenoma-carcinoma sequence (2). It is well established that the gradual accumulation of multiple genetic and epigenetic changes plays a key role in the initiation and progression of colorectal carcinogenesis (3). In addition to conventional genetic variants, the regulatory contribution of epigenetic alterations has also been identified as a causative factor during cancer initiation and progression.
To date, aberrant DNA methylation, primarily in the form of hypermethylated or hypomethylated CpG dinucleotides within the genome, is one of the most extensively studied epigenetic alterations in human cancer (4). In particular, hypermethylation of gene promoter regions, which is frequently characterized by transcriptional silencing, remains the most dominant phenomenon during cancer development (5). Many studies have reported DNA methylation changes in cancer-related genes in CRC using genome-wide-based approaches or candidate gene strategies (6)(7)(8). Notably, these aberrant methylation alterations occur more frequently at the early stages of neoplastic progression (6). Indeed, hierarchical hypermethylation patterns of CRC-related suppressor genes, such as SFRP2, SEPT9 and MPPED2, have been observed throughout the progression stages of colorectal carcinogenesis (9)(10)(11). Taken together, these findings indicate that abnormal changes in DNA methylation might be hallmarks of CRC initiation and progression. DNA hypermethylation might be one of the first detectable neoplastic alterations associated with carcinogenesis.
Long noncoding RNAs (lncRNAs), once dismissed as useless transcripts, have now been proven to be important regulators involved in biological, developmental, and pathological processes (13,14). Remarkably, accumulating evidence supports more intricate regulatory relationships between DNA methylation and lncRNAs (15,16). For instance, by performing an integrated analysis of epigenome and transcriptome data, Miller-Delaney et al. revealed that differential methylation might play an important role in the transcriptional regulation of lncRNAs in human temporal lobe epilepsy (17). He et al. identified 18 lncRNAs involved in methylation modifications that contributed to tumorigenesis and development in glioma (16). Nevertheless, methylation studies of lncRNAs in CRC have largely been based on candidate gene strategies (18,19). LncRNA methylation markers of CRC identified on a genome-wide scale remain elusive.
Therefore, in this study, we first used an Illumina MethylationEPIC BeadChip (850K array) to identify the methylation status of lncRNAs in CRC. Then, we performed a technical validation of six candidate genes with MassARRAY EpiTYPER in CRC, followed by a comprehensive study to analyze the DLX6-AS1 methylation pattern at different stages of colorectal neoplasms, from nonadvanced adenoma (NAA) to advanced adenoma (AA) to colorectal carcinoma. Furthermore, we evaluated the DLX6-AS1 methylation levels in peripheral blood leucocyte DNA and analyzed their consistency with local lesions from the same patient. The methylation status of the DLX6-AS1 promoter in cell-free DNA (cfDNA) of CRC patients was also evaluated. In addition, we performed survival analysis to clarify the prognostic role of methylated DLX6-AS1 in CRC prognosis. A nomogram was established to predict the survival rate for CRC patients.
Study Design and Participants
A flowchart for this study is shown in Figure 1. Briefly, this study was carried out in three cohorts. First, a genome-wide methylation scan by 850K array on cancerous and paired normal tissues from 12 CRC patients in cohort I was performed, followed by cross-validation using DNA methylation data from the TCGA database (https://cancergenome.nih.gov) and the GEO database (https://www.ncbi.nlm.nih.gov/geo/). The DNA methylation data from TCGA and GEO were generated using an Illumina HumanMethylation450 BeadChip (450K array) in 438 CRC tissue samples (393 tumor, 45 normal) and 208 CRC tissue samples (104 tumor, 104 normal), respectively. An overview of the external datasets used in this study is shown in Supplementary Table S1. Then, 48 pairs of CRC tissue samples from cohort II were tested. Additionally, the methylation levels of DLX6-AS1 were further validated in cohort III, which consisted of 286 CRC patients, 81 AA patients and 81 NAA patients. The characteristics of the participants in each cohort subjected to the tissue-based methylation analysis are shown in Table 1.
To evaluate the DNA methylation levels in peripheral blood, we randomly sampled 60 CRC patients and 60 adenoma patients with complete tissue-based DNA methylation data from cohort II and cohort III, and 60 healthy controls from a population-based cohort. The DNA methylation status of the same region as measured in tissue samples was tested in each sample of peripheral blood leucocyte DNA. The characteristics of the participants subjected to the peripheral blood-based methylation analysis are shown in Supplementary Table S2.
To evaluate the DNA methylation levels in cfDNA, the DNA methylation data generated by 850K array in 7 cfDNA samples (3 CRCs, 4 healthy controls) were obtained from GEO database (Supplementary Table S1).
To evaluate the influence of DLX6-AS1 methylation on survival, CRC patients with successfully measured DNA methylation data in our cohort II and cohort III were pooled together, and CRC patients with both available methylation data and survival information from the TCGA database were used as an external validation.
CRC patients from Shaoxing People's Hospital were enrolled between January 2015 and July 2018. Participants with AA or NAA and healthy controls were selected from an ongoing population-based cohort since 1989 in Jiashan County, which has been described previously (11). All participants were ethnic Han Chinese from Zhejiang Province and were pathologically confirmed, with no familial adenomatous polyposis (FAP), no previous history of CRC and no preoperative anticancer treatment. For each participant, histologically confirmed tissue samples, including a colorectal lesion (carcinoma or adenoma) and an adjacent normal mucosa sample, and peripheral blood samples were obtained. The adjacent normal mucosa was collected from the colonic mucosa 5 cm distal from the main neoplasm. Adenomas were classified as AA (any adenoma ≥ 1 cm, high-grade dysplasia, or with tubulovillous or villous histology) and NAA (adenomas < 1 cm without advanced histology) according to current guidelines (20). The TNM staging classification for CRC was determined according to the 7th edition of the American Joint Committee on Cancer (AJCC) cancer staging manual (21).
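The AA/NAA rule quoted above is a simple decision criterion; as a hedged illustration (the function and field names are hypothetical, not taken from the study's data), it can be encoded directly:

```r
## Toy encoding of the adenoma classification rule quoted above.
## Inputs are hypothetical: lesion size in cm and two logical flags.
classify_adenoma <- function(size_cm, high_grade, tubulovillous_or_villous) {
  ifelse(size_cm >= 1 | high_grade | tubulovillous_or_villous, "AA", "NAA")
}

classify_adenoma(1.2, FALSE, FALSE)  # "AA"  (size criterion met)
classify_adenoma(0.6, FALSE, FALSE)  # "NAA" (no advanced feature)
```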
The study protocol was approved by the Medical Ethics Committee of Zhejiang University School of Medicine. Before basic information and sample collection, written informed consent was obtained from all recruited participants.
DNA Extraction and Bisulfite Modification
Genomic DNA from fresh-frozen samples and peripheral blood leukocytes was isolated using a DNA Tissue Kit (TianLong Biotech, Xi'an, China) and a RelaxGene Blood DNA System (TianGen Biotech, Beijing, China), respectively. Bisulfite treatment was conducted on genomic DNA (500 ng) using the EZ Methylation Gold Kit (Zymo Research, Irvine, CA, USA). All procedures were conducted in accordance with the manufacturer's instructions.
Illumina Methylation Assay
Genome-wide DNA methylation profiling was analyzed using the 850K array in 12 pairs of cancerous and adjacent normal tissues according to the manufacturer's instructions as described in a previous study (11). In this study, the raw array data were processed using the ChAMP package in R software to derive methylation levels, expressed as beta values (fractional methylation values between 0 and 1). We focused mainly on probes located in the promoter regions of lncRNAs, defined as 1500 bp upstream and downstream of the transcription start site (TSS). The lncRNA annotation file was obtained from LNCipedia (https://hg19.lncipedia.org/) and the mapping procedure was conducted using bedtools (22). Probes were selected on the basis of showing a difference in methylation of ≥ 0.20 and an adjusted P value (Benjamini-Hochberg method) < 0.05. To cross-validate the results based on our samples, the eligible methylation data in TCGA and GEO were obtained and analyzed. The detailed procedures of data processing are provided in the Supplementary Methods. Due to the larger coverage of the 850K array as compared to the 450K array, the new probes in the 850K array were cross-validated using the average beta value of the promoter regions of the target genes in the 450K array.
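Since the analyses in this study were run in R, the probe-selection step described above can be sketched as follows. This is a minimal illustration rather than the authors' actual pipeline: it assumes `beta_tumor` and `beta_normal` are hypothetical probe-by-sample matrices of beta values for the 12 matched pairs, already loaded and normalized (e.g., with ChAMP), with probe IDs as row names.

```r
## Per-probe difference in mean beta value between tumor and normal
delta_beta <- rowMeans(beta_tumor) - rowMeans(beta_normal)

## Paired t-test per probe across the 12 tumor/normal pairs
p_raw <- apply(beta_tumor - beta_normal, 1, function(d) t.test(d)$p.value)

## Benjamini-Hochberg adjustment, as used in the study
p_adj <- p.adjust(p_raw, method = "BH")

## Selection criteria from the text: methylation difference >= 0.20
## and adjusted P value < 0.05
selected <- names(delta_beta)[abs(delta_beta) >= 0.20 & p_adj < 0.05]
```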
Sequenom MassARRAY EpiTYPER Assay
The methylation levels of particular CpG sites located in the promoter regions of candidate genes were verified using MassARRAY EpiTYPER (Sequenom, San Diego, CA). The schematic representation of each candidate gene is provided in the UCSC browser (http://genome.ucsc.edu). The primers were designed using EpiDesigner (http://epidesigner.com, Supplementary Table S3). The analyzed sequences are shown in Supplementary Figures S1-6. In some cases, fragments resulting from the T-cleavage reaction may contain small groups of adjacent CpG sites and are therefore referred to as "CpG units". CpG sites that were outside of the mass spectrometry analytical window (low or high mass) were filtered out. The mass spectra were collected on a MassARRAY Compact MALDI-TOF system (Sequenom, BioMiao Biological Technology, Beijing, China), and the methylation proportions of individual units on the spectra were generated by EpiTYPER software (Sequenom, San Diego, CA). Methylation levels ranging from 0 (completely nonmethylated) to 1 (fully methylated) are presented. For each gene, CpG units with missing values in more than 20% of the samples were removed, as were samples with missing values in more than 20% of CpG units. The average methylation value of all CpG units was calculated as a representation of the region-specific gene methylation level.
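The CpG-unit quality filter and region-level summary described above amount to simple matrix bookkeeping. A minimal sketch, assuming `meth` is a hypothetical samples-by-CpG-unit matrix for one gene with failed measurements coded as NA:

```r
## Drop CpG units missing in more than 20% of samples
meth <- meth[, colMeans(is.na(meth)) <= 0.20, drop = FALSE]

## Then drop samples missing more than 20% of the remaining units
meth <- meth[rowMeans(is.na(meth)) <= 0.20, , drop = FALSE]

## Region-specific methylation level = average across CpG units per sample
gene_meth <- rowMeans(meth, na.rm = TRUE)
```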
Statistical Analysis
Statistical analyses were performed in R software (version 3.6.2, R Foundation for Statistical Computing, Vienna, Austria). Continuous variables are presented as the mean and standard deviation (SD), and categorical variables are presented as the frequency.
A paired Student's t test was used to assess the differences in DNA methylation levels between colorectal lesion tissues and paired normal tissues. Analysis of variance (ANOVA) followed by Bonferroni's posttest was used to examine significant differences between different groups. Pearson correlation analyses were used to evaluate the consistency of DLX6-AS1 methylation levels between peripheral blood and local lesions of the same patients with CRC or adenoma. The performance of the mean methylation level of candidate genes in distinguishing colorectal lesion tissues from their adjacent normal tissues was tested by receiver operating characteristic (ROC) curve analysis, and the area under the curve (AUC), sensitivity, and specificity were calculated. In the survival analysis, we adopted the best Youden index based on the time-dependent ROC curve as an optimal cutoff to dichotomize the study patients into high-risk and low-risk groups. Survival differences between groups were assessed using the Kaplan-Meier method and compared by the log-rank test. Hazard ratios (HRs) and 95% confidence intervals (95% CIs) were calculated by univariate and multivariate Cox proportional hazards regression analyses. The multivariate analysis was adjusted for age, sex and TNM stage. A nomogram was established to predict the 1-, 2-, 3- and 4-year survival for CRC patients. Harrell's concordance index (C-index) was measured to quantify the discrimination ability of the nomogram, while calibration curves were used to evaluate whether the predicted survival probabilities were consistent with those observed. All analyses were two-sided, with a P value < 0.05 regarded as statistically significant.
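As a concrete, hedged sketch of the survival modeling described above (the study's own code is not published here), the Cox models can be expressed with the `survival` package. The data frame `d`, its column names, and the cutoff value are all hypothetical; the real cutoff was chosen by the best Youden index on a time-dependent ROC curve.

```r
library(survival)

## Dichotomize methylation at an illustrative cutoff (0.35 is made up;
## the paper derived its cutoff from a time-dependent ROC curve)
d$meth_group <- factor(ifelse(d$dlx6as1_meth >= 0.35, "high", "low"),
                       levels = c("low", "high"))

## Univariate and multivariate Cox proportional hazards models
fit_uni   <- coxph(Surv(dss_months, dss_event) ~ meth_group, data = d)
fit_multi <- coxph(Surv(dss_months, dss_event) ~ meth_group + age + sex +
                     tnm_stage, data = d)
summary(fit_multi)  # HR and 95% CI for high vs. low methylation

## Kaplan-Meier comparison by log-rank test
survdiff(Surv(dss_months, dss_event) ~ meth_group, data = d)
```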
Discovery of Differentially Methylated lncRNAs From Genome-Wide Profiling
By DNA methylation profiling, a total of 185 differentially methylated CpG sites mapping to the promoters of lncRNAs (all with P adj < 1×10^-5 and β difference > 0.20) were identified by the 850K array in 12 pairs of colorectal cancerous and adjacent normal tissues, followed by cross-validation using DNA methylation data generated by the 450K array in CRCs from the TCGA database (tumor = 393, normal = 45) and the GEO database (tumor = 104, normal = 104), respectively (Supplementary Table S4). Among them, 95.14% (176/185) of the identified CpG sites were significantly hypermethylated and 4.86% (9/185) were significantly hypomethylated. The methylation levels for each differentially methylated CpG site are shown as heat maps (Figure 2). Among the list of CpG sites, we focused on the six top-ranked sites (cg24014202 in DLX6-AS1, cg18323466 in lnc-DPH5-1, cg08430489 in lnc-PRSS2-6, cg17722675 in lnc-RPS12-6, cg00159100 in lnc-SFRP4-2, cg27442308 in SOX21-AS1) as candidate biomarkers for subsequent technical confirmation analysis (Figure 3).
Elucidation of the Aberrant DLX6-AS1 Methylation Pattern During Colorectal Neoplastic Progression
To elucidate the DLX6-AS1 methylation pattern during colorectal neoplastic progression, the methylation status was assessed in colorectal lesion tissues and adjacent normal tissues from 286 CRCs, 81 AAs and 81 NAAs in cohort III with MassARRAY EpiTYPER. Among them, 433 histologically confirmed colorectal lesion tissues (283 CRCs, 76 AAs and 74 NAAs) and 441 adjacent normal tissues (284 CRCs, 80 AAs and 77 NAAs) were successfully measured. DLX6-AS1 hypermethylation was detected at all stages of colorectal neoplasms, even as early as the NAA stage. In comparisons across lesion stages (Table 2), the DLX6-AS1 promoter was significantly hypermethylated in CRC vs. NAA (P < 0.001) and in AA vs. NAA (P = 0.004), but not in CRC vs. AA (P = 1.000).
Evaluation of DLX6-AS1 Methylation Levels in Peripheral Blood and Their Consistency With Local Colorectal Lesions
To evaluate the potential of DLX6-AS1 methylation as a noninvasive biomarker for the diagnosis of colorectal neoplasms, DLX6-AS1 methylation levels were measured in the peripheral leucocyte DNA of 60 CRC patients, 60 adenoma patients and 60 healthy controls. However, there were no significant differences in peripheral bloodbased DLX6-AS1 methylation levels in multiple comparisons between CRC patients, adenoma patients and healthy controls (Supplementary Table S5). Even though some CpG units, such as CpG_2.3, reached a statistically significant level (P = 0.017), the methylation levels did not differ much across the different groups. When evaluating the consistency between peripheral blood and local lesions from the same patients (Supplementary Table S6), the Pearson correlation analysis showed poor correlations between matched peripheral blood and local lesions (P = 0.362 for CRCs and 0.893 for adenomas, respectively, in average methylation levels).
DLX6-AS1 Methylation in Cell-Free DNA Samples From Colorectal Cancer
To identify the methylation status of the DLX6-AS1 promoter in cfDNA of CRC patients, we analyzed the 850K-array methylation data from the cfDNA samples in the GEO database; DLX6-AS1 promoter hypermethylation was observed in CRC patients as compared to healthy controls (P adj = 0.003). In survival analysis, CRC patients with a high DLX6-AS1 methylation status showed poorer survival rates than those with a low methylation status (P = 0.017, Figure 6A).
Construction of a Nomogram Model to Predict the Survival
We further built a nomogram, including the methylation status of DLX6-AS1 and clinical factors (age, gender, and TNM stage).
The nomogram served as an individual prognostic predictor of the probability of 1-, 2-, 3-, and 4-year disease-specific survival for CRC patients (Figure 7A). The C-index of the nomogram for predicting the DSS of CRC patients was 0.789 (95% CI: 0.681-0.897), and calibration curves for the 1-, 2-, 3-, and 4-year survival probabilities demonstrated optimal agreement between prediction and actual observation (Figures 7B-E). Similar results were observed in the TCGA dataset (Supplementary Figure S11).
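For readers wishing to reproduce this kind of model, a nomogram along these lines can be built with the `rms` package. The sketch below continues the hypothetical data frame `d` from the earlier survival sketch and illustrates the general approach; it is not the authors' code.

```r
library(rms)

dd <- datadist(d); options(datadist = "dd")

## Cox model refit with rms::cph so it can feed a nomogram
f <- cph(Surv(dss_months, dss_event) ~ meth_group + age + sex + tnm_stage,
         data = d, x = TRUE, y = TRUE, surv = TRUE)

## Map the linear predictor to survival probabilities at fixed horizons
surv_fun <- Survival(f)
nom <- nomogram(f,
                fun = list(function(x) surv_fun(12, x),   # 1-year DSS
                           function(x) surv_fun(48, x)),  # 4-year DSS
                funlabel = c("1-year survival prob.", "4-year survival prob."))
plot(nom)
```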
DISCUSSION
In this study, we performed comprehensive DNA methylation profiling of lncRNAs in CRC and identified a novel methylated lncRNA, DLX6-AS1, as a promising biomarker. We validated the hypermethylation of DLX6-AS1 in CRC and further showed that this hypermethylation arises as early as the NAA stage during the multistep adenoma-carcinoma sequence. Further comparisons revealed that DLX6-AS1 methylation was able to differentiate CRC vs. NAA and AA vs. NAA. Moreover, DLX6-AS1 promoter hypermethylation was also identified in cfDNA of CRC patients as compared to healthy controls. Finally, survival analysis demonstrated DLX6-AS1 hypermethylation to be an independent predictor of poorer DSS and OS for CRC patients, and a nomogram was constructed to predict the survival probability of individual CRC patients.

Most sporadic CRCs develop from dysplastic adenomas over a long period (2), which provides a desirable opportunity to detect CRC at an early, curable stage and to screen for potentially premalignant lesions (23). Aberrant DNA promoter methylation has previously been shown to be an early event in CRC development (24). For example, by conducting a series of genome-wide DNA methylation assays among 20 normal and pre-CRC samples, including 18 low-grade adenomas and 22 high-grade adenomas, Fan et al. found that the methylation alterations detected in low-grade adenoma were maintained or increased in high-grade adenoma and cancer (25). Several studies on DNA methylation biomarkers tested in fecal (26,27) and blood (28,29) samples indicated the potential of epigenetic biomarkers for early CRC diagnosis. The present study showed that DLX6-AS1 hypermethylation was detectable from the NAA stage onward during colorectal neoplastic progression, suggesting that this epigenetic change is a candidate driver of tumor progression. Thus, DLX6-AS1 hypermethylation might be a promising biomarker for the early detection and risk assessment of CRC.
It should be kept in mind that adenomas of different histology differ in their risk of colorectal neoplastic progression (30). Based on a prospective cohort study, Click et al. revealed that patients with AA carried a higher risk of developing CRC than patients with NAA (31). To date, molecular markers defining colorectal adenomas at high risk of progressing to CRC are limited (32). At the epigenetic level, Semaan et al. identified varied differences in SEPT9 and SHOX2 methylation levels among CRC, AA and NAA tissues (10). The present study revealed significant differences in DLX6-AS1 methylation levels between CRC vs. NAA and AA vs. NAA. However, no significant differences in methylation levels were identified between AA and CRC, indicating that the biological processes inherent to CRC may be more active in AA than in NAA. These epigenetic features might be used to help identify patients at high risk for malignancy in the future.
Growing efforts have been made to identify noninvasive biomarkers for the early detection of CRC (33)(34)(35). Based on peripheral blood, Heiss et al. (36) reported leukocyte DNA methylation of KIAA1549L and BCL2 as potential biomarkers for early CRC diagnosis. In the present study, we did not find significant differences in peripheral blood-based DLX6-AS1 methylation levels between CRC patients, adenoma patients and healthy controls. Thus, the potential of this methylation marker in peripheral blood for early diagnosis requires further investigation. In fact, it remains controversial whether DNA methylation alterations in peripheral blood are actually a response of the hematopoietic system to tumor development (37). Another point of controversy is whether the DNA methylation status measured in peripheral blood leukocytes reflects the methylation status of local tumor lesions (38). To address this, we compared the methylation levels of DLX6-AS1 between matched peripheral blood and local lesions. However, the lack of correlation between them in the present study provides little evidence for a tumor-tissue origin of leukocyte methylation. These results indicate a distinct tissue-specific pattern of DNA methylation in CRC. As cfDNA is tumor derived and carries cancer-specific genetic and epigenetic aberrations (28,39), we then examined cfDNA and observed DLX6-AS1 hypermethylation in samples from CRC patients as compared to healthy controls. Altogether, the methylation changes identified in our study might suggest a target for cfDNA methylation studies of early cancer detection and tissue-of-origin mapping for metastases.
In clinical practice, CRC patient prognosis relies mostly on pathological staging according to the TNM system (40,41). However, there is considerable variation in survival among individuals with the same stage (42), underlining the need for additional prognostic and predictive molecular markers. Here, we identified that DLX6-AS1 methylation was associated with CRC-specific survival. Importantly, the identified methylation signature was independent of classical prognostic risk factors and could therefore be of added value when implemented in the clinic. DLX6-AS1 has been reported to participate in tumor progression (45), indicating its potential roles in cancer prognosis. Our study is the first to show DLX6-AS1 methylation to be associated with CRC-specific survival. In addition, a nomogram was generated to predict the survival probability of individual CRC patients, and the calibration plots indicated that the predicted survival was consistent with the observed survival. The findings from this study indicate the potential importance of DNA methylation in CRC prognosis and provide clues to help improve the precision of clinical decision-making in the future.

We are aware of several limitations of this study. First, a direct explanation for the associations between DNA methylation and gene expression was limited, since we were unable to measure matched DLX6-AS1 expression levels. Second, although hypermethylation of DLX6-AS1 was observed in cfDNA samples by the 850K array in the GEO database, further studies are needed that take into consideration the low proportion of circulating tumor DNA in cfDNA and the currently very limited sample size. Third, as the follow-up in our study was relatively short, studies with longer clinical surveillance are warranted to bolster the reliability of the identified potential prognostic methylation biomarker. Last, although we found that the aberrant methylation of DLX6-AS1 might serve as a potential biomarker for CRC progression and prognosis, external validation with larger and more diverse study populations is still required to confirm the clinical value of DLX6-AS1 methylation in CRC.
In summary, based on a systematic evaluation of the DNA methylation pattern of lncRNAs in CRCs by genome-wide methylation profiling, the current study is the first to identify that the promoter region of DLX6-AS1 was hypermethylated in CRC and its premalignant lesions. We additionally revealed that hypermethylation was independently associated with poorer DSS and OS in CRC patients. Thus, DLX6-AS1 hypermethylation might occur at an early stage during colorectal carcinogenesis and has the potential to be a biomarker for the progression and prognosis of CRC.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Medical Ethics Committee of Zhejiang University School of Medicine. The patients/participants provided their written informed consent to participate in this study. | 2022-01-20T14:20:56.866Z | 2022-01-20T00:00:00.000 | {
"year": 2021,
"sha1": "cba082cc650adea9197554d87698edf9c9c0c5af",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2021.782077/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "cba082cc650adea9197554d87698edf9c9c0c5af",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9703836 | pes2o/s2orc | v3-fos-license | Representation and transformation of sensory information in the mouse accessory olfactory system
In mice, nonvolatile social cues are detected and analyzed by the accessory olfactory system (AOS). Here we provide a first view of information processing in the AOS with respect to individual chemical cues. 12 sulfated steroids, recently-discovered mouse AOS ligands, caused widespread activity among vomeronasal sensory neurons (VSNs), yet VSN responses clustered into a small number of repeated functional patterns or processing streams. Downstream neurons in the accessory olfactory bulb (AOB) responded to these ligands with enhanced signal/noise compared to VSNs. Whereas the dendritic connectivity of AOB mitral cells suggests the capacity for broad integration, most sulfated steroid responses were well-modeled by linear excitatory drive from just one VSN processing stream. However, a significant minority demonstrated multi-stream integration. Most VSN excitation patterns were also observed in the AOB, but excitation by estradiol sulfate processing streams was rare, suggesting AOB circuit organization is specific to the biological relevance of sensed cues.
The mouse AOS begins in the vomeronasal organ (VNO), a blind-ended tube in the nasal cavity. In the VNO, each vomeronasal sensory neuron (VSN) expresses one, or a few 4 , of approximately 300 G protein-coupled vomeronasal receptors. Vomeronasal receptors are classified broadly into two families, called V1R and V2R, with a dozen sub-families recognized among the V1Rs 5 . Axons from VSNs expressing the same receptor type project to pooled synaptic structures called glomeruli in the accessory olfactory bulb (AOB), at the posterior face of the olfactory bulb 6 , 7 . There, they provide glutamatergic input to their postsynaptic partners in the AOB, including the projection neurons, which we will call "mitral cells" by convention (but see 8 ).
In contrast to their analogs in the MOB, mitral cells in the AOB possess dendrites that innervate multiple glomeruli 9 . In principle, this could allow AOB mitral cells to integrate excitatory signals from multiple receptor types. Two anatomical studies have traced the dendrites of mitral cells innervating glomeruli targeted by individual vomeronasal receptors 10 , 11 . These studies tagged different vomeronasal receptor sub-types, and came to opposing conclusions about the degree to which mitral cells receive inputs from VSNs expressing different receptor types. Given our ignorance of the ligand-binding properties of different vomeronasal receptors, and the possibility that homologous receptors might have similar or identical ligand-binding properties, anatomical studies do not directly address the core issue: whether, or how often, mitral cells integrate input from processing streams that differ functionally, generating new patterns of stimulus responses not observed in VSNs.
A functional understanding of sensory integration in the AOS is lacking for a straightforward reason: there has never been a report of the sensory responses of AOB mitral cells to individual chemical compounds. Aside from technical barriers that have only recently been overcome 12 -14 , the major impediment has been a lack of identified ligands exciting more than a small percentage of VSNs. Recently it was discovered that sulfated steroids, present in female mouse urine, activate VSNs; furthermore, a collection of synthetic sulfated steroids excited a large percentage of VSNs, exceeding the combined activity of previously-identified ligands by many-fold 15 . The effectiveness and diversity of synthetic sulfated steroid molecules makes them an attractive set of ligands to use to investigate the physiological principles of odorant encoding in the VNO and processing in the AOB.
We chose a battery of 12 synthetic sulfated steroids, including sulfated androgens, estrogens, pregnanolones, and glucocorticoids, and recorded spiking responses to these stimuli from VSNs and AOB mitral cells. 26% of VSNs responded to this battery, yet their responses clustered into just 8 common patterns, suggesting the sensory capacity of this VSN population focuses into a few functional "processing streams". From AOB neuron responses to these stimuli, we identified several principles of sensory transformation. First, we observed enhanced signal-to-noise in the AOB neurons, a feature which may have aided detection of response patterns in AOB recordings that were near threshold in VSN recordings. Second, linear models suggested most AOB neurons are excited by a single VSN steroid processing stream; only 10% of neurons demonstrated multi-stream excitatory integration. However, an additional 14% of AOB neurons were co-activated by steroid processing stream(s) and urinary cues, suggesting some capacity for diverse functional integration. Finally, most steroids excited similar percentages of VSNs and AOB neurons, with the exception of sulfated estrogens, for which anterior AOB responses were rare. These results reveal major principles of organization in a behaviorally important, but little-explored, neural circuit.
1) AOB neurons respond to synthetic sulfated steroids
There exist only a few reports of the sensory responses of mammalian AOB neurons 13 , 14 , 16 , 17 , and none have examined responses to single compounds. We selected a group of 12 sulfated steroids 15 including members of the androgen, estrogen, pregnanolone, and glucocorticoid families as a test battery for eliciting responses in AOB mitral cells ( Fig. 1a; Supplementary Table 1). These 12 sulfated steroids were delivered to the lumen of the VNO at 10 μM while recording spiking activity from the external cellular layer 8 (where mitral and tufted cells reside) of the anterior AOB in an ex vivo preparation 14 (Fig. 1b).
AOB neurons responded to particular sulfated steroids as well as dilute urine stimuli, and the pattern of activity was reproducible across randomized, interleaved presentations (Fig. 1c). Firing rates closely tracked the time course of the stimulus (Fig. 1d, e). Cells typically displayed temporal modulation, reaching peak firing rates during the stimulus and decreasing with a variable time course. We recorded from 103 AOB cells in this study, encountering neurons that were excited by just one compound (Fig. 2a), neurons that were inhibited by multiple compounds (Fig. 2b), and neurons displaying a mix of excitation and inhibition (Fig. 2c) to different sulfated steroids. A few neurons were excited by diverse classes of steroids, including both 19-carbon (androgen) and 21-carbon (pregnanolone and glucocorticoid) steroids (Fig. 2d). In total, 58% (60/103) of the neurons encountered in the anterior AOB external cellular layer responded significantly to at least one test stimulus (Fig. 2e), and 41% (42/103) responded to sulfated steroids at 10 μM. 71% (30/42) of steroid-responsive AOB neurons responded significantly to two or more sulfated steroids (Fig. 2f), and 45% (19/42) co-responded to both urine and at least one sulfated steroid. These data further demonstrate the propensity of the sulfated steroid class to drive activity in the AOS.
The range of response patterns indicated that AOB neurons have diverse chemical receptive fields. Lifetime sparseness, a metric that represents relative sharpness of molecular tuning 18 , indicated a bimodal distribution of tuning ( Supplementary Fig. 1). The diverse activity patterns we observed in AOB neurons suggest that these cells may combine several types of inputs about different molecular features. However, without knowing how these molecules activate sensory input cells in the vomeronasal organ, it would be difficult to draw conclusions about how or whether the representation of sensory information changes due to processing in the bulb. We therefore sought to determine how molecular features of this battery of sulfated steroids are encoded by vomeronasal sensory neurons.
2) VSNs respond to sulfated steroids with higher variance
Using planar multi-electrode arrays, we isolated the spiking responses of individual VSNs during interleaved stimulation with the 12-steroid battery at 100 nM, 1 μM, and 10 μM (Fig. 3). Some VSNs responded to just one of the 12 compounds (Fig. 3a), while others responded to multiple compounds, with graded responses as a function of concentration (Fig. 3b). At 10 μM, many VSNs were activated strongly, and the resulting depolarization caused extracellular spike amplitudes to decrease in a way that prevented distinguishing later spikes from noise or from spikes fired by other cells (Supplementary Fig. 2). VSN responses were therefore quantified using a variable time window, where the boundaries of the time window are set by utilizing information about the response across concentrations (see Methods).
Comparing lifetime sparseness values between the two populations revealed little more about the nature of the information processing at this stage, perhaps due to a differential influence of trial-trial variability ( Supplementary Fig. 1). Indeed, analysis of trial-trial signal-noise in our VSN population showed that VSN responses were more than two times as variable across trials as AOB responses (0.51 standard deviation/mean linear regression slope for VSNs vs. 0.24 for AOB neurons; Fig. 3e and Supplementary Fig. 3). We encountered higher variability in VSN responses compared to AOB responses at all integration windows and durations tested ( Supplementary Fig. 3), suggesting the source of this variability is biological (not dependent on the specific firing rate metrics used). The increase in signal-noise in the AOB suggests that AOB mitral cells perform signal enhancement, a feature observed in other olfactory circuits 19 , 20 .
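The variability metric quoted above, the slope of a regression of trial-to-trial standard deviation on mean response, is straightforward to compute. A minimal sketch (names hypothetical), shown for one neuron; in the paper the mean/SD pairs were pooled across each population before fitting:

```r
## `resp` is a trials-by-stimulus matrix of firing-rate responses
mu   <- colMeans(resp)       # mean response per stimulus
sdev <- apply(resp, 2, sd)   # trial-to-trial standard deviation

## Regression through the origin; the paper reports slopes of ~0.51
## for VSNs and ~0.24 for AOB neurons when pooled across populations
slope <- coef(lm(sdev ~ 0 + mu))
```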
3) VSN responses to sulfated steroids are stereotyped
VSNs express a single allele of one of the major families of vomeronasal receptors 6 , 7 so the response profiles of individual VSNs to the sulfated steroids in our battery should reflect the affinities of the expressed vomeronasal receptors. We therefore wondered whether we could identify any structure or pattern across the recorded VSN population (Fig. 3c). To evaluate this possibility, we first compared the observed collection of responses to a simulation in which a response was generated randomly according to the percentage of neurons responding to each odorant (Supplementary Table 1). In this particular analysis, we discarded the actual firing rate and instead represented each neuron/ligand pair as a yes/no response (Fig. 3f); this allowed us to catalog both observed and simulated responses into 2^12 = 4096 different bins.
In the actual data set, certain response patterns were observed much more frequently than expected by chance (Fig. 3f); similarly, other patterns that were common in a random model were absent in the observed data set. These apparent differences were evaluated statistically by computing a measure of the "disorder," the entropy, for both the observed set of responses and 10^6 simulated data sets containing the same number of neurons (Fig. 3g). The observed data set was more ordered (entropy of 4.7 bits) than the simulated data sets (entropy 5.3 ± 0.2 bits, p < 0.001). The lower entropy indicates more structure to the observed population of responses than expected from probabilistic sampling; in particular, the observation that certain responses occurred repeatedly (Fig. 3f) suggested that the responses might be analyzed in terms of distinct types.
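The entropy comparison above can be made concrete with a short simulation. This sketch (variable names hypothetical) binarizes responses, computes the Shannon entropy of the observed pattern distribution, and compares it to a null model in which each ligand elicits responses independently at its observed rate; 10^4 simulations are drawn here rather than the paper's 10^6, purely to keep the example fast.

```r
## `resp_bin` is a neurons-by-12 matrix of 0/1 responses
pattern_entropy <- function(m) {
  pat <- apply(m, 1, paste, collapse = "")  # each neuron's 12-bit pattern
  p   <- table(pat) / nrow(m)               # pattern frequencies
  -sum(p * log2(p))                         # Shannon entropy in bits
}

obs_H <- pattern_entropy(resp_bin)

## Null model: independent Bernoulli responses per ligand
rates <- colMeans(resp_bin)
sim_H <- replicate(1e4, {
  sim <- sapply(rates, function(q) rbinom(nrow(resp_bin), 1, q))
  pattern_entropy(sim)
})

## Fraction of random populations at least as ordered as the data
mean(sim_H <= obs_H)
```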
4) Identification of VSN functional processing streams
Normalized (analog) responses of 75 steroid-responsive VSNs at 10 μM were analyzed with an automated clustering algorithm (see Methods). The algorithm identified 8 clusters based on these activity patterns (Fig. 4a); all steroid-responsive VSNs were associated with one of the 8 clusters. Inspecting these clusters revealed several suggestive patterns connecting physiological response to steroid molecular identity (Fig. 4b). 5 of the 8 clusters involved strong responses to one or more androsten- or estradiol-based sulfated steroids (Fig. 4a). Collectively, these accounted for 64% (48/75) of the steroid-responsive VSNs. The high percentage of responses to these compounds, as well as the diversity of response profiles within this sub-class of molecules, indicates that VSNs express vomeronasal receptors capable of detecting small structural differences within these sulfated steroid sub-families (Fig. 4b). One cluster responded selectively to P8200 (epipregnanolone sulfate), one of 3 stereoisomers in our battery from the pregnanolone class of sulfated steroids (Fig. 4a, cluster 2). The two remaining clusters responded to sulfated glucocorticoids (Q1570 and Q3910, Fig. 4a, clusters 1 and 4), similar to neurons identified previously 15 . Within cluster 1 we observed multiple neurons that were co-activated by 1:100 dilute BALB/c female urine (Fig. 4a), and others that were insensitive. To acknowledge that these may represent distinct populations, we split cluster 1 into female urine-unresponsive and female urine-responsive subpopulations (clusters 1a and 1b, respectively). Among steroid-responsive neurons, no other cluster contained a substantial number of neurons responsive to 1:100 dilute BALB/c urine.
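The specific clustering algorithm is given in the paper's Methods; as a generic stand-in illustrating the idea, normalized response profiles can be clustered hierarchically and summarized by their cluster means. Here `resp_norm` is a hypothetical 75-by-12 matrix of normalized VSN responses at 10 μM.

```r
dists <- dist(resp_norm)                  # Euclidean distances between profiles
hc    <- hclust(dists, method = "ward.D2")
cl    <- cutree(hc, k = 8)                # 8 clusters, as identified in the text

## Cluster means act as "processing stream" templates for later modeling
stream_means <- t(sapply(split(seq_len(nrow(resp_norm)), cl),
                         function(i) colMeans(resp_norm[i, , drop = FALSE])))
```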
These commonly-observed patterns represent "processing streams," and comprise the input received by neurons in the AOB. We therefore revisited our mitral cell recordings to investigate whether these patterns were maintained or modified in the AOB.
5) Identifying functional integration modes in the AOB
We studied information transformation from the defined VSN input types to AOB neurons using several complementary approaches. The primary analyses were cluster analysis and linear modeling (Fig. 4c-d and Fig. 5). We performed separate cluster analysis on the 42 steroid-responsive AOB neurons, revealing 7 distinct categories, 5 of which closely resembled patterns observed in VSNs (Fig. 4c). Projecting the AOB dataset onto the coordinate space defined by the first 3 linear discriminators for VSN clusters revealed that response patterns of AOB neurons with apparent VSN homologs (clusters 1, 2, 3, 5, and 7) were statistically indistinguishable from those defining the VSN clusters (Fig. 4d). AOB clusters, like VSN clusters, included some heterogeneity. However, comparison of intra-and inter-cluster Euclidean distances for these populations revealed no systematic difference in cluster tightness or separation ( Supplementary Fig. 4). A straightforward explanation for the homologous VSN and AOB clusters might be that these AOB neurons act as "functional relay" neurons for these stimuli, effectively copying the stereotyped activity patterns identified in the VSN population.
To test this hypothesis, we implemented a linear integration model (Fig. 5a) that attempted to reconstruct activation patterns of individual neurons using weighted linear inputs from the 8 VSN cluster means ( Fig. 5b and Supplementary Fig. 5). For steroid-only modeling, clusters 1a and 1b were grouped together. Overall, the linear input model satisfactorily accounted for the steroid responses of 91% of VSNs using a single input, and 95% of VSNs overall, indicating that, once trial-to-trial variability of the responses is taken into account, individual VSNs were well-represented by the cluster means. Linear modeling of AOB steroid responses indicated that, in the five homologous clusters, 80% of the neurons (16/20) were well-modeled as receiving a single input from the corresponding VSN cluster (Fig. 5c), and we designated these neurons "functional relays" with respect to these stimuli. However, this analysis also confirmed that the homologous AOB clusters included some responses that differed from VSN cluster means. For example, the cell highlighted in Figure 2d (assigned to cluster 5 in Fig. 4c) was not adequately modeled by a single input from the corresponding VSN cluster mean. Cell responses that could not be explained by a single input are analyzed more thoroughly below.
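A minimal sketch of the single-input version of this model, reusing the hypothetical `stream_means` matrix from the clustering sketch: each AOB neuron's 12-steroid response vector `aob` is regressed on each VSN cluster mean in turn, and the best single-input fit is retained. The satisfaction criterion is only indicated in a comment; the paper judged fits against trial-to-trial variability.

```r
fits <- apply(stream_means, 1, function(s) {
  f <- lm(aob ~ 0 + s)   # one weighted VSN input, no intercept
  c(weight = unname(coef(f)), rss = sum(resid(f)^2))
})

best <- which.min(fits["rss", ])  # best-fitting single processing stream
fits[, best]

## A fit counts as satisfactory if residuals fall within trial-to-trial
## variability; otherwise further (e.g., inhibitory) cluster-mean inputs
## can be added as extra regressors.
```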
We analyzed each of the AOB neurons in the emergent AOB clusters (clusters 9 and 10; Fig. 4c) using linear modeling. The first, cluster 9, had a steroid response pattern that was, in 2 of 5 cases, satisfactorily modeled by linear combination of one excitatory and one inhibitory input (from VSN cluster 1 and cluster 4, respectively; Fig. 5b). These cells and others that required integration from just one excitatory input in addition to one or more inhibitory inputs were designated "single integrators" to acknowledge that their responses were influenced by more than one identified processing stream.
The other emergent steroid-specific activity pattern observed in the AOB neural population (cluster 10, Fig. 4c) included broad pregnanolone responses, and did not appear to be derived from a recorded VSN population. Indeed, linear modeling failed to satisfactorily account for the steroid responses of a majority of the cells in this population (7/9 or 78%; Fig. 5d). Since AOB neurons perform signal/noise enhancement (Fig. 3e), we hypothesized that these AOB neurons may enhance responses not readily evident in the VSN recordings. We explored this possibility using three complementary approaches: inspection of VSN responses to sulfated steroids at expanded concentration ranges (Fig. 6a), linear-nonlinear modeling ( Fig. 5d; see also Methods), and model residual analysis (Fig. 6b).
If our VSN recordings missed a population of near-threshold pregnanolone responses, we hypothesized that the "missing" VSN patterns might be observed at concentrations greater than 10 μM. We therefore examined a separate set of experiments on VSNs in which we had co-applied at least 3 sulfated pregnanolones (including stereoisomers of those used in the main study) over a large range of concentrations (10^-9 to 10^-1 M; Fig. 6a). 4 of the 100 sorted VSNs displayed a pattern of activation reminiscent of the broad pregnanolone responses in AOB cluster 10 (3 of these 4 also responded to female urine; Fig. 6a). For these neurons, the threshold for a detectable pregnanolone response was essentially at 10 μM, the concentration used for comparing VSN and AOB neuron responses (Fig. 6a). Given these data, it seemed likely that the pregnanolone-responsive AOB neurons in cluster 10, perhaps as a result of signal/noise enhancement, identified a population of near-threshold VSN responses.
We examined whether these responses could be extracted from the main data set by adding a nonlinear gain term to the linear model. Despite its additional flexibility, this term was ineffective at increasing the number of satisfactory model solutions in cluster 10 (achieving just one additional satisfactory solution; Fig. 5d). The failure of a nonlinear model to account for the discrepancy further confirmed the apparent absence of a VSN cluster containing suitable pregnanolone responses. Further evidence was found in z-score residuals from linear-nonlinear modeling attempts, which suggested that the strongest deficit in the explanatory power of VSN cluster means was associated with the two sulfated pregnanolones not represented in cluster 2 (Fig. 6b). We concluded that AOB neurons sharing these residual patterns likely receive excitatory input from a pregnanolone-sensitive processing stream, making them likely "functional relays" or "single integrators" (in the absence or presence of inhibitory steroid responses, respectively). A third residual pattern was dominated by the sulfated androgen A7010 (testosterone sulfate). This residual pattern was, however, adequately modeled as receiving input from the mean of VSN cluster 3. This may indicate that VSN cluster 3 actually contains two subpopulations with different affinities for A6940 and A7010, but for which more VSNs would have been required to statistically justify separation into distinct clusters.
After considering the results from clustering, linear-nonlinear modeling, and residual analyses, 33 of the 42 AOB neural responses (78.5%) were classified as either functional relays or single integrators. The remaining 9 neurons remained without satisfactory model solutions for their sulfated steroid responses. Many of these neurons (5) had broad inhibitory responses to steroids and were not assigned to one of the AOB clusters (Fig. 4c); given the lack of information about these neurons' excitatory responsiveness, we did not assign them to any particular functional class. Finally, we identified 4 neurons that were either (a) best modeled as receiving more than one excitatory input or (b) best modeled with a single defined input and lacking a large excitatory component (Fig. 7a). These neurons represented 9.5% of the AOB population (4/42 neurons), and suggested that a small percentage of AOB neurons receive excitatory input from two or more of the steroid processing streams excited by our stimulus set. Because these cells were excited by multiple processing streams, we designated them "multi-integrators".
6) Urine-and-steroid responses reveal more multi-integrators
Although 78.5% of AOB response patterns indicated exclusive excitatory integration from just one defined functional processing stream, many of these same neurons were co-excited by male and/or female mouse urine (Fig. 1 and Fig. 2d). In some cases (5 of 19 urine- and steroid-sensitive neurons), the urine responsiveness of the corresponding VSN processing stream predicted the AOB responses (Fig. 4a, cluster 1b, Fig. 6a), and therefore did not suggest any additional excitatory integration. Another 5 of the 19 were the "unclassified" neurons; because urine is a diverse ligand source, excitation by urine alone cannot distinguish single integrators from multi-integrators, and these neurons thus remained "unclassified".
The remaining 9 urine- and steroid-sensitive neurons could not be explained by the urine responsiveness of the steroid-defined processing streams. 3 of these were already classified as multi-integrators based on their steroid responses, but the other 6 neurons had appeared as "functional relays" or "single integrators" by their steroid response patterns alone (Fig. 7a). Because their response patterns indicate integration from one defined processing stream in addition to unknown cues in urine, these 6 neurons are most appropriately designated "multi-integrators". The final numbers of assigned functional types were 19 functional relays (45%), 8 single-integrators (19%), and 10 multi-integrators (24%), with 5 neurons (12%) remaining unclassified for lack of a clear excitatory steroid response (Fig. 7b).
7) Anterior AOB responses to estrogen sulfates are rare
Finally, perhaps the most surprising aspect of the VSN-AOB comparison was the apparent absence of anterior AOB responses corresponding to two prominent VSN types, clusters 6 and 8. The neurons in these two clusters, along with those in cluster 7, all responded to at least one sulfated estrogen. In the VSN population, these represented 48% of the responsive cells; we encountered only 5 sulfated estrogen-responsive cells in our AOB population (12%) (p < 10^-6 assuming random sampling from the VSN population).
The relative lack of sulfated estrogen-sensitive neurons in the anterior AOB may have important implications for mouse accessory olfactory system function and its role in behavior. We analyze possible explanations for the absence of these response types in the anterior AOB in the discussion.
Discussion
These results represent the first use of individual compounds to study sensory processing in the AOB. We found that 12 sulfated steroids collectively excite a large percentage of cells in the VNO and AOB. We encountered the same response patterns repeatedly across recordings, allowing responses to the 12 steroids to be grouped into distinct processing streams. Because of these stereotyped response profiles, we were able to meaningfully compare sensory responses in VSNs with those of AOB neurons, yielding several new insights into the computations, and perhaps connectivity, of the AOB. The stereotypy of the recorded responses enabled a depth of analysis that has only recently become possible in the much longer-studied main olfactory bulb 21 -24 . We expect that the stereotypy of response profiles in the accessory olfactory system will remain a powerful asset in future studies. Below we consider the main implications of these findings in greater detail.
1) VSN responses cluster into a few processing streams
Olfactory receptor neurons in several species are sensitive to multiple odorants 20,24-28, but the response properties of VSNs to many individual compounds have not yet been systematically investigated. We found evidence for both narrowly tuned (Fig. 3a) and more broadly tuned VSNs (Fig. 3b-e). The responsiveness of many VSNs (Fig. 3c) indicates that vomeronasal receptors are sensitive to multiple sulfated steroids. This result confirms that vomeronasal receptors, like olfactory receptors in other systems, allow VSNs to represent odors using combinatorial codes. The most striking feature of the responses of the VSN population is its apparent structure: patterns of cross-stimulus responsivity were far from random (Fig. 3f-g). The precise form of organization was revealed most directly by a cluster analysis.
Because individual VSNs express one, or at most a few, vomeronasal receptors, the AOS has hundreds of molecularly distinct sensory neuron types 4,6,7. However, it is possible that different receptor types have significantly overlapping functional properties; for example, the ∼180 members of the V1R family have been grouped into just 12 defined sub-families 29 based on amino acid sequence homology. We found that responses to the 12 sulfated steroids were highly stereotyped and could be clustered into just 8 distinguishable patterns. One interpretation of this result is that all the cells within the identified clusters express the same receptor type. Because these patterns account for the stimulus-selectivity of all steroid-responsive neurons, 26% of the recorded VSNs, this interpretation would require that a quarter of VSNs collectively express just 8 out of approximately 300 receptor types 3. This is plausible in light of the observation that, in the main olfactory system, a few receptor types are expressed by a sizable percentage of sensory neurons 30,31. Alternatively, it is possible that a single identified processing stream includes multiple receptor types that have very similar ligand-binding properties. The small heterogeneity in several clusters (particularly in clusters 1, 2, 3, and 8; Fig. 4a) is consistent with this explanation. If this is the case, a yet larger collection of VSN recordings might provide sufficient statistical evidence to justify splitting some of these into sub-clusters. Furthermore, we performed clustering on steroid responses at a fixed concentration (10 μM); it seems possible that a more extensive investigation of concentration-tuning may reveal functional differences among cells within a single cluster. To unambiguously determine whether these response clusters originate from a small number of receptor types or from much of the V1R family will ultimately require studies that match ligands to their receptors.
Whatever its underlying mechanism, the stereotypy of VSN responses has profound implications for our understanding of the downstream circuitry of the AOS. By comparison with the olfactory system of insects 20,32, fish 33, and the main olfactory system in mammals 23,24, it is striking that the steroid-responsive VSNs, a quarter of the VSNs recorded in this study, have sensory properties that can be broadly classified into so few functional types. Given that there are on the order of 300 molecularly distinct VSN types, stimulation of ∼25% of these neurons might, in principle, reveal ∼75 unique response patterns. Thus, even if future studies provide evidence for sub-types among some of the 8 clusters identified here, it seems clear that much of the information about these stimuli can be captured by a relatively modest number of functional processing streams.
This stereotyped representation of olfactory information by VSNs is the main characteristic that permitted a detailed comparison of VSN and AOB sensory responses. While these results in no way obviate the importance of developing markers for particular cell types, we suspect that the mammalian AOS will prove to be unusually amenable to circuit dissection through careful analysis of responses to defined ligands.
2) Multi-stage analysis reveals principles of AOS processing
The AOB modifies the representation of sensory information from VSNs in several important respects. Most straightforwardly, AOB neurons encode sensory information with a signal-to-noise ratio that, on average, is more than two-fold higher than VSNs (Fig. 3e). The difference in trial-trial variability may partially derive from the difference in preparations, but was evident across several analytical approaches. One obvious biological explanation for this increase is averaging across the inputs from multiple VSNs; in this regard, the glomerular architecture of the AOB serves as a natural anatomic substrate. In theory, given N identical VSN inputs, an optimal encoder (in particular, one not constrained by spiking) could improve the signal-to-noise by a factor of √N. If we estimate (based on volume) ∼10⁵ VSNs in the VNO, then each receptor type might be expressed by ∼300 VSNs on average. Thus, the number of VSNs is clearly more than adequate to account for the two-fold observed improvement in signal-to-noise. Incomplete sampling of the VSN input population, spike encoding, or other bottlenecks may account for the fact that noise reduction falls short of the ideal.
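As a sanity check on the √N argument, a minimal simulation (all numbers illustrative, not taken from the recordings) shows the noise reduction obtained by averaging N independent, identically noisy VSN inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 5.0       # true stimulus-evoked rate change in Hz (illustrative)
noise_sd = 4.0     # per-VSN trial-to-trial standard deviation (illustrative)
n_vsns = 300       # VSNs per receptor type, per the estimate in the text
n_trials = 10_000

# Single-VSN responses vs. the average over n_vsns independent VSNs
single = signal + noise_sd * rng.standard_normal(n_trials)
pooled = signal + noise_sd * rng.standard_normal((n_trials, n_vsns)).mean(axis=1)

print(f"single-VSN SNR : {signal / single.std():.2f}")
print(f"pooled SNR     : {signal / pooled.std():.2f}")
print(f"gain           : {single.std() / pooled.std():.1f}x "
      f"(ideal sqrt(N) = {np.sqrt(n_vsns):.1f}x)")
```

The pooled gain matches √300 ≈ 17 only because the simulated inputs are independent and the average is noiseless; spike encoding and incomplete sampling, as noted above, push real circuits well below this bound.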
AOB mitral cells have been observed to be capable of integrating from multiple receptor types 11, but it has been unclear whether the multiplicity of glomerular innervation results in a substantial functional broadening of excitatory receptivity. Using cluster analysis and linear-nonlinear modeling, we found that 64% of anterior AOB neurons could be described as receiving excitatory input from a single steroid-defined input stream. Rare AOB neurons (4/42, 9.5%) required excitatory integration from multiple identified processing streams, and many of these neurons were sensitive to different classes of steroids. More evidence for excitatory integration came from AOB neurons that were co-excited by both urine and sulfated steroids, of which 6 were also likely multi-integrators, bringing the total number of these neurons to 10/42 (24%). The 4 multi-integrators identified by steroid responsiveness alone provide the clearest evidence that some neurons integrate across broad classes of steroidal ligands; the 6 additional cells identified through their co-activation by steroids and urine might be further examples of integration of divergent molecular features, or alternatively might be due to unknown urinary sulfated steroids with similar structures to the pure ligands that also activated these neurons. A complete description of the nuances of mitral cell integration awaits more comprehensive identification of the molecules that comprise the AOS stimulus space.
These data provide the first direct physiological evidence of functional diversity among the AOB mitral cell population (Supplementary Fig. 6), complementing the dendritic diversity observed across several anatomical studies 10,11,34. The prominence of apparent functional relays and single integrators in the anterior AOB suggests that most of these neurons contact glomeruli with similar chemical sensitivities. These neurons might correspond to those with "simple" or "strip type" dendritic connectivity patterns 34 and possibly neurons sending dendrites to glomeruli labeled by the same or similar vomeronasal receptor types 10,11. The multi-integrator population, on the other hand, confirms the long-hypothesized mitral cell capacity for functional recombination, possibly as a consequence of direct excitatory input from distinct receptor types 11.
Finally, it is noteworthy that two prominent clusters of VSN responses, both signaling the presence of estrogen-family compounds, are "missing" from our recorded responses in the anterior AOB (Fig. 4c). One interpretation of this result is that we did not target our recording electrodes to the part of the AOB responding to these stimuli; if so, these estrogen-responsive cells are localized in the AOB differently from other steroid-responsive cells, implying functional spatial organization in the AOB. Alternatively, it is possible that responses to these compounds are "masked" by circuit properties of the AOB. Given recent observations of disproportionate inhibition of male urine-selective mitral cells by compounds in female urine 13 , these may hint that sulfated estrogen-responsive pathways serve a largely inhibitory role in the AOB.
Methods
Stimuli
We collected BALB/c male and female urine from ∼30 individual mice of both sexes for 5 consecutive days by flash-freezing fresh urine in liquid nitrogen as described previously 12. Frozen urine pellets were passed through a wire mesh to remove large particles, and later thawed, pooled, and centrifuged at 500-1000 rpm for 2 minutes to remove remaining particulates. Urine was aliquoted in small volumes and stored at −80 °C until dilution into oxygenated Ringer's solution immediately prior to each experiment.
We purchased sulfated steroid compounds (Table 1) from Steraloids, Inc. (Newport, RI) and stored them at −20 °C in solid form. 100 mM stock solutions of all steroids were made using methanol or water as the solvent and stored at 4 °C until dilution into oxygenated Ringer's solution immediately before each experiment. We included 0.01% methanol in each stimulus solution as a vehicle control.
Animals and ex vivo tissue preparation
The Washington University Animal Studies Committee approved all procedures. Male B6D2F1 mice aged 8-12 weeks postnatal were used in all experiments. Prior to dissections, mice were anesthetized with isoflurane and decapitated. For AOB recordings, we prepared the tissue for ex vivo recording as described previously 14. We isolated a single hemisphere of the anterior, dorsal mouse skull containing one bony capsule of the vomeronasal organ and connected ipsilateral AOB in oxygenated, ice-cold aCSF containing an additional 7 mM MgCl₂ to limit excitotoxicity. The preparation was adhered to a small Delrin plastic plank with tissue adhesive, then placed into a custom-designed physiology chamber. The tissue was superfused with oxygenated, room-temperature aCSF while the septal cartilage, septal bone, and blood vessels overlaying the AOB were carefully removed, exposing the vomeronasal axons and AOB surface to the superfusion solution. We attached a small polyimide cannula (0.0056″ inner diameter, A-M Systems Inc., Carlsborg, WA, USA) to a pneumatic stimulus-delivery device (AutoMate, Berkeley, CA, USA) and inserted the open end into the lumen of the VNO. Oxygenated Ringer's-based solutions were pumped constantly into the VNO at 0.2-0.3 mL/min. During the 4-6 hour recording periods, warmed (33-35 °C), oxygenated aCSF was superfused at a constant rate of ∼8 mL/min.
For VSN recordings, we dissected the vomeronasal epithelium away from the VNO and basal lamina, placing its basal layer on a planar multi-electrode array (MEA) with 60 microelectrodes arranged in two groups of 6×5, spaced 30 μm apart 35. A continuous stream of oxygenated, warmed Ringer's solution was directed at the neuroepithelium at 3 mL/min for the 4-5 hour duration of the experiments. Each minute, the flow of Ringer's was substituted with a 0.5 mL pulse of stimulus using a microinjection robot (Gilson, Middleton, WI, USA). A total of 39 mice were utilized to acquire these data, with 24 for AOB experimental preparations and 15 for VSN experimental preparations.
Electrophysiology
We made AOB recordings using single glass electrodes with 4-8 MΩ tip resistance, similar to previous studies 13,14. Microelectrodes were filled with filtered aCSF and advanced into the AOB from the dorsal surface using a micromanipulator (Siskiyou Design Instruments, Grants Pass, OR). Distance from the point of entry was measured with a digital micrometer attached to the micromanipulator (Siskiyou). We frequently encountered AOB cells with large positive-polarity spikes and complex waveforms (with multiple peaks and/or points of inflection) between 200-400 μm from the AOB surface 13. Recordings were made between 150 and 400 μm from the AOB surface, consistent with the boundaries of the external cellular layer 8 in which mitral cells and their dendrites reside. Voltages were amplified with an extracellular head stage attached to a dual-mode amplifier (Dagan, Minneapolis, MN, USA), high-pass filtered at 30 Hz, and digitized using an analog-digital converter (National Instruments, Austin, TX, USA). Stimuli were delivered in randomized, interleaved blocks using custom acquisition software, and a precise analog copy of stimulus valve opening and closing times was recorded in separate acquisition channels to ensure synchronization. Stimuli were delivered to the VNO lumen for 5 s, which we found to be sufficient to elicit strong responses in AOB neurons (Fig. 1).
VSN signals were acquired with a planar MEA (Multichannel Systems, Reutlingen, Germany), amplified (Multichannel Systems), and digitized (National Instruments). We controlled robotic stimulus delivery using custom multichannel recording software 15 . As with AOB recordings, we recorded the stimulus timing in a dedicated channel.
We analyzed extracellular recordings to extract single-unit activity. AOB neurons were sorted as described previously 13 , 14 ; VSN waveforms were sorted using a multichannel template algorithm 36 modified to allow simultaneous (rather than sequential-greedy) waveform fitting (T.E. Holy, data not shown). To be included in the analysis, single units had to have separable waveforms and evident refractory periods.
Statistical and firing rate analysis
We grouped the spike times of sorted waveforms from AOB recordings by stimulus identity and aligned them so that time 0 corresponded to valve opening (Fig. 1c). For heat-map representations of the peristimulus time histogram (PSTH), spike times were grouped in 100 ms bins (10 Hz) and low pass filtered at 1 Hz (Fig. 1e). We summarized the responses of mitral cells to each stimulus by computing the average change in firing rate above spontaneous levels in the time window beginning 1 second post stimulus onset and ending 1 second post stimulus offset (Δr).
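A minimal sketch of these two computations, the 100 ms PSTH binning and the Δr window, might look as follows; the baseline-window length is an assumption, since the text does not state how the spontaneous rate was estimated:

```python
import numpy as np

def psth(spike_times, t_start, t_stop, bin_s=0.1):
    """Peri-stimulus time histogram in 100 ms bins (10 Hz), as in the text;
    the 1 Hz low-pass step used for the heat maps is omitted here."""
    edges = np.arange(t_start, t_stop + bin_s, bin_s)
    counts, _ = np.histogram(spike_times, edges)
    return counts / bin_s, edges[:-1]

def delta_r(spike_times, stim_on, stim_dur, baseline_dur=10.0):
    """Average change in firing rate (Hz) in the window from 1 s after
    stimulus onset to 1 s after offset, relative to the pre-stimulus rate.
    baseline_dur is an assumption, not stated in the text."""
    t0, t1 = stim_on + 1.0, stim_on + stim_dur + 1.0
    evoked = np.sum((spike_times >= t0) & (spike_times < t1)) / (t1 - t0)
    spont = np.sum((spike_times >= stim_on - baseline_dur)
                   & (spike_times < stim_on)) / baseline_dur
    return evoked - spont
```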
We similarly aligned spike times of sorted waveform peaks from VSN recordings. Because intense stimulation can lead to a large decrease in VSN spike amplitude (sometimes down to the noise level of the recordings, see Supplementary Fig. 2), firing rate could not always be determined from extracellular recordings at later times of the response, particularly at the highest concentrations. Consequently, Δr was computed using a time window that was fixed at its onset (set to be one second prior to the earliest response of any cell in the preparation to any stimulus, typically 1-2 s after valve opening), but variable in its duration (for significant responses, most frequently 2.5-3.5 s, median 6.4 s). The length of the averaging window was set to maximize the total change in firing rate, subject to the constraint that the end points had to decrease monotonically with concentration. We call the resulting metric Δr_monotonic; it will be described in detail elsewhere (H.A. Arnson et al., unpublished data).
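The variable-duration window can be sketched as below. This simplified version maximizes the total extra spike count for a single response; the cross-concentration monotonicity constraint described above, and the exact meaning of "total change," are not specified in enough detail here to reproduce exactly:

```python
import numpy as np

def delta_r_window(spike_times, t_on, spont_rate, max_dur=15.0, dt=0.5):
    """Fixed-onset, variable-duration window behind Delta-r_monotonic:
    pick the duration that maximizes the total extra spiking over the
    spontaneous rate. Returns (mean rate change, chosen window length).
    The monotonic-endpoint constraint across concentrations is omitted."""
    durations = np.arange(dt, max_dur + dt, dt)
    extra = np.array([
        np.sum((spike_times >= t_on) & (spike_times < t_on + d))
        - spont_rate * d
        for d in durations
    ])
    i = int(np.argmax(extra))
    return extra[i] / durations[i], durations[i]
```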
The criteria chosen for determining statistical significance were a Wilcoxon rank sum test p < 0.05 compared to Ringer's controls and Δr_monotonic (VSNs) or |Δr| (AOB neurons) > 1 Hz. We used Wilcoxon rank sum tests to determine statistical significance (p < 0.05) unless otherwise indicated.
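Expressed with SciPy's rank-sum test, the two criteria can be combined as below; per-trial Δr arrays for the stimulus and for interleaved Ringer's controls are assumed as inputs:

```python
import numpy as np
from scipy.stats import ranksums

def is_responsive(stim_dr, ringer_dr, min_dr=1.0, alpha=0.05):
    """The paper's two response criteria: a Wilcoxon rank-sum test against
    Ringer's controls at p < alpha, and a mean rate change above min_dr Hz.
    stim_dr and ringer_dr are arrays of per-trial rate changes (Hz)."""
    p = ranksums(stim_dr, ringer_dr).pvalue
    return (p < alpha) and (abs(np.mean(stim_dr)) > min_dr)
```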
Calculating the entropy of VSN responses
For entropy analysis, we converted VSN responses to all sulfated steroids at 10 μM to binary signals (1 = responsive, 0 = nonresponsive; see Fig. 3g). We simulated a population of 10⁵ neurons with the same statistical probability of responding to each of the 12 ligands as our observed dataset (Supplementary Table 1). To calculate entropy, we used the same probabilistic sampling method to simulate 10⁶ datasets of the same size as our VSN dataset (75 responsive neurons). The Shannon entropy, H, was calculated as described 37:

$H = -\sum_{i=1}^{n} p_i \log_2 p_i$,

where n is the number of unique binary response patterns and p_i is the probability of observing the i-th pattern.
Statistical significance was determined from the cumulative probability of obtaining the observed Shannon entropy value in the simulated datasets.
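A scaled-down version of this bootstrap (10⁴ rather than 10⁶ simulated datasets, with placeholder response probabilities rather than the values in Supplementary Table 1) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)

def shannon_entropy(binary_patterns):
    """H = -sum_i p_i log2 p_i over the unique binary response patterns."""
    _, counts = np.unique(binary_patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Per-ligand response probabilities: placeholders, not the values from
# Supplementary Table 1.
p_respond = np.full(12, 0.15)
n_cells, n_sims = 75, 10_000        # the paper simulated 1e6 datasets

h_sim = np.array([
    shannon_entropy(rng.random((n_cells, 12)) < p_respond)
    for _ in range(n_sims)
])
h_obs = 4.2                          # placeholder for the observed entropy
p_low = np.mean(h_sim <= h_obs)      # cumulative probability of H <= h_obs
```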
Neural response pattern classification through clustering
We analyzed neuronal responses with an automated clustering algorithm based on mean shift 38 , modified to allow the smoothing radius to depend upon the local uncertainty about mean shift step size (T. E. Holy, unpublished data). The clustering algorithm was used to identify patterns in cell responses across all sulfated steroids at 10 μM. We selected neurons achieving statistical significance at the p < 0.1 level (i.e. responsive neurons plus a marginally-responsive group) for initial clustering. All combinations of 6 of the 12 sulfated steroids were passed through the clustering algorithm independently, and clustering results from each combination were tallied in a similarity matrix. Cells that frequently appeared together in clustering trials were thus assigned maximum similarity. Final cluster identities were assigned by using the same algorithm on multidimensional scaling (11-dimensional) of the similarity matrix. Following clustering, we removed marginally-responsive cells (all cells with 0.05 < p < 0.1) from subsequent analysis. Responsive cells that were associated with clusters of marginally-responsive cells were designated as "unclustered". All 75 responsive VSNs (Wilcoxon rank sum test p < 0.05) were assigned to one of the 8 clusters shown in Figure 4a. In the AOB population, 8 of the 42 responsive cells (19%) were "unclustered".
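The subset-clustering consensus procedure can be sketched as follows; scikit-learn's stock MeanShift stands in for the authors' adaptive-radius mean-shift variant, and the marginally-responsive bookkeeping is omitted, so cluster counts will differ from theirs:

```python
import numpy as np
from itertools import combinations
from sklearn.cluster import MeanShift
from sklearn.manifold import MDS

def consensus_clusters(responses):
    """responses: (n_cells, 12) normalized response matrix. Clusters every
    6-of-12 stimulus subset, tallies a co-clustering similarity matrix,
    then clusters an 11-dimensional MDS embedding of that matrix."""
    n = responses.shape[0]
    similarity = np.zeros((n, n))
    for subset in combinations(range(12), 6):          # 924 subsets
        labels = MeanShift().fit_predict(responses[:, subset])
        similarity += labels[:, None] == labels[None, :]
    similarity /= similarity[0, 0]   # diagonal = number of subsets (max)
    coords = MDS(n_components=11, dissimilarity="precomputed") \
        .fit_transform(1.0 - similarity)
    return MeanShift().fit_predict(coords)
```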
To determine whether the differences seen in clustering between the VSN and AOB cell populations were based on identifiable differences in the data sets, we calculated a discriminability index (d′) based on linear discriminant analysis (LDA). AOB data were projected along the first three LDA eigenvectors chosen for the VSN dataset. We computed the centroid of each VSN cluster using Gaussian mixture modeling, then calculated the Euclidean distances between an equal number of VSNs and AOB neurons (typically 3-5 per cluster) and the centroids. We then converted distances to z-scores by normalizing by the standard deviation for each dimension, and calculated the statistical separation between the AOB and VSN data using d′ analysis (Fig. 4d).
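A sketch of this d′ computation, substituting simple cluster means for the Gaussian-mixture centroids used in the text:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def cluster_dprime(vsn_X, vsn_y, aob_X, aob_y, n_dims=3):
    """Project AOB responses onto the first LDA eigenvectors fit to the
    VSN data, z-score each neuron's distance to its cluster centroid, and
    compute d' between the VSN and AOB distance distributions per cluster."""
    lda = LinearDiscriminantAnalysis(n_components=n_dims).fit(vsn_X, vsn_y)
    v, a = lda.transform(vsn_X), lda.transform(aob_X)
    out = {}
    for c in np.unique(vsn_y):
        if not np.any(aob_y == c):
            continue                      # cluster absent from the AOB set
        vc = v[vsn_y == c]
        centroid, sd = vc.mean(axis=0), vc.std(axis=0) + 1e-12
        dv = np.linalg.norm((vc - centroid) / sd, axis=1)
        da = np.linalg.norm((a[aob_y == c] - centroid) / sd, axis=1)
        pooled = np.sqrt(0.5 * (dv.var() + da.var())) + 1e-12
        out[c] = abs(da.mean() - dv.mean()) / pooled
    return out
```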
Linear and linear-nonlinear integration models
We designed a linear integration model that constructs normalized firing rate responses r_i for all stimuli (12 sulfated steroids at 10 μM) from a weighted linear sum of normalized template responses. Our linear input equation was:

$r_i = r_0 + \sum_j w_j R_{ij}$,

where r_0 is a scalar firing rate offset, R_{ij} is the response to the i-th stimulus of input template j, and w_j is the scalar weight assigned to input template j. For each responsive neuron, we normalized the responses to a unit magnitude, then sought the weights w_j for the 8 VSN clusters identified in Figure 4a, adding one weighted template at a time until a fit reached our statistical criterion. Tests with simulated data mimicking the characteristics of real AOB responses demonstrated that this procedure virtually eliminated over-fitting (which occurred 20% of the time if the best fit was chosen, but only 0.6% of the time if the satisfactory model with the fewest inputs was chosen). This necessarily increased the false negative rate, but only resulted in a 7% increase in the median z-score residual power (from 8% to 15%). In other words, the signals missed by choosing the lowest-order solution instead of the best overall solution were minor compared to the overall signal.
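The search for the lowest-order satisfactory linear model can be sketched as a weighted least-squares fit over template subsets of increasing size; this version searches subsets exhaustively rather than greedily adding one template at a time, and is an illustration of the procedure, not the authors' code:

```python
import numpy as np
from itertools import combinations
from scipy.stats import chi2

def fit_fewest_inputs(r, r_err, templates, alpha=0.05):
    """Weighted least-squares fit of r ~ r0 + sum_j w_j * R_j, returning
    the satisfactory model with the fewest input templates. r: (12,)
    normalized responses; r_err: their standard errors; templates: (8, 12)
    VSN cluster means."""
    n_stim, n_tpl = r.size, templates.shape[0]
    for k in range(1, n_tpl + 1):
        best = None
        for subset in combinations(range(n_tpl), k):
            # design matrix: offset column plus the chosen templates
            X = np.column_stack([np.ones(n_stim), templates[list(subset)].T])
            w, *_ = np.linalg.lstsq(X / r_err[:, None], r / r_err, rcond=None)
            chi_sq = float(np.sum(((r - X @ w) / r_err) ** 2))
            p = chi2.sf(chi_sq, df=n_stim - (1 + k))
            if p > alpha and (best is None or chi_sq < best[0]):
                best = (chi_sq, subset, w)
        if best is not None:
            return best          # lowest-order acceptable solution
    return None                  # no linear model reached the criterion
```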
For linear-nonlinear models, we modified the linear model to include a nonlinear component:

$\hat{r}_i = s(y_i + x_0)\,|y_i + x_0|^{\gamma}$,

where y_i is the output of the linear stage, s(y) is the "sign" of y, x_0 is a scalar offset term bounded between −0.5 and +0.5, and γ is a scalar exponent between 0.2 and 5.
The nonlinear model allowed linear responses to be thresholded and modulated by the exponent to best fit the observed pattern of responses. We evaluated goodness of fit using the χ² cumulative distribution function for N−(3+m) degrees of freedom. Examples of linear model fits are presented in Supplementary Figure 5. In cases where our acceptance criterion was not reached, we identified the best fit attempt as the fit with the lowest (1−p) value. Examples of the best rejected linear-nonlinear fits are presented in Figure 7a.
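Assuming the reconstructed form of the nonlinearity given above (the exact equation is an inference from the stated parameter definitions), the fit and its χ² evaluation can be sketched as:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def ln_predict(y, x0, gamma):
    """Static nonlinearity applied to the linear output y: a signed,
    offset power law (reconstructed form, see text)."""
    z = y + x0
    return np.sign(z) * np.abs(z) ** gamma

def ln_fit(r, r_err, y, m):
    """Fit (x0, gamma) for a fixed linear stage y with m input weights and
    score the fit with the chi-square survival function at N-(3+m) degrees
    of freedom, as in the text. Returns the fitted parameters and p-value."""
    cost = lambda p: np.sum(((r - ln_predict(y, p[0], p[1])) / r_err) ** 2)
    res = minimize(cost, x0=[0.0, 1.0], bounds=[(-0.5, 0.5), (0.2, 5.0)])
    return res.x, chi2.sf(res.fun, df=r.size - (3 + m))
```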
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.

Δr indicates the average change in firing rate inside the window between the faint vertical lines. This cell responded strongly to BALB/c male urine and two sulfated glucocorticoids (Q1570: corticosterone 21-sulfate; Q3910: hydrocortisone 21-sulfate) that differ in their structure only by a hydroxyl group at carbon 17.

Figure 2f is re-plotted in gray in the background. (e) Plot of the response amplitude (abscissa) versus the standard deviation of that response across trials (ordinate) of all statistically significant VSN and AOB responses to stimulation (black and green circles, respectively). Solid lines indicate linear regression lines (VSN slope: 0.51; AOB slope: 0.24). (f) Comparison of observed binary response patterns with expectations from random sampling. Top: the most frequently expected binary response patterns in VSNs expected from random sampling (sorted by rank). Bottom: distribution of occurrences of each pattern above for simulated data (red trace) and observed data (filled gray bars). (g) Entropy of simulated (open bars) and observed data (gray arrow). The probability of encountering a data set with such a low entropy value is less than 10⁻³ by random sampling.

Sensory responses to sulfated steroids can be grouped into functional categories. (a) We identified 8 clusters of similar responsiveness to sulfated steroids at 10 μM in the VSN data set. We show normalized Δr_monotonic responses of the 75 steroid-responsive VSNs in their respective clusters. Asterisks indicate clusters we did not encounter in the AOB data set. Cluster 1 included several neurons that responded to 1:100 female mouse urine, and others that did not. As these may represent functionally separable populations, we separated them into urine-unresponsive (cluster 1a) and urine-responsive (cluster 1b) subgroups. (b) Molecular features that, for clusters 1, 3, 6, 8, and 10, distinguished active from inactive steroids. Common features are highlighted in red. The grayed groups in cluster 3 indicate that a distinguishing feature of steroids that activate neurons in this cluster is the lack of a hydroxyl group at carbon 13. (c) We identified clusters of responsiveness to sulfated steroids at 10 μM in the AOB neuron data set independently of the VSN cluster identities. We show Δr_norm responses of AOB neurons in their respective clusters. The "unclustered" region shows the neurons most associated with marginally-responsive neurons. Asterisks indicate clusters we did not encounter in the VSN data set. (d) Discriminability index (d′) comparing the steroid response patterns found in the VSN data set to the AOB data set along the first 3 linear discriminant eigenvectors. The dotted line indicates d′ = 3, corresponding to a high degree of separability. Asterisks indicate clusters not present in the AOB data set.

A linear integration model indicates most AOB neurons receive functional input from a single defined processing stream. (a) Model schematic. We supplied VSN cluster means as potential inputs to AOB cells. Hypothetical responses to 4 single molecules are shown above each hypothetical input template (blue circles labeled A-E). The maroon circle at the bottom represents a hypothetical observed AOB cell, and its response to all 4 single molecules is displayed to the right. We modeled AOB cell responses by adding one weighted template at a time until a fit reached our statistical criterion. Red hues indicate positive changes in firing rate, or an excitatory coupling, and blue hues indicate negative changes in firing rate, or an inhibitory coupling. (b) Examples of linear model solutions for two VSNs and three AOB cells. Open circles designate mean observed responses; error bars represent standard errors of the mean. The red line indicates the linear model solution with the fewest linear inputs. "Input" refers to the identity of the VSN input types (by cluster ID in Fig. 4a). "r0" refers to the linear offset. "Wts" refers to the linear weights assigned to respective inputs. (c) Percentage of linear modeling attempts for VSNs (filled bars) and AOB cells (open bars) that satisfactorily fit observed responses using a single template. Model performance is grouped by cluster number (from Fig. 4a).

Residuals of model fitting reveal input patterns missing in the VSN data set. (a) The AOB responses identified as cluster 10 showed excitation to multiple members of the pregnanolone class of steroids at 10 μM, but we did not observe such a response profile in VSNs at 10 μM. Investigation of VSN responses in a separate dataset revealed broad pregnanolone responses, shown on the log-linear plot, but only at concentrations > 10 μM. These neurons tended also to respond to 1:100 BALB/c female urine. The gray shaded region indicates the concentration range sampled in the main VSN dataset used for clustering analysis. (b) Cluster analysis of linear-nonlinear model fit residuals indicated 3 common patterns unaccounted for in the VSN input population (labeled A-C). Heat map indicates the power in the residuals for each of the 12 sulfated steroids, measured in terms of the ratio between the value and uncertainty (z-score).

Summary of observed response patterns in AOB neurons. (a) Identification of "multi-integrator" AOB neurons receiving excitatory inputs from two or more processing streams. (top) Best unsuccessful linear-nonlinear model fitting for steroid-only (red trace) and steroid-plus-urine (blue trace) data. Open black symbols indicate the observed normalized firing rates; error bars represent standard errors of the mean. Neither attempt was able to account for the large excitatory response to P3817 (dotted gray circle). (bottom) Linear-nonlinear solutions for a cell identified as a "single integrator" by steroid-only fits (red trace) and as a "multi-integrator" when urine responses were included (blue trace). (b) Proportion of AOB neurons falling into four categories based on linear-nonlinear model results. "Unclassified" cells did not meet the criteria for classification in the three main categories. | 2016-05-04T20:20:58.661Z | 2010-05-09T00:00:00.000 | {
"year": 2010,
"sha1": "9c90eb658036b01087013e7739e902bd6a2742b0",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc2930753?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "a3c7d983ccb49f4f2f8832c422400cc2feed5b53",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
54853220 | pes2o/s2orc | v3-fos-license | Experimental investigation of pool boiling of water-based AL 2 O 3 nanofluid on a copper cylinder
Saturated boiling of a nanofluid on a copper cylinder is experimentally studied. The nanofluid was prepared from distilled water and Al2O3 nanoparticles at a nanoparticle volume concentration of 3%. The cylinder diameter was 25 mm. The time dependence of the excess temperature was obtained for the different boiling regimes. The results show an increase in the heat transfer coefficient during boiling of the nanofluid.
Introduction
Studies of the last two decades have shown that nanofluids have unusual transport properties. In particular, small additions of nanoparticles to a carrier liquid can considerably increase its thermal conductivity and viscosity [1,2]. This has stimulated many thermophysical applications of nanofluids, particularly those aimed at the intensification of heat exchange. It has been found that nanofluids have genuinely enhanced heat transfer coefficients (see, for example, [3,4] and the literature cited therein). The desire to increase this coefficient further stimulated the study of heat transfer in nanofluids during boiling. Such work has been performed quite intensively during the last decade. Nevertheless, the results obtained are rather contradictory. For example, in [5] it is noted that the addition of nanoparticles does not change heat transport considerably, and a decrease in the heat transfer coefficient during boiling was even reported in [6]. Conversely, in [7,8], this coefficient rose.
Saturated boiling of nanofluids on a cylindrical heater was studied experimentally in [9,10]. The nanofluids were prepared from distilled water and iron oxide (II, III) nanoparticles or diamond nanoparticles. The volume concentration of the nanoparticles was varied from 0.05 to 1.0%, their diameters from 10 to 100 nm, and the heater diameter from 0.1 to 0.3 mm. It was established that the critical heat flux density in a boiling nanofluid depends on the size and material of the nanoparticles and on the heater diameter. The critical heat flux density increases with increasing nanoparticle size and decreases with increasing heater diameter.
In this work, saturated boiling of a nanofluid prepared from distilled water and Al2O3 nanoparticles was studied experimentally. The aim was to study the influence of the nanoparticles on the cooling rate and the heat transfer coefficient during boiling of the nanofluid on a copper cylinder.
Experimental apparatus and procedure
The boiling process was studied on a copper cylinder with a diameter of 25 mm.
The procedure was as follows. The cylinder was heated in a furnace to a temperature of 350-500 °C and then immersed in the studied fluid (distilled water or nanofluid) to a depth of 15-60 mm (see figure 1). Since the cylinder surface is at a high temperature, a vapor film forms upon immersion. The studied fluid was preheated to the saturation point so that no heat was spent on heating the fluid itself; cooling therefore occurred at a constant coolant temperature. In film boiling the heat transfer coefficient is constant, which provides the boundary condition for a steady cooling regime. The cylinder temperature was measured with a thermocouple mounted inside it, and the cooling rate of the cylinder and the heat transfer coefficient were determined from these measurements. Boiling was studied using distilled water and a water-based nanofluid with Al2O3 nanoparticles. The volume concentration of the nanoparticles in water was 3%, and the average Al2O3 particle size was 36 nm. The nanofluid was prepared by the standard two-step process: after the required amount of nanopowder was added to the base fluid, the nanofluid was first thoroughly mixed mechanically and then placed for half an hour in a Sapphire ultrasonic disperser to break up particle conglomerates. The nanoparticles were purchased from the "Advanced Powder Technologies" company LLC (APT) (Tomsk).
Results and discussion
A series of five experiments was carried out for distilled water and for the nanofluid containing Al2O3 nanoparticles.
Boiling regimes in distilled water at different times are shown in figure 2. Unfortunately, the addition of nanoparticles reduces the transparency of the nanofluid; for this reason, photographs of the boiling Al2O3 nanofluid are less informative, although the boiling process is similar.
The dependence of the excess temperature θ (the difference between the cylinder surface temperature and the saturation temperature of the fluid) on time, obtained in the experiments, is shown in figure 3. The boiling of distilled water and of the nanofluid is qualitatively identical but differs quantitatively: (a) in the transition region from film boiling to transition boiling (see figure 4), the transition in the nanofluid occurs about 5 seconds earlier than in distilled water, and the excess temperature θ at the transition is about 10-15 °C higher; this is equivalent to an increase of 7.5-10% in the heat transfer coefficient of the nanofluid compared with distilled water; (b) in the transition region from transition boiling to nucleate boiling (see figure 5), the transition in the nanofluid occurs about 25-30 seconds earlier than in distilled water, which is equivalent to an increase of 20-40% in the heat transfer coefficient of the nanofluid compared with distilled water.
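Although the paper does not give its data-reduction formulas, the heat transfer coefficient implied by such a cooling curve can be estimated with a lumped-capacitance energy balance, m·c·dθ/dt = −h·A·θ. A sketch follows; the cylinder length is an assumption, since it is not stated in the text:

```python
import numpy as np

# Lumped-capacitance estimate of the heat transfer coefficient from a
# measured cooling curve: m*c*(d theta/dt) = -h*A*theta
rho, c = 8960.0, 385.0   # copper density (kg/m^3) and specific heat (J/(kg K))
d = 0.025                # cylinder diameter (m), from the text
L = 0.05                 # cylinder length (m) -- assumed, not stated
A = np.pi * d * L + 2 * np.pi * (d / 2) ** 2   # lateral surface + both ends
m = rho * np.pi * (d / 2) ** 2 * L             # cylinder mass

def heat_transfer_coefficient(t, theta):
    """t: time samples (s); theta: measured excess temperature (K).
    Returns h(t) in W/(m^2 K) along the cooling curve."""
    return -m * c * np.gradient(theta, t) / (A * theta)
```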
Conclusions
Based on the experimental results, it may be concluded that the use of nanofluids in processes involving a phase transition is effective. Using nanofluids increases the heat transfer coefficient, which can improve the energy efficiency of installations in which boiling is the main process.
This work was performed with partial support from projects funded by the Russian Foundation for Basic Research and the Krasnoyarsk Regional Fund for Support of Scientific and Scientific-Technical Activities (Contract No. 16-48-243042\16).
Fig. 1. The scheme of the experimental setup.
Fig. 2. Boiling regimes at different times. Left to right: film boiling, transition boiling, nucleate boiling.

The average (over all replicates) dependences of the excess temperature θ on time for distilled water (dotted line) and the Al2O3 nanofluid (solid line) are shown in figure 3. The boiling regimes realized in these experiments (shown in figures 1-2) can be identified in this figure: each change in the slope of a curve indicates a change of regime.
Fig. 3. The dependence of excess temperature on time.
Fig. 4. The dependence of excess temperature on time in the transition region from film boiling to transition boiling.
Fig. 5. The dependence of excess temperature on time in the transition region from transition boiling to nucleate boiling. | 2018-12-13T03:52:44.474Z | 2017-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "d4d0f652265d69efce72de8186ad8ace9cfc2d1b",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/06/matecconf_tibet2017_01009.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9ab6b68cd6aa07eea372145b0824b670da2991ed",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
247840809 | pes2o/s2orc | v3-fos-license | Assessing the safety of interrogating cardiac-implantable electronic devices with brand-mismatched remote interrogators: a pilot study
Objective Remote cardiac implantable electronic device (CIED) interrogators, originally developed for home use, have been proven to be efficacious in clinical settings, especially emergency departments. Concern exists that attempting to interrogate a CIED with the remote interrogator of a different brand, i.e., a brand-mismatched interrogator, may cause device malfunction. The aim of this study was to determine if intentionally attempting to interrogate a CIED with a brand-mismatched remote interrogator resulted in device malfunction. Methods A total of 75 ex vivo CIEDs manufactured by various companies underwent attempted interrogation by a brand-mismatched remote interrogator. CIED settings were compared before and after attempted mismatch interrogation. A total of 30 in vivo CIEDs were then randomized for an attempted 2-minute mismatched remote interrogation by one of the two possible mismatched remote interrogators. CIED settings were compared before and after attempted mismatch interrogation. Results Of 150 ex vivo brand-mismatched interrogations, no device setting changes or malfunctions occurred; no remote interrogators connected to a mismatched CIED, and no devices were turned off. In the 30 patients undergoing brand-mismatched interrogations, the mean (standard deviation) age was 71.6 ( ± 14.7) years, 16 (53%) were male, with 24 pacemakers (80%), four pacemaker/implantable cardioverter defibrillators (13%), and two implantable cardioverter defibrillators (7%). Of the 30 mismatched interrogations performed, no device setting changes or malfunctions occurred; no remote interrogators connected to a mismatched CIED, and no devices turned off. Conclusion In a total 180 attempted brand-mismatched CIED interrogations, no CIED malfunctions occurred. This suggests that the use of remote CIED interrogators when device manufacturer is unknown is unlikely to result in adverse CIED-related events.
INTRODUCTION
The term cardiac implantable electronic device (CIED) encompasses pacemakers, implanted cardioverter-defibrillators, and combination devices. CIEDs are potentially lifesaving devices that decrease morbidity and mortality. 1,2 Roughly 200,000 pacemakers are implanted in bradycardic patients alone in the US each year, 3 and recent studies show CIED implantation rates continuing to increase worldwide. 4 Given their widespread use, it is critical for physicians in the emergency department (ED), perioperative units, and other clinical settings to be able to quickly interrogate the CIEDs of patients with complaints such as palpitations, syncope or dyspnea, or who report being shocked. However, it can be difficult for physicians in rural hospitals to access the services of an International Board of Heart Rhythm Examiners-certified professional on weekends and holidays, often a company representative based some distance away.
This problem can be addressed by remote CIED interrogators. Unlike the devices utilized by company representatives, remote CIED interrogators are "diagnosis-only" devices capable only of interrogating CIEDs, not altering their settings. 5 Hence, they can be safely used by any healthcare provider after minimal training. Each of the three major US CIED manufacturers (Abbott Laboratories, Chicago, IL, USA; Boston Scientific Corporation, Marlborough, MA, USA; Medtronic plc, Minneapolis, MN, USA) produces a brand-specific remote device capable of interrogating their CIEDs (the Merlin On-Demand, the Latitude Consult, and the Carelink Express, respectively). Each device consists of a "wand" paired to a console or tablet, which is placed in close proximity to the CIED of interest in order to perform interrogation. Remote interrogators were initially developed for home use by patients, allowing for electrophysiology clinics to monitor CIED function without the need for an in-person visit. Studies have found that remote interrogators decreased costs and saved time when used in such a manner, and a 2015 Heart Rhythm Society consensus statement described them as standard of care. 6-8 However, subsequent research has shown that remote interrogators possess utility in a variety of clinical settings as well.
Implementation of remote interrogators has been studied in a variety of clinical settings, and has been shown to be safe, efficient, and potentially time-saving compared to traditional interrogation in certain scenarios. 9,10 One facet of remote interrogator usage that has not been studied is the result of mismatch interrogation-that is, attempting to interrogate the CIED of a given manufacturer with the remote interrogation system produced by another. Anecdotally, doing so results in the remote system simply being unable to recognize, connect to, or interrogate the mismatched CIED. However, concern exists among physicians that attempting mismatched remote interrogation could cause CIED malfunction. While quite specific, these concerns are not irrelevant. Firstly, it is possible, especially in the ED, that a patient might misremember their CIED's manufacturer, resulting in an attempted remote interrogation with a mismatched system. Another scenario in which mismatched remote interrogation becomes relevant is one in which an ED patient requiring CIED interrogation is either unresponsive, or unaware of his or her CIED manufacturer, and the information cannot be found in the Electronic Medical Record. Anecdotally, some emergency physicians who are comfortable with remote interrogators simply attempt to interrogate the device with each possible remote interrogator until one connects, circumventing the time-consuming process of identifying an unknown CIED. Many physicians are leery of utilizing this
What is already known When patients with cardiac implantable electronic devices (CIEDs) present to the emergency department, it is crucial to determine if their CIED is malfunctioning. CIED interrogation often requires a trained professional, usually company
representatives who have to travel from the nearest major city. This results in potential delays in care. Recently, remote interrogators have provided a solution to this issue. However, concern still exists regarding their safety and efficacy. Remote interrogators are brand-specific. In the event of a patient being unaware of their device manufacturer, some providers feel comfortable simply attempting to interrogate that patient's CIED with each possible remote interrogator until one device connects. However, there is concern that this strategy may cause CIED malfunction.
What is new in the current study
In our study, we test this methodology with both in vivo and ex vivo devices to validate the safety of utilizing brandmismatched remote interrogators on CIEDs.
interrogation technique, worried that mismatched remote interrogation could cause CIED malfunction.
Since the safety of mismatched remote interrogation is relevant, potentially-useful, and has not been examined, we investigated whether attempting to interrogate CIEDs with brand-mismatched remote interrogators resulted in device malfunction.
METHODS
We conducted a two-phase study, evaluating brand-mismatched CIED interrogation first in nonimplanted devices (ex vivo), and then in patients with implanted devices (in vivo). This unfunded study took place in a rural community hospital in Ohio, was approved by an institutional review board, and was performed in cooperation with device manufacturers.
Ex vivo phase
In the first phase of the study, each of the three CIED major manufacturers provided a sample of 25 older and newer pacemakers, implantable cardioverter defibrillators, and combo devices. Company representatives interrogated each of their 25 ex vivo devices using their brand-matched programmer, recording their settings. This initial interrogation served as a baseline, and was followed by an attempted 2-minute interrogation with a brand-mismatched remote interrogator. After the first brand-mismatched interrogation, the company representative again interrogated the device and recorded its settings. This protocol was then repeated using the other possible brand-mismatched interrogator (Fig. 1). Results from before and after the 150 brand-mismatch interrogations were compared to identify any programming changes that might have occurred.
In vivo phase
The second, in vivo phase of the study assessed the effects of attempted brand-mismatched interrogation in patients with implanted CIEDs. Inclusion criteria were: subjects of at least 18 years of age with a CIED who were not pregnant and presented to the electrophysiology clinic for a routine visit. Patients were excluded if they had known malfunctioning devices, declined to participate, or were prisoners. After informed consent was obtained, ten patients with devices produced by each major CIED manufacturer underwent interrogation with the appropriate brand-matched programmer by a clinic technician. They were then randomized to undergo attempted mismatched remote interrogation for 2 minutes (using one of the two possible mismatched brands) by study staff. After attempted mismatch interrogation, the patient's CIED was interrogated a second time by the clinic technician. CIED settings before and after the brand-mismatched interrogation were compared (Fig. 2).
RESULTS
Ex vivo results
Overall, in 150 ex vivo attempted brand-mismatched interrogations, no devices were turned off, no device settings were changed, no malfunctions occurred, and no remote interrogator was able to connect to (or extract data from) a mismatched CIED.
In vivo results
Thirty patients underwent attempted mismatched interrogation in the in vivo phase of the study. The mean (standard deviation) patient age was 71.6 ( ± 14.7) years and 16 (53%) were male. CIEDs studied included 24 pacemakers (80%), four pacemaker/ implantable cardioverter defibrillators (13%), and two implantable cardioverter defibrillators (7%). As a result of attempted brand-mismatched remote interrogation: no settings were changed; no devices were turned off; no malfunctions occurred; and no remote interrogator was able to connect to (or extract data from) a mismatched CIED.
DISCUSSION
The primary finding of our study is that, in a total of 180 brandmismatched, remote CIED interrogations performed on a mix of ex vivo and in vivo devices, there were no instances of a mismatched interrogator connecting to a CIED, no instances of CIED settings being altered, and no instances of a CIED turning off. These findings support the safety and utility of remote CIED interrogators, especially in the ED setting. Specifically, our findings suggest that, if an emergency provider unintentionally attempts to interrogate a patient's CIED with a brand-mismatched remote interrogator, it is unlikely that any CIED-related adverse events will occur as a result. These findings further add to the literature surrounding remote interrogator use in clinical settings. Remote interrogator usage has been found to be useful in a variety of clinical settings, including identifying CIED malfunction after radiotherapy, decreasing response times in perioperative areas, and reducing costs when utilized in outpatient clinics. [9][10][11] Perhaps the most such research has been performed in the ED, where it is crucial for emergency physicians to be able to interrogate potentially-malfunctioning CIEDs in a timely, efficient manner. This is rarely an issue in urban, academic centers, which often have electrophysiology staff available or device company representatives based nearby. However, rural EDs sometimes have to wait hours for CIEDs to be interrogated, often on weekends and holidays. Remote CIED interrogators allow rural emergency physicians to quickly interrogate CIEDs and receive an interpretation from the device company. Remote interrogators have been shown to be safe, efficient, and capable of potentially improving patient experience in the ED setting. [12][13][14] The increased speed inherent in remote interrogation compared to traditional interrogation is especially well-suited to the ED, as its use often allows emergency physicians to rule out device malfunction as a potential driver of symptoms. Multiple studies have shown that the vast majority of remote interrogations performed in the ED either return normal findings or findings not requiring immediate action, emphasizing their utility as a triage tool. 11,15 Because of this, remote interrogators in the ED often serve to rule out device dysfunction, decreasing clinical decision-making time and patient length of stay.
Another scenario in which remote interrogators could be of use in the ED is in identifying an unknown CIED. A previous survey of CIED patients presenting to the ED found that only 55% carried their manufacturer-issued device identification card. 16 Although not common, it is feasible that a patient could present to the ED either incapacitated or unaware of their CIED manufacturer. If the information is not available in the electronic medical record, attempting to identify the manufacturer of an unknown CIED is a lengthy, sometimes futile process often involving multiple calls to device company registries. These delays could be bypassed using a simple protocol in which ED staff attempt to interrogate an unknown CIED using each of the three possible remote interrogators until one connects, since there is no evidence of a remote interrogator being able to connect to a brand-mismatched CIED. This very strategy has proven anecdotally effective for the authors, but some emergency physicians are hesitant to utilize it, worried that mismatched remote interrogation could cause CIED malfunction. Our findings support the potential utility of this protocol.
This study has several limitations, including the fact that it was a single-center study. Furthermore, we only interrogated devices that were functional. Lastly, we did not attempt to perform mismatched interrogation on malfunctioning CIEDs, due to the logistics and difficulties involved.
We found no evidence of CIED malfunction after attempted interrogation with brand-mismatched remote interrogators. These findings continue to expand the extant research related to the safety of remote CIED interrogator usage in the clinical setting, and furthermore suggest increased utility in the ED.
CONFLICT OF INTEREST
No potential conflict of interest relevant to this article was reported. | 2022-04-01T06:22:58.868Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "ab44f735976cb6d21753da4d9021be309e4b762b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d5ed11b3c3902374f61ec14e18cf9177fb5337e7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269834239 | pes2o/s2orc | v3-fos-license | Impact of foliar application of iron and zinc fertilizers on grain iron, zinc, and protein contents in bread wheat (Triticum aestivum L.)
Introduction Micronutrient deficiencies, particularly iron (Fe) and zinc (Zn), are prevalent in a large part of the human population across the world, especially in children below 5 years of age and pregnant women in developing countries. Since wheat constitutes a significant proportion of the human diet, improving grain Fe and Zn content in wheat has become important in improving human health. Objective This study aimed to quantify the effect of foliar application of iron sulfate heptahydrate (FeSO4.7H2O) and zinc sulfate heptahydrate (ZnSO4.7H2O) and their combination on grain Fe and Zn concentrations, as well as grain protein content (GPC). The study also aimed to assess the utility of these applications in large field conditions. Methods To address this issue, field experiments were conducted using 10 wheat cultivars and applying a foliar spray of FeSO4.7H2O (0.25%) and ZnSO4.7H2O (0.50%) separately (@400 L of solution in water per hectare during each spray) and in combination at two different crop growth stages (flowering and milking) for three consecutive crop seasons (2017–2020). The study used a split-plot design with two replications to assess the impact of foliar application on GFeC, GZnC, and GPC. In addition, an experiment was also conducted to assess the effect of soil (basal) @ 25 kg/ha ZnSO4, foliar @ 2 kg/ha, ZnSO4.7H2O (0.50%), and the combination of basal + foliar application of ZnSO4 on the grain micronutrient content of wheat cultivar WB 02 under large field conditions. Results GFeC increased by 5.1, 6.1, and 5.9% with foliar applications of FeSO4, ZnSO4, and their combination, respectively. GZnC increased by 5.2, 39.6, and 43.8% with foliar applications of FeSO4, ZnSO4, and their combination, respectively. DBW 173 recorded the highest increase in GZnC at 56.9% with the combined foliar application of FeSO4 and ZnSO4, followed closely by HPBW 01 at 53.0% with the ZnSO4 foliar application, compared to the control. The GPC increased by 6.8, 4.9, and 3.3% with foliar applications of FeSO4, ZnSO4, and their combination, respectively. Large-plot experiments also exhibited a significant positive effect of ZnSO4 not only on grain Zn (40.3%, p ≤ 0.001) and protein content (p ≤ 0.05) but also on grain yield (p ≤ 0.05) and hectoliter weight (p ≤ 0.01), indicating the suitability of the technology in large field conditions. Conclusion Cultivars exhibited a slight increase in GFeC with solitary foliar applications of FeSO4, ZnSO4, and their combination. In contrast, a significant increase in GZnC was observed with the foliar application of ZnSO4 and the combined application of FeSO4 and ZnSO4. In terms of GPC, the most significant enhancement occurred with the foliar application of FeSO4, followed by ZnSO4 and their combination. Data demonstrated the significant effect of foliar application of ZnSO4 on enhancing GZnC by 39.6%. Large plot experiments also exhibited an increase of 40.3% in GZnC through the foliar application of ZnSO4, indicating the effectiveness of the technology to be adopted in the farmer’s field.
Introduction
Micronutrient deficiency, primarily iron (Fe) and zinc (Zn), is prevalent among the human population, especially in children below the age of 5 years and pregnant women in low- and middle-income countries (1). Approximately two billion people across the world are affected by Fe and Zn deficiencies (2). Fe and Zn deficiencies lead to various health problems such as higher vulnerability to infectious diseases, anemia, disrupted brain function, hampered physical development, and stunting when such micronutrient-deficient diets are consumed over a period of time (3). Since wheat constitutes a significant proportion of the human diet, improving grain Fe and Zn content in wheat has become important in improving human health. Though wheat has large variations among germplasm lines in quantities of protein, carbohydrates, fats, minerals, antioxidants, and vitamins that are required for human health, there is a need to enhance the grain Fe and Zn contents in high-yielding backgrounds. This can be achieved by both genetic manipulation and agronomic management of wheat cultivation (4), delivering wheat naturally rich in Fe and Zn to everyone, primarily to the segments of populations without access to costly commercially available fortified foods or supplements (5).
Substantial variations in micronutrient concentrations have been reported in wheat grains in different studies (6,7). Grain Fe content varies approximately 1.2-, 1.8-, and 2.9-fold in tetraploid, hexaploid, and diploid wheat cultivars, respectively (8). Diploid progenitors of wheat showed even higher variation, with the highest value of ~110 ppm in some of the diploid accessions (9). The higher content of Fe and Zn in diploids may be due to the presence of very thin grains, higher bran content (where micronutrients are concentrated), and comparatively less starchy endosperm. Recently, efforts have been made to improve micronutrient density in commercial wheat cultivars by utilizing diploid progenitors. Commercial wheat cultivars exhibited a lower range of grain Zn concentrations (20-35 mg/kg, with an average of 28 mg/kg) in most of the wheat-producing regions (10). Taking these facts into account, CIMMYT, Mexico, initiated a program to increase Fe and Zn in commercial cultivars by 12 mg/kg over the global baseline Zn concentration of approximately 25 mg/kg, the increment required in the target area for a measurable effect on human health (8). Furthermore, it is also reported in some studies that protein-rich grains have a higher amount of Fe and Zn compared to low-protein grains (11-13). In addition, soil conditions also cause much more variation than the genotype or species, depending on the nutrient profile of the soil (14). Micronutrient deficiency in grains is caused by several soil factors, i.e., (a) low/deficient micronutrient availability; (b) pH; (c) a high content/concentration of calcite, bicarbonate ions, and salts; and (d) a high content of available phosphorus and interaction with other nutrient elements (14). Multi-location testing of wheat entries under the All India Coordinated Research Projects (AICRPs) on wheat and barley in India demonstrated a large effect of environments on Fe and Zn contents (15). The distinct physiological and biochemical functions of these nutrients in fostering plant growth were examined by Putra et al. (16). Additionally, the pivotal roles of Fe and Zn in the physiology of wheat plants, along with their positive associations with different yield components, have been addressed (17,18).
Significant improvement in Fe and Zn content through the external application of salts containing Fe and Zn has been reported in wheat grains (3, 19-22). As Zn has higher mobility in wheat phloem (23,24), foliar application of Zn is considered to be the most effective method for improving the Zn concentration in wheat grain. Reports indicate that foliar application of Zn salt can increase grain Zn concentration up to 3- or 4-fold, depending on soil status and climate conditions (25-30). Though Fe mobility within the phloem is intermediate (31), reports indicate good re-translocation of Fe from the shoot to the grain in wheat (32).
Given the prevalence of deficiencies of these two micronutrients, the present study was undertaken to quantify the effect of foliar application of Fe and Zn salts on their concentrations in wheat grain using recently released varieties. In addition, large-plot experiments were undertaken to quantify the effects of foliar application of 0.50% (w/v) ZnSO4·7H2O @ 2 kg/ha, soil application of ZnSO4 @ 25 kg/ha, and their combination on grain Fe and Zn concentrations and yield potential, in order to assess the utility of foliar application under large-scale field conditions.
Materials and methods
The experimental site, plant material, and experimental details

Details of the experimental site, plant material, and experimental design have been described previously (34). Two foliar sprays of all three treatments were applied @ 400 L of solution per hectare at two different crop growth stages (flowering and milking) during evening hours. Thus, the total amounts of FeSO4·7H2O and ZnSO4·7H2O used across both sprays were 2.0 kg and 4.0 kg, respectively. In addition, a large-scale experiment (on 1 acre of land) was conducted at ICAR-NDRI, Karnal (Haryana) during 2019-2020 using the cultivar WB 02 to assess the effects of foliar application of an aqueous solution (0.50% w/v) of ZnSO4·7H2O @ 2 kg/ha, basal application of ZnSO4 @ 25 kg/ha, and the combination of both (with foliar sprays applied at the flowering and milking stages) on grain Fe concentration (GFeC), grain Zn concentration (GZnC), and grain protein content (GPC), along with grain yield.
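The spray arithmetic implied by these rates can be checked directly. The short sketch below (Python; illustrative, not study code) verifies that a 0.50% w/v solution applied @ 400 L/ha delivers the stated 2 kg/ha of ZnSO4·7H2O per spray; the 0.25% w/v Fe spray concentration it derives is an inference from the reported 2.0 kg total over two sprays, since that value is given only in the cited methods paper (34).

```python
# Minimal check of the foliar-spray arithmetic (illustrative, not study code).

def salt_per_hectare(conc_w_v_percent: float, spray_volume_l_per_ha: float) -> float:
    """kg of salt applied per hectare in one spray; % w/v means g per 100 mL."""
    grams_per_litre = conc_w_v_percent * 10.0  # 0.50% w/v -> 5 g/L
    return grams_per_litre * spray_volume_l_per_ha / 1000.0

zn_per_spray = salt_per_hectare(0.50, 400)  # 2.0 kg/ha, matching the stated @ 2 kg/ha
print(zn_per_spray, zn_per_spray * 2)       # 2.0 kg per spray, 4.0 kg over two sprays

# Inferred (not stated in this section): 2.0 kg FeSO4.7H2O over two sprays at 400 L/ha
fe_conc_w_v = (2.0 / 2) / 400 * 1000 / 10.0  # = 0.25% w/v, an assumption
print(fe_conc_w_v)
```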
Every year, sowing was done in the second week of November in a well-prepared field at 25 cm row spacing, with a plot size of 3 rows of 2 m. Recommended doses of fertilizer (120 kg N, 60 kg P2O5, and 40 kg K2O per ha) were used, with the full doses of K2O and P2O5 applied at sowing; nitrogen was applied in three split doses: 60 kg N per ha as a basal dose at sowing, 30 kg N per ha at first irrigation (21 days after sowing), and 30 kg N per ha at second irrigation (45 days after sowing). Finally, trials were harvested at maturity in the third week of April of the subsequent years. All recommended packages of practices were followed to raise a good crop.
Data collection and statistical analysis
Twenty-five to 30 spikes from each replication of a treatment were bulked separately and hand-threshed in a clean cloth bag by beating with a wooden stick, and the grains were separated from the husk in a plastic tray. After threshing, 15-20 g of seed from each replication was used for measuring GFeC, GZnC, and GPC. GFeC and GZnC were measured using an energy-dispersive X-ray fluorescence (ED-XRF) instrument, model X-Supreme 8000 (M/s Oxford Inc., USA), whereas GPC was estimated using a Foss Infratec 1241 Grain Analyzer, with the final GPC value calculated on a 12% moisture basis. Finally, data for GFeC, GZnC, and GPC were analyzed for descriptive statistics, analysis of variance (ANOVA), and Pearson's correlation coefficient using the publicly available statistical platform R version 4.2.1 for Windows (R Core Team); the percent change over control was calculated and presented graphically using MS Excel.
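For illustration, the sketch below reproduces the reported analysis steps in Python; the study itself used R 4.2.1, and the file name, column names, and model specification here are plausible assumptions rather than the authors' actual script.

```python
# Illustrative re-creation of the reported analysis (the study used R 4.2.1).
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one row per plot, with genotype, year,
# treatment, GFeC, GZnC, and GPC columns.
df = pd.read_csv("grain_traits.csv")

# One plausible ANOVA specification covering genotype, year, treatment,
# and the genotype x year and treatment x year interactions.
model = ols("GZnC ~ C(genotype) * C(year) + C(treatment) * C(year)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Pearson's correlation between GFeC and GZnC within one treatment.
ctrl = df[df["treatment"] == "control"]
r, p = stats.pearsonr(ctrl["GFeC"], ctrl["GZnC"])
print(f"r = {r:.2f}, p = {p:.3g}")

def pct_change(treated_mean: float, control_mean: float) -> float:
    """Percent change of a treatment mean over the control mean."""
    return (treated_mean - control_mean) / control_mean * 100

# pct_change(37.8, 27.3) gives ~38.5 with the rounded table means; the paper
# reports 39.6%, presumably computed from unrounded values.
```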
Results

ANOVA showed that the genotypic effect was significant for GZnC (p ≤ 0.01) and GPC (p ≤ 0.05) but not for GFeC. Though the environment had a significant effect on GFeC and GZnC, the interactive effect of genotype and year was significant only for GZnC (p ≤ 0.05). On the other hand, treatments and their interaction with environments showed significant effects (p ≤ 0.01) for all three traits (Table 2). There was a significant positive correlation between GFeC and GZnC (r = 0.60; p ≤ 0.01) under control conditions, as well as under the foliar application of FeSO4 (r = 0.70; p ≤ 0.01), ZnSO4 (r = 0.25; p ≤ 0.05), and the combined application of FeSO4 and ZnSO4 (r = 0.26; p ≤ 0.05). There was also a significant positive correlation between GZnC and GPC (r = 0.28; p ≤ 0.05) under the combined foliar application of FeSO4 and ZnSO4 (Table 3).
Details of variety-wise changes in GFeC, GZnC, and GPC in the different experiments are given in Supplementary Table S2 and Figures 2-4. Compared to control conditions, the highest increase in GFeC was observed in the cultivar WH 1105 (11.6% higher) with foliar application of ZnSO4 (Supplementary Table S2 and Figure 2); and the highest increase in GPC (10.2%) was exhibited by HPBW 01 with foliar application of either FeSO4 or ZnSO4 independently (Supplementary Table S2 and Figure 4).
Given the highly significant effect of ZnSO4 on Zn content, an experiment was conducted on an area of 1 acre to assess the impact of the application under large-scale field conditions. The foliar spray and the combined basal + foliar application of ZnSO4 exhibited a very large increase (40.3%; p ≤ 0.001) in Zn content, while basal application showed a comparatively smaller increase (7.0%). GZnC was 28.3 mg/kg, 30.3 mg/kg, 39.7 mg/kg, and 39.7 mg/kg, while GFeC was 39.9 mg/kg, 40.9 mg/kg, 42.1 mg/kg, and 37.9 mg/kg under the no-Zn, basal ZnSO4, foliar ZnSO4, and basal + foliar ZnSO4 treatments, respectively. Grain yield also increased significantly, reaching 54.8 q/ha, 58.2 q/ha, 58.6 q/ha, and 60.3 q/ha under the no-Zn, basal ZnSO4, foliar ZnSO4, and basal + foliar ZnSO4 treatments, respectively. In addition to GZnC, significant increases were observed for GPC (p ≤ 0.05) and hectoliter weight (p ≤ 0.01) with the foliar ZnSO4 spray (Figure 5).
Discussion
Fe and Zn are essential nutrients for the growth and development of both plants and animals. At present, increasing the nutritional value of crops is of utmost importance because a large part of the human population suffers from micronutrient deficiency. Among all current strategies, agronomic biofortification is a sustainable and cost-efficient approach for reducing mineral malnutrition in developing countries. Therefore, this study was conducted to assess the effectiveness of the foliar application of Fe and Zn salts (alone and in combination) on three important nutritional quality traits of wheat, i.e., GFeC, GZnC, and GPC. ANOVA results indicated significant differences among genotypes (for GZnC and GPC, but not GFeC), treatments (for GFeC, GZnC, and GPC), years (for GFeC and GZnC, but not GPC), and the year × treatment interaction (for GFeC, GZnC, and GPC), suggesting the influence of environment on the expression of these traits. Other studies have also shown that the three traits are highly influenced by the genotype × environment interaction and are controlled by polygenes (35). The notable genotype × environment (G × E) interaction observed in our study may arise from differences in micronutrient levels within the soil, as previously reported (36), with genotype as the primary determinant of variation in GFeC, GZnC, and GPC (37-40).
Impact of foliar application of FeSO4 on GFeC and GZnC
Over the years, GFeC and GZnC under control conditions ranged from 30.6 to 45.3 mg/kg (average 38.1 mg/kg) and from 21.0 to 36.3 mg/kg (average 27.3 mg/kg), respectively. Various studies have also reported diverse GFeC and GZnC levels in wheat: 28.5-46.3 mg/kg and 33.6-65.6 mg/kg (41); 17.8-49.7 mg/kg and 24.5-44.3 mg/kg (42); and 24.2-48.5 mg/kg and 19.4-47.7 mg/kg (43). Previous studies reported an average GFeC of 35.0 mg/kg at CIMMYT, Mexico (44), and an average GZnC of 27.3 mg/kg in 160 Chinese ancient wheat cultivars (45). In this experiment, foliar application of FeSO4 marginally increased average GFeC, by 5.1% over the control. Prior reports likewise indicated no significant influence of foliar application of Fe fertilizers, either in inorganic or chelated form, on grain Fe concentration in Canadian wheat cultivars (46). Grain Fe concentration increased by approximately 21% in Iran (47), by 28% in China, and by 14% with FeEDTA and 10% with FeSO4 in central Anatolia, Turkey, with no significant effect on Zn content (33). The smaller increase in grain Fe content following foliar Fe application may be due to limited penetration into leaf tissues and restricted phloem mobility (48, 49). Additionally, the effectiveness of different forms of Fe salts, including FeSO4, FeEDTA, FeDTPA, FeEDDHA, and Fe-citrate, in addressing Fe deficiency varies significantly with factors such as solubility, stability, leaf cuticle penetration, mobility, and translocation within leaf tissues (50, 51). Though negative effects of foliar sprays of different doses of Fe salts on grain Zn concentration have been reported, leading to lower Zn content compared to the control (52), in this investigation GZnC increased by 5.2% with the sole application of Fe salt to the leaves. The negative effect of Fe application on Zn content has been attributed to the antagonistic relationship between the Fe and Zn elements (53). Similar findings regarding the interaction of Fe with Zn and other elements were observed in paddy (54) and rice (55), with a decline in Zn translocation as Fe levels increased. Our findings on the foliar application of FeSO4 are consistent with those of Pahlavan-Rad and Pessarakli (47), who reported a 21% increase in grain Fe and a 13% increase in grain Zn concentration in wheat.
Impact of foliar application of ZnSO4 on GFeC and GZnC
It is interesting to note that foliar Zn salt application improved GFeC by 6.8%, from 38.1 to 40.4 mg/kg, compared to the control. Some previous studies also found that foliar Zn salt application significantly improved the GFeC of the bran and embryo parts of the wheat grain, which may be due to the ability of Zn-binding compounds to act as sinks for Fe transport and storage in grains (56, 57). An increase in Fe content following foliar treatment with Zn salt was also seen in potatoes, with no antagonistic impact on tuber Fe content (58). Our findings from the foliar application of ZnSO4 are in line with those of Pahlavan-Rad and Pessarakli (47), who reported a 99% boost in Zn concentration and an 8% increase in Fe concentration in wheat grains. Foliar spray of ZnSO4 increased Zn content significantly (p ≤ 0.001), by 39.6% compared to the control, from 27.3 to 37.8 mg/kg (Table 1). This significant increase in the Zn content of wheat grains may be attributed to the improved mobility of Zn in the phloem and its efficient translocation into developing wheat grains (24, 59). Not only does foliar Zn salt application alone result in high grain Zn concentration, but the combined application of Fe and Zn boosts Zn concentration in wheat grains in a similar way. Several other reports have also shown enhanced wheat grain Zn content, by 58% (60) and 83% (22), with foliar application of Zn salt. In addition, other studies reported grain Zn concentrations increased by 27% in rice and 9% in maize (61) with foliar application of Zn salt. To further substantiate that the significant effect of ZnSO4 spray on Zn content applies under large-scale field conditions, an experiment was conducted on an area of 1 acre using both foliar and basal application. There was a 7% increase in Zn content with the basal application of ZnSO4, compared with 40.3% with foliar application. This demonstrates that foliar application of ZnSO4 can be used to enhance Zn content significantly under farmers' field conditions.
Impact of combined foliar application of FeSO4 and ZnSO4 on GFeC and GZnC
Combined application of FeSO4 and ZnSO4 led to a 5.9% increase in GFeC compared to control conditions; GFeC ranged from 32.4 to 52.0 mg/kg, with an average of 40.35 mg/kg. It is worth noting that agronomic biofortification produced a smaller increase in GFeC than in GZnC, likely due to the limited phloem mobility of Fe from leaves to grains (62). These results corroborate earlier findings (63, 64) indicating a weak association between GFeC and Fe salts applied via the foliar or soil route. However, in one study, an increase in GFeC of up to 28% was observed following foliar Fe salt application (65). Promisingly, the combined application of Fe and Zn salts increased GZnC by 43.8% over the control; GZnC ranged from 27.8 to 52.5 mg/kg, with an average of 38.9 mg/kg. This increase in GZnC was similar to that from foliar application of Zn salt alone. Several other reports have similarly shown higher accumulation of Zn with foliar application (22, 60). This may be due to the better mobility of Zn-binding compounds and the ready translocation of Zn in the phloem to developing wheat grains (53, 56, 59).
Impact of foliar application of FeSO4, ZnSO4, and their combination on GPC
The foliar application of Fe, Zn, and the combination of Fe and Zn salts increased GPC by 6.8, 4.9, and 3.3%, respectively, indicating a stronger influence of FeSO4 application on GPC. GPC ranged from 10.0 to 12.8% (average 11.4%) with Fe salt application, from 9.8 to 12.7% (average 11.2%) with Zn salt application, and from 9.3 to 12.2% (average 11.0%) with combined Fe and Zn salt application. The combined application of Fe and Zn led to a smaller increase in protein content, possibly because of some antagonistic effect of the Fe and Zn combination. Several other reports have also shown improvements in GPC with the application of Fe and Zn salts in wheat (11, 60, 66, 67). Previous studies have indicated that foliar application of micronutrients such as FeSO4 and ZnSO4 enhanced grain weight as well as grain and straw yield (26, 30, 68-70). The significant positive correlation among GFeC, GZnC, and GPC provided additional support that the application of Fe and Zn salts can enhance protein content. Earlier reports (40, 71-73) similarly found a positive correlation between Fe and Zn content and GPC in wheat, suggesting that the accumulation of Fe and Zn in wheat grains is influenced by grain protein levels.

Varietal response to the foliar application of FeSO4, ZnSO4, and their combination

Varietal responses to the various foliar sprays of micronutrients displayed significant differences (Supplementary Table S1). The average performance of wheat cultivars, as determined by LSD testing, revealed significant differences only with the foliar application of ZnSO4 and of the combination of FeSO4 and ZnSO4, specifically for GZnC. Varietal responses varied in terms of absorption, accumulation, and translocation, factors primarily determined by inherent genetic potential. Notably, in our study WB 02, bred with a focus on biofortification traits, exhibited a heightened GZnC response to both foliar and basal application of mineral fertilizers. It is recommended to assess genotype performance for agronomic fortification through micronutrient application to attain accurate outcomes. Identifying and endorsing lines with superior grain micronutrient content for cultivation is imperative for combating malnutrition.

Effect of foliar, soil, and combined application of ZnSO4 using large plots on GFeC, GZnC, GPC, grain yield, and hectoliter weight

An experiment with basal (soil) application and foliar spray of ZnSO4 was conducted on 1 acre of land to evaluate the suitability of the technology under farmers' field conditions. The data showed that foliar application of ZnSO4 significantly increased GZnC (p ≤ 0.001), GPC (p ≤ 0.05), grain yield (p ≤ 0.05), and hectoliter weight (p ≤ 0.01) but had no effect on GFeC. Thus, the large-scale experiment further demonstrated the positive effect of ZnSO4 on GPC, hectoliter weight, and yield. An additive effect on Zn and protein concentrations has been reported previously in wheat grain (74). Zn and Fe salts applied via soil and foliar methods, along with 120 kg N/ha at sowing, led to a 46% increase in Zn and a 35% increase in Fe concentration (75). A positive relationship between Zn and N is observed with increased protein content (76). In addition, reports have also shown synergistic interactions among plant nutrients that enhanced crop growth and grain yield (77, 78). Studies have shown that higher grain yield can lead to a "dilution effect" on grain Zn concentration across various cultivars or fields (79, 80). In this study, a yield-related dilution effect was also observed in the basal + foliar spray treatment, although it was not significant. Therefore, foliar application of ZnSO4 appears suitable for enhancing GZnC even under high-yielding conditions.
Conclusion
The present investigation was undertaken to quantify the effect of foliar application of Fe and Zn salts on Fe, Zn, and protein concentrations in wheat grain. The data demonstrated a significant effect of foliar application of ZnSO4 on grain Zn and protein contents. The large-plot experiment also exhibited a significant positive effect of ZnSO4 not only on grain Zn and protein content but also on grain yield and hectoliter weight, indicating the suitability of the technology for farmers' fields. The cultivars exhibited a slight increase in GFeC with the individual foliar applications of FeSO4, ZnSO4, and their combination. In contrast, a significant increase in GZnC was observed with the foliar application of ZnSO4 and with the combined application of FeSO4 and ZnSO4. In terms of GPC, the greatest enhancement occurred with the foliar application of FeSO4, followed by ZnSO4 and their combination. A shortcoming of foliar application is that it adds to the overall cultivation expenses for farmers. The efficacy of foliar spraying of micronutrients is also heavily influenced by environmental variables such as the timing of application, temperature, wind speed, and rainfall, which could potentially hinder its effectiveness. However, given the potential of this technology to combat micronutrient malnutrition, adopting agronomic biofortification through foliar application is advisable.
FIGURE 1 Overall effect of foliar application of FeSO4, ZnSO4, and their combination on GFeC, GZnC, and GPC over control for 10 wheat cultivars grown at Karnal during 2017-2020.
FIGURE 2 Variety-wise percent change in GFeC over control by the foliar application of FeSO4, ZnSO4, and their combination (wheat cultivars grown at Karnal during 2017-2020).
FIGURE 3 Variety-wise percent change in GZnC over control by the foliar application of FeSO4, ZnSO4, and their combination (wheat cultivars grown at Karnal during 2017-2020).
FIGURE 4 Variety-wise percent change in GPC over control by the foliar application of FeSO4, ZnSO4, and their combination (wheat cultivars grown at Karnal during 2017-2020).
TABLE 1 Overall performance of the entire set of wheat cultivars for GFeC, GZnC, and GPC under the foliar application of FeSO4, ZnSO4, and their combination, along with control, grown at Karnal during 2017-2020.
TABLE 3 Pearson's correlation coefficients for the entire set of wheat cultivars for GFeC, GZnC, and GPC under the foliar application of FeSO4, ZnSO4, and their combination, along with control, grown at Karnal during 2017-2020.
Case study of creativity in asynchronous online discussions
It is vital for online educators to know whether the strategies they use help students gain twenty-first-century skills like creativity. Unfortunately, very little research exists on this topic. Thus, the purpose of this study was to determine whether participation in online courses can help students develop creativity through asynchronous online discussions, textbooks, and teacher-developed materials. A case-study approach was used: one professor, recognized by her peers for her expertise in online education, and three of her students were interviewed. Twenty-nine asynchronous online discussions were also collected and analyzed using a sequential process of building an explanation, checking the explanation against the data, and repeating the process. Key results indicated that project-based prompts, problem-based prompts, and heuristics used in asynchronous online discussions can help promote creativity. Future research should explore a more diverse group of participants and academic subject areas.
Review of the literature
For an idea or product to be judged creative, it must be novel and capable of being used (Corazza, 2016; Kaufman, 2009). Per the componential model of creativity (Amabile, 1983, 1988; Amabile & Pillemer, 2012), creativity arises from the interaction of three components within a person: (a) domain-relevant skills, (b) creativity-relevant processes, and (c) task motivation; and one component outside a person: the social environment in which the person is working. Domain-relevant skills are the factual knowledge and domain expertise that a creative individual possesses. Creativity-relevant processes are the general cognitive skills that promote the generation of ideas. Task motivation is the intrinsic motivation that the person has for completing the task. The social environment refers to the setting in which the person is working (Amabile, 1983, 1988; Amabile & Pillemer, 2012).
Several researchers have considered how teachers can enable students to be more creative. These researchers have discovered that helping students be creative requires teachers to know what creativity is and what it looks like. Beghetto and Kaufman (2013) proposed that instructors who are required to teach creativity while lacking a deep understanding of what creativity is might do more damage than good. They also suggested that teachers need to understand five essential principles before creativity is added to the curriculum. First, teachers must realize that creativity is not just novelty. Second, teachers must know that there are various levels of creativity. Third, teachers must appreciate that some environments inhibit creativity while others stimulate it. Fourth, teachers must know that creativity is not free. Finally, teachers must know that there is an appropriate time for creativity.
Teachers should model and provide opportunities for creativity. They can do this by keeping creativity at the forefront when developing classroom environments (Starko, 2013). Modeling and reinforcement should play a primary role in making creativity a part of every lesson (Beghetto & Kaufman, 2013). Additionally, students need to have their creativity triggered (Garner, 2013), so teachers should carefully select the motivational techniques used to urge students to be creative and always provide students opportunities to demonstrate their creativity (Beghetto & Kaufman, 2013).
Teachers need to help students develop the mindsets required for creativity (Starko, 2013). Part of this is helping students gain content knowledge (Gregory, Hardiman, Yarmolinskaya, Rinne, & Limb, 2013). This should include how to visualize and how to get in the habit of noticing (Garner, 2013), as well as how to ponder the ramifications of their solutions (Gregory et al., 2013) while thinking in an interdisciplinary manner (Starko, 2013). Teachers also need to help students develop the skills required for creativity (Starko, 2013).
Asynchronous online discussions
Researchers have also considered how asynchronous online discussions can best be planned for student success. Gao et al. (2013) identified four ways that online instructors organize asynchronous online discussions: (a) constrained environments, (b) visualized environments, (c) anchored environments, and (d) combined environments. In constrained environments, teachers give students sentence starters or frames. In visualized environments, teachers provide students with software that allows them to turn their discussions into concept maps. In anchored environments, teachers give students texts to annotate and ask students to turn in their annotated texts for the discussion. In combined environments, teachers ask students to participate in two or more types of discussion.
de Noyelles, Mannheimer Zydney, and Chen (2014) reviewed the literature on asynchronous online discussions. They discovered that asynchronous online discussions are improved when instructors model good social presence, require participation in the discussions, and grade the discussions. de Noyelles et al. also identified three types of prompts that improved asynchronous online discussions: (a) problem-based prompts, (b) project-based prompts, and (c) debate-based prompts. They further identified six types of teacher responses to students' posts that improved discussions: questioning student responses, playing devil's advocate, providing timely, modest instructor feedback, allowing students to facilitate discussions, providing structure or protocol prompts, and providing audio feedback.
Pena-Shaff and Altman (2015) noticed that students who posted frequently in discussions were more likely to benefit from them, as were students who replied to other students. Thus, Pena-Shaff and Altman suggested that online teachers insist that students post a certain number of times. Additionally, Pena-Shaff and Altman found that some types of prompts were more likely to elicit deeper thinking than other prompts. However, insufficient research has been done to establish whether asynchronous online discussions can be used to improve creativity. In the following pages, we describe a case study (Yin, 2014) that focuses on answering the question of whether asynchronous online discussions can be used to improve creativity.
Research questions
We asked two research questions. The first was: How do asynchronous online discussions reflect Amabile's componential model of creativity?
(a) How do instructor prompts reflect Amabile's componential model of creativity? (b) How does student-to-instructor interaction reflect the different components of Amabile's componential model of creativity? (c) How does student-to-student interaction reflect the different components of Amabile's componential model of creativity?
The second was: How do the materials used in asynchronous online courses promote creativity per Amabile's componential model of creativity?
Ethical approval
This study was part of a dissertation completed at Walden University. Committee members included Dr. Dennis Beck and Dr. Jennifer Smolka. Approval for the study was obtained through Walden University's Institutional Review Board (IRB); the IRB number was 07-29-16-016834. All participants agreed to participate in this study and signed an agreement approved by Walden's IRB. Pseudonyms for all participants and the university are used throughout the document to protect confidentiality. Neither researcher had a conflict of interest with any of the participants or the participating university.
Methods
The case study method as described by Yin (2014) was adapted for this study. Participants were purposefully chosen from two graduate courses at a midsized public university in the United States. One instructor and three students were selected to participate. Pseudonyms have been used to protect the identity of participants. Two student participants were between 20 and 30 years old; one was between 55 and 65 years old. Teresa was completing a Master of Science in Education and was enrolled in one of the two courses; she worked in marketing. Vanessa was completing a Doctor of Philosophy degree in Education and was enrolled in both courses; she had previously worked in the legal profession. Cindy was working on a Doctor of Education degree and was enrolled in both courses; she worked in fire safety, developing educational materials for firefighters. Dr. Jones taught both classes. She was a respected member of an educational technology professional association and had received several awards for creativity in her teaching. She had also taught both courses previously.
Setting
ITEC 3520 was taught online using the learning management system (LMS) Canvas in the spring of 2016. The purpose of this course was to familiarize students with the theoretical frameworks necessary to critically evaluate and create visual depictions of information. The course ran for 15 weeks and included the following topics: (a) visual literacy, (b) learning theories, (c) instructional design, (d) instructional technology, and (e) information presentation. Students began each week by watching a video created by Dr. Jones. Students would then read any texts assigned for the week and participate in weekly asynchronous online discussions.
ITEC 3550 was also taught online using Canvas in the spring of 2016. The purpose of this course was to acquaint students with the practices, software, and applications used to create, operate, and develop multimedia presentations for educational purposes. The course ran for 15 weeks and included hands-on activities to help students practice and apply multimedia design principles.
Data collection and analysis
Data were collected from several sources, including individual interviews with students enrolled in the courses, an interview with the instructor of the courses, transcripts of asynchronous online discussions from both courses, and other materials related to the courses.
All student interviews were conducted on Zoom (https://zoom.us/) following a student interview protocol (Creswell, 2007). The interviews were then transferred to a private YouTube channel that allowed the interviews to be transcribed using the closed captioning feature. After transcribing the interviews, we read the transcriptions while listening to the interviews to make sure that there were no errors. The questions in the interviews varied slightly from those on the student interview protocol because clarification about something that was said in a previous answer was sometimes needed.
Dr. Jones was interviewed on 3 January 2017, following an instructor interview protocol. The interview was conducted using Zoom and then uploaded to a private YouTube channel, and the closed captioning feature of YouTube was used to transcribe it. After the interview was transcribed, we listened to the interview while reading the transcription to ensure that errors were fixed. The transcription was then put into an Access database. Research questions and interview questions were aligned with the componential model of creativity.
Dr. Jones also shared copies of twenty-nine asynchronous online discussions from ITEC 3520 and ITEC 3550. The online discussions were entered into an Access database for easier coding during data analysis.
Cleaning up the data (stage 1)

A two-stage method of data analysis was used: the first stage was cleaning up the data, and the second was finding patterns. First, the asynchronous online discussions were divided into prompts and threads. A prompt was the initial question or statement used to ignite the discussion. A thread was one student's response, either to the prompt or to another student's thread. The interview was coded by giving each new response from the person being interviewed an initial code and a comment code.
Second, the prompts and threads were given an initial code based on the componential model of creativity (Amabile, 1983, 1988; Amabile & Pillemer, 2012): (a) domain-relevant skill, (b) creativity-relevant process, or (c) task motivation. Any discussion thread that concentrated mainly on aiding an individual in gaining knowledge of or expertise in the course's content was labeled a domain-relevant skill. Any discussion thread that focused mainly on helping a student learn a heuristic for creating something (Amabile, 1983, 1988; Amabile & Pillemer, 2012), define a problem, gather information, organize information, combine concepts, generate ideas, evaluate ideas, implement a solution, or monitor a solution was labeled a creativity-relevant skill (Mumford, Medeiros, & Partlow, 2012). Finally, any discussion thread that concentrated mainly on praising, agreeing with, critiquing, or answering a student's post was labeled task motivation. The codes for task motivation were based on a study by Karakaya and Demirkan (2015) that described environments that helped individuals be creative in digital environments.
Next, domain-relevant skills were subcategorized by a code showing the source of the domain knowledge; this subcategorization was done to help answer RQ2. The three source codes used were (a) textbook, (b) real world, and (c) additional source. The initial codes for creativity-relevant processes and task motivation were further categorized by type of comment. There were five comment codes for creativity-relevant processes: (a) heuristic, (b) openness, (c) suspending judgement, (d) broad categories, and (e) breaking patterns; these were based on the types of creativity-relevant processes described by Amabile (1983, 1988) and Amabile and Pillemer (2012). There were four comment codes for task motivation: (a) praise, (b) critique, (c) answering, and (d) agreeing. Since the social environment plays an important role in task motivation (Amabile, 1988), these codes were based on a study of what enables students to be creative in digital environments done by Karakaya and Demirkan (2015). Finally, all prompts and threads received a type-of-response code: (a) student feedback, (b) teacher feedback, or (c) original response.
The same coding process was used for prompts, except that the type of response was left blank because that information was redundant. See Table 1 for all the codes used in the study.
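As a purely illustrative restatement of Table 1 (coding was actually done in an Access database), the sketch below expresses the code book as a Python data structure and tags one shortened example thread the way the two-stage procedure describes.

```python
# Illustrative code book for the two-stage analysis (not the study's software).
CODEBOOK = {
    "initial": ["domain-relevant skill", "creativity-relevant process", "task motivation"],
    "source (domain-relevant only)": ["textbook", "real world", "additional source"],
    "comment (creativity-relevant)": ["heuristic", "openness", "suspending judgement",
                                      "broad categories", "breaking patterns"],
    "comment (task motivation)": ["praise", "critique", "answering", "agreeing"],
    "response type": ["student feedback", "teacher feedback", "original response"],
}

# One hypothetical coded thread, shortened from a quote reported later.
example_thread = {
    "text": "Great find on the Canada vs. US copyright resource!",
    "initial_code": "task motivation",
    "comment_code": "praise",
    "response_type": "teacher feedback",
}

# Simple sanity checks that the assigned codes exist in the code book.
assert example_thread["initial_code"] in CODEBOOK["initial"]
assert example_thread["comment_code"] in CODEBOOK["comment (task motivation)"]
assert example_thread["response_type"] in CODEBOOK["response type"]
```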
Finding patterns (stage 2)
Using Yin's (2014) explanation-building process, four themes were uncovered: (a) heuristics, (b) openness/suspending judgement, (c) agreeing/praise, and (d) answering/critiquing. One theme that emerged was that the discussions helped students develop heuristics for solving problems. These heuristics might be general, or they might be specific to solving one problem or evaluating one solution to a problem. One specific heuristic that was taught was brainstorming; one chapter in the course textbook was devoted to teaching students how to brainstorm. Another theme that emerged was openness/suspending judgement. Dr. Jones developed openness and the ability to suspend judgment by asking open-ended questions; the term that interviewees used for openness and suspending judgement was flexibility. Both agreeing and praising occurred frequently in the asynchronous online discussions. Students were quick to tell their peers that they had done an excellent job or that they concurred with an answer to a given prompt. Both students and Dr. Jones answered questions when asked; Dr. Jones would answer questions asked of other students if she had knowledge that was not available to students in the class. Students were slow to give negative feedback on the work of their peers, while Dr. Jones balanced her praise with negative feedback designed to help students improve their final projects. See Table 2 for how frequently comments related to each component of Amabile's (1983, 1988; Amabile & Pillemer, 2012) model of creativity were made.
Results
The results of this study are presented in relation to the research questions. During data analysis, the data received codes using labels developed from the componential model of creativity (Amabile, 1983, 1988; Amabile & Pillemer, 2012), and answers to the research questions emerged from the explanations built from the data. The key finding for the first question was that asynchronous online discussions can help students become more creative by helping them gain domain-relevant skills and creativity-relevant processes.
Research Question 1a: How do instructor prompts reflect Amabile's componential model of creativity?
Acting as the heuristic for developing a specific product

Sometimes the prompts served to teach creativity-relevant processes by acting as the heuristic for developing a specific product. One instructor prompt that helped to develop a creativity-relevant process by serving as a heuristic came from week 6 of ITEC 3550: … Explain the context (class/training, audience description, etc.), type of graphic (refer to C&M Table 4.1), and how you expect the image to be used. Remember the graphic you create does not need to be perfect or high quality. It does, however, need to adhere to copyright law and should be attached or embedded in your reply.
Providing students with heuristics, instead of systematic guidelines, was valued by students, as this quote from the interview with Teresa demonstrates: "… I think that allowed us some flexibility and we were able to be a little more creative on how we formulated our responses."

Teaching a heuristic that could be used on a variety of projects

At other times, the prompts served to teach creativity-relevant processes by teaching a heuristic that could be used on many different projects. Two examples of this type of prompt are in Table 3.
Prompts to develop openness
The prompts also served to help students develop the creativity-relevant process of openness as this quote from Vanessa shows: I mean, and that's been true of many of my courses when you go out and you actually find real world examples or other academic texts even that are related to the topic and again going back to that discussion with your peers where your able to dissect information, you know, other people bring in. You're able to really get a much broader understanding that in some ways also a more in-depth understanding.
This quote from Vanessa also shows that the prompts helped students to develop openness: I think the main thing I got from the discussions was all the different experiences from peers because they were all coming from different backgrounds, from different areas, and so there was a very diverse way of thinking, and so that was kind of interesting because they were able to really help me think of things that I probably never would have thought of with my own experiences.
Prompts to develop domain-relevant skills
The prompts also served to help students develop domain-relevant skills. This was most frequently done by asking students to apply knowledge gleaned from the course textbook, from teacher-made videos, or from another source.

Research Question 1b: How does student-to-instructor interaction reflect the different components of Amabile's componential model of creativity?

Gaining domain-relevant skills by correcting student misunderstandings

The interactions between students and instructor helped students gain domain-relevant skills. Sometimes the interaction between students and instructor in the asynchronous online discussions helped students gain domain-relevant skills by correcting student misunderstandings or providing more information, as this interaction between Dr. Jones and Cindy in week 2 of ITEC 3520 demonstrates. Cindy wrote: "I looked at Popplet, but it is a MAC program, and I can't run that. I will look for a PC tool." Dr. Jones responded, "Popplet is available to use in any browser on Mac or PC. It can also be downloaded as an iOS app. If you go to the website http://www.popplet.com and click the 'try it out' button, you can experiment and/or click the 'sign up' button in the upper right corner. Signing up lets you save and share your popplets." Another example of when Dr. Jones provided additional information occurred in this exchange between Dr. Jones and Jack in week 1 of ITEC 3520. Jack wrote: I think one of the biggest controversies in college sports is the use of certain symbols that may be "offensive" to a particular group, especially Native American symbols such as the Seminole Indian (which the tribe wholly supports) and a school like the University of North Dakota.
Dr. Jones responded: The mascot issue has long fascinated me, primarily related to my knowledge of the Seminole Tribe support. When UND was first discussing changes, of course FSU came up, and I was surprised at how many active and passive voices in the conversation did not know about the relationship.
Gaining domain-relevant skills by answering student questions
Sometimes the interaction between students and instructor increased domain-relevant skills by answering questions that students brought up during the discussion, as this exchange between Vanessa and Dr. Jones during week 5 of ITEC 3520 shows. Vanessa wrote: I was actually thinking the same thing … It seems that many of the logos that we have looked at are trying to explicitly or implicitly tell the viewer something about the company, organization, or product through the visual aspect of the logo. I am wondering what that would be in these two cases, and really, in the case of a lot of the car company logos. You brought up a great point … Dr. Jones responded, "There are three ellipses visible in the company's logo. Each ellipse represents the heart of the customer, the heart of the product and the heart of technological progress." Another instance when Dr. Jones provided additional information can be seen in this exchange between Adam, Jan, and Dr. Jones during week 4 of ITEC 3550.
Adam wrote, "So, can we make a backup so long as we don't ever share it with someone else?" Jan replied, "That's a good question, Adam. I would think so if it was still only one individual using the material but going from paper to digital makes me wonder." Dr. Jones responded: I would have to dig for it, but there was a ruling that says if you own the physical copy of media (movie, song, etc.), you are entitled to one digital copy. This means that you could legally use an application like Handbrake to "rip" your favorite Disney films and store these on a personal device. However, you cannot distribute that digital copy and if you ever lose or sell the physical copy, you must delete the digital copy. As for iTunes or other digital media sellers, system backups are usually excluded from consideration. In other words, if you use Time Capsule on a Mac or a service for PC, that backed up copy isn't accessible except in the instance to restore a system. That said, if you lose your digital purchase, you can re-download it from the purchasing company. Some make it easier than others, but you can usually get it back.
Gaining domain-relevant skills through inclusion of subject matter expertise
Sometimes the interaction between students and instructor increased domain-relevant skills because the teacher added information that was not contained in the textbook, teacher-made videos, or additional readings, as this response by Dr. Jones in week 13 of ITEC 3550 shows: "The point about the company being larger, with locations in multiple states, upon looking at the website makes me wonder if this is a case where the local franchise is not provided with stock marketing materials." Teresa valued the domain-relevant skills that she gained from her interactions with Dr. Jones, as this quote shows: "I learned a lot more than I thought I was going to learn. I think one of the things that caught me by surprise is designing a logo." Vanessa also valued the additional knowledge she gained from her interactions with Dr. Jones: "Yeah, I mean a lot of times, she would pop in and give us sort of directed information based on the discussions that were going on or questions that she saw popping up, so it would be useful particularly if we were having trouble with technology or finding resources or what not. It would be useful in those cases."
Increasing task motivation and social environment
The interaction between students and instructor also served to increase task motivation, one way being the provision of quality feedback. The most common type of feedback given by Dr. Jones was praise, as this quote from week 1 of ITEC 3520 illustrates: "Think your first observation really illustrates the role of the album cover in conveying a deeper message about the band. Nice choice!" This quote from week 4 of ITEC 3550 is another example: "Great find on the Canada vs. US copyright resource! I've had a similar conversation with UK faculty over 'crown copyright.' Fascinating stuff when you look at country/cultural guidelines!" This quote from Vanessa shows that students appreciated the positive feedback they received from Dr. Jones: "Oh, yeah, absolutely. Dr. Jones always has a really good attitude and it's really sort of a cheerleader, in sort of a way, you know, to help everybody stay encouraged and not get frustrated or what not and so." The same comment demonstrates that student-to-teacher interaction also helped to create a social environment conducive to creativity. In sum, the student-to-instructor interaction served five functions. First, it enabled Dr. Jones to correct student misunderstandings and provide additional information not found in the textbook or other course materials. Second, it served as a way for her to answer questions that students had as a result of the discussion. Third, it helped her to encourage students to keep working on their projects. Fourth, it allowed her to create a social environment conducive to creativity. Finally, it enabled her to increase task motivation by providing quality critique.
Research Question 1c: How does student-to-student interaction reflect the different components of Amabile's componential model of creativity?
Adding information from sources other than course materials
Student-to-student interaction increased domain-relevant skills by adding information from sources other than the materials provided by the instructor, as this quote from Teresa from week 1 of ITEC 3520 demonstrates: I find it interesting that in addition to the University of CS logo, the poster includes the logos of all their competitors on the dates USC plays them. First of all, as a public relations and communications professional for University of CS, I can tell you that the University of CS signature is not readily available for use by others without permission. It leads me to wonder if these logos are used within legal guidelines. Also, if another entity is not a sponsor of the event or publication, we typically do not want to "share the stage" with other entities. I think this element of the poster is unusual.
Another example of student-to-student interaction increasing domain-relevant skills by adding information from additional sources can be seen in this exchange between Cindy and Rachel in week 2 of ITEC 3520. Cindy wrote: I do think that visuals/images can tell more about an individual's understanding and perception than a list of words. Your example of a process flow chart would be great to see how members of a group are thinking and where misconceptions lie. I think that going through a process like this would definitely benefit all learning styles --auditory by listening to someone talk, visual by seeing the information in a graphic organizer or in the form of images and kinesthetic by writing or drawing. I hope I've answered your question. Let me know if I missed it.
Rachel responded: That is so true, that the visuals give us a direct link into the student's schema regarding content learning/learned. This study suggests a positive benefit to the approach.
Increasing task motivation through positive feedback
Student-to-student interaction also increased task motivation by providing positive feedback. Vanessa found student-to-student interaction beneficial in helping her maintain task motivation: Yeah, I think that for the most part they were very positive. Any type of criticism I got generally was sort of very constructive and not overly negative, and yeah, I mean, that certainly anytime you get positive feedback or even constructive criticism that it encourages you to continue what you're doing and sort of take more risks and whatnot because you seem to be on the right track and the information you're getting is useful.
Research Question 2.
How do the materials used in asynchronous online courses promote creativity per Amabile's componential model of creativity?
Textbooks
The textbooks played a critical role in helping students develop domain-relevant skills, as these quotes from the interviews demonstrate. Teresa described the textbook this way: "… White Space Is Not Your Enemy was a lot of review for me because a lot of it is what I do on a daily basis, but I love that book because it really did a great job. It was very direct, you know, and explained things very well and very clearly."
Vanessa described the textbooks this way: "That textbook I remember quite a bit and was really useful." Cindy described the textbook this way when describing what the asynchronous online discussions did for her: "I was better off using the text." Dr. Jones also believed that the textbooks were useful in helping students develop domain-relevant knowledge, although she thought the other materials she brought in from journals and other sources were just as valuable or more so; she also drew some of the discussion prompts from the textbooks, as this quote shows: "I don't like to rely on textbooks. I'd rather do selected readings cause I don't want to make a student buy a book; however, that book I'm in love with and it's like 15 bucks on Amazon … I've never had a student complain about it. In fact, my course evaluations almost always mentioned how awesome the book is because it's easy to read. It's easy to follow. It's written from a very practical standpoint with references back to research and practices and historical approaches to design so that I like to keep it. Some of my discussion questions actually come from the book, from the end of the chapters and that's one of the other reasons that I as an instructor like it."
Additionally, the textbook helped students develop creativity-relevant processes: one entire chapter was devoted to learning how to brainstorm ideas. Thus, the textbook supported the development of both domain-relevant skills and creativity-relevant processes.
Teacher-made videos
Teacher-made videos also played a critical role in promoting creativity.
Teresa described the role that teacher-made videos played in her learning this way: "Dr. Jones was amazing at that. Really that's one of the things that I take away from this program. I really want to do that in my own classes to be able to give that same kind of structure in my classes." Vanessa likewise described the teacher-made videos as valuable. Dr. Jones also believed that her teacher-made videos were crucial to the development of creativity in her classes, as this quote shows: "At the very least they provide a huge impact. That practice actually won an award from the Association for Educational Communications and Technology as a distance education best practice." The teacher-made videos played a significant role in helping students: they served to sum up the previous week's material and introduce the upcoming material.
Cindy's assessment of the asynchronous online discussions differed from the views of Teresa, Vanessa, and Dr. Jones: she found little value in them. This quote expresses her feelings about the discussions: "… with Dr. Jones, it got way too long and too many multiple comments going back and forth. It was like going on Facebook in a way you had a political blog going." Teresa, however, saw this lack of specificity as adding flexibility: "I think that allowed us some flexibility and we were able to be a little more creative on how we formulated our responses."
Discussion and conclusions
The prompts in the case-study courses reflected the components of Amabile's componential model of creativity (Amabile, 1983, 1988; Amabile & Pillemer, 2012). Two types of prompts used in the case-study courses fell into what de Noyelles et al. (2014) described as problem-based prompts and project-based prompts. Problem-based prompts ask participants to apply their knowledge by generating a solution to a problem, while project-based prompts ask participants to solve a problem by developing a project (de Noyelles et al., 2014).
In solving the problems presented in the problem-based prompts and in completing the projects in the project-based prompts, students applied domain-relevant skills, that is, factual knowledge and expertise (Amabile, 1983, 1988; Amabile & Pillemer, 2012). Problem-based prompts demonstrate best practices in teaching online courses by acting as triggering events (Akyol & Garrison, 2011) that spur participants to become cognitively involved in the class by applying domain-relevant skills learned from the textbook, teacher-created videos, or additional teacher-provided sources to real-world issues.
In solving the problems presented in the problem-based prompts and in completing the projects in the project-based prompts, participants also applied creativity-relevant processes, the processes that help with the generation of ideas (Amabile, 1983, 1988; Amabile & Pillemer, 2012). According to Amabile, one type of creativity-relevant process is a heuristic. Creativity-relevant processes were reflected in the asynchronous online discussions in the case-study courses when project-based prompts provided a heuristic for completing a specific assignment or helped to teach an all-purpose heuristic for generating ideas, such as brainstorming.
This case study also showed that the student-to-instructor interaction in asynchronous online discussions can promote domain-relevant skills, encourage creativity-relevant processes, and increase task motivation. The student-to-instructor interaction in asynchronous online discussions can demonstrate best practices in teaching online courses by allowing students to integrate ideas (Akyol & Garrison, 2011). The integration of ideas in the asynchronous online discussions in the case-study courses helped students gain domain-relevant skills by allowing them to add the teacher's perspectives to their schemata of the topics presented in the textbook, teacher-created videos, or additional teacher-provided sources. de Noyelles et al. (2014) stated that domain-relevant skills are developed when instructors question or challenge student solutions to problem-based prompts. Student-to-instructor interactions in the asynchronous online discussions in the case-study courses helped students solve problems by allowing the instructor to question and challenge student solutions; these interactions also helped students gain domain-relevant skills by allowing the instructor to answer questions. Student-to-instructor interactions can further help students develop creativity-relevant processes by encouraging them to adopt a cognitive style that is conducive to creativity. Per Amabile (1988), such a cognitive style has the following characteristics: (a) exploring new cognitive pathways, (b) keeping response options open for as long as possible, (c) suspending judgement, (d) using broad categories to store information, and (e) breaking out of performance patterns. In the student-to-instructor exchanges that took place in the case-study courses, the instructor encouraged students to explore new cognitive pathways, to keep options open for as long as possible, and to suspend judgment.
Student-to-instructor interactions in asynchronous online discussions can also increase task motivation. Karakaya and Demirkan (2015) discovered that a high frequency of feedback from evaluators can increase task motivation, and student-to-instructor interactions give instructors many opportunities to provide feedback that encourages students to keep working on solutions to problems. Additionally, such interactions allow students to seek help; Kamdar and Mueller (2011) suggested that help seeking is an intermediate variable between intrinsic motivation and creativity, so seeking help from the instructor can help students maintain task motivation. In addition to increasing task motivation, student-to-instructor interactions in asynchronous online discussions can increase domain-relevant skills. The componential model of creativity (Amabile, 1983, 1988; Amabile & Pillemer, 2012) presupposes a feedback loop that increases domain-relevant skills (Amabile, 1983). In the case-study courses, Dr. Jones frequently answered questions that students asked or added information that was needed to help students understand the material in the textbook.
This case study showed that the student-to-student interaction in asynchronous online discussions could promote domain-relevant skills, encourage creativity-relevant processes, and increase task motivation. The student-to-student interaction facilitated creativity in much the same way that the student-to-instructor interaction facilitated creativity.
As with the student-to-instructor interaction in the asynchronous online discussions, the student-to-student interaction in asynchronous online discussions can demonstrate best practices in teaching online courses by allowing students to integrate ideas (Akyol & Garrison, 2011). The mixing of concepts in the asynchronous online discussions in the case-study courses assisted students in gaining domain-relevant skills by allowing students to add other students' perspectives to their schemata of the topics being presented in the textbook, teacher-created videos, or additional teacher-provided sources. As with the student-to-instructor interaction in the asynchronous online discussions in the case-study courses, the student-to-student interaction can help students to develop creativity-relevant processes by stimulating students to adopt a cognitive style that is conducive to creativity. Per Amabile (1988), a cognitive style that is favorable to creativity has the following characteristics: (a) exploring new cognitive pathways, (b) keeping response options open for as long as possible, (c) suspending judgement, (d) using broad categories to store information, and (e) breaking out of performance patterns. In the student-to-student exchanges that took place in the case-study courses, the students encouraged their peers to examine new cognitive pathways, to keep possibilities open for as long as possible, and to suspend judgment. As with the student-to-instructor interaction in the asynchronous online discussions in the case-study courses, the student-to-student interactions in asynchronous online discussions can increase task motivation by providing students an opportunity to give peers constructive feedback. Per Amabile (1983), constructive feedback gives positive recognition for creative work and encourages the recipient to consider ideas. Additionally, constructive feedback avoids making comments that imply that the recipient is incompetent. Karakaya and Demirkan (2015) discovered that a high frequency of feedback in an asynchronous online discussion from evaluators leads to an increase in task motivation. In the case-study courses, student-to-student interactions in asynchronous online discussions allowed students to give peers positive, frequent feedback that stimulated the recipients to keep working on solutions to the problems presented in the course.
As with the student-to-instructor interaction in the asynchronous online discussions in the case-study courses, the student-to-student interactions in asynchronous online discussions can increase domain-relevant skills. The componential model of creativity (Amabile, 1983, 1988; Amabile & Pillemer, 2012) presupposes a feedback loop that increases domain-relevant skills (Amabile, 1983). In the case-study courses, students frequently answered questions that other students asked or added new information that was needed to help their peers understand the material in the textbook.
Research Question 2.
How do the materials used in asynchronous online courses promote creativity per Amabile's componential model of creativity?
Garrison, Cleveland-Innes, and Fung (2010); Shea and Bidjerano (2009); and Sheridan and Kelly (2010) have noted that the decisions that teachers make in designing and selecting the materials for courses have a profound impact on what students take away from the course.
The textbook in an asynchronous online course can play a vital role in helping students to gain domain-relevant skills, to develop creativity-relevant processes, and to retain task motivation. The textbook in the case-study courses helped students to gain domain-relevant skills by serving as a resource for the basic information that students would need to begin discussing the prompt provided by the instructor. In the asynchronous online discussions in the case-study courses, the instructor often took the discussion prompts from the end of chapters in the textbook that students were reading for the courses. These questions became problem-based prompts and project-based prompts that de Noyelles et al. (2014) said could assist students in becoming cognitively involved in discussions. The textbook in the case-study courses also helped students to develop heuristics by including a chapter on brainstorming. Finally, the textbook in the case-study courses helped students to retain task motivation by explaining why the topic being discussed was important.
Teacher-made videos can play a key role in promoting creativity in asynchronous online courses. Teacher-made videos that include audio feedback can increase teacher presence in asynchronous online courses (de Noyelles et al., 2014), and this increased teacher presence can help students to gain domain-relevant skills. Teacher-made videos can serve to promote a social environment conducive to creativity by providing the instructors with another avenue for giving feedback. Additionally, teacher-made videos enable instructors to answer questions that come up during asynchronous online discussions. Finally, teacher-made videos provide a venue for giving direct instruction on creativity-relevant processes, such as brainstorming and providing feedback. Additional teacher-provided resources can also play a key role in promoting creativity. They help students to gain domain-relevant skills by serving as a resource for the basic information that students need to begin discussing the prompt and creating solutions to the problems provided by an instructor in an asynchronous online discussion. Like the teacher-made videos, the additional teacher-provided resources enable instructors to answer questions that come up during the asynchronous online discussions and allow the instructor to provide feedback on student work.
High school instructors who teach their courses online can use this information to help their students become more creative. They can do this by adding teacher-made videos and by making sure that the prompts that are used in discussions help students to become more creative.
Conclusions are based on the findings and limitations of this study. The recommendations include developing studies that occur closer to the completion of the course, adding more participants, conducting in person interviews, and examining courses in different domains.
Future research should be conducted in which the interviews either take place concurrently with the course or immediately after the course is completed. This would ensure that interviewees have not forgotten valuable information that might explain how creativity was being expressed during the course.
Since creativity is a social construct (Moran, 2010), additional research on the way that asynchronous online discussions enhance creativity needs to be done with more participants, especially minority students. This will help to ensure that the views of minority students are included and that all potential viewpoints about how creativity is expressed are represented.
More research on the way that asynchronous online discussions enhance creativity needs to be done where participants are interviewed in person. Not all participants in this study were equally adept at using Zoom. In-person interviews would ensure that technology is not an obstacle to the expression of relevant information.
Since domain-relevant skills are important in an environment conducive to creativity (Amabile, 1983, 1988; Amabile & Pillemer, 2012), further research on the way that asynchronous online discussions enhance creativity in courses in multiple domains needs to be undertaken. Asynchronous online discussions in online science courses, math courses, and English courses may display creativity differently than the courses examined in this study did.
Limitations
Limitations of this study came from the research design of the study and real-world limitations of the setting and demographics of the study. Limitations included data that was filtered through the lens of the interviewee, interviewees who were not all able to express themselves equally, interviews conducted via video conferencing, a limited number and type of participants, and the examination of only one type of course.
Interviews provided information that was filtered through the lens of the interviewee (Merriam, 1998). The asynchronous online courses for this case study took place about six to nine months before the time that the participants were interviewed. Sometimes the participants had difficulty remembering what took place during the courses. This difficulty might have affected the accuracy of the answers that interviewees provided.
Not all interviewees were equally articulate (Merriam, 1998). Some participants could elaborate on the topics being asked about in the interview protocol while others had difficulty elaborating on the discussion prompts. Less articulate students needed encouragement and prompting. This encouragement and prompting may have biased their responses. While encouraging and prompting was a limitation, it was necessary to help some interviewees develop responses that were more than one or two words long. As a result, the views of more articulate interviewees may have been weighted more heavily than the views of less articulate interviewees, skewing the results.
The interviews were conducted using Zoom. While Zoom allowed for both video and audio, it was different from having both the interviewer and the interviewee in the same location. While using Zoom was a limitation in this study, its use was justified because the interviewer and interviewees were over 1000 miles apart and the expense of travel would have been excessive. Additionally, not all interviewees were equally adept at using Zoom. Thus, the views of those who were adept with Zoom may have been given greater weight than those who were less adept, skewing the data.
The number of participants was limited. The small number of participants was justified because, as Patton (2002) stated, there is no fixed number of participants needed for a case study, and multiple requests were made to recruit more student participants. While the number of students was small, it was large enough to obtain rich information.
There was only one minority student in the case-study classes. While not ideal, this was justified because a case study examines a real-world event. In the case-study classes, only one minority student was enrolled, and he was unwilling to be interviewed. Minority students might view the activities that took place during the class differently than the White students in the courses did.
The courses in this study were both courses related to creating media for instructional purposes. Creativity might have been displayed differently if the courses studied had been science, math, or English. | 2019-06-12T15:49:35.484Z | 2019-06-12T00:00:00.000 | {
"year": 2019,
"sha1": "4697ad6a05f2480081045f6f10d2e14a5848d967",
"oa_license": "CCBY",
"oa_url": "https://educationaltechnologyjournal.springeropen.com/track/pdf/10.1186/s41239-019-0150-5",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "895ece3115b8231a16ef62fde336674c9f9b92f4",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
246624783 | pes2o/s2orc | v3-fos-license | Social management of the elderly education system: case of Russia Gestão social do sistema educacional de idosos: o caso da Rússia Gestión social del sistema educativo de personas mayores: caso de Rusia
In the context of population ageing, it is expedient for the state to manage the national systems of elderly education. The current article studies the management issues of elderly education programs. Using the case of Russia, it addresses the elderly education projects implemented in four main forms: clubs, lecture courses, educational courses and third age universities. The specific features of each form make it optimal to perform a certain range of educational tasks. The benchmark of the elderly education practices allowed us to identify the classification attributes of the organization forms of educational programs, outline the typology of these forms and determine the scope of their application. The research results show that the form in which the project is implemented is largely determined by the type of organization on the basis of which it operates. The focus area and the main social function of the project’s parent organization significantly affect the effectiveness of educational programs. The paper substantiates that it is feasible to distribute the tasks of elderly education between various agents considering the specifics of their activities and priority forms of educational projects implementation. To do so, the research proposes a management principle for differentiating the functions of the elderly education system. The paper presents recommendations for optimizing the mechanisms for managing the elderly education system and increasing its performance, which can be of use to public authorities.
INTRODUCTION
In the context of global population aging, researchers increasingly regard the economic, social and cultural potential of the older generation as a substantial resource for modernizing various spheres of society and adapting it to new demographic realities. In the light of this, the issue of elderly education is becoming especially topical. A lack of up-to-date knowledge and social barriers faced when gaining it are the central challenges that seniors have to deal with in order to integrate and adapt to the changing environment.
Improving older people's educational level is a necessary condition for renewing their potential. A comprehensive approach should be employed to resolve the issue of mass education for the elderly. In particular, there is a need for specialized educational programs and a system of institutions focused exclusively on educating people of the third age. It also seems expedient to attract resources and capabilities of the existing institutions. For example, in the USA, the public school system is actively involved in tackling this problem. Public schools train over 23 million Americans aged 50+ every year (Lapshina, 2007).
An alternative structure capable of taking on a significant portion of the burden associated with training the elderly is a set of non-formal education projects. In Russia, the elderly education programs have been implemented since the 1990s. Currently, there are more than 250 ongoing projects that enroll about 100,000 students every year (Sorokin, 2020). Throughout its history, elderly education in Russia has evolved independently, and at the moment it is not directly regulated by the state. Therefore, in order to realize its potential within the social management practice, it is necessary to formulate management principles, technologies and methods that the state can use to exert a centralized influence on this system. Thus, the purpose of the research is to theoretically substantiate whether it is feasible to implement the principle of distribution of the elderly education system's functions.
LITERATURE REVIEW
Until recently, research publications have looked at the accelerating aging of the world's population as an indisputable achievement of human civilization. It was attributed to an increase in the standard of living, medical advancements, the falling number of victims in armed conflicts, etc. Nowadays, there is no consensus about the nature of the demographic megatrend, as well as its results. For instance, Eakin and Witten (2018) and Taziev (2015) highlight social and economic risks associated with the transformations of society's age structure, and view population aging as one of the global challenges of our time.
The global trends in population aging are echoed in the Russian society. In 2019, the share of Russia's population over 60 years was 21% (Russia in Numbers, 2019). Transformations of the age structure place additional burden on social security, the pension system and healthcare. An increase in the demographic burden shouldered by working citizens is gradually leading to dysfunctions of economic institutions.
In recent decades, the Russian government has taken a number of actions to counter these negative consequences. The measures are mainly associated with raising taxes, attracting untapped labor resources (the unemployed, women, etc.), and stimulating the migration inflow. The most radical measure was the increase in the retirement age. The effectiveness of the mechanisms currently implemented in Russia is assessed by Gelman (2019), Chahrak and Ugryniuk (2016), Savinov, Bistyaykina and Solovyova (2018), and Kalyugina et al. (2018). The researchers conclude that all the existing strategies aimed at adapting society to demographic shifts are designed not so much to handle urgent social problems as to delay dealing with them. The scholars stress that it is critical to make sure that the sets of measures for adapting society to the new demographic realities cover actions on updating and realizing seniors' potential. At that, educational programs for the elderly occupy a crucial role in this process.
The problems and development trends in elderly education in Russia are analyzed by Freydkina (2017), Ambarova and Zborovsky (2019), Smirnova (2008), Mosina and Gonezhuk (2019), and Kononygina (2006). They underline a steady increase in the number of specialized educational projects and their participants. Researchers positively assess the prospects for the development of the Russian elderly education system. At the same time, there is a range of factors impeding the development of educational practices for the senior citizens, such as a lack of the government's attention to the problems of elderly education, insecure funding, and a significant effect of negative stereotypes on seniors' ability to learn (Elyutina & Chekanova, 2003).
Despite the fact that most scientists support the idea of attracting the resources of the elderly education projects, the phenomenon of elderly education has not yet been studied from the perspective of sociology of management. There are extremely scarce and fragmented works on elderly education systems as objects of managerial influence in Russia. Lack of necessary empirical data poses a major barrier. For the same reason, it seems impossible to draw up an integral strategy for managing the education of senior citizens across the country. The abovementioned circumstances emphasize the relevance of studying various aspects of the elderly education system management in Russia.
METHODOLOGY
The paper is based on the results of the author's studies conducted in the period from 2006 to 2019.
The first research aimed to explore the activities of elderly education projects implemented in Russia. The data for the research were derived from the interviews with organizers and teachers engaged in the elderly education projects. In total, 254 organizations from 193 localities were examined. The study scrutinized the organizational forms of the educational projects, their focus areas, funding mechanisms, and the specifics of studying at an advanced age. The primary purpose was to recognize and assess the potential of the elderly education system as a tool for managing urgent problems in society. The empirical data were processed using SPSS software package.
The second research investigated the opportunities and prospects for managing the national network of education for older adults in Russia. Through online survey, we interviewed 15 experts working in related fields: gerontologists, doctors, demographers, economists, sociologists, psychologists, physiologists, teachers, and social workers. The experts were inhabitants of large Russian cities (Moscow, Saint Petersburg, Novosibirsk, Smolensk, Omsk, Kazan, and Tyumen). The sample was formed using the "snowball" method. The respondents were asked about the special features of the national elderly education system as a potential object of management, current goals of elderly education management, and possible management factors.
To classify the organizational forms of the elderly education projects, the following attributes were used: self-identification (usually reflected in the name of educational projects); orientation (education, rehabilitation, leisure, etc.); period of training; training format; requirements for the teaching staff's qualifications; types of institutions as venues and participants of educational projects and events. The organizational forms of the elderly education projects implemented in Russia fall into four categories: clubs, lecture courses, educational courses and third age universities.
RESULTS
Among the factors exerting the most profound effect on the Russian system of elderly education are the non-formal nature of educational programs, insecure funding of projects and lack of outer influence that would integrate and coordinate the development process.
Currently, the array of elderly education practices functions according to the principle of a social network (Ferguson, 2017). Network-based organization allows being independent from the external environment and developing sustainably. The nodes of the network structure are educational projects that raise funds independently, establish goals and objectives of their activities, as well as choose the optimal format for solving educational problems. At that, the forms of project implementation are rather diverse.

Clubs. This is a fairly popular form of interaction between older people to spend leisure time together and meet various needs (including education). Typically, clubs are organized on the basis of social security institutions, cultural centers, and public organizations. At the same time, education services are rarely the main specialization of clubs. Educational programs (or even particular training events such as lectures, round-table discussions, and guided tours) are mostly episodic, of unspecified duration and organized according to the preferences of the club members.
Teacher responsibilities are taken on by employees of the institutions on the basis of which clubs operate, as well as by club members themselves. Qualified specialists (doctors, psychologists, lawyers and social workers), who are not usually professional teachers, are only rarely engaged to provide training. Classes are usually held in informal or semi-formal settings in the form of conversation, excursion, or consultation. Such freedom and flexibility in organizing training activities, as well as their informal nature, fit well into the ideology of the club movement. Studying in a club weakens the psychological barriers that prevent the elderly from continuing their education. Here, studying represents a form of entertainment and leisure; however, clubs play a critical part in expanding the national elderly education network. They are agents for primary reintegration of senior citizens into the domain of education. Club study experience encourages individuals to continue their education through other organizational forms of training. It is typical of such clubs to hold entertainment events complemented by educational activities (round-table discussions, lectures, seminars) and meetings with experts (the pension fund specialists, notaries, doctors, etc.) (Sorokin, 2011).
Lecture courses. According to this organizational form of educational activities, classes for the elderly are held periodically in the predominant form of lectures or, less frequently, in the form of seminars and round-table discussions. In Russia, lecture courses are mostly hosted by cultural and social security institutions. Professional teachers or experts with experience in lecturing are involved in the educational process. As a rule, lecture courses are not reduced to a specific focus area, and therefore, experts from different fields of knowledge and practitioners whose expertise is of interest for students are invited to conduct classes. However, there are lecture courses focused on a particular subject (general and local history, art, etc.). In this case, each lecture is a self-sufficient piece of information, which makes it easier for new students to join the educational process, and allows students to attend classes in a selective manner (according to their needs and interests). This organizational form is used if an educational institution aims to attract a broad audience and reach distant cities (Shkatova & Saprykina, 2004).

Educational courses. The distinctive features of this form of elderly education program implementation are the concentration on one specific area (computer literacy, psychology, healthcare science, etc.) and the fact that classes are designed to develop practical skills within a short time (short courses). Organization of educational courses is entrusted to educational and cultural centers, whereas the teacher responsibilities are taken by their employees (professional teachers). Oftentimes, these organizations do not regard education for older adults as their main and only priority. Work with seniors mainly pursues educational goals (Voytovich, 2016). The specifics of educational courses allow students to acquire specialized knowledge, skills and abilities in a relatively short time. Therefore, such courses can be viewed as a promising basis for implementing vocational training and retraining programs for senior citizens, which may be in demand in the near future.
Third age universities. A widespread form of the educational process organization is third age universities (senior universities, universities of the golden age, higher public schools, folk faculties are included in the same category). Usually, their founders are educational or social security institutions. A significant number of third age universities in Russia are divisions of higher education institutions or cooperate with them. Founders usually opt for this organizational form if the institution's thrust is educational and teaching activities.
Third age universities demonstrate some attributes typical of higher educational institutions. In particular, in their structure there are faculties headed by deans, the study period is divided into semesters or sessions, student cards are used in some, etc. Students have the opportunity to attend several courses supervised by faculties. In the curriculum, theoretical disciplines prevail over practice-oriented courses. At the end of the study, students take a graduation test. There are a number of third age universities that succeeded in their mission and their experience and best practices can be of use when developing the elderly education system at large (Barabanov, 2007).
The research results show that the form in which a project is implemented is largely determined by the type of organization on the basis of which it operates (Table 1). As of 2019, the most common form of elderly education was the third age university (77% of projects based at higher education institutions and 53% of those at the Russian society "Znanie"). Nonprofit organizations, cultural institutions and authorities gave preference to educational courses (50%, 43% and 50% respectively). The overwhelming majority of educational projects at social security institutions were implemented in the form of clubs. No authorities named lecture courses as their preferred format, unlike cultural institutions, which practiced it in 29% of cases. The network structure of elderly education in Russia and the polymorphism of educational projects' formats predetermine the peculiarities of the management processes. To provide effective management, it is of high importance to establish goals and tasks. The avenues for the development of elderly education should obviously be synchronized with the overall situation in the country and the existing social problems. The expert survey indicated that in the Russian society there are a number of problems directly associated with older people being excluded from the educational space. Respondents highlighted several contradictions of a socio-economic and cultural nature that can be organized into three groups (or, in other words, three main challenges). Their occurrence according to the experts is given in figure 2.

Social alienation. A great number of senior citizens in Russia are disintegrated from the social environment and experience considerable difficulties when handling even simple everyday tasks. Under these circumstances, any attempts to utilize the older generation resources for resolving social problems will turn out to be ineffective. Social adaptation refers to the adjustment of an individual (a group of individuals) to the social environment, which implies interaction of both parties and step-by-step coordination of their expectations (Kovaleva, 2003). In old age, the adaptive abilities of an individual are significantly reduced due to health and economic issues, as well as the declining number of social contacts. In Russia, social alienation of the elderly is also caused by the fact that institutional approaches to providing social support for seniors are obsolete and do not meet modern requirements.
Poor professional competence of senior citizens. In the context of population aging, there is an urgent need to exploit the residual working capacity of older adults. At the same time, the lack of up-to-date professional knowledge and skills prevents senior Russians from integrating into the production and economic sphere, increases their vulnerability in the labor market, and limits their employment opportunities to low-paid unqualified jobs (Ambarova & Zborovsky, 2019).
Generation gap in the Russian society. Another urgent problem of the Russian society is to reduce the generation gap and establish intergenerational dialogue. Any strategies aimed at increasing social and economic activity of seniors but ignoring cross-generational issues will be ineffective, since in fact they will turn out to be attempts to aggressively integrate the elderly into an unfriendly environment. Such an approach will not only fail to achieve the stated goals, but will also exacerbate the already difficult relationships between different generations. The recent studies confirm that the problem of cross-generational communication is still relevant (Savinov, Bistyaykina and Solovyova, 2018).
Based on the revealed problems, it is possible to establish the main goals of elderly education in Russia and, consequently, the social guidelines for managing it: to facilitate the social adaptation of older adults; to reduce the generation gap in society; and to integrate senior citizens into the production and economic sphere.

[Figure 2. Occurrence of the challenges according to the experts: social alienation of senior citizens, 41%; poor professional competence of senior citizens, 29%; significant generation gap, 16%.]

Having studied the projects of non-formal education, we have found that all the contradictions typical of the Russian elderly education had ready-made solutions. These are mechanisms that can significantly reduce the severity of the problems. For instance, elderly education demonstrates a huge potential for improving social adaptation of the older generation. Educational programs for seniors can have both a direct effect on their adaptive abilities (through everyday life skills training) and an indirect effect, which consists in creating favorable conditions for the adaptation to social changes, the aging process and the old age (organization of leisure time, expanding the circle of friends, self-realization, etc.). Educational projects often cover various courses aimed at informing participants about the peculiarities of the youth's lifestyle and subculture, as well as programs focused on developing the elderly as subjects of intergenerational dialogue. It is also noteworthy that the elderly education system is capable of forming the space for intergenerational communication. Representatives of various generations have an opportunity to get to know each other, learn from each other or gain knowledge in the same classroom. The projects of elderly education in Russia have experience, yet limited, in helping senior citizens with professional socialization. By attending specialized courses, older people successfully master new professions and acquire the knowledge necessary for employment. These factors confirm the expediency of using the resources of the national elderly education system in order to solve urgent problems and determine the importance of studying the elderly education system as an object of social management.
At the next step of the expert survey, the respondents were asked to name measures aimed at ensuring effective management of elderly education in Russia. The respondents' answers are given in table 2.

Table 2. Measures for ensuring effective management of elderly education in Russia, as named by the experts:
1. Ensuring stable funding of educational projects (65%)
2. Focusing management on solving urgent social problems (42%)
3. Taking into account the specifics of the organization and implementation of educational programs for the older generation (35%)
4. Distributing the tasks of elderly education between various agents (organizations, institutions, etc.) (29%)
5. Applying the network approach when managing the Russian system of elderly education (25%)
6. Attracting new agents for implementing elderly education programs (15%)
7. Establishing a training system for pedagogical and management staff to work in elderly education projects (13%)
8. Forming public opinion on the social importance of education for senior citizens (12%)

Predictably, the respondents stressed the need for stable funding of educational projects (65%) and taking into account the specifics of educational programs for the older generation (35%). The experts' recommendations concerning the technologization of the management processes are of much greater interest for the development of the management mechanisms for the national elderly education system. For example, 42% of respondents underlined the importance of focusing management on solving urgent social problems; 29% noted that it was expedient to distribute the tasks of elderly education between various agents; and 25% of respondents named the necessity to apply the network approach when managing the Russian system of elderly education.
DISCUSSION
Based on the research results, we lay down a principle for differentiating the management functions of the elderly education system. Its main idea lies in increasing the efficiency through the distribution of particular educational tasks between various agents, where the objectives of the educational project fit into the general course of the parent organization (Bondaletov, 2016). For instance, courses focused on seniors' social adaptation are best implemented on the basis of social security institutions. Hence, it is expedient to differentiate the current tasks of the elderly education system management and address them to agents according to their main social functions. The principle of distribution of the functions is presented in figure 3. In figure 3, the goals of elderly education network management are integrated into blocks that were earlier referred to as social guidelines of elderly education management. Tasks for achieving them are distributed between the leading agents of elderly education (the most common founders of educational projects). There are no public authorities among the agents, since they promote elderly education mainly through sponsorship, so it seems not possible to define their role in the scheme of functions. The forms of educational projects were determined according to their compliance with the specific tasks of gerontology, their popularity with the educational agent, and the possibility of implementation. According to the principle, the functions of elderly education are differentiated in order to increase the efficiency of elderly education network management and the rational use of its resources.

CONCLUSION

The system of non-formal elderly education can bear a significant part of the burden associated with training for the elderly. Elderly education projects in Russia are implemented in four forms. According to the research results, the form in which the project is implemented is largely determined by the type of organization on the basis of which it operates. The most common form of elderly education is the third age university (77% are organized on the basis of universities, and 53% on the basis of the Russian society "Znanie"). Nonprofit organizations, cultural institutions and authorities gave preference to the format "educational courses" (50%, 43% and 50% respectively). At social security institutions, the overwhelming majority of educational projects were implemented in the form of clubs (up to 72%). No authorities named lecture courses as their preferred format, unlike cultural institutions, which practiced it in 29% of cases.
The specifics of each form make it optimal for resolving a certain range of educational issues. The results of the present study have indicated that the focus area and the main social function of the project's parent organization had a significant impact on the effectiveness of educational programs. The efficiency of the centralized management of the elderly education network in Russia can be boosted through the distribution of management functions and the implementation of specific educational programs in the most suitable format. To do so, we have developed a principle for differentiating the functions of management of the elderly education system. The recommendations obtained through the expert survey can be used to optimize the mechanisms for managing the elderly education system. | 2022-02-07T16:07:00.074Z | 2021-12-29T00:00:00.000 | {
"year": 2021,
"sha1": "58404c6973f62e571441c9db184005a2e30a487d",
"oa_license": "CCBY",
"oa_url": "https://seer.ufs.br/index.php/revtee/article/download/16687/12414",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "738f801d6ec52589aa133c4b474d190bc9b67c9e",
"s2fieldsofstudy": [
"Education",
"Sociology"
],
"extfieldsofstudy": []
} |
236646801 | pes2o/s2orc | v3-fos-license | Effect of Huatan Quyu Decoction on Patients with Cerebral Infarction
The objective of this study was to explore the clinical methods and effects of applying Huatan Quyu Decoction in the treatment of patients with cerebral infarction. The research was carried out in our hospital from November 2019 to November 2020. Eighty patients with cerebral infarction were selected as the research subjects and randomly divided into two groups: one group received conventional rehabilitation therapy (the control group), and the other group received Huatan Quyu Decoction (the experimental group). The effects of the two treatments were then compared and analyzed. Before the treatment intervention, there were no significant differences between the two groups in the scores of neurological impairment, limb motor ability and ability of daily living (P>0.05). After the treatment intervention, the CSS score of the experimental group decreased and the BI score increased, with significant differences compared with the control group, and the experimental group had higher scores for sensory and motor function. In addition, the effective rate of treatment in the experimental group was 95.00%, significantly higher than the 77.50% in the control group; the difference between the two groups was significant, and the treatment effect was better in the experimental group. The application of Huatan Quyu Decoction in the treatment of patients with cerebral infarction can effectively promote recovery and improve patients' quality of life, and its clinical application is significant.
Introduction
Cerebral infarction is common in clinical practice, has a relatively high incidence, and mostly occurs in elderly patients. After onset, the disease has a considerable impact on the patient's health and quality of life, and the disability and fatality rates are relatively high, so timely and effective treatment is necessary. From the perspective of previous clinical work, treatment mostly aims to promote recanalization of the patient's blood vessels and restore perfusion to hypoxic brain tissue, so as to reduce nerve damage and improve the prognosis [1]. According to the results of relevant research, the application of Huatan Quyu Decoction during treatment can reduce the degree of nerve damage and promote recovery. On this basis, this study applied Huatan Quyu Decoction and compared and analyzed its clinical therapeutic effects.
Objective
The objective was to carry out a comparative experiment to explore the therapeutic effect of Huatan Quyu Decoction in the treatment of patients with cerebral infarction, so as to provide a reference for clinical treatment.
Patient information
Patients treated in our hospital between November 2019 and November 2020 were selected as the study subjects. All were patients with cerebral infarction, and 80 patients were enrolled. The study was carried out as a comparative experiment, and the 80 patients were divided into a control group of 40 cases and an experimental group of 40 cases. In the experimental group, there were 21 male patients and 19 female patients; the maximum and minimum ages were 81 and 52 years, respectively, and the average age was (63.23±3.29) years. In the control group, there were 23 males and 17 females; the maximum age was 83 years, the minimum age was 54 years, and the average age was (64.39±3.99) years.
Comparison of the general information of the two groups showed only small differences, so the groups were comparable.
Inclusion criteria: The patient meets the diagnostic criteria of cerebral infarction; the patient is treated within 48 hours after the onset of the disease; the patient is aware of the process and methods of this study and is willing to participate in it; this study was approved by the hospital ethics committee. Exclusion criteria: The patient has a history of mental illness; the patient has cerebral hemorrhage; the patient has gastrointestinal hemorrhage; the patient has coagulation dysfunction, and the patient has poor compliance.
Research methods
The patients in the control group were treated with conventional rehabilitation methods to control their blood pressure and blood lipid status; they were also given neurotrophic (nerve-nourishing) drugs and encouraged to perform limb exercises to strengthen recovery.
Patients in the experimental group were treated with Huatan Quyu Decoction in addition to the treatment given to the control group. The prescription mainly included: Pueraria lobata 15 g, Shengbaizhu (raw Atractylodes macrocephala) 10 g, liquor-processed rhubarb 5 g, raw leech 3 g, Tianma (Gastrodia elata) 10 g, Chi Shao (red peony root) 15 g, and Pinellia ternata 10 g. The herbs were decocted in water to obtain 100 ml of liquid, taken twice a day, 50 ml each time.
Observation indicators
The degree of neurological deficit was evaluated with the CSS score; a higher score indicates a greater degree of deficit. The ability of daily living was evaluated with the BI (Barthel Index); the full score is 100, and a higher score indicates a stronger ability of daily living [2].
The limb sensory and motor functions of the two groups were compared before and after treatment using the Fugl-Meyer Assessment (FMA); higher scores indicate better limb function.
The treatment effect was evaluated as markedly effective, effective or ineffective. Markedly effective: the patient's degree of disability is low and basic living ability has recovered. Effective: the patient's clinical symptoms are improved and basic self-care is achieved. Ineffective: the recovery of limb function is poor and major problems remain after treatment. The total effective rate of this study was calculated excluding ineffective cases.
Statistical methods
The data were analyzed with the software SPSS 20.0. Measurement data (x̄±s) were compared with the t-test, while count data (n, %) were compared with the chi-square (χ²) test. P<0.05 was considered statistically significant [2].
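For readers who wish to reproduce this kind of analysis outside SPSS, the following is a minimal sketch of the measurement-data comparison in Python, assuming an independent-samples t-test on two groups of 40 patients; the score arrays are illustrative placeholders, not the study's raw data.

# Minimal sketch of the t-test on measurement data (mean ± SD), e.g., post-treatment CSS scores.
# The arrays below are hypothetical placeholders, not the study's raw scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
css_experimental = rng.normal(loc=10.0, scale=3.0, size=40)  # hypothetical experimental-group scores
css_control = rng.normal(loc=14.0, scale=3.5, size=40)       # hypothetical control-group scores

t_stat, p_value = stats.ttest_ind(css_experimental, css_control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # P < 0.05 is treated as significant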
Nerve impairment and ability of daily living
Before the treatment intervention, the differences in the CSS score and BI index between the two groups were small and not significant (P>0.05). After the treatment intervention, the data changed: compared with the control group, the CSS score of the experimental group was lower and the BI index was higher, and the difference between the groups was statistically significant (P<0.05).
Comparison of patients' limb function scores
The limb motor function of the patients was compared. Before treatment, there was no significant difference in motor function or sensory function between the two groups (P>0.05). After treatment, the sensory function and motor function scores of the experimental group were higher than those of the control group, and the difference between the two groups was statistically significant (P<0.05).
Comparison of treatment efficacy
Treatment was effective in 38 patients in the experimental group (an effective rate of 95.00%) and in 31 patients in the control group (an effective rate of 77.50%). The difference between the two groups was statistically significant.
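The reported counts are sufficient to reproduce the count-data comparison described in the statistical methods. The sketch below runs a chi-square test on the 2×2 efficacy table; note that SciPy applies Yates' continuity correction to 2×2 tables by default, which the study's SPSS analysis may or may not have used.

# Chi-square test on the reported efficacy counts:
# experimental group: 38 effective, 2 ineffective; control group: 31 effective, 9 ineffective.
from scipy.stats import chi2_contingency

table = [[38, 2],
         [31, 9]]
chi2, p, dof, expected = chi2_contingency(table)  # Yates' correction is applied by default for 2x2 tables
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")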
Conclusion
The clinical incidence of cerebral infarction is relatively high. The main factor leading to the disease is a disorder of the patient's blood circulation, which in turn causes hypoxia and ischemia of the brain tissue; complications such as tissue necrosis and softening then readily occur. Once a patient becomes ill, the disease progresses rapidly. The disease therefore has a large effect on the patient's life, health and quality of life, and the clinical disability and mortality rates are high, posing a serious threat to the patient's life and health [3]. It is necessary to treat patients as soon as possible to promote recanalization of the blood vessels and avoid brain tissue damage [4]. From the perspective of western medicine, the treatment of patients with cerebral infarction mainly consists of restoring blood perfusion during the optimal recovery window.
Under normal circumstances, treatments such as thrombolysis and anticoagulation are given to patients to improve the coagulation state and prevent further thrombosis [5]. From the perspective of traditional Chinese medicine, cerebral infarction belongs to the category of stroke, and its pathogenesis is complicated. The main factors leading to the disease are the disorder of Qi and blood, endogenous phlegm and blood stasis, and hyperactivity of liver yang. In the process of treating patients, it is necessary to promote blood circulation to remove blood stasis, and to dry dampness and resolve phlegm [6]. On this basis, this study applied Huatan Quyu Decoction. Its monarch (principal) herb has the effects of promoting blood circulation, removing blood stasis and clearing the vessels, and the red peony root and angelica in the prescription promote blood circulation, remove blood stasis, nourish the blood and support the nutritive level [7]. Leech has the effects of promoting blood circulation, removing blood stasis, and relieving dysmenorrhea. Pueraria lobata can clear away heat, strengthen the spleen, relax the muscles and reduce fever, and Gastrodia has a calming effect. When the drugs are used in combination, the clinical effect is significant: the clinical symptoms of patients improve, blood circulation is restored, and recovery is promoted [8]. After the use of Huatan Quyu Decoction in this study, the results showed that the treatment of the experimental group was more effective: the patients' neurological deficit scores were lower, while the scores for activities of daily living and limb movement were better than those of the control group. This shows that the clinical effect of Huatan Quyu Decoction is significant and that it has positive significance in promoting the recovery of patients with cerebral infarction. In summary, the application of Huatan Quyu Decoction in the treatment of patients with cerebral infarction has significant effects. It can improve the patients' ability of daily living, improve their neurological deficit scores, and promote the enhancement of limb function. It is clinically significant and can be promoted and used. | 2021-08-03T00:06:02.060Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "02754dd881342718c9c3cb1f577406dcdc20049e",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/10.1051/e3sconf/202127103080/pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "6a57009c788abf87cd433173ca182d2fd562d265",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261895733 | pes2o/s2orc | v3-fos-license | Perioperative mortality of emergency and elective surgical patients in a low-income country: a single institution experience
Background The perioperative mortality rate is an indicator of access to safe anesthesia and surgery. Studies showed higher perioperative mortality rates among low- and middle-income countries. But the specific causes and factors contributing to perioperative death have not been adequately studied in the Ethiopian context. Methods This is a retrospective institutional study of the largest academic medical center in Ethiopia. Data of all patients who were admitted to surgical wards or intensive care and underwent surgical interventions were evaluated for perioperative mortality rate determination. All mortality cases were then evaluated in depth. Results Of the 3295 patients evaluated, a total of 148 patients (4.5%) died within 30 days of surgery. By the 7th postoperative day, 69.5% of the perioperative mortality had already occurred. Septic shock contributed to 54.2% of deaths. Emergency surgery patients had more than a twofold higher mortality rate than elective surgery patients (p value < 0.001) and had a 2.6-fold higher rate of dying within 7 days of surgery (p value of 0.02). Patients with ASA performance status of 3 or more had a 1.7-fold higher rate of death within 72 h of surgery (p value of 0.015). Conclusion More than two thirds of patients died within 7 postoperative days. More emergency patients died than elective counterparts, and emergency cases had a higher rate of dying within 7 days of surgery. Poor ASA performance score was associated with earlier postoperative death. Further prospective multi-institutional studies are warranted to elucidate the factors that contribute to higher postoperative mortality in low-income country patients.
Introduction
Perioperative mortality rate (POMR) is defined as the rate of death following surgery and anesthesia either on the day of surgery, before the 30th postoperative day or on the day of discharge from the hospital, and is an indicator of access to safe anesthesia and surgery (Aggarwal et al. 2020). POMR is inversely correlated with the improvement in access to advanced anesthesia care and surgical services (Bainbridge et al. 2012). This is why POMR is included in the World Health Organization's 100 core health indicators list (Biccard et al. 2018).
Low-income countries (LIC) have a higher POMR than high-income countries (HIC) (Blaise Pascal et al. 2021; Bohnen et al. 2016). The Human Development Index (HDI) is also shown to associate with the rate of postoperative death when assessed on a global scale (Dandena et al. 2020). Few perioperative mortality studies have been reported from Ethiopia, and they consistently showed a high POMR (Davies et al. 2016; Dugani et al. 2017). As part of the Lancet Commission on Global Surgery's core indicators for monitoring universal access to safe, affordable surgical and anesthesia care, it is planned to acquire consistent reporting of the national POMR of all countries in the world by 2030 (Fecho et al. 2008). Accordingly, we are reporting a detailed assessment of POMR in the largest academic medical institution in one of the LICs, Ethiopia.
Study design
A retrospective cross-sectional study was conducted at a single referral hospital. All cases within the general surgery, pediatric surgery, neurosurgery, and cardiothoracic and vascular surgery units were analyzed. The study included all patients who were operated on and those who died within 30 days of surgery between May 2021 and April 2022.
Study setting
The study was conducted at Tikur Anbessa Specialized Hospital, the largest teaching hospital in Ethiopia. It is located at the center of Addis Ababa, the capital city of the country. The hospital has over 700 beds and, as a tertiary-level facility, provides services to over 500,000 patients annually.
Study participants
All surgical patients within the 4 surgical units selected for the study were included initially to determine the POMR. Following this, the perioperative mortalities (POMs) occurring during the course of the study were included and analyzed.
Inclusion criteria
All surgical patients who underwent surgical intervention with open or minimally invasive techniques within the study period were included. All deaths following surgical intervention within 1 month after surgery, regardless of the cause of death, were included in the study and evaluated.
Exclusion criteria
All patients with surgical disease who were treated nonsurgically were excluded. In addition, all patients who died at the time of arrival at the hospital or before surgical intervention were excluded. Obstetrics and gynecology, urology, and orthopedic patients were also excluded. Obstetrics and gynecology cases were excluded because they are outside the jurisdiction of the Department of Surgery, and ethical clearance for these cases could not be acquired. Urology and orthopedics were excluded because there were no mortality cases in these units during the course of the study. Finally, 17 cases with poor or incomplete documentation deemed difficult to analyze were also excluded.
Variables
The independent variables for this study were gender, age, American Society of Anesthesiology (ASA) score, comorbidity, type of admission, indications for surgery, and surgical procedures performed.
The dependent variables were the rate of postoperative death, the cause of death, the postoperative day of death, and the length of hospital stay.
Data source
The data source was from medical records, and both electronic and paper-based retrieved after the medical record numbers were acquired from the operation logbooks.
Measurement/analysis and interpretation
After the data were collected, they were cleaned, coded, and entered into IBM SPSS Statistics for Windows, version 23.0 (IBM Corp., Armonk, NY; released 2015). Both descriptive and inferential statistics were utilized for the interpretation of the data.
Statistical analysis
For all continuous variables, measures of central tendency with mean and standard deviation were used in addition to frequency distribution. Inferential statistics with univariable and multivariable logistic regression were then performed to identify risk factors associated with the time of death.
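As a rough illustration of the multivariable step, the sketch below fits a logistic regression with Python's statsmodels and converts the coefficients into adjusted odds ratios (AORs) with 95% confidence intervals, the form in which the results are reported below. The study itself used SPSS 23; the column names and toy data here are hypothetical stand-ins for the study variables.

# Sketch of a multivariable logistic regression for early postoperative death,
# reported as adjusted odds ratios (AORs) with 95% CIs. Toy data for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "death_within_72h": rng.integers(0, 2, 131),  # outcome: death within 72 h of surgery (0/1)
    "asa_ge3":          rng.integers(0, 2, 131),  # ASA score of 3 or more (0/1)
    "emergency":        rng.integers(0, 2, 131),  # emergency vs. elective admission (0/1)
    "comorbidity":      rng.integers(0, 2, 131),  # one or more medical comorbidities (0/1)
})

X = sm.add_constant(df[["asa_ge3", "emergency", "comorbidity"]])
fit = sm.Logit(df["death_within_72h"], X).fit(disp=0)

aor = np.exp(fit.params).rename("AOR")                                # exponentiated coefficients
ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})   # 95% CI on the OR scale
print(pd.concat([aor, ci], axis=1))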
Ethical considerations
The ethical approval for this study was acquired from the ethical review board of the College of Health Sciences, Addis Ababa University. The study was conducted in accordance with the Declaration of Helsinki and national and institutional guidelines, while keeping all the information retrieved for the study confidential.
Results
Of the 3295 patients who underwent surgical interventions, 1246 belonged to the pediatric age group. The total perioperative mortality was 148 (4.49%), of which 19 died within 24 h of surgery. The crude 24-h and 30-day perioperative mortality ratios were 1:173 and 1:22, respectively. Of the 148 deaths, 131 were analyzed while the rest were excluded due to incomplete documentation. A significantly higher rate of mortality was found in patients admitted for emergency surgery than for elective surgery, at 88/1384 (6.36%) and 43/1911 (2.25%), respectively, p value of < 0.001. Neurosurgery patients had the highest overall POMR among the 4 units studied, at 31/579 (5.35%), p value of 0.052 (Fig. 1 and Table 1).
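The crude ratios follow directly from these counts; a quick arithmetic check:

```python
# Quick check of the crude mortality ratios reported above.
total_ops = 3295
deaths_24h = 19
deaths_30d = 148

print(f"24-h POMR   ~ 1:{round(total_ops / deaths_24h)}")   # 1:173
print(f"30-day POMR ~ 1:{round(total_ops / deaths_30d)}")   # 1:22
print(f"30-day POMR as a percentage: {100 * deaths_30d / total_ops:.2f}%")  # 4.49%
```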
From the 131 deaths evaluated, 69 (52.7%) were male. Forty-one (31.3%) patients were in the neonatal age group, and the mean age was 43.9 ± 18.5 years among the adult cases and 3.12 ± 2.59 days for infants. Overall, 53 (40.5%) died within 72 h of surgery, and 69.5% within 7 days of surgery (Fig. 2). The median overall postoperative day of death was 4 days. Patients younger than 4 years of age had an earlier mean (5.28 days) and median (4 days) POM than their older counterparts (Table 2). This difference showed a tendency towards statistical significance in one-way ANOVA, F(1, 129) = 3.77, p = 0.054.
Elective and emergency surgical patients had a median POM day of 9 and 4 days, respectively, with a statistically significant difference in the average POM days between the two groups, F(1, 129) = 11.66, p = 0.001. The overall median length of hospital stay was 10 days (Table 2).
ASA score of 3 through 5 was recorded in 92 (70.2%) of the patients, and 24 (18.3%) had one or more medical comorbidity.During preoperative evaluation, anemia was detected in 82 (62.6%) of the mortality cases (Table 3).
Regarding the indications for surgery, general surgical procedures were mostly done for generalized peritonitis due to viscus perforation and malignant intestinal obstructions. Elective surgery for brain tumors was the indication for surgery in 21 (67.7%) of deaths in neurosurgery. Overall, most deaths occurred due to septic shock (Fig. 3). The most common causes of death in neurosurgical patients were brain herniation and septic shock at 56.2% and 28.13%, respectively. Septic shock (63.3%) and aspiration pneumonia (20.4%) were the most common causes of death among the pediatric age group (Tables 3 and 4).
Of the 19 POMs on the day of surgery, 18 (94.7%) died in the ICU, while one died in the wards. The majority, 56.5%, of the POMs beyond 24 h after surgery occurred in the ICU. No intraoperative death was recorded during the course of the study.
Multivariate logistic regression to evaluate factors that were associated with death within 72 h of surgery showed that only an ASA score of 3 or more was associated with a 1.7-fold increase in early postoperative death (AOR 1.73 (1.12-2.68), p value of 0.015). All the other factors analyzed, including age group (infant versus older age group), admission circumstance (elective versus emergency), comorbidity, and gender, did not affect the time of death (p value: 0.71, 0.13, 0.41, and 0.35, respectively). At postoperative day 7, only emergency surgery was associated with a 2.6-fold death rate in the first 7 days compared to the elective surgery group (AOR 2.65 (1.17-6.0), p value of 0.02). All other variables, including ASA score, were not found to be significantly associated with POMR at 7 days post-surgery.
Discussion
In this study, the crude 24-h and 30-day perioperative mortality ratios were 1:173 and 1:22, respectively. Emergency surgery cases had close to a 2.5-fold higher mortality rate compared to their elective surgery counterparts. When comparing surgical services, a higher POMR was recorded among neurosurgery patients, most of whom died from brain herniation. Patients who died within 24 h of surgery were in the ICU in nearly all of the cases, compared with only two thirds of those who died beyond 24 h. Septic shock was the most common cause of postoperative death. Patients with higher ASA scores (3 or more) had a 1.7-fold higher likelihood of dying within 72 h, and patients undergoing emergency surgery had a 2.6-fold higher likelihood of dying within the first 7 days than later. This finding reaffirms the existing body of evidence that early perioperative deaths are largely due to the inherently poor patient functional status and the emergent nature of the patient's pathology.
A 30-day POMR of 2% was reported by Medecins Sans Frontieres across 3 nations within Central and Eastern Africa in a patient cohort comprising mainly surgical emergencies, of which the most common procedure was cesarean section (Fecho et al. 2008). A systematic review of studies from low- and middle-income countries showed an aggregate mortality rate of 1.2% for elective surgery and 10.1% for emergency surgery (Bainbridge et al. 2012). Furthermore, Ethiopian studies have shown a POMR ranging between 3.4% and 4.6% (Davies et al. 2016; Dugani et al. 2017). Our findings concur with both the national and regional findings, in that the POMR is higher than that of HIC but equivalent to the regional figures (Fichtner and Dick 1997). In LIC, 40% of the POMR is reported to be attributed to facility-based resource factors such as postoperative care infrastructure and the cancer care pathway, while the rest is attributed to patient factors (GlobalSurg Collaborative and National Institute for Health Research Global Health Research Unit on Global Surgery 2021). Consequently, the disparity between the two regions could partly be explained by the lack of infrastructure and human resources.
With regard to the immediate cause of death, a German multi-institutional study demonstrated that myocardial infarction and multiorgan failure were the primary causes of perioperative death (Hopkins et al. 2016). A Brazilian study showed that advanced disease and surgical complications were the most common causes of death among perioperative patients (Meara et al. 2016). One multinational LIC study on a 7-day cohort of perioperative outcomes put forward cardiovascular complications as the leading cause of death (Misganaw et al. 2022). Regardless, septic shock was the predominant cause of death in our surgical population. In one LIC report, sepsis-related mortality is two-fold higher than that of the HIC patient population (Mullen et al. 2017). It could be hypothesized that mortality from sepsis could at least be partly attributed to the inadequate care provided at the LIC center and is likely to continue being an important cause of death in an LIC setting.
Aspiration pneumonia was found to be common among tracheoesophageal fistula/esophageal atresia patients. In one Ethiopian report, more than 90% of neonates with similar pathology had aspiration pneumonia, which is thought to emanate from neonates being fed prior to a late diagnosis (Ng-Kamstra et al. 2018).
Emergency patients in our study had a higher POMR than their elective counterparts and a higher likelihood of death within the first 7 days of surgery. Our finding is in agreement with the existing body of knowledge demonstrating that emergency and urgent surgery patients have a 2-2.5-fold higher rate of mortality (Stefani et al. 2018; Tarekegn et al. 2020). To our knowledge, differences in the timing of POM between elective and emergency surgical patients have not been previously described, but a high rate of "early" postoperative death among emergency surgical patients has been reported (Watters et al. 2015). This could be explained by the inherent risk of the pathology that mandated the emergency surgery and the poor physiologic status of patients at presentation for acute care services (Weissman and Klein 2008).
The ASA performance status score was associated with "early" death among the cases studied here. Patients with an ASA score of 3 and above had a 2.9- to 16-fold increase in mortality within the first 72 h after emergency surgery (Weissman and Klein 2008). The association of ASA score with "early" POM has previously been shown among elective surgery patients, although the overall impact is relatively lower than in the emergency surgical counterparts (World Health Organization 2015).
A future national plan needs to be made to create a registry in order to evaluate the aggregate multi-institutional POMR and the factors associated with POM, including specific pathology, comorbidities, ASA scores, and postoperative complications, as these are not within the scope of this study. These registries need to be executed in accordance with the Lancet Global Surgery Initiative recommendations in order to produce a dataset of a quality comparable with the international literature.
In conclusion, we found a POMR in our study population similar to that of other low- and middle-income countries and higher than reports from HIC. Most deaths occurred due to septic shock and its complications. Emergency surgical patients had higher 7-day and 30-day mortality rates compared to their elective surgery counterparts. Additionally, patients with an ASA score greater than 2 had a twofold increase in "early" death compared to those with better performance scores. These findings need to be confirmed using prospective multi-institutional, preferably national, studies.
Fig. 1 Percentage of death across different surgical units
Fig. 2 Trend of death across 1 month after surgery
Fig. 3 Cause of death of the cases. HAP hospital-acquired pneumonia
Table 1 Death rate for each surgical sub-unit and admission type
Table 2 The mean and median of overall and postoperative stays of the emergency, elective cases, less than 4 years, and older age group
Table 3 ASA scores, comorbidities, degree of anemia, and age group of the POMs
Table 4 Reasons for surgical admissions and immediate cause of death of the perioperative mortality cases among different surgical sub-units. EA esophageal atresia, ARM anorectal malformation
"year": 2023,
"sha1": "a62f82dba40f47a6e6b87d4bf733f8bef900cad8",
"oa_license": "CCBY",
"oa_url": "https://perioperativemedicinejournal.biomedcentral.com/counter/pdf/10.1186/s13741-023-00341-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "76de4d28bf96e782ba6ecdf33d2e2d5e242013b5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
352. Characteristics Associated with Pre-Frailty in Older People Living with HIV
Abstract
Background. Frailty is a concern among older people living with HIV (PLHIV). There is a paucity of research characterizing PLHIV who are at risk of becoming frail (pre-frailty). To investigate how HIV impacts older PLHIV in the United States, a new study called Aging with Dignity, Health, Optimism and Community (ADHOC) was launched at ten sites to collect self-reported data. This analysis uses data from ADHOC to identify factors associated with pre-frailty.
Methods. Pre-frailty was assessed using the Frailty Index for Elders (FIFE), where a score of zero indicated no frailty, 1-3 indicated pre-frailty, and 4-10 indicated frailty. A cross-sectional analysis was performed on 262 PLHIV (age 50+) to determine the association between pre-frailty and self-reported sociodemographic, health, and clinical indicators using bivariate analyses. Factors associated with pre-frailty were then included in a logistic regression analysis using backward selection.
Results. The average age of ADHOC participants was 59 years. Eighty-two percent were male, 66% were gay or lesbian, and 56% were white. Forty-seven percent were classified with pre-frailty, 26% with frailty, and 27% with no frailty. In bivariate analyses, pre-frailty was associated with depression, low cognitive function, multiple comorbidities, low income, low social support, and unemployment (Table 1). In the multiple logistic regression analysis, pre-frailty was associated with having low cognitive function (Odds Ratio [OR] 8.56, 95% Confidence Interval [CI]: 3.24-22.63), 4 or more comorbid conditions (OR 4.00, 95% CI: 2.23-7.06), and an income less than $50,000 (OR 2.70, 95% CI: 1.56-4.68) (Table 2).
Conclusion. This study shows that commonly collected clinical and sociodemographic metrics can help identify PLHIV who are more likely to have pre-frailty. Early recognition of factors associated with pre-frailty among PLHIV may help to prevent progression to frailty. Understanding markers of increased risk for pre-frailty may help clinicians and health systems better target multi-modal interventions to prevent negative health outcomes associated with frailty.
Disclosures. All authors: No reported disclosures.
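For readers who want to reproduce the grouping, the FIFE cutoffs described above map directly onto a small helper; this is an illustrative sketch only, and the function name and the example scores are hypothetical.

```python
# Illustrative grouping by the FIFE cutoffs stated above:
# 0 = no frailty, 1-3 = pre-frailty, 4-10 = frailty.
from collections import Counter

def classify_fife(score: int) -> str:
    """Map a Frailty Index for Elders (FIFE) score (0-10) to a category."""
    if not 0 <= score <= 10:
        raise ValueError("FIFE scores range from 0 to 10")
    if score == 0:
        return "no frailty"
    if score <= 3:
        return "pre-frailty"
    return "frailty"

# Hypothetical participant scores, used only to demonstrate the tally.
scores = [0, 2, 5, 1, 3, 7, 0]
print(Counter(classify_fife(s) for s in scores))
```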
A Comparison Study of Prevalence and Risk Factors for Nonalcoholic Fatty Liver Disease (NAFLD) and Nonalcoholic Steatohepatitis (NASH) by Transient Elastography (TE) in HIV-Infected Patients

Methods. We prospectively collected data on epidemiology, comorbidities, CD4, HIV virus load, and ART from November 2017 to September 2018 in patients undergoing TE examination with Controlled Attenuation Parameter (CAP) in our HIV clinic at Saint Michael's Medical Center in Newark, NJ. We used the same parameters to define NAFLD and fibrosis severity that were used for the UHP (CAP > 248 dB/m and TE > 7.1 kPa). We present comparative data between those 2 cohorts.
Results. We enrolled 624 consecutive HIV-infected individuals (group 1); their baseline epidemiologic characteristics were not significantly different from the UHP cohort (group 2) for age and sex. Prevalence of NAFLD was 51.6% in group 1 compared with 42.7% in group 2, and the prevalence of significant fibrosis in those with NAFLD was 31% in group 1 and 23% in group 2. The main differences we found between those 2 cohorts were race (group 1, 68% Black; group 2, 87% White) and the incidence of diabetes mellitus, which was 20% in group 1 and 6% in group 2, despite the fact that BMI was not significantly higher in group 1. Another important difference was the mean time on ART, which was 5 years longer for group 1. Finally, there was a trend for a higher incidence of hypertension, a lower percentage of patients with virus load < 20 c/mL, a lower mean CD4 count, and a higher percentage of current users of integrase strand transfer inhibitors in group 1.
Conclusion. NAFLD prevalence is alarmingly high in patients with HIV disease, and it is of utmost importance to understand its natural history in order to prevent the potentially severe consequences of NASH. Our study suggests that a longer duration on ART might correlate with a higher incidence of NAFLD, which would suggest better monitoring of liver health with new ART.
Disclosures. All authors: No reported disclosures.

Efficacy of Second-Generation Direct Acting Antivirals in the Setting of HCV/HIV Co-infection and Cirrhosis: A Review of Real-World Treatment Experiences

Background. Patients co-infected with HIV and HCV represent a unique subpopulation with specific high-risk characteristics, including increased transmission efficiency of HCV, higher HCV viral load, and more rapid progression of liver disease when compared with mono-infected patients. Although virologic failure is rare in the direct acting antiviral (DAA) era, we have anecdotally observed a high rate of failure in our patients who are co-infected and have cirrhosis. Our objective was to evaluate the impact of cirrhosis on co-infected patients compared with co-infection without cirrhosis and mono-infected patients with cirrhosis as it relates to cure of HCV treated with DAAs.
Methods. A retrospective chart review was performed. Patients from UConn Health Infectious Diseases and Gastroenterology clinics and Hartford Hospital Comprehensive Liver Center treated January 1, 2014 through December 31, 2017 were included. Patients were grouped as follows: (1) HCV/HIV coinfected without cirrhosis, (2) HCV/HIV coinfected with cirrhosis, (3) HCV infected with cirrhosis. Data were analyzed in SAS; variables were compared by chi-square analysis and Fisher's exact test to determine statistical significance.
Results. No differences in baseline characteristics were noted (Table 1). Cirrhotic patients were 63% of the total cohort. There was no statistical difference in the rates of sustained virologic response (SVR) among the 3 groups. The overall rate of SVR was 95%. SVR for patients with cirrhosis (co- and mono-infected) was 92%. All treatment failures (n = 3) in this cohort had cirrhosis. Among the 38 cirrhotic patients, 3 (8%) had treatment experience with DAAs. In contrast, none of the non-cirrhotic patients had prior DAAs. The use of protease inhibitors or ribavirin had no impact on cure; ribavirin was evenly distributed between the two groups with cirrhosis. SVR rates were lower with genotypes 2-4 as compared with genotype 1. No immunologic or virologic factors were correlated with SVR.
Conclusion. We found no differences in rates of SVR in coinfected patients with or without cirrhosis. However, all treatment failures were noted in patients with cirrhosis, and cirrhotic patients tended to have treatment experience with DAAs. Whether coinfected patients with cirrhosis should be managed differently will require additional study.
"year": 2019,
"sha1": "2561e9d4596357f8e4a1f219a5defa2014a67a86",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1093/ofid/ofz360.425",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "911277d7da24d484fe5285a9234fe287f19bea28",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Health-care Accessibility Assessment in Kazakhstan
BACKGROUND: Global health initiatives such as health for all and universal health coverage aim to improve access to health care. These goals require constant comprehensive monitoring to eliminate inequalities in the availability of health care. AIM: The purpose of our study was to assess the physical availability of medical care in Kazakhstan. METHODS: A descriptive study based on calculation of the Service Availability and Readiness Assessment (SARA) general availability index, using secondary data as the source of information. RESULTS: The general availability index calculated for the regions of Kazakhstan ranged from 95% to 100%. When considering individual indicators of the index, declining trends in the volume of inpatient care were identified. Outpatient care fluctuated, with values above the benchmark after 2009. A stable upward trend in core health personnel illustrates a positive picture. CONCLUSION: According to the SARA availability index, it can be concluded that health care in Kazakhstan exceeds the threshold values and is available in all regions. Trends for individual indicators of the index should be studied in more detail, taking into account the influence of health policy and other factors.
Introduction
The World Health Organization (WHO) promoted the idea of health for all, which culminated in a conference on primary health care in Alma-Ata (today Almaty), Kazakhstan, in 1978. In 2013, the United Nations General Assembly adopted the Universal Health Coverage (UHC) concept [1]. Accessibility is a key focus of the above initiatives, as service capacity and access constitute one of the tracking areas of the UHC resolution [2].
Back in 1981, Penchansky and William Thomas identified accessibility as a pressing issue for health debate [3]. They grouped the concept of accessibility into five main domains: affordability, a provider's cost versus a customer's willingness to pay for services; availability, the ability to have the necessary resources, such as personnel and technology, to meet a client's needs; accessibility, which refers to geographic availability and is defined by how easily a customer can physically reach a supplier's location; accommodation, which reflects the degree to which health-care services meet the client's wishes; and acceptability, which reflects the degree to which the client is satisfied with the more consistent characteristics of the provider, such as age, gender, social class, and ethnicity [4]. Accessibility of services relates to the physical presence of items needed to deliver services and encompasses health infrastructure, essential health personnel, and aspects of service utilization [5].
Data on the availability of health care in the post-Soviet countries are not often published, and the studies that exist are limited or insufficient in scale [6], [7].
Data on the availability of health care in Kazakhstan have been reported for both inpatient and outpatient care in rural areas, but they are also not comprehensive [8], [9].
One of the universal comprehensive tools for assessing accessibility is the Service Availability and Readiness Assessment (SARA) tool developed by the WHO in cooperation with the U.S. Agency for International Development (USAID) [10], [11].
The aim of our study was a comprehensive assessment of the main indicators of the physical accessibility of health care in Kazakhstan using the SARA availability index.
Study design
We conducted a descriptive study based on secondary data from the official open-source "Medinform" database, which contains medical demographic indicators of Kazakhstan by district from 2000 to 2018 [12]. Information from this database was generated for 14 regions, two cities of republican significance, and Kazakhstan as a whole. In addition, these indicators were divided into rural and urban settlements.
The assessment of availability was based on the SARA tool developed by the WHO. We used six service availability indicators — facility density (number/10,000 population), inpatient bed density (number/10,000 population), health workforce density (number/10,000 population), inpatient service utilization (according to the SARA tool, this is the number of hospital discharges, but in our study we used the number of hospitalizations/100 population/year), outpatient visits per person per year, and maternity beds/1,000 pregnant women — as well as the nurse-to-doctor ratio to compare rural and urban populations; the outpatient visits and maternity beds indicators were used to compare regions. Data on outpatient visits and maternity beds were available only for the whole population, without division into urban/rural populations, and data on hospitalizations, not hospital discharges, were available in the "Medinform" database. The facility density indicator was the sum of inpatient and outpatient facilities, and the workforce included doctors, midwives, and nurses. Population data were taken from the statistical yearbooks issued by the agency of statistics, Kazakhstan [13].
Statistical analysis
Raw data on health facilities, the health workforce, and the number of hospitalizations in the "Medinform" database were presented in absolute numbers. We used standardization procedures according to the SARA tool.
Average indicators were obtained separately for districts and separately for cities in order to compare urban and rural indicators.
To compare indicators between regions, the SARA service availability index was used. The service availability index is the un-weighted average of the three areas: infrastructure, health workforce, and utilization. For each tracer indicator, a score is calculated as the indicator value relative to its international benchmark, capped at a maximum of 100; that is, if the tracer indicator score exceeds the benchmark, it is scored as 100%. The area scores are then averaged to give the overall index.
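A minimal sketch of this calculation is shown below. The benchmark values follow the WHO SARA defaults cited in this text (e.g., 2 facilities and 25 inpatient beds per 10,000 population); the exact tracer set and benchmarks for each region should be taken from the SARA reference manual, and the example values are illustrative.

```python
# Hedged sketch of the SARA general service availability index:
# each tracer score = min(100, 100 * value / benchmark); the index is the
# un-weighted mean of the infrastructure, workforce, and utilization areas.
BENCHMARKS = {
    "facilities_per_10k": 2.0,       # health infrastructure
    "inpatient_beds_per_10k": 25.0,  # health infrastructure
    "core_workforce_per_10k": 23.0,  # health workforce
    "inpatient_use_per_100": 10.0,   # service utilization (hospitalizations)
}

AREAS = {
    "infrastructure": ["facilities_per_10k", "inpatient_beds_per_10k"],
    "workforce": ["core_workforce_per_10k"],
    "utilization": ["inpatient_use_per_100"],
}

def tracer_score(value: float, benchmark: float) -> float:
    return min(100.0, 100.0 * value / benchmark)

def availability_index(region: dict) -> float:
    area_scores = []
    for tracers in AREAS.values():
        scores = [tracer_score(region[t], BENCHMARKS[t]) for t in tracers]
        area_scores.append(sum(scores) / len(scores))
    return sum(area_scores) / len(area_scores)

# Example with illustrative (not actual) regional values:
example = {"facilities_per_10k": 2.3, "inpatient_beds_per_10k": 22.5,
           "core_workforce_per_10k": 50.0, "inpatient_use_per_100": 11.0}
print(f"General availability index: {availability_index(example):.1f}%")  # 98.3%
```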
Some indicators were not available for rural districts. All missing values were replaced with the median for the entire time period for these districts.
All calculations were performed with R studio software (Rstudio, MA, USA) version 1.1.463 for Windows.
ArcGIS software (ESRI, CA, USA) version 10.7 was used to create an interactive map of Kazakhstan.
Ethical issues
We did not use personal data; for this reason, there was no need for informed consent.
The ethical committee of Semey Medical University (Semey, Kazakhstan) approved our study before it was started (protocol 2, dated 18 October 2019).

Results

Figure 1 displays data on health-care facility density per 10,000 population for urban and rural areas from 2000 to 2018. As can be seen from the graph, the peak of facility density was in 2008, with a further decline. In rural areas the density indicator is better, which we attribute to the many primary district health-care facilities in rural areas of our country. Inpatient beds per 10,000 population were better represented in urban areas, with a predominance of multidisciplinary hospitals (Figure 2). Inpatient bed density in rural areas has shown a declining trend from 2010, with an indicator of <25 inpatient beds per 10,000 in 2018.
The rates of doctors, nurses, and midwives per 10,000 population show an upward trend in both urban and rural areas, but the urban indicator is approximately 2.5 times higher than the rural one (Figure 3). However, even in rural areas the core health workforce indicator is more than 2 times the international benchmark (23/10,000 population).
The international benchmark for the number of hospital discharges/100 population is 10. In our study, we found that hospitalizations among the rural population fluctuated around the international benchmark, while hospitalizations among the urban population fluctuated around 20/100 (Figure 4). The last two indicators were calculated only for the whole country. Figure 5 demonstrates that maternity beds/1,000 pregnant women were about half as many in 2018 as in 2000, but still about 2 times more than the international benchmark.
Discussion
To the best of our knowledge, this is the first descriptive study determining the health-care availability index in Kazakhstan and comparing it between the regions of Kazakhstan.
The idea of using a methodology based on part of the SARA tool came from a study protocol aimed at measuring the readiness of primary health care for acute vascular events in rural low-income settings [14].
Turning to the first area of the un-weighted SARA availability index, health-care facility density in Kazakhstan, we found it to be 1.8 for urban areas and 2.3 for rural areas, which is around the international benchmark for this indicator [10]. The Bulletin of the WHO journal contains results of SARA implementation reports in Burkina Faso, Cambodia, Haiti, Sierra Leone, the United Republic of Tanzania, and Zambia [5]. The majority of these countries have <2 facilities/10,000 population, but in Cambodia it is 3.6 and in Sierra Leone it is 2.1. In studies assessing the availability of primary health care, this indicator is 4.6 primary health-care facilities in two districts of Mongolia and 6.7 in Mainland China [15], [16]. We consider that the declining trend in the availability of facilities in our study is associated with an increase in the population from 14 million in 2000 to 18 million in 2018, while, since 2010, there has been a decrease in the number of hospitals [8].
A declining trend in facility density logically leads to a reduction in inpatient beds. We see lower rates in rural areas, with an indicator of 22.5/10,000 population in 2018, which we expect can be explained by the presence of multidisciplinary hospitals with 600 to 1,000 beds only in urban areas [15], [16], [17]. This indicator calculated in other SARA studies was 14 in the United Republic of Tanzania, 10 in China, and 21.6 in two districts of Mongolia.
We consider that the growth of the fertility rate from 1.8 in 2000 to 2.84 in 2018 is reflected in the sharp decrease of maternity beds per 1,000 pregnant women, from 39.5 to 20.7, over the same period [18].
The second area of the un-weighted SARA availability index is represented only by health workforce capacity. In comparison with other SARA studies, Kazakhstan has rather high rates for this indicator [5]. In primary health-care availability assessments, this indicator was 61.2/10,000 in Mongolia and 26 in China [15], [16]. Very interesting data were obtained by researchers in Canada, where there were on average 11.7 and 7.2 nurses per 1,000 population for urban and rural areas, respectively, and only 2.6 and 1.1 doctors per 1,000 population for cities and rural areas, respectively [19]. Based on these data, we can conclude that the nurse-to-doctor ratio is important: in Canada, it is 4.7; in Mongolia, where the accessibility index was studied, this indicator is 1.4 to 1. We did not present the calculation of this indicator so as to follow the methodology for calculating the accessibility index. For urban and rural areas of Kazakhstan, it is 1.9 and 3.4, respectively. The calculated utilization measured by outpatient visits was almost similar to the OECD33 consultations-with-a-doctor indicator in 2017 (6.8 visits/capita/year). Hospital discharge rates in the OECD36 (154/1,000 population) are significantly higher than the hospital admissions calculated in our study [20].
If we carefully consider the trends for each indicator of the comprehensive availability assessment, it is possible to notice the change in the curves from 2009 to 2010, which corresponds to the period of the health-care reform "Salamatty Kazakhstan" aimed at optimizing inpatient care and strengthening the role of outpatient care. In general, the trends presented for urban and rural areas were in line with the conclusions of the experts of the European Observatory on Health Systems and Policies, who noted the "bloated" inpatient care inherited from the Soviet Union and inequality in the availability of health care in rural areas [17].
Finally, regarding the availability index, it was at the maximum level in almost all regions of Kazakhstan, which may require other research methods that may appear in a new tool being developed by the WHO, World Bank, and USAID [21].
Limitations
We used only the general service availability index from the SARA tool; for a complete availability assessment, it would be better to calculate the readiness index too. The main drawback of our study is the use of secondary data; although the SARA manual allows the use of a master facility list to calculate the availability index, it is better to conduct a survey.
Conclusion
It can be noted that our study revealed some trends in indicators of access to medical care that should be studied in the future, taking into account additional factors. In general, since 2009-2010 in Kazakhstan, there has been a decrease in indicators related to inpatient care; however, they still remain higher than the WHO benchmark. The SARA general availability indicator calculated for the regions is at least 95%, which indicates excellent physical accessibility; however, we identified an imbalance between rural urban areas. | 2021-05-08T00:03:40.554Z | 2021-02-18T00:00:00.000 | {
"year": 2021,
"sha1": "e56e350407511b8ad930ea916d64064b84006565",
"oa_license": "CCBYNC",
"oa_url": "https://oamjms.eu/index.php/mjms/article/download/5704/5431",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f26d9cdebab909cd5ec5ce61097afd13d406cab2",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Priapism, pomegranate juice, and sildenafil: Is there a connection?
We report the development of low flow priapism in three patients related to simultaneous consumption of sildenafil with pomegranate (Punica granatum) (POM) juice. There were no other concurrent diseases, intake of drugs, and chemicals or other risk factors in these patients. We want to create awareness among patients and practitioners for recognition and timely intervention. Probable mechanisms are highlighted.
INTRODUCTION
Sildenafil citrate is a potent and selective inhibitor of cyclic guanosine monophosphate (cGMP)-specific phosphodiesterase type 5. It has been widely prescribed for erectile dysfunction. [1] The adverse events reported are flushing, dizziness, headache, tachycardia, chest pain, drowsiness, hypotension, nausea, and syncope. [2] However, priapism with isolated use of sildenafil in therapeutic doses has been documented rarely. [3] Medications containing nitrates, alpha blockers, HIV protease inhibitors, and St. John's wort have to be taken very cautiously with sildenafil. Also, use of potent cytochrome P450 (CYP) 3A4 inhibitors, namely macrolide and imidazole antimicrobials, as well as the non-specific CYP inhibitor cimetidine, is associated with increased plasma levels of sildenafil and may aggravate the actions of sildenafil.
Pomegranate (Punica granatum) (POM) juice has a high content of polyphenolic flavonoids. The flavonoids have potent anti-oxidant and anti-atherosclerotic properties. Owing to these nutritional characteristics, HEART-UK, the cholesterol charity, encourages consumption of POM as part of a routine heart-healthy diet. [4] This endorsement motivates people to drink the juice for its potential health benefits. Priapism following concurrent use of sildenafil and POM juice is reported for the first time in the literature to create awareness of this entity.
Case 1
A 46-year-old man presented to the emergency room with a persistent and painful penile erection that had lasted for 5 hours after sexual intercourse with his wife. He was diagnosed to have psychogenic-type erectile dysfunction, and the urologist prescribed sildenafil. He had taken 50 mg sildenafil on alternate days for the past 30 days, which produced hard erections that detumesced immediately after intercourse. He was advised by an alternative medicine practitioner to drink 200 ml of pomegranate juice daily to improve vigor and vitality. After taking pomegranate juice along with a 50 mg dose of sildenafil for the first time, he had an erection within 15 minutes that was sustained even after ejaculation. He was not on any concurrent medications or herbal agents. He denied previous events of priapism, genital trauma, substance abuse, and other chronic illness. Examination showed an engorged, edematous, erect penis with tense and tender corpora cavernosa sparing the corpus spongiosum and glans. The testes and the prostate were normal.
Complete blood count, blood chemistries including liver function test, and coagulation profile were within normal limits. No dysmorphic red blood cells were seen on microscopic examination. Aspiration of cavernosa yielded a dark blood and analysis of the aspirate confirmed a venous composition of pH of 6.90, PCO 2 of 65 mmHg, and PO 2 of 15 mmHg, which were consistent with low-flow priapism. His priapism was refractory to analgesics, ice packing, and subcutaneous terbutaline. Hence, epinephrine along with 2% lidocaine was injected into each corpus after aspiration of venous blood. Total detumescence occurred within 15 minutes. The patient was discharged uneventfully and was instructed not to drink pomegranate juice while on sildenafil. He continues to use sildenafil 50 mg, which produces erections that subside immediately after orgasm.
Cases 2 and 3
Two brothers of the same family, aged 35 and 27, who were married and healthy and did not take any medications, presented to the emergency room with an 8-hour history of priapism. They consumed sildenafil 50 mg occasionally in order to enhance their orgasm. Incidentally, they were advised by a practitioner of naturopathy to drink freshly prepared POM juice. On the day of concomitant use of POM juice with sildenafil 50 mg, they had erections that did not detumesce. There was no previous history of priapism or other risk factors. Laboratory studies were within normal limits. On examination, the corpora cavernosa were rigid and tender with soft corpus spongiosum and glans. Detumescence was achieved within 10 minutes following an intracavernous injection of epinephrine along with 2% lidocaine (as an anesthetic). The response was rapid, probably because it was a drug-induced priapism and not due to associated/underlying pathological causes. They were counseled not to use sildenafil as an aphrodisiac agent. On follow-up at 3 months, they reported normal erections with no further episodes of priapism.
DISCUSSION
After oral administration, sildenafil is rapidly absorbed, and the bioavailability reaches only around 40% due to extensive first-pass metabolism. This is also influenced by the age of the patient and liver and kidney function. Warrington et al. reported that about 79% of the intrinsic clearance of sildenafil is attributable to cytochrome P450 (CYP) 3A4 and the remainder to CYP2C9. [5] The pharmacokinetic properties of sildenafil predispose it to drug interactions. Laboratory studies have shown that POM juice inhibits key cytochrome P450 enzymes involved in sildenafil metabolism, similar to grapefruit juice. [6] Based on the circumstantial evidence in our patient, who had previously well-controlled sexual activity without changes in dosage, the concomitant intake of POM juice might have resulted in priapism by inhibition of CYP3A4-mediated first-pass metabolism, which increased the bioavailability of the drug, as observed by Lee et al. [7] It is also possible that the inherent antioxidant properties of POM juice might have enhanced the bioavailability of endothelial nitric oxide (NO) and acted directly on the endothelium, as reported by Ignarro and colleagues. [8] In addition, an additive or synergistic effect between sildenafil and POM juice might have also contributed to priapism. The occurrence of priapism with concomitant use of POM juice and sildenafil among the brothers (cases 2 and 3) of the same family indicates the probability of genetic predisposition. Rechallenge could not be done as these patients refused. However, further studies are needed to confirm or refute these issues in indigenous medical practice. POM juice has been recommended in the treatment of erectile dysfunction. [9] An effect of grapefruit juice on the pharmacokinetics of sildenafil was documented earlier. [10] Many drug interactions are a result of inhibition or induction of CYP enzymes. [11] The limitations of this report are the lack of rechallenge and the non-estimation of serum sildenafil levels.
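To illustrate why inhibition of first-pass CYP3A4 metabolism can raise sildenafil exposure, the sketch below uses a simple oral bioavailability relationship; all numerical values are illustrative assumptions chosen only so that the baseline bioavailability is near the ~40% cited above, not measured parameters for sildenafil.

```python
# Illustrative (not measured) effect of reduced first-pass metabolism on
# oral drug exposure. Oral bioavailability F ~ f_abs * (1 - E_h), where E_h
# is the hepatic extraction ratio; AUC is proportional to F * dose / CL.
def oral_auc(dose_mg, f_abs, e_h, clearance_l_per_h):
    f = f_abs * (1.0 - e_h)          # fraction reaching systemic circulation
    return f * dose_mg / clearance_l_per_h  # mg*h/L, proportional scale only

baseline = oral_auc(50, f_abs=0.9, e_h=0.55, clearance_l_per_h=40)   # F ~ 0.40
inhibited = oral_auc(50, f_abs=0.9, e_h=0.35, clearance_l_per_h=30)  # CYP3A4 inhibited

print(f"Baseline AUC  : {baseline:.2f}")
print(f"Inhibited AUC : {inhibited:.2f} ({inhibited / baseline:.1f}-fold increase)")
```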
CONCLUSION
This report highlights the interaction of POM juice with sildenafil and the development of low-flow priapism, an emergency which requires immediate attention to alleviate complications and minimize the risk of impotence. We also suggest that patients taking sildenafil should be made aware of this potential interaction and warned about concurrent use in the future. In the meantime, the manufacturers should be advised to include this in the patient information leaflet. Also, clinicians and practitioners should be aware of this interaction while treating patients and prescribing sildenafil.
"year": 2012,
"sha1": "94124c4a47de0c489cca9d529f7a59aaf53f1841",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0974-7796.95560",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "91bfef44f13ec6b9e70703520558b2ae402ad89b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Quantifying the Interactions between Biomolecules: Guidelines for Assay Design and Data Analysis
The accurate and precise determination of binding interactions plays a central role in fields such as drug discovery where structure–activity relationships guide the selection and optimization of drug leads. Binding is often assessed by monitoring the response caused by varying one of the binding partners in a functional assay or by using methods where the concentrations of free and/or bound ligand can be directly determined. In addition, there are also many approaches where binding leads to a change in the properties of the binding partner(s) that can be directly quantified such as an alteration in mass or in a spectroscopic signal. The analysis of data resulting from these techniques invariably relies on computer software that enable rapid fitting of the data to nonlinear multiparameter equations. The objective of this Perspective is to serve as a reminder of the basic assumptions that are used in deriving these equations and thus that should be considered during assay design and subsequent data analysis. The result is a set of guidelines for authors considering submitting their work to journals such as ACS Infectious Diseases.
Biomolecular interactions, such as protein−protein, protein−ligand, and protein−nucleic acid interactions, occur because the complex that is formed is thermodynamically more stable than the unbound species. However, although thermodynamics provide the driving force for binding, the rate of formation and breakdown of the complex is a function of the transition state barrier on the binding reaction coordinate. A full description of the binding event thus requires parameters such as the concentration of inhibitor (ligand) that results in 50% inhibition (effect) (IC50) or Kd values that report on the thermodynamic stability of the complex, as well as the on- and off-rates (kon and koff) that quantify the lifetime of the complex. Many methods now exist for determining both the thermodynamics and kinetics of biomolecular interactions and are routinely employed across biological space, for example, by underpinning the generation of structure−activity relationships (SARs) or structure−kinetic relationships (SKRs) that guide the selection and optimization of drug leads. This includes competitive radioligand binding, 1 the use of mass spectrometry to quantify unbound ligand, 2 methods that detect binding such as fluorescence (anisotropy, Förster/fluorescence resonance energy transfer (FRET)), 3−6 bioluminescence, 7 or surface plasmon resonance (SPR), 8,9 and assays that monitor change in activity as a function of ligand (agonist, antagonist, inhibitor) concentration. 10 Access to user-friendly programs that can fit data to nonlinear equations has greatly facilitated the ability of investigators to obtain quantitative insight into their systems. In particular, there is no longer a reliance on linearized transformations of common equations which can distort experimental errors, such as the Scatchard plot for equilibrium binding data and the Lineweaver−Burk plot for enzyme kinetics. 11 However, use of programs such as GraphPad Prism or GraFit to perform nonlinear regression generally requires no knowledge of the assumptions and precepts that underlie the equations used for data analysis. There are of course many excellent, authoritative books and papers on assay design and data analysis. This includes Robert Copeland's book "Evaluation of enzyme inhibitors in drug discovery: a guide for medicinal chemists and pharmacologists", 12 and the Assay Guidance Manual (Eli Lilly and NCATS). 13 This Perspective is not intended to replace these sources of information, nor do I attempt to discuss underlying complexities such as the statistics of nonlinear regression. 14 Rather, I seek to raise awareness of some common misunderstandings and thus provide guidelines for authors wishing to publish in ACS ID. In this Perspective, I use the interaction of small molecule chemical compounds with proteins as a paradigm for binding interactions.
■ RIGOR AND REPRODUCIBILITY: BIOLOGICAL AND TECHNICAL REPLICATES
Many organizations now offer courses and seminars on the Responsible Conduct of Research (RCR). For example, the US National Institutes of Health (NIH) requires that all students funded on NIH grants receive training in RCR. 15 One component of RCR is Data Management, which includes topics such as the rigor and reproducibility of scientific research. Obviously, reproducibility is a fundamental goal in the design and development of appropriate, robust assays for quantifying biomolecular interactions. In this regard, I thought it worth clarifying the difference between biological and technical replicates, which are a component of assessing the reproducibility of scientific findings and often a subject of some misunderstanding. Blainey et al. 16 define biological replicates as "parallel measurements of biologically distinct samples that capture random biological variation, which may itself be a subject of study or a noise source", and technical replicates as "repeated measurements of the same sample that represent independent measures of the random noise associated with protocols or equipment". Thus, technical replicates provide information on the precision of the measurement method, while biological replicates inform about sample to sample variation in the behavior of separate reagent preps, cell cultures, or animals. 17 The number of replicates and their treatment in subsequent data analysis, for example, whether or not the replicates are averaged, depends on the type of experiment and its purpose. Below, I briefly introduce nonlinear regression and discuss replicates in the context of curve fitting.
■ GUIDELINES FOR NONLINEAR REGRESSION
Nonlinear regression is a method for fitting an experimental x−y data set, such as a concentration−response relationship, to a mathematical equation. This is achieved by systematically varying the values of the parameters in the equation until the parameter values giving the best agreement between the data and the equation are found. The best fit is defined as the set of parameter values that minimize the squared differences between the measured and calculated y values, summed over all data points (so-called "least-squares" regression). There are several considerations in applying nonlinear regression, including the choice of model (i.e., the fitting equation), whether any parameters should be constrained (such as the Hill coefficient h in an IC50 model), the choice of initial values for each parameter, how replicate data points are treated, deciding whether and how to weight the data points, and how to detect and handle outliers (see Box 1).
Investigators may use whatever level of replication they consider appropriate for measurements that are exclusively aimed at helping make decisions on how best to proceed and that are not intended for publication. However, minimal standards of reproducibility must be met for any result to be publishable, and high standards of reproducibility are required for results on which a major conclusion depends. For example, in general, any IC50 value reported in a publication should be determined using replicate (typically triplicate) measurements at each inhibitor concentration, and the entire IC50 measurement should be repeated at least once (it being acceptable to use the same enzyme and inhibitor preps) to show that it is reproducible. In characterizing the final, optimized inhibitor compound(s), on which the manuscript's claims of important biological activity or other major conclusions are based, higher levels of replication are typically required. These replicates will ideally include using separate preparations of enzyme and inhibitor. This higher level of rigor is required because the reproducibility of the results obtained using these key compounds is central to the validity of the entire publication, and it is therefore essential to show that the results are reproducible and that activity does not vary in unexpected ways from one enzyme prep to another and/or does not result from some contaminant in the inhibitor prep. Regardless of the experimental design, investigators should clearly specify how many and what kind of replicates were performed for each experiment reported (see Box 1).

Box 1. General Points
(i) Provide full experimental details for each assay including protein and ligand concentrations, buffer conditions, reaction temperature, incubation times, and number of replicates.
(ii) Provide data plots together with the fitted curve(s) and the equation(s) used for the data analysis. Report standard errors for the calculated parameters.
(iii) Nonlinear regression includes the following steps: choice of model, whether to constrain any parameters, selection of initial values for each parameter, whether to use differential weighting, how to detect and handle outliers, and whether to average replicates before data fitting. In general, there should be at least two or better three replicates at each experimental set of conditions (e.g., inhibitor concentration). The replicates can be treated as individual data points in curve fitting, or the averaged data can be analyzed while using the standard deviation of the replicates to weight the data. Fitting averaged data without individually weighting the averaged values should be avoided.
(iv) Parameters for the final, optimized inhibitor compound(s) should ideally be based on replicates determined from separate preparations of enzyme and inhibitor.
(v) Curve fitting programs enable data to be analyzed using very complex mathematical models. Generally, an increase in the number of variables used in data fitting will improve the goodness of fit. However, a valid mechanistic reason must be advanced for increasing the number of variables used to fit the data. This may include information obtained from other approaches. For example, the observation of two different enzyme−inhibitor structures (EI and EI*) by X-ray crystallography supports the two-step slow-onset mechanism for the inhibition of the enoyl-ACP reductase from Mycobacterium tuberculosis revealed by progress curve kinetics. 57

In an enzymatic assay, the duplicate or triplicate initial velocity measurements at each inhibitor concentration may come from different wells in the same multiwell plate or from individual enzyme assays performed in a cuvette. Assuming that the reagents are stable and the stock solutions are homogeneous (i.e., all components are fully soluble), the differences in the results obtained between the replicate wells provide a measure of the stochastic variability in diluting and dispensing the reagents, plus any irreproducibility in the performance of the detection instrument (e.g., the plate reader). Two different approaches are commonly employed when applying nonlinear regression to such a data set. In one approach, each measurement is treated as an independent data point and the equation is fitted simultaneously to all data points. Alternatively, some investigators will instead average the replicate measurements and then fit the resulting average values to the equation. In most curve fitting programs, it is possible to weight the averaged values according to the spread or the standard deviation of the replicate measurements for each condition. Doing so ensures that the fitting process places greater weight on fitting the averaged values that were determined most precisely, as shown by the close agreement between the replicate measurements, while placing lower weight on fitting values that showed greater differences between the replicates. In general, fitting averaged data without individually weighting the averaged values should be avoided, because it ignores information contained in the data set about the reliability of each measurement.
While arguments can be made about the relative statistical validity of averaging or not averaging replicate data points before curve fitting, if the data are of good quality (i.e., reasonably accurate and precise), then the curve fitting will return very similar parameter values regardless of whether the data are averaged and how they are weighted. Conversely, if the data quality is poor, the parameter values resulting from the curve fit will be unreliable, regardless of the fitting approach used. One lesson arising from the above discussion is that it is important to check the default settings for nonlinear regression used by the curve fitting software to determine how and whether the program weights the different data points in the set. A second point is that, before averaging replicates, it is important to examine the results to check that there is no evidence of systematic error. For example, if data from a plate reader assay shows a significant difference between replicate measurements depending on which row or column of the assay plate the samples occupy, then this is indicative of a systematic rather than a random variation in results. Another common example is the observation that replicate measurements made at various times over the course of a day increase or decrease systematically, indicating that one or more of the reagents is not stable. If systematic error is evident, it is not appropriate to average the measurements. Instead, the source of the systematic error should be identified and eliminated to allow a valid experiment to be performed.
Some common errors in data fitting involve the generation of best-fit parameter values that fall outside scientifically reasonable limits. For example, when fitting a set of inhibitor concentration−response data to an inhibition curve, it is common for investigators who are inexperienced in quantitative analysis to generate curve fits that extrapolate to a reaction velocity below zero at saturating inhibitor concentrations or to a velocity above that of the uninhibited reaction when extrapolated to zero inhibitor. These outcomes are physically impossible, and it is incumbent on the investigators to make sure that the parameter values they obtain after curve fitting their data are indeed reasonable. One approach that can help avoid these errors is to plot the curve fit for a range of x-variable values that extend for at least 2−3 logs above and below the IC50, so that any nonphysical behavior at these extremes of inhibitor concentration becomes apparent. It is often appropriate to constrain the values of one or more parameters in the fitting equation to fix the minimum or maximum y-variable values to known limits or control values. Thus, it is necessary for the investigator to understand the fitting equation (though not necessarily the algorithm by which the data are fit to this equation) and especially that they know which parameter in the equation corresponds to which feature of the fitted curve.
What To Report. Standard errors should be reported for each parameter, and some curve fitting programs will also calculate confidence intervals which could also be reported (see Box 1). If parameters such as IC 50 values are determined for multiple reagent preps (inhibitor batches/protein preps), then the mean of the values can be reported together with the standard deviation and the number of replicates. In plotting fitted data for publication, the experimental data should be plotted on top of the theoretical curve predicted from the equation using the best-fit parameter values. If the data are averaged, then error bars should be included that represent the range or the standard deviation of the replicate measurements that contributed to each average data point. If the experimental data do not extend to sufficiently low and high values of the x-variable to approach zero effect and a full effect, the line that represents the curve fit should not end at the lowest and highest experimental x-variable values but should extend beyond the data to show that the best-fit equation is consistent with the highest and lowest y-variable values (e.g., full enzyme activity versus zero or background levels of activity) expected from the experiment.
It should be emphasized that these recommendations only scratch the surface of the topic; there is a great deal of additional complexity in curve fitting. For example, curve fitting programs will also calculate a χ2 or R2, which are measures of how well the regression fits the actual data (the goodness of fit). These parameters assess the variation of the actual data from the fitted curve, and in general, an R2 value close to 1 or a χ2 value close to 0 is taken as evidence that the data are fit well by the model. However, caution should be exercised in using these values, since fitting data to the wrong model can still yield what appears to be a statistically "good fit". For those interested in learning more about the intricacies of curve fitting, there are many excellent publications, some of which are referenced here. 14,18−20

■ ENZYME ASSAYS

Steady-State Enzyme Kinetics. Many drug targets are enzymes, and hence, SAR is commonly based on assays in which the effect of compounds on the rate of substrate consumption or product formation is monitored. Consequently, it is worth reminding ourselves of the approximations used in deriving the Michaelis−Menten equation (eq 1), which is based on the simple reaction scheme shown in Figure 1:

v = Vmax[S]/(Km + [S]) (eq 1)

The Michaelis−Menten equation can be derived using the steady-state assumption in which the concentration of ES is assumed to be constant (cf. the Briggs−Haldane derivation). In eq 1, Vmax = kcat[E]total, where [E]total is the total enzyme concentration, and Km = (k2 + kcat)/k1. Km is the Michaelis−Menten constant and represents the apparent dissociation constant of all enzyme-bound species.

Figure 2 (caption, fragment): ... [I] (eq 2) to give an IC50 of 114 nM. The data have been converted into % inhibition, where the response changes from 0% to 100% inhibition over the experiment, so that only one parameter is needed for initial data fitting, constraining the slope factor (Hill coefficient, h) to be 1. In a two-parameter fit, h would be allowed to vary, while in a four-parameter fit, the range over which the response varies (Ymax − Ymin) as well as the background signal (Ymin) are also variables (eq 3). Also shown are the calculated fits if h is constrained to 2 or 0.5, where it can be seen that there is a systematic deviation between the fitted curve and the experimental data points.
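As an illustration of eq 1, here is a minimal sketch (hypothetical substrate and velocity values, not data from this Perspective) that fits initial velocities to the Michaelis−Menten equation to extract Km and Vmax together with their standard errors:

```python
# Sketch: estimate Km and Vmax by nonlinear regression against the
# Michaelis-Menten equation (eq 1). Substrate/velocity values are invented.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([1, 2, 5, 10, 20, 50, 100.0])          # [S], uM
v = np.array([0.9, 1.7, 3.4, 5.0, 6.6, 8.3, 9.0])   # initial velocity, uM/min

popt, pcov = curve_fit(michaelis_menten, s, v, p0=[10.0, 10.0])
perr = np.sqrt(np.diag(pcov))  # standard errors to report with the estimates
print(f"Vmax = {popt[0]:.2f} +/- {perr[0]:.2f} uM/min, "
      f"Km = {popt[1]:.1f} +/- {perr[1]:.1f} uM")
```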
Basic Assumptions and Concentration (Dose)−Response Curves. Similar rate expressions can be derived to account for the impact of an enzyme inhibitor on the rate of the reaction. In inhibition assays, it is often assumed that [I] ≫ [E], just as [S] ≫ [E] is assumed for the substrate, so that [I]free ≈ [I]total (see Box 2). This is because equilibrium constants are determined from the concentrations of reactants present at equilibrium (i.e., [S]free or [I]free), and given the above approximations, data analysis can use the total concentration of substrate or inhibitor added to the reaction.
Enzyme inhibition is most commonly analyzed using initial velocity measurements. Formally, the initial velocity is determined from a tangent to the initial portion of the reaction progress curve, and it is most convenient to use a continuous assay format so that the initial velocity can be accurately measured. However, many assays, including those run in high throughput, often rely on single time point or end point assays from which initial velocity data are extracted (see Box 2). In this case, it is important to recognize the underlying assumption that the reaction velocity is linear until the single time point is taken. If a continuous assay format is not available, we recommend that multiple time points are taken in order to check the linearity of the reaction. One simple control is to demonstrate that vi ∝ [E], and thus, the initial velocity should double if the enzyme concentration is doubled.

Figure 3 (caption): Competitive, noncompetitive/mixed, and uncompetitive inhibition. A competitive inhibitor binds to free enzyme and competes with the substrate, while an uncompetitive inhibitor binds to the ES complex (binds after the substrate). A mixed inhibitor binds to both E and ES, while a pure noncompetitive inhibitor has equal affinity for E and ES (Ki = Ki′).

Figure 4 (caption, fragment): (B) A two-step mechanism in which the rapid formation of the initial EI complex is followed by a slow step leading to the final EI* complex. These are mechanisms A and B from Morrison and Walsh. 30 Note that by convention inhibition rate constants are numbered starting with k3, since k1 and k2 are used to describe substrate binding (Figure 1).

Figure 5 (caption): Progress curve analysis of a two-step slow-binding inhibitor. Under conditions where the reaction velocity is linear in the absence of inhibitor (v0), curvature in the presence of inhibitor is diagnostic of slow-binding inhibition. 30 The figure shows forward progress curve analysis for the inhibition of an enzyme, simulated using Kintek, 32 which follows a two-step induced-fit mechanism in which the rapid formation of the initial enzyme−inhibitor complex (EI) is followed by the slow isomerization of EI to EI*. (A) Fitting of the data to the progress curve equation (eq 5) yields values for vi, the initial velocity, vs, the final steady-state velocity, and kobs, the rate constant for formation of the steady state. (B) The hyperbolic dependence of kobs on [I] is consistent with the two-step induced-fit mechanism, and fitting of the data to eq 6 gives k5 = 0.34 min−1, k6 = 0.029 min−1, and Ki^app = 0.34 μM. (C) Consistent with a two-step mechanism, vi varies with [I], and a fit of the data to eq 7 also gives a value for Ki^app. (D) A fit of vs/v0 against [I] to eq 8 gives Ki*^app = 0.026 μM.
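The linearity check described above is easy to script; the sketch below (simulated progress-curve values, not real data) pulls initial velocities from the early, linear region of each time course and confirms that v roughly doubles when [E] is doubled:

```python
# Sketch: extract initial velocities by a linear fit over the early (linear)
# region of a multi-time-point assay, then check that v_i is proportional
# to [E]. The product values are invented.
import numpy as np

t = np.array([0, 1, 2, 3, 4, 5.0])                   # min
p_1x = np.array([0.0, 2.0, 3.9, 5.8, 7.5, 9.0])      # product, uM at 1x [E]
p_2x = np.array([0.0, 4.0, 7.8, 11.4, 14.6, 17.2])   # product, uM at 2x [E]

# Restrict the fit to the points that still look linear (here the first four).
v1 = np.polyfit(t[:4], p_1x[:4], 1)[0]
v2 = np.polyfit(t[:4], p_2x[:4], 1)[0]
print(f"v(1x) = {v1:.2f} uM/min, v(2x) = {v2:.2f} uM/min, ratio = {v2 / v1:.2f}")
```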
The change in the initial velocity as a function of inhibitor concentration is then often fit to a concentration−response relationship to obtain the IC 50 value for the inhibitor. Software programs, such as GraphPad Prism and GraFit, include several forms of the standard IC 50 equation that differ depending on whether the response increases or decreases with inhibitor concentration and also whether the data are fit to a two or four parameter equation (eqs 2 and 3, respectively).
%inhibition = 100/(1 + (IC50/[I])^h) (eq 2)

Y = Ymin + (Ymax − Ymin)/(1 + (IC50/[I])^h) (eq 3)

In eqs 2 and 3, where the response is assumed to increase with [I], [I] is [I]free, and h is the Hill coefficient or slope factor. Most programs set h as a variable in addition to IC50. However, in a simple binding equilibrium, where there is no cooperativity and only one binding site, h is expected to be 1. It is thus recommended that h is set to 1 during data fitting and then allowed to float only if there is clear indication that the experimental data cannot be adequately modeled using h = 1, for example, if there is an obvious systematic deviation between the fitted curve and the experimental data. There are a number of mechanistic and artifactual reasons that can lead to values of h that deviate from unity, 12 for example, if there is positive (h > 1) or negative (h < 1) cooperativity or if the inhibitor operates through a nonspecific mode of action, e.g., due to aggregation. 21 In addition, if the potency of the inhibitor is underestimated, then the chosen enzyme concentration could result in Ki/[E]T ratios that correspond to tight binding inhibition (Ki/[E]T between 10 and 0.01, Zone B, or <0.01, Zone C), 22 giving h values greater than 1. However, an explanation has to be proposed if h differs significantly from unity (see Box 2). For each IC50 determination, authors should show the concentration−response data and the fitted curve on a semilog plot either in the main text or in supporting information (see Box 1). In addition, the standard concentration−response equation often allows both the minimum and maximum values of the response to vary too, so that the data are fit to a four-parameter equation (four-parameter logistic) (eq 3: Ymax, Ymin, IC50, and h). However, the use of a four-parameter fit implicitly assumes that enzyme inhibition does not vary from 0% to 100%, for example, that there is a background rate that is not affected by enzyme inhibition, and again, there has to be sound reasoning for increasing the number of parameters in the data analysis.

Figure 6 (caption, fragment): ... for the slow-binding inhibition of polypeptide deformylase by actinonin, which follows a two-step binding mechanism. In this case, the intercept on the Y-axis is close to 0, and so the data have been analyzed using a modified version of eq 6 where k6 is set to 0. In this case, k5 and Ki^app are actually kinact and KI, which are the parameters for quantifying irreversible enzyme inactivation (see below). Adapted from Van Aller, G. S., Nandigama, R., Petit, C. M., DeWolf, W. E., Jr., Quinn ...

Figure 7 (caption): Two-step mechanism for irreversible inhibition. Reversible formation of the initial EI complex is followed by a second irreversible step leading to the covalent enzyme−inhibitor complex EI*. The kinetic mechanism is analogous to the reversible two-step mechanism in Figure 4 except that k6 = 0. Irreversible inhibition is normally quantified by kinact/KI, the second-order rate constant for the formation of EI*, where KI is the concentration of inhibitor required to reach the half-maximal rate of inactivation of the enzyme and kinact is the maximum rate of inactivation at saturating inhibitor concentrations. Note that KI is not the same as Ki, the equilibrium constant for dissociation of EI, where Ki = k4/k3. While KI can be numerically equal to Ki (e.g., when k4 ≫ k5), this is often not the case, and kinact/KI should be used to quantify inhibitor potency.
In addition, the minimum rate cannot be less than the assay background rate, and the maximum velocity cannot be greater than the uninhibited reaction velocity. Other experimental issues that may be encountered include limited solubility of the inhibitor, which may prevent 100% inhibition from being reached. Figure 2 shows a typical concentration−response plot where the data have been fit to eq 2 and h has been constrained to 1. Also shown are the fits if h is constrained to 2 or to 0.5.
If there is no prior knowledge of compound potency, then it is not possible to choose appropriate inhibitor concentrations before any measurements have been made. In this case, it is recommended that the extent of inhibition is assessed using several fixed inhibitor concentrations that vary by factors of 10, say 0.1, 1, and 10 μM. Once it is possible to estimate concentrations that span a response from ∼10% to 90% inhibition, it is then important to recognize that the level of inhibition is not a linear function of [I] (Figure 2), and thus, inhibitor concentrations should be chosen on a logarithmic rather than a linear scale to ensure an even distribution of data points across the plot. On a log scale, √10 ≈ 3.16 is halfway between 1 and 10, not 5. In addition, there should be enough data points at high and low concentrations to clearly define the end points for curve fitting. Finally, the pIC50 (−log10(IC50)) rather than the IC50 is sometimes reported to account for the exponential nature of the relationship. 23 It is important to realize that the IC50 value is a function of both the enzyme and substrate concentrations. Thus, caution should be exercised when comparing IC50 values for the inhibition of a specific enzyme by different laboratories, even when determined under "identical" conditions. Ki values, which do not depend on [E] or [S], offer a more rigorous basis for comparison.
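A small sketch of these two points, choosing log-spaced inhibitor concentrations and converting an IC50 to a pIC50 (the 114 nM value from Figure 2 is used purely as a worked number):

```python
# Sketch: a log-spaced inhibitor series spanning ~3 logs around an expected
# IC50 of ~100 nM, plus an IC50 -> pIC50 conversion.
import numpy as np

conc = np.logspace(-8.5, -5.5, 10)         # M; evenly spaced on a log axis
print(np.round(conc * 1e9, 1))             # the same series in nM
print(f"pIC50 = {-np.log10(114e-9):.2f}")  # ~6.94 for an IC50 of 114 nM
```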
Coupled Assays. Continuous assays often follow the change in a spectroscopic signal, such as the absorbance or fluorescence of the substrate or product, as a function of time. However, in situations where neither substrate nor product has a convenient spectroscopic signature, the enzyme reaction of interest may be coupled to a second reaction that does lead to a change in signal that can be easily monitored. In this case, it is important to show that the coupling reaction is not rate limiting and that the inhibitor does not affect the activity of the coupling enzyme (see Box 2). In addition, it is often not realized that there may be an initial lag in reaction velocity before the initial steady-state velocity is reached. The lag depends on factors such as the kcat and Km of the coupling enzyme and can be minimized by adjusting the concentration of the coupling enzyme. 24 If it is not possible to eliminate the lag phase by adjusting the assay conditions, then the initial velocities should be obtained after ensuring that the lag is complete.

Figure 8 (caption): Irreversible inhibition. The kinetics of irreversible enzyme inhibition quantified by progress curve analysis. (A) Time-dependent enzyme inactivation as a function of inhibitor concentration (μM) has been analyzed using a simplified version of the progress curve equation (eq 9), since k6 = 0, which yields values of kobs and vi as described in Figure 5. 12 (B) A plot of kobs vs [I] is hyperbolic, consistent with a two-step mechanism in which the initial noncovalent association of the inhibitor with the enzyme is followed by a second step leading to formation of the final covalent enzyme−inhibitor complex. Fitting of the data to eq 10 yields values for kinact = 0.033 min−1, KI^app = 0.28 μM, and kinact/KI^app = 0.12 μM−1 min−1. Again, by analogy to the equations for slow-binding inhibitors, eq 10 includes KI^app, the apparent value for KI, since the presence of substrate will affect the concentration of inhibitor required to reach 1/2 kinact. Time-dependent enzyme inactivation was simulated using Kintek. 32

Box 2. Enzyme Assays and Concentration−Response Curves

(vi) When establishing assays, consider the methods that will be used to analyze the data.

Ki instead of IC50. While concentration−response curves represent the bulk of activity measurements in SAR, enzyme inhibition can also be quantified by obtaining Ki values. This can be accomplished by determining kcat and Km values at different fixed inhibitor concentrations and using this information to generate Ki values, together with information on the mechanism of inhibition: competitive (Ki), noncompetitive/mixed (Ki and Ki′), and uncompetitive (Ki′) (Figure 3). Although this is more laborious than an IC50 measurement, Ki values enable a better comparison of inhibitor potency between compounds, since IC50 values depend on the precise experimental conditions that are used, such as the substrate concentration, and provide no mechanistic information about the mode of inhibition (see Box 2). Ki values can be calculated from IC50 values using the Cheng−Prusoff relationship; 25 however, this requires knowledge of the mechanism of inhibition as well as the ratio of [S]/Km. The mechanistic information derived from Ki measurements is useful since not all compounds are competitive inhibitors (although this is often assumed in the absence of any mechanistic data). In addition, the observation of pure noncompetitive inhibition (Ki = Ki′) may be indicative of a promiscuous inhibitor, since it is highly unlikely that an inhibitor has equal affinity for both E and ES. 21,26 Furthermore, uncompetitive inhibition is often considered an advantage, since the increase in substrate concentration caused by target inhibition will increase the level of inhibition rather than competing off the inhibitor.
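For the competitive case, the Cheng−Prusoff conversion is a one-liner; the values below are illustrative only:

```python
# Sketch: Cheng-Prusoff conversion for a competitive inhibitor,
# Ki = IC50 / (1 + [S]/Km). All numbers are invented for illustration.
def ki_competitive(ic50, s, km):
    return ic50 / (1.0 + s / km)

# e.g., IC50 = 114 nM measured at [S] = 50 uM with Km = 25 uM gives Ki = 38 nM
print(f"Ki = {ki_competitive(114e-9, 50e-6, 25e-6) * 1e9:.0f} nM")
```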
Complexities: Tight Binding Inhibition. Tight binding inhibition occurs when the Ki value is similar to, or below, the enzyme concentration (Ki/[E]T < 10), leading to depletion of the free inhibitor in the assay so that the assumption [I]free ≈ [I]T is no longer valid (see Box 3). It is thus an experimental definition, since it is based on the lowest enzyme concentration that can be used in the assay. Clearly, this is a good problem to have, since we generally want compounds that are very potent. To account for inhibitor depletion under tight binding conditions, inhibition is quantified using the Morrison equation, which is written in terms of the total enzyme and inhibitor concentrations rather than [I]free.

Complexities: Slow-Binding Inhibition. In analyzing inhibition data, it is also often assumed that the system rapidly reaches equilibrium (i.e., in the mixing time of the experiment). However, some compounds are slow-binding inhibitors in which the steady state is reached slowly on the time scale of the assay that is used (see Box 4). Slow-binding inhibitors will also dissociate slowly from their targets, which has important pharmacological consequences, since the time scale for breakdown of the drug−target complex can be on the same time scale as the rate of drug elimination. 27−29 Slow-binding inhibition can occur via several mechanisms; however, the most commonly encountered are a one-step mechanism or a two-step mechanism in which the rapid formation of the initial enzyme−inhibitor complex is followed by a slow step leading to the final enzyme−inhibitor complex (Figure 4).
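A sketch of the Morrison equation for tight binding, giving the fractional velocity as a function of total inhibitor; the enzyme concentration and Ki^app values are invented:

```python
# Sketch: the Morrison quadratic for tight-binding inhibition, written in
# terms of [I]_total and [E]_total (no [I]_free ~ [I]_total assumption).
import numpy as np

def morrison_fractional_velocity(i_tot, e_tot, ki_app):
    """v_i/v_0 for a tight-binding inhibitor."""
    a = e_tot + i_tot + ki_app
    ei = (a - np.sqrt(a**2 - 4.0 * e_tot * i_tot)) / 2.0  # [EI] at equilibrium
    return 1.0 - ei / e_tot

i_tot = np.logspace(-10, -7, 7)  # M
print(morrison_fractional_velocity(i_tot, e_tot=1e-9, ki_app=5e-10))
```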
Slow-binding inhibition can be diagnosed by observing curvature in the enzyme assay under conditions where the uninhibited reaction is linear, and data are often analyzed by forward or reverse (jump dilution) progress curves (see Box 4). Forward progress curves are fit to eq 5:

[P] = vs·t + (vi − vs)(1 − e^(−kobs·t))/kobs (eq 5)

where vi is the initial velocity in the presence of the inhibitor, vs is the final steady-state velocity, and kobs is the rate constant for onset of inhibition. The dependence of kobs on [I] can be used to distinguish between one- and two-step mechanisms: in this case, the plot is hyperbolic, consistent with an induced-fit two-step mechanism (mechanism B), and a fit of the data to eq 6 gives values for k5 and k6 (Figure 4) as well as Ki^app, the dissociation constant of EI. Ki^app can also be extracted directly from a plot of vi/v0 against [I] (eq 7), where v0 is the rate in the absence of the inhibitor, while Ki*^app, the overall dissociation constant, can be obtained by fitting the dependence of vs/v0 on [I] to eq 8. In a one-step mechanism, vi does not vary with [I]. 31 Note that in these equations we use Ki^app and Ki*^app, the apparent values for the equilibrium dissociation constants. This is because progress curve analysis is performed at a single substrate concentration, which is often very high to ensure that the velocity in the absence of inhibitor is linear. Thus, the experimentally measured dissociation constant will depend on the amount of enzyme that is present as ES (see Box 4). The true values for the equilibrium constants can be obtained from the apparent values using the Cheng−Prusoff equation if both the mechanism of inhibition and the value of the substrate Km are known. For example, if [S] > Km, then Ki^app will be larger than Ki by a factor of 1 + [S]/Km for a competitive inhibitor. Analogous correction factors are also available for noncompetitive and uncompetitive inhibitors.

Several challenges exist in analyzing slow-binding inhibitors. For example, it may be difficult, due to issues such as substrate solubility, cost, or availability, to achieve reaction conditions where the initial velocity in the absence of inhibitor is linear for a sufficient time to observe curvature in the presence of inhibitor. In addition, it may be difficult to distinguish one- and two-step binding mechanisms. For example, if the initial binding event in a two-step mechanism has a relatively high Ki^app, then it may be difficult to reach a sufficient inhibitor concentration to observe curvature in the kobs vs [I] plot. In this case, a two-step binding mechanism can give the appearance of a one-step mechanism. Note that for two-step binding, vi is also expected to vary with [I] (Figure 5C); however, this may also be difficult to detect for the same reason. Finally, if the rate of dissociation of inhibitor from the enzyme is very slow, then it may be difficult to distinguish reversible from irreversible inhibition, since the intercept on the Y-axis of the kobs vs [I] plot will be close to 0 (see Box 4). For reversible inhibitors with slow off-rates, it may be easier to perform a jump dilution (reverse progress curve) assay, in which enzyme and inhibitor are preincubated for several hours before dilution into a reaction mixture containing substrate. In this case, the reaction velocity will increase as the inhibitor slowly dissociates from the enzyme, leading to a reverse progress curve.
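The model functions for eqs 5 and 6 are easy to code; the sketch below defines both (the parameter values in the demo calls are arbitrary, not data from this Perspective), so that each progress curve in an [I] series can be fit with scipy and the resulting kobs values then fit to the two-step expression:

```python
# Sketch: model functions for slow-binding progress curve analysis.
import numpy as np

def progress_curve(t, vi, vs, kobs, p0=0.0):
    """Product vs time for slow-binding inhibition (eq 5)."""
    return p0 + vs * t + (vi - vs) * (1.0 - np.exp(-kobs * t)) / kobs

def kobs_two_step(i, k5, k6, ki_app):
    """Hyperbolic [I]-dependence of k_obs for a two-step mechanism (eq 6)."""
    return k6 + k5 * i / (ki_app + i)

# Each curve in an inhibitor series would be fit to progress_curve (e.g. with
# scipy.optimize.curve_fit), and the fitted k_obs values then fit to
# kobs_two_step to extract k5, k6, and Ki_app.
t = np.linspace(0.0, 30.0, 7)                        # min
print(progress_curve(t, vi=1.0, vs=0.1, kobs=0.3))   # product formed
print(kobs_two_step(np.array([0.1, 0.5, 2.0]), k5=0.34, k6=0.029, ki_app=0.34))
```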
For slowly dissociating inhibitors, it may also be possible to prepare and purify the enzyme−inhibitor complex and then directly monitor the rate of inhibitor dissociation after diluting the complex. For high affinity inhibitors, it will be necessary to use a very sensitive method to quantify free inhibitor concentration such as mass spectrometry or radioactivity. 33 Several examples of the above scenarios are shown in Figure 6.
Complexities: Irreversible Inhibition. Irreversible inhibitors are compounds that bind to the target and do not dissociate. As noted above, some slow-binding inhibitors, including those that form a covalent complex with the target as well as those that are noncovalently bound, can appear to be irreversible if the off-rate is very slow. In this situation, it is important to perform additional assays to determine whether the compounds are indeed reversible, such as jump dilution assays to check for reactivation of the enzyme. While it is tempting to use IC50 measurements to quantify the "potency" of irreversible inhibitors, such experiments are flawed since they cannot account for the time dependence of inhibition, and virtually any IC50 can be achieved if the reaction is allowed to incubate for long enough. Instead, it is more appropriate to determine kinact/KI, which is the second-order rate constant for formation of the irreversible enzyme−inhibitor complex (see Box 4). Irreversible inhibitors often operate through a two-step mechanism in which the inhibitor first binds reversibly to the enzyme, followed by covalent bond formation (Figure 7).
Analysis of irreversible inhibition follows a similar approach to that used for slow-binding inhibitors, except that a simplified progress curve equation can be used since k6 = 0 and vs = 0 (eq 9). Simulated data are shown in Figure 8, where it can be seen that the appearance of the plots is very similar to those for two-step slow-binding reversible inhibition, except that the intercept of the kobs vs [I] plot passes through 0. In addition, vs values obtained from forward progress curves will have a finite value for reversible inhibitors, whereas for irreversible inhibitors they should tend to 0.
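A sketch of this analysis, fitting kobs vs [I] to eq 10 with a zero intercept; the simulated values are chosen to mirror the Figure 8 parameters (kinact ≈ 0.033 min−1, KI^app ≈ 0.28 μM) and are not experimental data:

```python
# Sketch: quantify irreversible inhibition by fitting k_obs vs [I] to
# eq 10, k_obs = kinact*[I]/(KI_app + [I]); the intercept is 0 since k6 = 0.
import numpy as np
from scipy.optimize import curve_fit

def kobs_irreversible(i, kinact, ki_app):
    return kinact * i / (ki_app + i)

i = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])                     # uM
kobs = np.array([0.005, 0.0087, 0.0137, 0.0213, 0.0258, 0.029])   # min^-1

(kinact, ki_app), _ = curve_fit(kobs_irreversible, i, kobs, p0=[0.03, 0.3])
print(f"kinact = {kinact:.3f} min^-1, KI_app = {ki_app:.2f} uM, "
      f"kinact/KI_app = {kinact / ki_app:.3f} uM^-1 min^-1")
```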
Box 4. Slow and Irreversible Inhibition
(xv) Check for time-dependent effects in binding/inhibition. IC50 shifts following preincubation of enzyme and inhibitor before initiating the reaction by addition of substrate can be used to check for slow-binding inhibition, but this only works for an inhibitor that binds in the absence of the substrate. In addition, the decrease in activity following preincubation could have other explanations, for example, that the inhibitor is causing the enzyme to denature.
(xvi) Slow-binding inhibition can often be diagnosed by observing curvature in reaction progress curves under conditions where the reaction in the absence of inhibitor is linear. However, it is important to show that curvature does not result from instability of the enzyme under the assay conditions. In addition, linearity may only be accomplished by using high concentrations of substrate, and thus, the apparent dissociation constants obtained under the specific assay conditions (Ki^app and Ki*^app) may differ significantly from the true values, as described by the Cheng−Prusoff equations.
(xvii) Slow-binding inhibition can occur by both one- and two-step mechanisms, but the ability to distinguish these two mechanisms may be affected by inhibitor solubility and/or the dissociation constant of the first step in a two-step mechanism. For example, a linear increase in kobs as a function of [I] is consistent with a one-step mechanism. However, this plot can also be linear for a two-step mechanism if the highest inhibitor concentration that is used does not saturate the enzyme (i.e., [I] < Ki^app).
(xviii) If the off-rate (k4 or k6) is very slow, then the intercept on the Y-axis of the kobs vs [I] plot may be close to 0, and thus, it may be difficult to distinguish reversible slow-binding inhibition from irreversible inhibition. Additional methods may then be needed to verify that the inhibition is indeed irreversible and/or to quantify the off-rate for very slowly dissociating compounds.
(xix) For compounds that bind very slowly, it may be easier to use reverse progress curve analysis (jump dilution), where enzyme and inhibitor are preincubated prior to dilution into the reaction solution. 2
(xx) Tight-binding inhibition may occur with very potent compounds, requiring data analysis that is based on [I]total rather than [I]free, as described by the Morrison equation.
(xxi) SKR for irreversible inhibition should use kinact/KI and not IC50 values.
If the initial binding event is weak, then it may not be possible to saturate the enzyme in order to determine values for k inact and K I app individually. In this case, the plot of k obs vs [I] will be linear with a slope of k inact /K I app . An interesting recent example of this behavior is the covalent inhibition of KRAS G12C by compounds that bind weakly (K i > 64 μM) but react rapidly with C12 (k inact > 0.019 s −1 ). 37
■ DIRECT BINDING ASSAYS
Binding kinetics can also be determined using approaches that directly measure the formation and breakdown of the drug−target complex. These assays are used when no functional assay is available or when binding of ligands to the free enzyme is being assessed. Numerous methods are available, ranging from the use of radiolabeled compounds and mass spectrometry, which provide the highest levels of sensitivity, to approaches in which there is a change in spectroscopic signal on binding. In many cases, the analysis of data from direct-binding assays depends on the same fundamental principles and assumptions described for enzyme assays. Below, I briefly discuss several methods that are in common use.
Radioligand Binding Assays. Receptor−ligand binding assays have historically relied on the use of radiolabeled ligands, which permit binding to be measured for purified receptors or for receptors on cell surfaces. 38,39 In these assays, the concentration of bound ligand is often determined directly after free ligand is removed from an immobilized receptor preparation using a wash step (often performed with cold buffer). This approach assumes that the wash step is fast relative to the rate of ligand dissociation. In addition, it is important to vary incubation times to ensure that the system is at equilibrium and also to account for nonspecific binding. Data analysis often assumes that the concentration of receptor ([R]) is very low, so that [L] ≫ [R] at every concentration of ligand ([L]) and thus that [L]free ≈ [L]total (see Box 5). However, if Kd/[R] < 10, then ligand depletion will occur, and the quadratic Morrison equation must be used, with the caveat that the concentration of receptor must be accurately known. Finally, a simple one-site binding event should have a Hill slope (h) of 1. Once a radioligand binding assay has been developed, the binding of additional ligands can be measured using a competitive radioligand binding assay. This can be performed under equilibrium conditions to obtain the Kd for binding but can also be adapted, as described by Motulsky and Mahan, to provide the on- and off-rates for ligand binding. 1,40

Fluorescence Binding Assays. Many direct binding assays are based on a change in spectroscopic signal upon formation of the receptor−ligand complex. This includes methods in which the fluorescence of the protein and/or ligand is altered. The change in fluorescence is determined as a function of ligand concentration, and the Kd for the ligand is obtained by fitting the data to a binding isotherm using the equations described above, where the same assumptions, such as [L] ≫ [R], may apply (see Box 5). Again, it is suggested that h is set to 1 for initial data fitting. Fluorescence-based methods are generally less sensitive than techniques based on radioactivity or mass spectrometry, with a typical detection limit of ∼10−8 M, although there are examples of assays where fluorescence and radiolabeling perform similarly. 3 Important experimental considerations include the need to account for the inner filter effect, in which the fluorophore is sufficiently concentrated to attenuate the excitation light and may also reabsorb some of the emitted light. In addition, while binding assays with radiotracers of known specific activity can give the absolute concentration of the receptor, the relative change in fluorescence intensity at saturation often cannot be directly related to concentration.
If there is no change in ligand fluorescence upon binding, the anisotropy or polarization may still be affected by complex formation due to a decrease in the rate of tumbling. Indeed, a common approach to the development of a binding assay is to append a fluorophore onto a ligand in a way that is not anticipated to perturb binding and then to use steady-state and time-resolved anisotropy measurements to quantify binding. A second common fluorescence-based assay is to attach a second fluorophore to the protein and use Forster/fluorescence resonance energy transfer (FRET) to monitor binding using either steady-state or time-resolved detection. 41 FRET and methods such as bioluminescence resonance energy transfer (BRET) have been translated to quantifying drug−target interactions in living cells, for example by tagging the target with NanoLuc luciferase and monitoring BRET to a fluorescent ligand (NanoBRET). 7,42 Using a fluorescent competitor ligand, NanoBRET and other proximity-based assays can be used to assess binding of unlabeled ligands in an approach analogous to the competitive radioligand binding assay described above. When a competition assay is used, it is advisable to accurately determine the binding of the competitor so that the kinetic parameters for association and dissociation of the competitor are not also variables in fitting the competition binding data.
■ BIOLUMINESCENT REPORTER ASSAYS
In addition to bioluminescence resonance energy transfer (BRET), luciferases have found wide application in cell-based genetic reporter assays in which regulatory elements that control gene transcription are coupled to luciferase gene expression. This approach has proved particularly useful for drug targets such as GPCRs and nuclear hormone receptors, which regulate gene transcription. Andruska et al. report an HTS to discover inhibitors of 17β-estradiol (E2)−estrogen receptor α (ERα) induced breast cancer cell proliferation, 43 using a luciferase reporter whose expression is driven by three copies of the consensus estrogen response element, (ERE)3. The authors discuss compounds that show activity in the primary screen due to a direct effect on luciferase or that are broadly cytotoxic, in addition to compounds that affect cell proliferation through inhibition of ERα.
Box 5 (fragment): ... Generally, it is advisable to first quantify the binding of the competitor on its own in order to reduce the number of variables in fitting data obtained from the competition assay. (xxiv) Data obtained by methods such as SPR or ITC will usually be analyzed by software provided with the instruments. While this is convenient, investigators should determine what assumptions have been made in deriving the mathematical models that are used for data analysis.

Surface Plasmon Resonance and Isothermal Titration Calorimetry. Biophysical methods play a major role in characterizing the structure, kinetics, and thermodynamics of biomolecular interactions. While techniques such as X-ray crystallography and NMR spectroscopy are primarily used to provide insight into the structure of protein−ligand complexes, surface plasmon resonance (SPR) and isothermal titration calorimetry (ITC) are both label-free approaches to quantifying protein−protein and protein−ligand interactions. 44 There are many excellent papers on the application of both SPR and ITC to the analysis of biomolecular interactions, and I only mention them here to draw the reader's attention to these methods. In SPR, one of the binding components is tethered to a surface, and the change in SPR signal caused by the interaction with the second component yields both the thermodynamic and kinetic parameters for the interaction. 9 An example of a protein−protein interaction quantified by SPR is shown in Figure 9. 45 In addition to being label free, SPR can be deployed in relatively high throughput and is heavily used in the pharmaceutical industry for drug discovery campaigns. Experimental caveats include the ability to stably immobilize one of the binding partners and to then regenerate the surface after each binding reaction, and mass transfer effects that can mask the actual binding reaction, for example, if the transfer of the analyte from bulk solution to the sensor surface is slower than the binding event. 46,47 In addition, the tethered protein can also have different properties compared to the untethered protein in solution, which may alter ligand binding. HTS libraries invariably contain compounds, such as promiscuous aggregators, that give false hits due to assay interference, 26 and Giannetti et al.
give clear examples of sensorgrams that arise from nonideal binding events caused by such compounds. 48 One interesting recent advance is the development of a "chaser" method that improves the ability of SPR to accurately quantify the kinetics of slowly dissociating ligands and thus overcome problems associated with signal drift. 8

ITC is a solution-based method in which the heat liberated (exothermic) or absorbed (endothermic) by the binding event is used to measure thermodynamic binding parameters such as Kd, ΔH, and ΔS (Figure 10). 49 In contrast to SPR, ITC measures the binding reaction in solution, thus avoiding immobilization of a binding partner, but it also usually requires larger amounts of sample and has lower throughput. One important aspect of ITC assay design is to minimize heat changes caused by mixing the solutions that contain the protein (cell) and analyte (syringe), and thus, it is desirable to dissolve both components in identical buffer solutions. It is also important to ensure that there is sufficient time between each injection so that the system can come to equilibrium. Errors in the concentration of the protein will impact the stoichiometry (n), while errors in the concentration of the analyte will affect n, Kd, and ΔH. 50 In addition, concentrations should be chosen so that n × [protein]/Kd = 10−50; however, meeting this requirement for either very high or very low affinity ligands may be challenging. For example, high affinity ligands will require low protein concentrations, which will result in small changes in the amount of heat absorbed or released for each injection, which may be difficult to measure. Conversely, weak binders will require high concentrations of both protein and analyte, which may be difficult to attain due to solubility and/or availability of reagents. While ITC can quantify interactions over a range of affinities from nM to sub-mM, this range depends on the absolute heat change and can be extended by using a competitive ligand. 51 Finally, although ITC is used almost exclusively for equilibrium binding measurements, there are reports that this method can also yield the kinetic parameters for binding. 52,53

Both methods require specialized equipment, and in each case, the software that controls the instrument also includes curve fitting programs to analyze the data. Following the theme of this Perspective, investigators are urged not to treat the packaged data analysis programs as black boxes for data fitting but instead to understand which equations are being fitted to the data and what, if any, assumptions are made in this procedure (see Box 5). In SPR, curve fitting of the sensorgrams will yield the on- and off-rates for ligand binding, which are then used to calculate a Kd value. However, Kd values can also be calculated directly by plotting the signal at steady state (Response, RU) as a function of ligand concentration, provided that the injection time is sufficiently long for the system to reach steady state. 55 In ITC, the binding isotherm is generated by plotting the amount of heat released or absorbed in each analyte injection as a function of analyte concentration. Curve fitting then yields n (stoichiometry), Ka (the association binding constant, where Ka = 1/Kd), and ΔH, from which ΔS and ΔG are calculated. 56 Initial curve fitting often assumes a 1:1 binding interaction, in which case a value of n = 1 is expected. However, as noted above, n can often deviate from 1 due to inaccuracies in either the protein or ligand concentration.
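As a sketch of the steady-state route to Kd, the example below (invented response values, not real sensorgram data) fits plateau responses to Req = Rmax·C/(Kd + C):

```python
# Sketch: steady-state SPR affinity analysis, fitting the plateau response
# against analyte concentration. Numbers are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def r_eq(c, rmax, kd):
    return rmax * c / (kd + c)

c = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0]) * 1e-6  # analyte, M
resp = np.array([7.8, 20.0, 44.5, 69.5, 88.5, 96.0])   # steady-state RU

(rmax, kd), _ = curve_fit(r_eq, c, resp, p0=[100.0, 1e-7])
print(f"Rmax = {rmax:.1f} RU, Kd = {kd * 1e9:.0f} nM")
```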
■ SUMMARY: GUIDELINES FOR PUBLICATION OF QUANTITATIVE INHIBITION/BINDING DATA In summary, many methods are available for quantifying biomolecular interactions. The choice of method will depend on the specific system under investigation, and in this Perspective, I have only attempted to highlight some of the most common approaches in order to draw attention to key experimental considerations, and also common problems (see Box 6), that are encountered in developing assays and analyzing the data that are produced. Instruments such as those used for SPR and ITC measurements come with their own software for analyzing the binding data, and while I have not attempted to describe the mathematical models used for these approaches, users should be aware of the underlying assumptions used here too.
■ ABBREVIATIONS
BRET, bioluminescence resonance energy transfer; FRET, Forster/fluorescence resonance energy transfer; h, Hill coefficient; IC50, concentration of inhibitor (ligand) that results in 50% inhibition (effect); ITC, isothermal titration calorimetry; SKR, structure−kinetic relationship; SPR, surface plasmon resonance

Box 6. Common Problems

(xxv) The observation of a background rate (drift) before all the reagents have been combined could have several explanations, including instrument drift, reagent instability, or changes in reagent solubility. It can be useful to scan the UV−vis spectrum of the reagent, since precipitation will lead to light scattering, which will manifest as an increase in absorption across the whole absorbance spectrum, increasing as the wavelength decreases (since scattering intensity ∝ λ−4). Proteins and other reagents can also bind to surfaces, leading to changes in the effective concentration in solution. One useful control is to double [E] and show that the observed rate doubles. This is a good check that the change in signal is due to the enzyme-catalyzed reaction.
(xxvi) Systematic errors in replicates could be due to reagent instability, and it is important to stagger replicates over the whole course of an experiment (e.g., at several times during the day) to check for systematic changes in activity. Proteins with essential Cys residues can lose activity due to oxidation, and reducing reagents (such as DTT) are often added to stock solutions or assays. Controls should be performed to check that the reducing reagent does not react with other reagents. The oxidized forms of some reducing reagents can have absorbance at 280 nm, so the background absorbance of a buffer containing DTT can increase slowly with time. Ligands (inhibitors) can also be unstable, especially if they contain reactive groups, and/or can aggregate or precipitate either before or after being added to the assay.
(xxvii) IC50 values less than 1/2[E] could occur if the enzyme is less active than assumed, for example, if it is not pure or if a fraction of the enzyme is inactive (e.g., due to oxidation; see above).
(xxviii) There can be several mechanistic or artifactual explanations for concentration−response plots with Hill coefficients (h) differing from unity. For instance, compounds that form colloidal aggregates will likely inhibit enzymes through a nonspecific mode of action, which often manifests as a steep response (h > 1). Feng and Shoichet have suggested a number of approaches to test for promiscuous inhibition. 58
"year": 2019,
"sha1": "9d58f32cf52f3618ac1e9c6fa958376ff1526d19",
"oa_license": "acs-specific: authorchoice/editors choice usage agreement",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsinfecdis.9b00012",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "2c2b96bed954a9de2cab4084fc45ac86f44b4420",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
Surface Plasmon Resonance of Nanoparticles and Applications in Imaging
In this paper we provide a mathematical framework for localized plasmon resonance of nanoparticles. Using layer potential techniques associated with the full Maxwell equations, we derive small-volume expansions for the electromagnetic fields, which are uniformly valid with respect to the nanoparticle's bulk electron relaxation rate. Then, we discuss the scattering and absorption enhancements by plasmon resonant nanoparticles. We study both the cases of a single and multiple nanoparticles. We present numerical simulations of the localized surface plasmonic resonances associated to multiple particles in terms of their separation distance.
Introduction
Localized surface plasmons are charge density oscillations confined to metallic nanoparticles. Excitation of localized surface plasmons by an electromagnetic field at an incident wavelength where resonance occurs results in strong light scattering and an enhancement of the local electromagnetic fields. Recently, the localized surface plasmon resonances of nanoparticles have received considerable attention for their applications in biomedicine. They have enabled applications including the sensing of cancer cells and their photothermal ablation. Plasmon resonant nanoparticles such as gold nanoparticles offer, in addition to their enhanced scattering and absorption, biocompatibility, making them suitable not only for use as contrast agents but also in therapeutic applications [38].
According to the quasi-static approximation for small particles, the surface plasmon resonance peak occurs when the particle's polarizability is maximized. Recently, it has been shown that plasmon resonances in nanoparticles can be treated as an eigenvalue problem for the Neumann-Poincaré operator, which leads to direct calculation of resonance values of permittivity and optimal design of nanoparticles that resonate at specified frequencies [2,31,45]. Classically, the frequency-dependent permittivity of metallic nanoparticles can be described by a Drude model which determines the material's dielectric and magnetic responses by considering the motion of the free electrons against a background of positive ion cores.
In this paper, we provide a rigorous mathematical framework for localized surface plasmon resonances. We consider the full Maxwell equations. Using layer potential techniques, we derive the quasi-static limits of the electromagnetic fields in the presence of nanoparticles. We prove that the quasi-static limits are uniformly valid with respect to the nanoparticle's bulk electron relaxation rate. Note that uniform validity with respect to the contrast was proved in [49] in the context of small volume expansions for the conductivity problem. Then, we discuss the scattering and absorption enhancements by plasmon resonant nanoparticles. The nanoscale light concentration and near-field enhancement available to resonant metallic nanoparticles have been a driving force in nanoplasmonics. We first consider a single nanoparticle. Then we extend our approach to multiple nanoparticles. We study the influence of local environment on the near-field behavior of resonant nanoparticles. We simulate the localized surface plasmonic resonances associated to multiple particles in terms of their separation distance.
The paper is organized as follows. In section 2, we introduce localized plasmonic resonances as the eigenvalues of the Neumann-Poincaré operator associated with the nanoparticle. In section 3, we describe a general model for the permittivity and permeability of nanoparticles as functions of the frequency. In section 4, we recall useful results on layer potential techniques for Maxwell's equations. Section 5 is devoted to the derivation of the uniform asymptotic expansions; there we rigorously justify the quasi-static approximation for surface plasmon resonances. Our main results are stated in Theorems 5.9 and 5.10. In section 6, we illustrate the validity of our results by a variety of numerical simulations. The paper ends with a short discussion.
Plasmonic resonances
We first introduce the Neumann-Poincaré operator of an open connected domain D with C^{1,η} boundary in R^d (d = 2, 3) for some 0 < η < 1. Given such a domain D, we consider the following Neumann problem:

Δu = 0 in D, ∂u/∂ν = g on ∂D, ∫_{∂D} u dσ = 0, (2.1)

where g ∈ L²₀(∂D), with L²₀(∂D) being the set of functions in L²(∂D) with zero mean value. In (2.1), ∂/∂ν denotes the normal derivative. We note that the Neumann problem (2.1) can be rewritten as a boundary integral equation with the help of the single-layer potential. Given a density function ϕ ∈ L²(∂D), the single-layer potential S_D[ϕ] is defined by

S_D[ϕ](x) = ∫_{∂D} Γ(x − y) ϕ(y) dσ(y), x ∈ R^d, (2.2)

where Γ is the fundamental solution of the Laplacian in R^d:

Γ(x) = (1/(2π)) ln|x| for d = 2, Γ(x) = −|x|^{2−d}/((d − 2)ω_d) for d ≥ 3, (2.3)

where ω_d denotes the surface area of the unit sphere in R^d. It is well-known that the single-layer potential satisfies the following jump condition on ∂D:

∂(S_D[ϕ])/∂ν |_± = (±(1/2)I + K*_D)[ϕ], (2.4)

where the superscripts ± indicate the limits from outside and inside D, respectively, and K*_D : L²(∂D) → L²(∂D) is the Neumann-Poincaré operator defined by

K*_D[ϕ](x) = (1/ω_d) ∫_{∂D} (⟨x − y, ν(x)⟩ / |x − y|^d) ϕ(y) dσ(y), (2.5)

with ν(x) being the outward normal at x ∈ ∂D. We note that K*_D maps L²₀(∂D) onto itself.
With these notions, the Neumann problem (2.1) can then be formulated as

(−(1/2)I + K*_D)[ϕ] = g, u = S_D[ϕ] in D. (2.6)

Therefore, the solution to the Neumann problem (2.1) can be reformulated as a solution to a boundary integral equation involving the Neumann-Poincaré operator K*_D.
The operator K * D arises not only in solving the Neumann problem for the Laplacian but also for representing the solution to the transmission problem as described below.
Consider an open connected domain D with C² boundary in R^d. Given a harmonic function u₀ in R^d, we consider the following transmission problem in R^d:

∇ · (ε_D ∇u) = 0 in R^d, u(x) − u₀(x) = O(|x|^{1−d}) as |x| → ∞, (2.7)

where ε_D = ε_c χ(D) + ε_m χ(R^d\D), with ε_c, ε_m being two positive constants, and χ(Ω) is the characteristic function of the domain Ω = D or R^d\D. With the help of the single-layer potential, we can rewrite the perturbation u − u₀, which is due to the inclusion D, as

u = u₀ + S_D[ϕ], (2.8)

where ϕ ∈ L²(∂D) is an unknown density, and S_D[ϕ] is the refraction part of the potential in the presence of the inclusion. The transmission problem (2.7) can be rewritten as

u|₊ = u|₋ and ε_m ∂u/∂ν|₊ = ε_c ∂u/∂ν|₋ on ∂D. (2.9)

With the help of the jump condition (2.4), solving the system (2.9) can be regarded as solving for the density function ϕ ∈ L²(∂D) in the integral equation

(λI − K*_D)[ϕ] = ∂u₀/∂ν on ∂D. (2.10)

With the harmonic property of u₀, we can write

∂u₀/∂ν(x) ≈ Σ_α ∂_α u₀(z) ν_α(x), x ∈ ∂D. (2.11)

Consider ϕ_α as the solution of the equation involving the Neumann-Poincaré operator:

(λI − K*_D)[ϕ_α] = ν_α. (2.12)

The invertibilities of the operator ((ε_c + ε_m)/(2(ε_c − ε_m)) I − K*_D) from L²(∂D) onto L²(∂D) and from L²₀(∂D) onto L²₀(∂D) are proved, for example, in [8,41], provided that |(ε_c + ε_m)/(2(ε_c − ε_m))| > 1/2. We can substitute (2.11) and (2.12) back into (2.8) to get

u(x) ≈ u₀(x) + Σ_α ∂_α u₀(z) S_D[ϕ_α](x). (2.13)

Using the Taylor expansion

Γ(x − y) = Γ(x − z) − ⟨y − z, ∇Γ(x − z)⟩ + O(|x − z|^{−d}), (2.14)

which holds for all x such that |x| → ∞ while y is bounded [8], we get the following result by substituting (2.14) into (2.13):

u(x) ≈ u₀(x) − ⟨∇u₀(z), M(λ, D)∇Γ(x − z)⟩, (2.15)

where M(λ, D) = (m_{αβ}) is the polarization tensor associated with the domain D and the contrast λ, defined by

m_{αβ} = ∫_{∂D} y_β ϕ_α(y) dσ(y), ϕ_α = (λI − K*_D)^{−1}[ν_α], (2.16)

with

λ = (ε_c + ε_m)/(2(ε_c − ε_m)), (2.17)

and ν_α being the α-th component of ν. Here we have used in (2.15) the fact that ∫_{∂D} ϕ_α dσ = 0.
Here we have used in (2.15) the fact that Typically the constants ε c and ε m are positive in order to make the system (2.9) physical. This corresponds to the situation with |λ| > 1 2 . However, recent advances in nanotechnology make it possible to produce noble metal nanoparticles with negative permittivities at optical frequencies [38,53]. Therefore, it is possible that for some frequencies, λ actually belongs to the spectrum of K * D . If this happens, the following integral equation has non-trivial solutions ϕ ∈ L 2 (∂D) and the nanoparticle resonates at those frequencies. Therefore, we have to investigate the mapping properties of the Neumann-Poincaré operator. Assume that ∂D is of class C 1,η , 0 < η < 1. It is known that the operator K * D : L 2 (∂D) → L 2 (∂D) is compact [41], and its spectrum is discrete and accumulates at zero. All the eigenvalues are real and bounded by 1/2. Moreover, 1/2 is always an eigenvalue and its associated eigenspace is of dimension one, which is nothing else but the kernel of the single-layer potential S D . In two dimensions, it can be proved that if λ i = 1/2 is an eigenvalue of K * D , then −λ i is an eigenvalue as well. This property is known as the twin spectrum property; see [44]. The Fredholm eigenvalues are the eigenvalues of K * D . It is easy to see, from the properties of K * D , that they are invariant with respect to rigid motions and scaling. They can be explicitly computed for ellipses and spheres. If a and b denote the semi-axis lengths of an ellipse then it can be shown that ±((a − b)/(a + b)) i are its Fredholm eigenvalues [42]. For the sphere, they are given by 1/(2(2i + 1)); see [40]. It is worth noticing that the convergence to zero of Fredholm eigenvalues is exponential for ellipses while it is algebraic for spheres. Equation (2.18) corresponds to the case when plasmonic resonance occurs in D; see [31]. Given negative values of ε c , the problem of designing a shape with prescribed plasmonic resonances is of great interest [2].
Finally, we briefly investigate the eigenvalue of the Neumann-Poincaré operator of multiple particles. Let D 1 and D 2 be two smooth bounded domains such that the distance dist(D 1 , D 2 ) between D 1 and D 2 is positive. Let ν (1) and ν (2) denote the outward normal vectors at ∂D 1 and ∂D 2 , respectively.
The Neumann-Poincaré operator K*_{D₁∪D₂} associated with D₁ ∪ D₂ is given by [6]

K*_{D₁∪D₂} = [ K*_{D₁}, ∂(S_{D₂})/∂ν^{(1)} ; ∂(S_{D₁})/∂ν^{(2)}, K*_{D₂} ],

acting on L²(∂D₁) × L²(∂D₂). In section 6 we will be interested in how the eigenvalues of K*_{D₁∪D₂} behave numerically as dist(D₁, D₂) → 0.
Drude's model for the electric permittivity and magnetic permeability

Let D be a bounded domain in R^d with C^{1,η} boundary for some 0 < η < 1, and let (ε_m, µ_m) be the pair of electromagnetic parameters (electric permittivity and magnetic permeability) of R^d \ D and (ε_c, µ_c) be that of D. We assume that ε_m and µ_m are real positive constants, and we have

ε = ε_m χ(R^d \ D) + ε_c χ(D), µ = µ_m χ(R^d \ D) + µ_c χ(D).

Suppose that the electric permittivity ε_c and the magnetic permeability µ_c of the nanoparticle are changing with respect to the operating angular frequency ω, while those of the surrounding medium, ε_m, µ_m, are independent of ω. Then we can write ε_c = ε_c(ω) and µ_c = µ_c(ω), with a Drude-type dispersion (see the sketch below). Because of causality, the real and imaginary parts of ε_c and µ_c obey Kramers-Kronig relations of Hilbert-transform type,

ℑ ε_c(ω) = −(1/π) p.v. ∫ ℜ ε_c(ω′)/(ω − ω′) dω′, ℜ ε_c(ω) = (1/π) p.v. ∫ ℑ ε_c(ω′)/(ω − ω′) dω′,

where p.v. denotes the principal value.
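For concreteness, a commonly used Drude-type dispersion is sketched below in LaTeX; the plasma frequency ω_p and relaxation rate τ⁻¹ are generic model parameters, not values specified in this paper:

```latex
% A commonly used Drude-type dispersion for the particle permittivity; the
% plasma frequency \omega_p and bulk electron relaxation rate \tau^{-1} are
% generic model parameters, not values taken from this paper.
\begin{equation*}
  \varepsilon_c(\omega) = \varepsilon_0\!\left(1 -
      \frac{\omega_p^2}{\omega^2 + \mathrm{i}\,\tau^{-1}\omega}\right),
  \qquad
  \Im \varepsilon_c(\omega) =
      \frac{\varepsilon_0\,\omega_p^2\,\tau^{-1}}
           {\omega\,(\omega^2 + \tau^{-2})} .
\end{equation*}
```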
In the sequel, we set k_c = ω√(ε_c µ_c) and k_m = ω√(ε_m µ_m), and denote by λ_ε(ω) and λ_µ(ω) the electric and magnetic contrasts. We have

λ_ε(ω) = (ε_c(ω) + ε_m)/(2(ε_c(ω) − ε_m)).

A similar formula holds for λ_µ(ω).
Finally, we define dielectric and magnetic plasmonic resonances. We say that ω is a dielectric plasmonic resonance if the real part of λ ε is an eigenvalue of K * D . Analogously, we say that ω is a magnetic plasmonic resonance if the real part of λ µ is an eigenvalue of K * D . Note that if ω is a dielectric (resp. magnetic) plasmonic resonance, then the polarization tensor M (λ ε (ω), D) defined by (2.16) (resp. M (λ µ (ω), D)) blows up.
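Combining a Drude dispersion with this eigenvalue characterization gives a simple numerical recipe for locating resonances. In the sketch below, the Drude parameters and the target eigenvalue 1/6 (the value 1/(2(2i + 1)) with i = 1 for a sphere) are illustrative, not values from this paper:

```python
# Sketch: scan for a dielectric plasmonic resonance as the frequency where
# Re(lambda_eps) crosses a chosen Neumann-Poincare eigenvalue. The Drude
# parameters (omega_p, tau) and the eigenvalue are illustrative.
import numpy as np

def drude_eps(omega, eps0=1.0, omega_p=2.0e15, tau=1.0e-14):
    return eps0 * (1.0 - omega_p**2 / (omega**2 + 1j * omega / tau))

def lambda_eps(omega, eps_m=1.0):
    eps_c = drude_eps(omega)
    return (eps_c + eps_m) / (2.0 * (eps_c - eps_m))

eigenvalue = 1.0 / 6.0   # sphere dipole eigenvalue, 1/(2(2i+1)) with i = 1
omega = np.linspace(0.5e15, 2.0e15, 20001)
idx = np.argmin(np.abs(lambda_eps(omega).real - eigenvalue))
print(f"resonant omega ~ {omega[idx]:.3e} rad/s (expect ~ omega_p/sqrt(3))")
```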
In the case of two particles D₁ and D₂ with the same electromagnetic parameters, ε_c(ω) and µ_c(ω), we say that ω is a dielectric plasmonic resonance if the real part of λ_ε is an eigenvalue of K*_{D₁∪D₂}. Analogously, we say that ω is a magnetic plasmonic resonance if the real part of λ_µ is an eigenvalue of K*_{D₁∪D₂}.
Boundary integral operators
We start by recalling some well-known properties about boundary integral operators and proving a few technical lemmas that will be used in section 5 for deriving the asymptotic expansions of the electric and magnetic fields in the presence of nanoparticles. As will be shown in section 6, the plasmonic resonances for multiple identical particles are shifted from those of the single particle as the separating distance between the particles becomes comparable to their size.
Definitions
We first review commonly used function spaces. Let ∇_∂D · denote the surface divergence. Denote by L²_T(∂D) the space of tangential vector fields in L²(∂D)³ and by H^s(∂D) the Sobolev space of order s on ∂D. We also introduce the function space

TH(div, ∂D) = { φ ∈ L²_T(∂D) : ∇_∂D · φ ∈ L²(∂D) }.

We define the vectorial curl for ϕ ∈ H¹(∂D) by curl_∂D ϕ = −ν × ∇_∂D ϕ.
The following result from [22] will be useful.
Proposition 4.1. The following Helmholtz decomposition holds:

L²_T(∂D) = ∇_∂D H¹(∂D) ⊕ curl_∂D H¹(∂D). (4.1)

Next, we recall that, for k > 0, the fundamental outgoing solution Γ_k to the Helmholtz operator (Δ + k²) in R³ is given by

Γ_k(x) = −e^{ik|x|}/(4π|x|), x ≠ 0. (4.2)

For a density φ ∈ TH(div, ∂D), we define the vectorial single layer potential associated with the fundamental solution Γ_k introduced in (4.2) by

A^k_D[φ](x) = ∫_{∂D} Γ_k(x − y) φ(y) dσ(y), x ∈ R³. (4.3)

For a scalar density ϕ ∈ L²(∂D), the single layer potential is defined similarly by

S^k_D[ϕ](x) = ∫_{∂D} Γ_k(x − y) ϕ(y) dσ(y), x ∈ R³. (4.4)

We will also need the boundary operators M^k_D, N^k_D, and L^k_D ((4.5)−(4.7)), where, for φ ∈ TH(div, ∂D),

M^k_D[φ](x) = p.v. ∫_{∂D} ν(x) × curl_x { Γ_k(x − y) φ(y) } dσ(y), x ∈ ∂D. (4.5)

In the following, we denote by A_D, S_D, M_D, and N_D the operators A⁰_D, S⁰_D, M⁰_D, and N⁰_D corresponding to k = 0, respectively.
Boundary integral identities
We start by stating the following jump formula; we refer the reader to Appendix A for its proof.
For φ ∈ TH(div, ∂D), the potential A^k_D[φ] is continuous on R³, and its curl satisfies the following jump formula:

ν × curl A^k_D[φ] |_± (x) = ∓ φ(x)/2 + M^k_D[φ](x), x ∈ ∂D.

Next, we prove the following integral identities.
Lemma 4.4. The identities (4.11)−(4.15) hold, where r is defined by r(a) = ν × a.

Proof. The proof of (4.11) can be found in [26]; we give it here for the sake of completeness. Let φ ∈ TH(div, ∂D). Using the fact that −∇_∂D is the adjoint of ∇_∂D ·, we obtain the identity on ∂D. Next, since S^k_D[∇_∂D φ] is continuous across ∂D, the relation can be extended to R³, and we get (4.11). Now, in order to prove (4.12), we observe the corresponding identity for any φ ∈ TH(div, ∂D); using the jump relations for ∂S^k_D/∂ν and recalling from [26, p. 169] the formula for the ±φ/2 terms, we arrive at (4.12). Setting k = 0 in (4.12) gives (4.13).
Identity (4.14) can be deduced from (4.13) by duality. Now, we prove (4.15). Define r(a) = ν × a for any smooth vector field a on ∂D.
Since M * D = rM D r (see [32]) and curl ∂D = r(∇ ∂D ), it follows that Composing by r −1 = −r, we get which completes the proof.
Proposition 4.5. We have the Calderón-type identity (4.16). From (4.15) we deduce the corresponding relation; then, using the facts that M*_D = r M_D r and that r⁻¹ = −r, together with (4.13) and Calderón's identity S_B K*_B = K_B S_B, we arrive at (4.16), which completes the proof.
Resolvent estimates
As seen in section 2, we have to solve Fredholm-type equations involving the resolvent of K_D. We will also need to control the resolvent of M_D. The main difficulty is that K_D and M_D are not self-adjoint. However, we will make use of a symmetrization technique in order to estimate the norms of the resolvents of K_D and M_D.
The following result holds.
Proposition 4.6. The operator K_D : L²(∂D) → L²(∂D) satisfies the following resolvent estimate:

‖(λI − K_D)⁻¹‖_{L²(∂D)} ≤ c / dist(λ, σ(K_D)),

where dist(λ, σ(K_D)) is the distance between λ and the spectrum σ(K_D) of K_D, and c is a constant depending only on D.
Proof. We start from Calderón's identity S_D K*_D = K_D S_D. Since S_D : L²(∂D) → L²(∂D) is a self-adjoint positive definite invertible operator in dimension three, we can define a new inner product on L²(∂D). We denote by H the Hilbert space L²(∂D) equipped with this inner product, with respect to which K_D is self-adjoint. Since S_D is continuous and invertible, the norm associated with the inner product ⟨· , ·⟩_H is equivalent to the L²(∂D)-norm. Now, K_D is a self-adjoint compact operator on H, and we can write [28]

‖(λI − K_D)⁻¹‖_H = 1 / dist(λ, σ(K_D)).
Switching back to the original norm we get the desired result.
The following resolvent estimate holds:

‖(λI − M_D)⁻¹‖ ≤ c / dist(λ, σ(M_D)).
Proof. We proceed exactly as in the proof of Proposition 4.6. If we denote by ⟨· , ·⟩_H the usual scalar product on H, then we introduce a new scalar product defined by ⟨φ, ψ⟩ = ⟨N_D[φ], ψ⟩_H, where N_D is the operator given in (4.6). We first verify that H is stable under N_D: if φ ∈ H, then N_D[φ] ∈ TH(div, ∂D) (see [26]). For the sake of simplicity, we will denote by N_D the induced operator on H. It is easy to see that this bilinear form is well defined, continuous, and positive, and N_D is self-adjoint [26]. If we equip H with this new scalar product, then we can see, by using identity (4.16), that M_D is self-adjoint, and therefore

‖(λI − M_D)⁻¹‖ ≤ c / dist(λ, σ(M_D)).

Using the fact that N_D is injective and continuous on H, we can go back to the original norm, which completes the proof.
There exists a positive constant C such that the corresponding estimate holds.
Proof. Using the Helmholtz decomposition (4.1), we can write
$$\varphi = \nabla_{\partial D} U + \operatorname{curl}_{\partial D} V, \qquad (4.21)$$
with U ∈ H¹(∂D) and V ∈ H^{1/2}(∂D). Taking the surface divergence of (4.21), together with (4.13), (4.15), and the fact that ∇_{∂D}·curl_{∂D} f = 0 for all f, yields an identity for the gradient part. Now we deal with the curl part. If we apply N_D to (4.21), we get, by using (4.16) together with Lemma 4.4, the relation (4.23). From the Helmholtz decomposition of φ, φ = ∇_{∂D}φ₁ + curl_{∂D}φ₂, (4.23) becomes (4.24). We then obtain the stability of R(Ñ_D) under M̃_D. Applying this to (4.24) and using Lemma 4.8, we get the desired result.
Small volume expansion
The aim of this section is to prove Theorems 5.9 and 5.10.
Layer potential formulation
For a given plane wave solution (Eⁱ, Hⁱ) to the Maxwell equations, let (E, H) be the solution to the Maxwell system (5.1) in the presence of the particle, subject to the Silver-Müller radiation condition at infinity. Using the layer potentials defined in Section 4, the solution to (5.1) can be represented in the form (5.2).
Asymptotics for the operators
We have the following expansions for M_D^k and L_D^k.
Proof. Let x ∈ ∂D, and write x̃ = (x − z)/δ. Changing y to ỹ = (y − z)/δ in the integral, we get the rescaled expression on ∂B. For any x̃ ∈ ∂B, it follows that the expansion holds, which gives the result.
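The rescaling underlying this proof can be made explicit for the scalar single layer potential (a worked computation added here; the notation φ̃(ỹ) = φ(z + δỹ) and the Γ_k of (4.2) are carried over as assumptions):
$$\mathcal{S}_D^{k}[\phi](x) = \int_{\partial D} -\frac{e^{ik|x-y|}}{4\pi|x-y|}\,\phi(y)\, d\sigma(y) = \delta^2 \int_{\partial B} -\frac{e^{ik\delta|\tilde{x}-\tilde{y}|}}{4\pi\delta|\tilde{x}-\tilde{y}|}\,\tilde{\phi}(\tilde{y})\, d\sigma(\tilde{y}) = \delta\, \mathcal{S}_B^{\delta k}[\tilde{\phi}](\tilde{x}),$$
since |x − y| = δ|x̃ − ỹ| and dσ(y) = δ² dσ(ỹ); expanding e^{ikδ|x̃−ỹ|} = 1 + ikδ|x̃−ỹ| + O(δ²) then produces expansions in powers of δ of the kind used in Propositions 5.1 and 5.2.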
Proof. We can expand the kernel in powers of δ as above; then we can write the stated expansion, and the proof is then complete.
Using (5.2), we have the expansion (5.9) for E(x) for x far away from z. The following lemma holds.
Lemma 5.3. Moreover, there exists a constant C ≥ 0 depending on B, β, E, and H such that the stated bound holds.
Proof. We proceed by induction. Using Propositions 5.1 and 5.2, we find that (5.14) holds. Note that ∇_{∂B}·φ̃_{β,0} = 0; indeed, a direct computation shows that this surface divergence vanishes.
Derivation of the leading-order tensors
By Lemma 5.3, for x ∈ ℝ³ \ D̄, the representation (5.15) holds. Recall (5.14) for β = 0. Using (4.13), and then taking the surface divergence of (5.15) for β = 0, it follows that (5.17) holds. The following lemma holds.
Lemma 5.5. We have (5.18) and (5.19).
Proof. We only prove (5.18). We shall consider the solution to the system (5.20). We can see that both the left-hand side and the right-hand side of (5.18) are divergence free. We want to prove that they are both equal to the field ∇u in ℝ³. First we check that they satisfy the jump relations. We already have the continuity of the normal part of the curl of a vectorial single layer potential [27]. The continuity of the tangential derivative of a scalar single layer potential gives the matching of the tangential components, and the jump of the normal derivative of a scalar single layer potential gives the correct jump relation for the normal derivative. The only point left is to prove the uniqueness of the solution to the system. Now let ũ be the solution to (5.20) with the term ν × Eⁱ(z) replaced by the zero vector on ∂B. Note that ∂ũ/∂T is continuous across ∂B for any tangential direction T on ∂B. Then, by choosing any test function in H¹(∂B) and integrating by parts, we get µ_c ũ|₋ = µ_m ũ|₊ on ∂B. Thus ũ = 0, which completes the proof.
It is worth mentioning that a related identity was proved in [32] which, by taking µ_m = 0 (or, equivalently, letting µ_c → ∞), can be seen as the extreme case of (5.18). Now that we have a better understanding of ν × ∇ × A_D[φ_{0,0}], by Lemma 5.5, we can introduce the unique solutions u_e, u_h ∈ H¹(B), defined up to constants, whose gradients ∇u_e and ∇u_h are given by Lemma 5.5. Now, by using equation (5.17), we can compute the surface divergence of φ̃_{0,1} and ψ̃_{0,1}. Then we have the following lemma.
Lemma 5.6. Let v_e be the solution to (5.23), and let v_h be the solution to (5.24).
Then the stated asymptotic expansions hold.
Proof. By Proposition 5.4, we have the leading-order behavior of the densities. Using this, we get that, for f ∈ L²(∂B), the corresponding representation holds. An integration by parts then gives the expression for M^h_{0,0}. We now take a look at the transmission problem (5.24) solved by v_h. Using the jump relation of the normal derivative of the scalar single layer potential, we find the boundary condition satisfied by v_h and hence, integrating by parts, we get the stated formula. The evaluation of M^e_{0,0} can be done in exactly the same way.
Lemma 5.7. We have (5.26). In particular, we obtain the case where |α| = 1 and |β| = 0, where (e₁, e₂, e₃) is an orthonormal basis of ℝ³.
Proof. We shall only consider M^h_{α,β}; M^e_{α,β} can be calculated in exactly the same way. We have a decomposition whose leading term M^{h,0}_{α,β} is given by
$$M^{h,0}_{\alpha,\beta} = \int_{\partial B} \tilde{y}^{\alpha}\, \tilde{\varphi}_{\beta,0}\, d\sigma(\tilde{y}).$$
Since the corresponding identity holds for any f ∈ L²_T(∂B), we can apply Lemma 5.4; it follows, using the jump relations on M_B, that the integral can be rewritten. The curl theorem then yields the stated expression, and thus (5.26) holds. By using the definitions of u_e and u_h, we get the case where |α| = 1, |β| = 0.
Derivation of the polarization tensor
Denote by G(x, z) the matrix-valued dyadic Green function for the Maxwell system. It can be seen that G(x, z) satisfies the corresponding equation away from z, and we can also easily check that (5.32) holds. Before we proceed, we stress that the polarization tensors M^e and M^h defined above are matrices whose entries m^e_{ij} and m^h_{ij}, i, j = 1, 2, 3, are defined by (2.16) with λ = λ_ε and λ = λ_µ, respectively. They are different from the vector-valued tensors defined in equation (5.11).
Proof. We shall give the analysis term by term in (5.16). The first term is easy to check; then, by Lemma 5.7, the second follows. Furthermore, we obtain the next contribution from Lemma 5.4, and similarly for the following one. Recall (5.29); summing over j, we can then deduce the corresponding expansion. A similar computation yields the companion term, and Lemma 5.6 gives the remaining contributions. Combining the previous asymptotic expansions, we arrive at the stated result. The proof is then complete.
We shall analyze (5.33) further. Recall that, from the proof of Lemma 5.6, we have the expressions for the densities. Using these, together with the definitions, Lemma 5.5, and the jump relations for the normal derivative of the scalar layer potential, a direct computation gives the first contribution, and a similar computation yields the second. It remains to compute the last term in (5.33). Writing
$$\lambda_\varepsilon = \frac{1}{2} + \frac{\varepsilon_m}{\varepsilon_c - \varepsilon_m},$$
together with the behavior of the normal derivative of the relevant potential, we obtain the last contribution, and similarly for its companion. Finally, we arrive at the stated expansion. When a plasmonic resonance occurs, the term
$$\lambda_\varepsilon = \frac{\varepsilon_c + \varepsilon_m}{2(\varepsilon_c - \varepsilon_m)}$$
can have a real part lower than 1/2 and become close to an eigenvalue of the operator K*_B. Using Lemma 5.4, we can easily see that each of the potentials φ_{β,n} and ψ_{β,n} is controlled in norm by powers of 1/d_σ, where d_σ is the distance of λ_ε to the spectrum σ(M_B) = −σ(K*_B) \ {−1/2}. So the asymptotic expansion given by Theorem 5.8 is valid when δ/d_σ ≪ 1, which ensures that the remainder of the asymptotic expansion is still small compared to the first-order term.
The following results are our main results in this paper.
Finally, two more remarks are in order. First, in view of Theorems 5.9 and 5.10 and the blow-up of the polarization tensors, it is clear that at plasmonic resonances the scattered electric field is enhanced. Secondly, from the representation formula (5.2) for the electric field in D and the estimates of the densities, it can be seen that the electric field inside the particle is enhanced as well; therefore, the absorbed energy, given by
$$\varepsilon'' \int_D |E|^2(y)\, dy,$$
is enhanced at dielectric plasmonic resonances. Note that the scattering enhancement when the particles are illuminated at their plasmonic resonances can be used for nano-resolved imaging from far-field data, while the absorption enhancement can be used for thermotherapy applications as well as for photoacoustic imaging, to remotely measure and control the local temperature within a medium [57].
Numerical illustrations
We illustrate the plasmon phenomenon numerically by computing the polarization tensor M^e for several different two-dimensional shapes. We use the values for the parameters given in Section 6. The wavelength of the incoming plane wave is c/ω, where c = 3 × 10⁸ m/s is the speed of light. We then compute the matrix M^e defined by (2.16) with λ = λ_ε, and plot the value of its norm with respect to the incoming wavelength. Figure 6.3 shows that if the shape B is a disk, then one has a resonant peak. This peak corresponds to λ_ε = 0. Figure 6.4 shows that for an ellipse, one can observe two resonant frequencies, one corresponding to each axis. This was experimentally observed in [21] for elongated particles. The two peaks correspond to λ_ε = (a − b)/(a + b) ≈ 0.33 and λ_ε = ((a − b)/(a + b))² ≈ 0.11, where a = 1 and b = 1/2 are the semi-axis lengths of the ellipse. Figure 6.5 gives the norm of the polarization tensor for a star-shaped particle. One can observe that there are many resonant frequencies. This observation is also in agreement with the experimental results published in [34].
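As a rough illustration of how such resonance peaks arise (a sketch added here, not the authors' code; the Drude parameters for the particle permittivity ε_c(ω) are illustrative assumptions, not the paper's actual values), one can scan λ_ε(ω) = (ε_c + ε_m)/(2(ε_c − ε_m)) over the optical band and mark where its real part crosses the quoted eigenvalues 0, 0.33, and 0.11:

import numpy as np

# Illustrative Drude model for the particle permittivity (assumed parameters)
eps_m = 1.0            # background permittivity
omega_p = 1.37e16      # plasma frequency, rad/s (silver-like)
gamma = 3.0e13         # damping, rad/s

wavelengths = np.linspace(150e-9, 1000e-9, 4000)
omega = 2 * np.pi * 3e8 / wavelengths
eps_c = 1.0 - omega_p**2 / (omega * (omega + 1j * gamma))

lam_eps = (eps_c + eps_m) / (2.0 * (eps_c - eps_m))

# Eigenvalues of K*_B quoted in the text: 0 for a disk, 0.33 and 0.11 for the ellipse
for ev in (0.0, 0.33, 0.11):
    i = np.argmin(np.abs(lam_eps.real - ev))
    print(f"lambda_eps ~ {ev}: resonance near {wavelengths[i]*1e9:.0f} nm, "
          f"|lambda_eps - ev| = {abs(lam_eps[i] - ev):.3f}")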
Finally, it is shown in Figure 6.7 that when two disks are close to each other, a strong interaction occurs and the plasmonic resonance frequencies are close to those of an equivalent ellipse.
Concluding remarks
In this paper, we have provided a mathematical framework for localized plasmon resonance of nanoparticles. We have derived a uniform small volume expansion for the solution to Maxwell's equations in the presence of nanoparticles excited at their plasmonic resonances. We have presented a variety of numerical results to illustrate our main findings. As the particle size increases and moves away from the quasistatic approximation, high-order polarization tensors [8] should be included in order to compute the plasmonic resonances, which become size-dependent. This would be the subject of a forthcoming work. Our approach in this paper also opens a door to a numerical and mathematical framework for the optimal shape design of resonant nanoparticles and their superresolved imaging [7].
The jump formula holds as z → x ∈ ∂D. We now make use of the following lemma from [27], which gives the limit when z → x ∈ ∂D and concludes the proof for a continuous tangential field φ.
The formula can be extended to L²_T by a density argument. | 2015-08-04T06:45:02.000Z | 2014-12-11T00:00:00.000 | {
"year": 2016,
"sha1": "0f5e6035fe97fd05c91477b459fa636c04e608db",
"oa_license": null,
"oa_url": "https://infoscience.epfl.ch/record/214820/files/plasmonic%201.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "342c57ebfe354af3165775e2ba8f270dd58da2d0",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Mathematics",
"Materials Science"
]
} |
3853139 | pes2o/s2orc | v3-fos-license | Revisiting the Therapeutic Potential of Bothrops jararaca Venom: Screening for Novel Activities Using Connectivity Mapping
Snake venoms are sources of molecules with proven and potential therapeutic applications. However, most activities assayed in venoms (or their components) are of hemorrhagic, hypotensive, edematogenic, neurotoxic or myotoxic natures. Thus, other relevant activities might remain unknown. Using functional genomics coupled to the connectivity map (C-map) approach, we undertook a wide-ranging indirect search for biological activities within the venom of the South American pit viper Bothrops jararaca. For that effect, venom was incubated with the human breast adenocarcinoma cell line MCF7, followed by RNA extraction and gene expression analysis. A list of 90 differentially expressed genes was submitted to biosimilar drug discovery based on pattern recognition. Among the 100 highest-ranked positively correlated drugs, only the antihypertensive, antimicrobial (both antibiotic and antiparasitic), and antitumor classes had been previously reported for B. jararaca venom. The majority of drug classes identified were related to (1) antimicrobial activity; (2) treatment of neuropsychiatric illnesses (Parkinson’s disease, schizophrenia, depression, and epilepsy); (3) treatment of cardiovascular diseases, and (4) anti-inflammatory action. The C-map results also indicated that B. jararaca venom may have components that target G-protein-coupled receptors (muscarinic, serotonergic, histaminergic, dopaminergic, GABA, and adrenergic) and ion channels. Although validation experiments are still necessary, the C-map correlation to drugs with activities previously linked to snake venoms supports the efficacy of this strategy as a broad-spectrum approach for biological activity screening, and rekindles the snake venom-based search for new therapeutic agents.
Introduction
The development of therapeutic drugs such as the antihypertensive Captopril® [1,2] and the anticoagulant Exanta® (also known as ximelagatran) [3] can be traced back to the study of isolated snake venom components and their biological roles during envenomation. Over the years, venoms, and fractions thereof, have displayed several biological activities/applications, including antibacterial [4][5][6][7][8][9][10], antiprotozoal [7,[11][12][13][14][15][16], antimeasles [17], antiviral (human immunodeficiency virus) [18,19], analgesic [20][21][22][23][24], and for the treatment of multiple sclerosis [25]. It is important to note that some of those aforementioned activities can be related not only to medium- to high-abundance specific venom toxins but also to low-abundance components and, eventually, to their synergistic effects. Also, secondary effects generated by venom components should be considered; such is the case for the activation of inflammation and apoptosis pathways through the action of DAMPs (damage-associated molecular patterns), released after tissue injuries generated by the snake venom/snake venom fraction being assayed [26]. For instance, DAMPs released in the wound exudate after viperid envenomation contribute to vascular permeability mediated by TLR4 (toll-like receptor 4) [27]. The use of functional genomics (microarray techniques) to analyze subtoxic effects, through gene expression analysis, on cell cultures treated with snake venoms and/or their components has been successfully demonstrated [28,29]. However, it is still challenging to associate signaling pathways identified through functional genomics with the pathophysiology of snakebite (assessed through well-established biochemical and biological assays, screening for hemorrhagic, hypotensive, edematogenic, neurotoxic, and myotoxic activities) [30]. Although these assays are useful in reproducing some of the effects of snakebite envenoming, activities other than those traditionally associated with snake venoms could remain unknown. Hence, without a priori knowledge, it is no simple task to identify potentially novel therapeutic activities derived from snake venoms and/or their components.
An alternative "blind" biological activity screening approach is to use the C-map (connectivity map) platform (https://portals.broadinstitute.org/cmap/). C-map consists of a public database of gene expression patterns generated from the treatment of known cell lineages with 1309 small molecules and drugs, whose pharmacological properties are well characterized [31,32]. Thus, the biological activity of the sample tested can be indirectly inferred by matching the experimental list of differentially expressed genes to the gene expression patterns present in the C-map database. A proof-of-concept for the application of the C-map approach in Toxinology was demonstrated by treating MCF7 (Michigan cancer foundation 7) cells with Heloderma suspectum (Gila monster) venom or the anti-diabetic drug Byetta (developed from a peptide isolated from that same venom). As predicted, C-map analysis of differentially expressed genes in either condition displayed high positive correlation with different anti-diabetes drugs [33].
Thus, to test the feasibility of C-map analysis for biological activity screening in snake venoms, we chose the venom of the South American pit viper Bothrops jararaca, one of the venoms best characterized by proteomic approaches [34]. Although this venom is highly diverse, a few protein classes account for around 94% of its composition [34] (Table 1). Consequently, the less abundant proteins, such as hyaluronidases, cysteine-rich secretory proteins, growth factors, and nucleotidases, among others, are underexplored [35,36], resulting in a lack of knowledge about their individual contributions to the snake envenoming pathology. Boldrini-França and colleagues [37] recently emphasized the importance of studying and characterizing minor components from snake venoms, since these can display different potential therapeutic applications, such as antiparasitic, antitumor, neuroprotective, and ischemic tissue protection activities.
In this work, we have analyzed the gene expression of MCF7 cells treated with B. jararaca venom and used connectivity mapping to infer novel (therapeutic) activities potentially present in this biological sample. The majority of biosimilar drugs inferred were related to antimicrobial and anti-inflammatory activities, as well as to the treatment of neuropsychiatric and cardiovascular diseases. In short, our data rekindle the snake venom-based search for new therapeutic agents.
Gene Expression Analysis
MCF7 cells were used in this work since most of the C-map database information relies on assays using this cell type, due to its extensive molecular characterization and ubiquitous use as a reference cell line [32]. However, since MCF7 cells are not natural targets for snake venom components, it was not the focus of this study to make detailed associations between differentially expressed genes and snakebite envenoming. More importantly, our goal was to submit the list of up- and down-regulated genes to C-map analysis, in order to screen for a panel of biosimilar drug activities related to B. jararaca venom. Nonetheless, we will highlight some of the differentially expressed genes and their possible correlations with snake venom toxins.
B. jararaca venom induced (p-value < 0.01) the differential expression of 90 genes (74 up- and 16 down-regulated) in MCF7 cells. We only considered as up- or down-regulated those genes displaying a log₂ fold-change equal to or greater than 0.58 (fold change ≥1.50) or equal to or lower than −0.58 (fold change ≤0.67), respectively, when compared to expression in the untreated cells (control). The up- and down-regulated genes are shown as supplementary material (Tables S1 and S2, respectively), and the data used to generate these tables are supplied in Tables S5 and S6. The cytochrome P450 family, which is represented by heme-thiolate proteins [60], displayed the highest differentially expressed gene. The CYP1A1 (cytochrome P450, family 1, subfamily A, polypeptide 1) gene had a 29.6-fold increase in expression, compared to control, when MCF7 cells were treated with venom (Table S1). Among the up-regulated genes, we also identified another member of this family, CYP1B1, with a 3.4-fold increase. The CYP1A1 and CYP1B1 genes are involved in the metabolism of arachidonic acid, generating ROS (reactive oxygen species), which is one of the triggers of the apoptosis process [61]. Even though the cytochrome P450 main function is to metabolize drugs and synthesize lipids such as cholesterol and steroids [62], its high expression in MCF7 cells treated with B. jararaca venom could also be influenced by three venom component activities through: (i) indirect involvement in the metabolism of arachidonic acid [63] eventually released after PLA₂ (phospholipase A₂) metabolizes phospholipids [64]; (ii) involvement in the metabolism of arachidonic acid released by the action of bradykinin, which would be possible due to the action of BPPs (bradykinin-potentiating peptides) present in snake venoms [65]; and (iii) use of hydrogen peroxide, released by the action of venom LAAO (L-amino acid oxidase), as an oxygen donor [60]. Those activities may contribute to the activation of apoptosis- and inflammation-related pathways through the generation of ROS. In this regard, the venom of another viperid, Echis carinatus, induced overexpression of genes associated with ROS pathways, including the cytochrome P450 enzymes, in HUVECs (human umbilical vein endothelial cells) [66]. Additionally, B. jararaca and Crotalus atrox venoms induced a significant increase in the expression of genes related to apoptosis and inflammatory pathways in HUVECs [28]. Interestingly, these authors also showed that the proteolytic activity of jararhagin, the major hemorrhagic metalloendopeptidase from B. jararaca venom, is mandatory for the generation of an inflammatory and pro-apoptotic response in human fibroblasts [29].
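The thresholds above translate directly into a simple filter over an expression table (a hypothetical sketch; the column names, toy values, and statistics pipeline are illustrative, not the authors' actual code):

import pandas as pd

# Hypothetical table: one row per gene, mean log2 expression for control/venom and a p-value
genes = pd.DataFrame({
    "gene": ["CYP1A1", "CYP1B1", "HMOX1", "ACTB"],
    "log2_control": [5.0, 6.1, 7.0, 10.2],
    "log2_venom": [9.89, 7.87, 8.4, 10.25],
    "p_value": [0.001, 0.004, 0.002, 0.6],
})

genes["log2_fc"] = genes["log2_venom"] - genes["log2_control"]
genes["fold_change"] = 2.0 ** genes["log2_fc"]

# Criteria used in the paper: p < 0.01 and |log2 FC| >= 0.58 (i.e., FC >= 1.5 or <= 0.67)
up = genes[(genes.p_value < 0.01) & (genes.log2_fc >= 0.58)]
down = genes[(genes.p_value < 0.01) & (genes.log2_fc <= -0.58)]
print(up[["gene", "fold_change"]])   # e.g., CYP1A1 (~29.6-fold) and CYP1B1 (~3.4-fold)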
The presence of oxidative stress in MCF7 cells treated with B. jararaca venom is also supported by the significantly higher expression of HMOX1 (heme oxygenase 1) (Table S1), which is an enzyme involved in the antioxidant response [67]. HMOX1 degrades heme, releasing antioxidant agents such as carbon monoxide and biliverdin (which is further converted to the antioxidant bilirubin) [68,69]. Thus, the higher expression of HMOX1 may represent a response to the oxidative stress induced by B. jararaca venom.
Finally, Sunitha and co-workers [26] summarized experimental evidence from the literature for oxidative stress and inflammation induced by viper bites, as well as the apparent involvement of DAMPs, generated after SVMP (snake venom metalloendopeptidase) and PLA₂ activities, in these processes. Recently, it has been confirmed that at least part of the inflammatory process generated after viper bites is dependent on the activation of the TLR4 pathway by DAMPs [27].
Overall, it is possible that B. jararaca venom induces apoptosis and inflammation through different pathways. The apoptotic feature of snake venoms is likely related to secondary molecules such as H₂O₂, released by LAAO activity, and NO (nitric oxide) production. Snake venoms such as those of B. jararaca and B. asper are able to induce the release of inflammatory mediators like NO [70][71][72]. Although MCF7 cells do not possess the major molecular targets of snake venoms, and do not produce cytokines, it has been demonstrated that breast cancer cells, including MCF7, express inducible NO synthase [73][74][75].
Connectivity Map Analysis
We submitted the MCF7/B. jararaca venom genomic signature (list of up- and down-regulated genes following MCF7 cells treatment with venom) to the C-map algorithm for comparison with the gene-expression profiles (signatures) generated by the treatment of different cell lineages with drugs or small molecules, also called perturbagens. In short, the algorithm returns a list of perturbagens (compounds) with score values ranging from +1.000 to −1.000, encompassing the most positively- (agonistic effect) to the most negatively- (antagonistic effect) correlated perturbagens. The C-map score is calculated by a combination of the up and down scores (which represent the absolute enrichment of the lists of up- and down-regulated genes, respectively) submitted to the algorithm when compared to the signatures induced by the perturbagen. The C-map score reflects how well the genomic signature induced by the assayed sample correlates with the perturbagens' genomic signatures deposited in the database. In the original publication [31], no statistical treatment has been envisaged following the C-map score calculation. Therefore, to ensure low false discovery rates, a reasonable alternative could be to consider only the highest C-map score values (e.g., >+0.900 or <−0.900). However, when looking at the data from the literature where, following C-map analysis, biological validation assays have been performed, C-map score values for confirmed hits were as low as 0.530 [31] and −0.777 [76]. The present work established an arbitrary C-map score threshold of 0.600. On one hand, we acknowledge that, in some instances, this could eventually generate a more speculative discussion. On the other hand, our C-map results (Table S3) displayed positive hits for most of the published biological activities (related to possible therapeutic applications) directly associated with different snake venoms (Table 2), indicating that, as expected, the biological significance of the results has not been impaired by a less stringent cut-off value. Considering only genomic signatures generated by MCF7 cells treated with known drugs, we identified 792 positive correlations, sometimes also described as "agonist-related" activities (Table S7). The top-100 positively correlated drugs are shown in Table S3, and some of them will be discussed below. Additionally, we have rearranged the data from Table S3 according to the major findings and their applications: antimicrobial, anti-inflammatory, and treatment of neuropsychiatric or cardiovascular disorders (Tables 3-6). The top-20 negatively correlated signatures ("antagonist-related") are shown in Table S4.
Table 3. C-map hits for antimicrobial drugs, following MCF7 cells incubation with Bothrops jararaca venom. a Values between +1 and −1 represent the relative strength of a given signature in an instance from the total set of calculated instances; b values between +1 and −1 represent the absolute enrichment of an up tag-list in a given instance; c values between +1 and −1 represent the absolute enrichment of a down tag-list in a given instance. The same footnote legend applies to Tables 4-6.
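The up/down enrichment combination described above can be sketched with a simple KS-style statistic (an illustrative toy implementation, not the Broad Institute's actual algorithm; gene identifiers and score scaling are made up):

import numpy as np

def ks_enrichment(tags, ranked_genes):
    # KS-like enrichment of a tag list within a ranked gene list (+1: top, -1: bottom)
    n, t = len(ranked_genes), len(tags)
    positions = sorted(ranked_genes.index(g) + 1 for g in tags)
    a = max(j / t - positions[j - 1] / n for j in range(1, t + 1))
    b = max(positions[j - 1] / n - (j - 1) / t for j in range(1, t + 1))
    return a if a > b else -b

# Ranked signature of one perturbagen (most up-regulated first); toy identifiers
ranked = [f"g{i}" for i in range(1, 101)]
up_tags = ["g2", "g5", "g9"]        # query up-regulated genes sit near the top
down_tags = ["g95", "g98", "g99"]   # query down-regulated genes sit near the bottom

ks_up, ks_down = ks_enrichment(up_tags, ranked), ks_enrichment(down_tags, ranked)
# A perturbagen scores highly when up tags are enriched at the top AND down tags at the bottom
score = 0.0 if np.sign(ks_up) == np.sign(ks_down) else (ks_up - ks_down) / 2
print(ks_up, ks_down, score)   # positive score -> agonist-like ("positively correlated")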
Major Drug Classes Positively Correlated to Venom through C-Map Analysis
Antimicrobial Activity
Our biosimilar drug discovery study revealed 20 antimicrobial molecules (Table 3), of which 16 were antibiotics and 4 were antiparasitics (antimalarial, antifungal/antiprotozoal, and antischistosomal).
Antibiotic activity has already been reported for B. jararaca venom against Gram-negative and Gram-positive bacteria [4], as well as for other venoms from the Bothrops genus [8][9][10]. Additionally, all these studies have associated the antibiotic activity of snake venoms with LAAO or PLA₂, even though their mechanism of action remains unclear. Both enzymes, isolated from different snake venoms (including B. jararaca's), are also frequently associated with anti-parasitic actions, such as trypanocidal and leishmanicidal activities [4,11,13,14,77,79,80].
The second highest positively-correlated drug identified through C-map was primaquine, the only antimalarial drug available to treat malaria relapse caused by Plasmodium vivax [102,103]. This parasite presents a dormant stage (hypnozoite), which remains in the liver, creating a persistent reservoir of infection by subsequently reactivating blood-stage infections [104]. Although primaquine is the current treatment against hypnozoite forms of P. vivax, the drug has limited therapeutic efficacy [105] and is toxic to glucose-6-phosphate dehydrogenase-deficient patients, due to the risk of hemolytic anemia [106]. Also, studies have indicated that some hypnozoites may be resistant to primaquine [107]. Thus, the development of more effective antimalarial treatments against hypnozoite stages of P. vivax is highly desirable [105]. Furthermore, halofantrine, another antimalarial which acts similarly to chloroquine by forming toxic complexes with ferriprotoporphyrin IX, thereby damaging the membrane of the parasite [108][109][110], was also inferred by our C-map data (Table 3).
The potential antimalarial activity herein identified may also reflect an effect of HMOX-1, coded by the third most up-regulated gene identified in this work, through heme catabolism (Table S1). HMOX-1 is able to prevent apoptosis through TNF (tumor necrosis factor) pathway in Plasmodium-infected hepatocytes [67]. Studies have indicated that heme might have an important role in Plasmodium survival, especially in the mosquito and in the liver stages of infection, since the parasite is able to synthesize heme, in addition to its capability to obtain heme from the infected erythrocyte [112,113]. Furthermore, carbon monoxide released as a consequence of HMOX-1 enzymatic activity precludes the start of cerebral malaria through binding to hemoglobin released from the cells, thus preventing heme release [114,115].
In summary, some findings of this work corroborate the presence of antimalarial component(s) in B. jararaca venom. They consist of (i) the up-regulation of HMOX1 gene (Table S1) and (ii) the C-map analysis that led to the biosimilar drug discovery of the antimalarials primaquine and halofantrine (Table S3).
Neuropsychiatric Illnesses
C-map analysis associated B. jararaca venom to 19 drugs used in the treatment of neuropsychiatric disorders; among those, ten antipsychotics, three antidepressants, four anticonvulsants, and two antiparkinsonian drugs (Table 4). These compounds, especially the antipsychotics, usually act on muscarinic, adrenergic, dopaminergic, serotonergic, and/or histaminergic postsynaptic receptors [116][117][118][119][120]. The aforementioned receptors belong to the GPCR (G-protein-coupled receptor) family [121] and they are involved in different cell signal transduction pathways induced by hormones and neurotransmitters [122]. Additionally, the metabotropic glutamate and GABA B (gamma-aminobutyric acid, class B) receptors are also described as potential targets for treatment of multiple disorders related to the CNS (central nervous system), such as depression, anxiety disorders, schizophrenia, epilepsy, Alzheimer's, and Parkinson's diseases [123][124][125][126].
The potential of snake venom components to treat CNS disorders [127] may be partially explained by the presence of neurotoxins that target muscarinic receptors [128][129][130][131] and/or other families of G-protein-coupled receptors [132][133][134][135]. Three finger toxins are widely described in venoms of members of the Elapidae family; they act on a great variety of targets, including: (i) muscle nicotinic acetylcholine receptor; (ii) neuronal nicotinic receptor; (iii) muscarinic receptor (agonist or antagonist); (iv) acetylcholinesterase (inhibitor); (v) calcium channel; (vi) potassium channel-interacting protein; and (vii) β1- and β2-adrenergic receptors [58]. Although 3FTX are primarily described for Elapidae venoms, they were recently identified, albeit in low abundance, in the venom of B. jararaca (Viperidae family) [34]. Thus, it is possible that 3FTX are responsible, at least partially, for the potential of a B. jararaca venom isolated component to treat CNS disorders. CRISPs (cysteine-rich secretory proteins) present in Viperidae venoms, including B. jararaca's, may also contribute to that effect, since they target different types of ion channels as well as nicotinic acetylcholine receptors [45,136].
Different drug classes to treat neuropsychiatric illnesses have been associated to the venom; their respective targets are illustrated in Figure 1.
Anticonvulsants: Anticonvulsants are used to treat epilepsy and seizures. Epilepsy is a multifactor neurological disorder characterized by a dysfunction in the speed and intensity of the electrical neuronal discharges leading to unprovoked seizures. Antiepileptic drugs can act in distinct manners: (i) by blocking ion channels, such as voltage activated sodium and T-type calcium channels, and/or excitatory amino acids receptors; (ii) by improving the GABA activity as a brain inhibitor [145]. We identified anticonvulsant drugs that target all those pathways: calcium channels (trimethadione), sodium channels (carbamazepine), and GABA (valproic acid). Additionally, we identified a carbonic anhydrase inhibitor (diclofenamide), which is primarily used to treat glaucoma [146]; however, it might be also used to treat epilepsy since the inhibition of carbonic anhydrase, and the consequent increase in brain CO2 level, is a known indirect pathway for epilepsy treatment [147]. Antipsychotics: Antipsychotics are commonly used to treat schizophrenia primarily through dopamine receptors (especially D2) inhibition [137]. However, they also display varied affinities for serotonin, cholinergic, adrenergic, and histamine receptors [138,139]. The antipsychotics are classified in two categories, typical and atypical. Members of the former category induce high EPS (extrapyramidal side effects) such as acute dystonia, akathisia, parkinsonism, and tardive dyskinesia [140] whereas the atypical ones cause fewer EPS [141]. Clozapine was the only atypical antipsychotic drug identified in the present work. This drug is characterized by a low affinity to dopamine receptors but high affinity for 5HT 2 (5-hydroxytryptamine, type 2) serotonin receptor [141,142]. Although clozapine is not the first drug of choice against schizophrenia, it is frequently used to treat drug resistance cases, when the typical antipsychotics have not worked [143,144].
Anticonvulsants: Anticonvulsants are used to treat epilepsy and seizures. Epilepsy is a multifactor neurological disorder characterized by a dysfunction in the speed and intensity of the electrical neuronal discharges leading to unprovoked seizures. Antiepileptic drugs can act in distinct manners: (i) by blocking ion channels, such as voltage activated sodium and T-type calcium channels, and/or excitatory amino acids receptors; (ii) by improving the GABA activity as a brain inhibitor [145]. We identified anticonvulsant drugs that target all those pathways: calcium channels (trimethadione), sodium channels (carbamazepine), and GABA (valproic acid). Additionally, we identified a carbonic anhydrase inhibitor (diclofenamide), which is primarily used to treat glaucoma [146]; however, it might be also used to treat epilepsy since the inhibition of carbonic anhydrase, and the consequent increase in brain CO 2 level, is a known indirect pathway for epilepsy treatment [147].
Antidepressants: Symptoms of depression are common in medically ill people. However, only a few patients actually undergo a major depressive disorder. This disorder is characterized by disturbances in mood, appetite, and sleep, as well as psychomotor compromise, fatigue, and suicidal thoughts, among others [148]. Dysfunctions of norepinephrine and serotonin neurotransmission are frequent in depression and anxiety disorders, which may be explained by the involvement of these neurotransmitter systems in the modulation of other neurobiological systems compromised by this illness [149]. Thus, antidepressant drugs usually have potent effects on central noradrenergic and serotonergic systems and, in the case of the monoamine oxidase inhibitors, dopaminergic systems as well [150]. Regarding the antidepressants identified in this work, they act by inhibiting α₂-adrenoceptors (mianserin), serotonin (5-HT) reuptake (paroxetine), and monoamine oxidase A (pirlindole).
Parkinson's Disease Treatment: Parkinson's is a neurodegenerative disorder characterized by a progressive death of dopamine neurons leading to motor disturbances such as muscular rigidity, bradykinesia, and tremor [151,152]. The majority of antiparkinsonian drugs target serotonergic (5-HT1A) and dopaminergic (D2) receptors [153]; such is the case for lisuride, herein identified (Table 4). On the other hand, metixene, also identified in this work, is an anticholinergic drug [154]. As mentioned above (Section 2.2.1-Neuropsychiatric Illnesses) some neurotoxins have affinity for the muscarinic receptors [58]; this might contribute for the potential presence of antiparkinsonian activity in B. jararaca venom. Additionally, it has been shown that a tripeptide (Glu-Val-Trp) isolated from the venom of Bothrops atrox has the potential to decrease apoptosis in a classic model of Parkinson's disease [92]. Considering that the compositions of B. atrox and B. jararaca venoms are related [155], the presence of this peptide and its neuroprotective activity in the venom of B. jararaca should be further investigated.
On the other hand, Parkinson's disease patients typically display an accumulation of phosphorylated extracellular protein aggregates. Thus, some authors have suggested that a snake venom metalloendopeptidase, displaying a basic isoelectric point, should be able to cleave these highly phosphorylated protein aggregates, helping to slow the progression of the disease [156].
Cardiovascular Related Disorders
C-map analysis ascribed, with high positive correlation scores, antihypertensive and vasodilator activities amongst seven different drugs (Table 5). Those activities are usually associated to BPPs [1], which act by blocking the ACE (angiotensin-converting enzyme) [157,158], and had their structure used as a scaffold for development of the anti-hypertensive drug Captopril [2]. Although the hypotensive activity of BPPs is generally associated to ACE inhibition [157,158], BPP 5a from B. jararaca venom induced hypotension through muscarinic and bradykinin receptors [86], both present in MCF7 cells [159,160]. Thus, at least part of the antihypertensive activity, indirectly identified through C-map, might be related to BPP 5a. On the other hand, the antihypertensive drugs identified through C-map belong to the alpha-adrenergic blocker (phenoxybenzamine), thiazide diuretic (hydroflumethiazide), and thiazide-like diuretic (clopamide and metolazone) classes [161].
We also identified beta-1 and/or beta-2 blocker drugs (practolol and sotalol, Table 5), which are usually used to treat arrhythmias. These results suggest that B. jararaca venom could be a source of molecules acting on beta-adrenergic receptors, similarly to beta-cardiotoxin from Ophiophagus hannah venom, which blocks both beta-1 and beta-2 receptors [162]. Interestingly, we also identified beta-1 and beta-2 agonist drugs to treat heart failure/cardiogenic shock and bradycardia, respectively (Table 5).
It is important to stress that there are other snake venom compounds, such as natriuretic peptides, L-type calcium channel blockers, sarafotoxins, and vascular endothelial growth factors, that display cardiovascular effects (reviewed in [163][164][165][166]). Two recent works have demonstrated the vasorelaxant effect (which is likely due to the induction of NO production) of Montivipera bornmuelleri [167] and Crotalus durissus cascavella [168] venoms, indicating their therapeutic potential in the treatment of cardiovascular diseases such as hypertension.
All considered, it is possible that the known antihypertensive activity of B. jararaca venom, as well as its potential to treat other cardiovascular-related disorders, is more complex than currently perceived, being related to different molecules and/or mechanisms of action, as briefly proposed in Figure 2.
Anti-Inflammatory
The anti-inflammatory drug Sulindac displayed the third highest positive correlation with B. jararaca venom effects (Table 6). Although this activity was indirectly identified 11 times among the top-100 drugs, its presence in snake venoms is unexpected, since snake venom toxins usually have pro-inflammatory effects [26,29,[169][170][171][172][173]. However, this activity was recently reported for a cytotoxic protein present in the venom of Naja naja [93], as well as for a known analgesic peptide isolated from Naja naja atra venom [94]. A possible explanation would be an indirect action of B. jararaca venom inducing the overexpression of HMOX1, which is able to degrade proinflammatory free heme, generating carbon monoxide, iron, and biliverdin [68]. Additionally, both carbon monoxide and biliverdin, as well as its final product bilirubin, have already been described as anti-inflammatory agents [174][175][176][177][178][179].
Other Relevant Potential Applications
Novel Anticancer Drugs: Antitumor activity, herein associated to three drugs against different tumor cell lineages (Table S3), has previously been reported for B. jararaca venom [87]. The antitumor activity of snake venoms may be partially due to LAAO activity. Costa and colleagues recently published a review highlighting the antitumor potential of LAAO [88]. It is hypothesized that LAAO binds preferentially to the tumor cell surface and catalyzes the release of H₂O₂ which, once accumulated, induces oxidative stress leading to apoptosis [89]. Recently, Fung and co-workers [90] investigated the molecular mechanisms of the antitumor effect of LAAO from Ophiophagus hannah through gene expression analysis of MCF7 cells. They also observed a significant increase in the expression of CYP1A1 and, to a lesser extent, of CYP1B1. The authors suggested that both the direct cytotoxic effect of H₂O₂ released by LAAO and the oxidative stress are likely the major leading causes of apoptosis and cell death. Nevertheless, another work [91] observed that rusvinoxidase (an LAAO from Russell's viper venom) induced apoptosis in MCF7 cells through both extrinsic and intrinsic pathways, which supports the hypothesis of different pathways leading to apoptosis in tumor cells. Although LAAO is probably a key player in the antitumor effect of snake venoms, other components such as SVMPs, disintegrins, PLA₂, and C-type lectin/lectin-like proteins are known to have antiangiogenic properties and may also influence the overall antitumor activity [180][181][182][183].
Diabetes Treatment: Through C-map analysis, we identified three drugs to treat type II diabetes mellitus (Table S3). Tolbutamide belongs to the sulfonylureas antidiabetic drug class and acts by stimulating β cells of the pancreas to release insulin through the inhibition of a potent potassium channel on the β cells membrane [184]. Furthermore, troglitazone and rosiglitazone belong to the thiazolidinedione drug class which acts as an agonist of peroxisome proliferator-activated receptors (specifically PPARγ). This class of antidiabetic drug influences free fatty acid flux, thus reducing insulin resistance and blood glucose levels [185].
"Diabetes mellitus is a group of metabolic diseases characterized by hyperglycemia resulting from defects in insulin secretion, insulin action, or both" [186]. Insulin secretion is modulated by the action of different hormonal and neural stimuli [95,187,188], such as through the activation of G-protein-coupled receptors [189], but also through modulation of ion channel activity [190,191].
It is well known that toxins from venomous animals are able to target a great diversity of G-protein-coupled receptors, such as the glucagon receptor family, as well as to affect membrane excitability through ion channel modulation [95,134,192]. Thus, the identification of antidiabetic activity was not surprising, since insulinotropic properties of snake venoms have already been reported for some components such as PLA₂, serine endopeptidases, disintegrins [96], crotamine [97,193], and cardiotoxin [95]. In the case of PLA₂, the increase in insulin secretion is likely related to cytosolic Ca²⁺ [98][99][100]. On the other hand, crotamine and cardiotoxin act on potassium and sodium ion channels, respectively [95,101,193]. It is noteworthy that Byetta®, a commercial antidiabetic drug, is a glucagon-like peptide-1 receptor agonist synthesized based on the peptide exendin-4, isolated from the saliva/venom of the Gila monster (Heloderma suspectum) [194,195]. The potential to treat type II diabetes has also been described for components of wasp [196,197], scorpion [198,199], spider [200], and bee [201] venoms.
Gastroesophageal Reflux Disease Treatment: Gastroesophageal reflux is characterized by movement of harmful gastroduodenal contents such as gastric and bile acids into the esophagus [202]. GERD (gastroesophageal reflux disease) is a condition that causes either esophageal mucosal break, or annoying symptoms such as heartburn and regurgitation, or both [203,204]. GERD is usually treated by: (i) altering gastric contents by neutralization of acid; (ii) augmenting the antireflux barrier; (iii) improving of mucosal defense mechanisms; (iv) blocking esophageal nociceptors; or (v) modulating afferent signals and their interpretation in the brain cortex [202]. In this work, we indirectly identified one of those treatment pathways: alteration of gastric contents by neutralization of acid.
The drug lansoprazole, which showed the highest positive correlation with B. jararaca venom, is a proton pump inhibitor that treats GERD by blocking the gastric acid secretion [205]. However, the identification of lansoprazole may be correlated to its ability to induce the expression of CYP1A1, observed in hepatoma cell line HepG2 [206] and hepatocytes [207]. This ability has already been ascribed to primaquine [111], which was the second highest correlated drug identified (Table S3).
Moreover, H2 (histamine, type 2) receptor antagonists such as ranitidine, another drug related to the venom by C-map, can neutralize the gastric acid secretion dependent of histamine binding to H2 [208]. It has already been shown that the venom of Bothrops moojeni induces edema through the binding of histamine, released by the degranulation of mast cells, to H2 receptor [209]. Nevertheless, as far as we know, no compound with H2 antagonist properties has been described in snake venoms so far.
Antihistamines: We identified seven antihistamine drugs (meclozine, chlorphenamine, clemizole, carbinoxamine, ketotifen, mebhydrolin, and diphenhydramine) with good positive correlation with B. jararaca venom (Table S3). All these drugs display an antagonist effect on histamine receptor (H1) [210] but some of them (meclizine and mebhydrolin) have an additional anticholinergic effect. The binding of histamine to H1 receptor induces a proinflammatory response leading to many effects associated with anaphylaxis and other allergic diseases [211] such as asthma, bronchospasm, and mucosal edema [212]. The antihistaminic activity is unexpected for B. jararaca venom since it contains molecules (e.g., PLA 2 and SVMPs) that are able to induce histamine release through mastocyte degranulation [213][214][215][216], leading to increased vasodilation and vascular permeability. However, considering that snake venoms can display ambivalent actions such as pro-and anti-coagulant effects or possess both agonists and antagonists of platelet aggregation [41,217], we could hypothesize that snake venoms could display antihistamine activity. It is noteworthy that MCF7 cells express both histamine H1 and H2 receptors [218].
Major Drug Classes Negatively Correlated to Venom through C-Map Analysis
As previously mentioned, we have also generated a list of negatively correlated genomic signatures following MCF7 cell treatment with the venom (Tables S4 and S7). Although the interpretation of these results is not self-evident, we will comment on some of the hits obtained. For instance, oxymetazoline is a decongestant which acts as an alpha-adrenergic agonist [219]. Since there was a negative correlation to venom, one could expect the presence of adrenergic antagonists (blockers). This is consistent with the data discussed in Section 2.2.1-Cardiovascular Related Disorders, linking the venom to antihypertensive compounds. Another high-ranking hit was Trapidil, a PDGF (platelet-derived growth factor) antagonist. Although we could not find in the literature a PDGF agonist related to snake venoms, it has been shown that aggretin (a C-type lectin from Calloselasma rhodostoma venom [220]) phosphorylates PDGF receptor beta, leading eventually to PDGF-BB production [221]. Three anti-inflammatory- and one antihistamine-related drugs could represent the known pro-inflammatory and histamine release activities related to bothropic venoms, which were discussed above (Section 2.2.1-Anti-Inflammatory and Section 2.2.1-Other Relevant Potential Applications: Antihistamines).
Conclusions
We aimed to explore novel potential therapeutic activities in B. jararaca venom through gene expression analysis allied to biological screening using connectivity mapping. The identification of drugs with activities (e.g., antihypertensive, antimicrobial, and antitumoral) previously reported for high-abundance components of snake venoms, especially in B. jararaca, supported the efficacy of C-map as an unbiased exploratory approach for biological activity screening, and rekindles the snake venom-based search for new therapeutic agents. Moreover, this work indicated the existence of active venom components that could potentially be used in the treatment of other disorders (e.g., schizophrenia, depression, epilepsy, and gastroesophageal disease). However, those "newfound" activities should be confirmed in vitro and, eventually, in vivo, followed by venom fractionation in order to determine the molecular species associated with them. Furthermore, venom prefractionation could be performed and individual fractions submitted to C-map analysis; one such approach would be to assay the complex B. jararaca venom peptidome, recently described in the literature [34]. This peptidome is composed of hundreds of relatively short peptides (9 to 10 amino acids long, on average) that could constitute a rich bioactive peptide library. In summary, the present work paves the way for further studies exploring the therapeutic potential of snake venoms by providing a rich set of novel activities to be assayed beyond the classical ones (e.g., hemorrhage, myotoxicity, hypotension).
Venom
Lyophilized venom, a pool from several juvenile/adult, male/female Bothrops jararaca specimens, was kindly provided by Instituto Butantan (São Paulo, Brazil). The access to Brazilian fauna genetic heritage was issued by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) under license number 010578/2014-5.
Tissue Culture
MCF7 cells were obtained from the American Type Culture Collection (HTB-22™, Manassas, VA) and grown in Dulbecco's modified Eagle medium containing 0.01 mg/mL bovine insulin and 10% fetal bovine serum. MCF7 cells were passaged and grown to 80% confluence in medium.
MCF7 Cells Treatment with B. jararaca Venom
Initially, one milligram of B. jararaca venom was dissolved in 1 mL of MCF7 medium. Based on previous results with HUVECs [29], four different concentrations (1, 2, 5, and 10 µg/mL) were tested. We then chose the highest concentration (5 µg/mL) at which no overt phenotypic changes were observed in the MCF7 cells, and added 1 mL of this solution to each well on a six-well plate (85.20 mm × 127.80 mm). After that, the cells were incubated for 6 h at 37 °C. A plate containing only cells in 1 mL of media was assayed as control. All experiments were performed in duplicate.
Gene Expression Analysis
The total RNA was extracted from the cells using the RNeasy mini kit (Qiagen, Hilden, Germany, cat. no. 74104) following the manufacturer's instructions. The sense strand DNA was generated from cRNA, fragmented, and labeled for hybridization to the HuGene ST 2.0 array (Affymetrix, lot 4265888, Ref. 902112, Thermo Fisher Scientific, Waltham, MA, USA). The samples were hybridized to the chips overnight, then washed and stained using Affymetrix's Fluidics Station 450 (P/N 00-0079, Affymetrix, Santa Clara, CA, USA) and the GeneChip Hybridization, Wash and Stain kit (P/N 900720, Affymetrix) following the manufacturer's instructions. The chips were scanned using Affymetrix's GeneChip Scanner 3000 7G (P/N 00-0213, Affymetrix, Santa Clara, CA, USA). Four chips were run for the two experimental groups (venom and control) assayed in duplicate, as aforementioned.
Bioinformatics Analysis
The gene expression analysis, to determine changes in transcripts following MCF7 cell treatment with B. jararaca in comparison to untreated cells, was carried out as previously described [33]. Furthermore, we also used the C-map software build 02 (https://portals.broadinstitute.org/cmap/) to query the probe sets of significantly differentially expressed genes with the perturbagen signatures present in the C-map database. Initially, we converted the probe sets from HuGene ST 2.0 to HGU133A dataset, which is compatible with the C-map database, using the Affymetrix tool that provided a best match between the two chip types.
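Operationally, the chip conversion and query preparation reduce to a mapping step followed by writing the two tag lists (a hypothetical sketch; the mapping table, probe identifiers, and file format are illustrative placeholders, not Affymetrix's or the Broad Institute's actual interfaces):

# Hypothetical best-match table from HuGene ST 2.0 probe set IDs to HG-U133A IDs
best_match = {
    "16657436": "201468_s_at",   # illustrative pairs only
    "16657440": "205749_at",
    "16657450": "203665_at",
}

up_probes = ["16657436", "16657440"]     # significantly up-regulated probe sets
down_probes = ["16657450"]               # significantly down-regulated probe sets

def convert(probes, table):
    # Keep only probes that have a best-match counterpart on the HG-U133A platform
    return [table[p] for p in probes if p in table]

# C-map queries are then submitted as plain tag lists of HG-U133A probe IDs
with open("up_tags.grp", "w") as f:
    f.write("\n".join(convert(up_probes, best_match)) + "\n")
with open("down_tags.grp", "w") as f:
    f.write("\n".join(convert(down_probes, best_match)) + "\n")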
Afterwards, the algorithm returned a ranked list of all perturbagens found in the C-map database along with scores indicating their relation to the venom. The top-100 positively correlated drugs identified through C-map were submitted to the DrugBank website (https://www.drugbank.ca/, accessed on 12 June 2017) [222] to retrieve information about their mechanisms of action. The same was done for the top-20 negatively correlated drugs. | 2018-04-03T01:01:47.964Z | 2018-02-01T00:00:00.000 | {
"year": 2018,
"sha1": "119f3b95e1bd2ab3e07ef711861ebcf201fc1d24",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6651/10/2/69/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "119f3b95e1bd2ab3e07ef711861ebcf201fc1d24",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
52113414 | pes2o/s2orc | v3-fos-license | Sensorless Control of a Fault Tolerant Multi-level Inverter PMSM Drives in Case of an Open Circuit Fault
This paper introduces a fault-tolerant multi-level inverter PMSM drive that is capable of operating in case of a single-phase open circuit fault without degrading the system performance. Moreover, it can work in sensorless mode in case of an open circuit fault with the same performance as in sensored mode. The permanent magnet synchronous motor (PMSM) is fed by a 4-leg asymmetric cascaded H-bridge multi-level inverter. The fourth leg is activated only in case of an open circuit, to maintain the system performance. The reliability of the system is additionally enhanced by adopting a new method to track the saliency position in case of an open circuit fault, so that the system can work in sensorless mode. The saliency position is obtained by measuring the dynamic current response of the healthy motor line currents due to the insulated-gate bipolar transistor (IGBT) switching actions. The new strategy includes software modifications only to the saliency tracking algorithm used in healthy mode, in order to make it applicable to the reconfigured multi-level inverter in the presence of a fault. It uses only the fundamental pulse width modulation (PWM) waveform (i.e., there is no modification to the operation of the 4-leg multi-level inverter), similar to the fundamental PWM method proposed for a 3-leg multi-level inverter. Simulation results are provided to verify the effectiveness of the proposed strategy over a wide range of speeds in the case of a single-phase open circuit fault.
I. INTRODUCTION
Sensorless control of motor drives employing standard two-level converters has been widely researched [1][2][3][4][5][6]. These techniques introduce significant additional current distortion, which causes audible noise and torque pulsations and increases system losses. On the other hand, a multi-level converter can achieve higher voltage and power capability with conventional switching devices compared to a 2-level converter and is now used for high-power drives [7,8,9]. The particular structure of some of these converters offers significant potential for improving sensorless control of motors, as they employ H-bridge circuits with a relatively low DC-link voltage. References [10][11][12] introduce different techniques to achieve sensorless control of multi-level inverter drives in healthy mode, i.e., with no open-circuit fault. Under faulty conditions, a number of fault-tolerant strategies for 2-level motor drives [13][14][15][16][17][18] and multi-level motor drives [19][20][21][22] have been used to enhance system operation under open-circuit phase faults in sensored mode. References [23,24] introduced a 4-leg 2-level PMSM drive to track the saturation saliency in the case of single-phase open-circuit faults. This paper introduces a new method to track the saturation saliency in a surface-mounted permanent magnet motor in case of an open-circuit fault, with the motor driven by a 4-leg multi-level inverter. The objective is to maintain continuous system operation with satisfactory performance, to meet the safety requirements of the whole system, and to increase system reliability.
II. RESEARCH METHOD
A. Fault tolerant multi-level four-leg converter drive topology

Fig 1 shows the proposed fault-tolerant multi-level 4-leg converter drive topology. In this topology, a fourth leg is added to the conventional 3-leg multi-level inverter. The redundant leg is permanently connected to the motor neutral point to provide fault-tolerant capability in case of an open-phase fault. Under healthy operating conditions, the fourth leg is redundant: the two switches in this leg are deactivated, so there is no connection between the supply and the motor neutral point. The proposed converter therefore normally operates as a conventional 3-leg multi-level inverter, as shown in Fig 2. Under faulted operating conditions, the switches on the faulty phase are disabled and the switches in the fourth leg are immediately activated in order to control the voltage at the neutral point of the motor.
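A minimal sketch of this reconfiguration logic is shown below, assuming a simple per-leg enable interface; the phase names and data structure are illustrative, not the actual drive firmware.

```python
from dataclasses import dataclass

PHASES = ("a", "b", "c")

@dataclass
class LegGating:
    """Enable flags for the three phase legs and the redundant fourth leg."""
    phase_enable: dict
    neutral_enable: bool

def reconfigure(open_phase=None):
    """Return leg gating for healthy operation or for an open-circuit fault.

    Healthy mode: all three phase legs active, fourth (neutral) leg idle, so
    the converter behaves as a conventional 3-leg multi-level inverter.
    Faulted mode: the faulty leg is disabled and the fourth leg is activated
    so the neutral current can circulate through it.
    """
    if open_phase is not None and open_phase not in PHASES:
        raise ValueError(f"unknown phase: {open_phase}")
    enable = {p: p != open_phase for p in PHASES}
    return LegGating(phase_enable=enable, neutral_enable=open_phase is not None)

print(reconfigure())     # healthy: neutral leg redundant
print(reconfigure("b"))  # open circuit in phase b: fourth leg activated
```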
B. Healthy operation of the multi-level inverter
The control strategy of the system in sensored healthy mode is illustrated in Figure 2. The reference voltages calculated by the controllers are used to generate the pulses that control the multi-level inverter through the Space Vector Pulse Width Modulation (SVPWM) technique.
The multi-level SVPWM technique adopted in this paper is given in [8]. According to this technique, the switching sequence will be one of four types, as illustrated in Fig 3. Figure 7 shows, as an example, the case in which an open circuit occurs in phase b [18,23]. Firstly, in order to disable the switches in phase b, the reference voltage of the faulty phase Vb_ref is set to zero, whereas the motor neutral current, which is the sum of the two remaining output currents, can circulate through the fourth leg of the multi-level inverter. Secondly, as the current in the faulty phase becomes zero (Ib = 0), and in order to maintain the motor performance under faulty operation, the rotating magnetomotive force obtained from the armature currents (Ia, Ib, Ic) in the healthy condition must be reproduced by the two remaining motor currents (Ia and Ic). This demands an increase of the current amplitude by a factor of √3 as well as a phase shift of 30 degrees away from the faulted phase compared to the currents generated under normal operation, as given in Eq. (1). If the fault occurs in another phase, the same algorithm is applied.

$$
\begin{bmatrix} i_a \\ 0 \\ i_c \end{bmatrix} =
\begin{bmatrix}
\sqrt{3}\cos(\theta+30^\circ) & \sqrt{3}\sin(\theta+30^\circ) \\
-\cos(\theta-120^\circ) & -\sin(\theta-120^\circ) \\
\sqrt{3}\cos(\theta+90^\circ) & \sqrt{3}\sin(\theta+90^\circ)
\end{bmatrix}
\begin{bmatrix} i_d \\ i_q \end{bmatrix}
\qquad (1)
$$

The simulation of the 4-leg multi-level converter PMSM drive was carried out using SABER. Figure 8 shows the simulation results of the 4-leg multi-level inverter PMSM drive system under healthy and faulted conditions. The motor was driving a 30 Nm load torque at a speed of 300 rpm. Speed step commands from 300 rpm to 1100 rpm and back to 300 rpm were then applied at times 2 s, 3 s, 4 s, 6 s, 7 s and 8 s. In the interval between 2.5 s and 3.5 s an open-circuit phase fault was introduced in phase 'a', while an open circuit in phase 'b' was introduced between 4.5 s and 5.5 s. Finally, between 6.5 s and 7.5 s the open-circuit fault was introduced in phase 'c'. It is clear that the controller could regulate the motor speed to follow the reference speed properly under faulted conditions as well as under normal operation. The controlled currents id and iq were stable at the reference level. Under faulted conditions, the amplitude of the motor currents was multiplied by √3 and the two remaining healthy currents became phase-shifted by 60°, while the neutral current was no longer zero, as given in Eq. (1). For the rest of the test, i.e., under healthy conditions, the motor currents are balanced 3-phase sinusoids and the neutral current is zero. The simulation results show that the torque ripple is almost the same as under normal operating conditions.
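To make the √3 scaling and the 30-degree shift of Eq. (1) concrete, the short sketch below computes the healthy and phase-b-fault current references for a unit current amplitude and numerically checks that both produce the same rotating MMF space vector; the amplitude and angle grid are arbitrary illustration values, not parameters from the paper.

```python
import numpy as np

A = np.exp(1j * 2 * np.pi / 3)  # complex spatial operator a = e^(j120 deg)

def healthy_currents(theta, i_m=1.0):
    """Balanced 3-phase currents under normal operation."""
    return (i_m * np.cos(theta),
            i_m * np.cos(theta - 2 * np.pi / 3),
            i_m * np.cos(theta + 2 * np.pi / 3))

def fault_b_currents(theta, i_m=1.0):
    """Phase-b open: remaining currents scaled by sqrt(3) and shifted 30 deg
    away from the faulted phase (their mutual displacement becomes 60 deg)."""
    ia = np.sqrt(3) * i_m * np.cos(theta + np.pi / 6)  # theta + 30 deg
    ic = np.sqrt(3) * i_m * np.cos(theta + np.pi / 2)  # theta + 90 deg
    return ia, np.zeros_like(theta), ic

def mmf(ia, ib, ic):
    """Rotating MMF space vector i_a + a*i_b + a^2*i_c."""
    return ia + A * ib + A**2 * ic

theta = np.linspace(0.0, 2 * np.pi, 360)
healthy = mmf(*healthy_currents(theta))
faulted = mmf(*fault_b_currents(theta))
# Both trajectories coincide, so torque production is preserved after the fault.
assert np.allclose(healthy, faulted)
```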
The stator leakage inductance is modulated by the rotor anisotropy; here l0 denotes the average inductance and Δl the variation of the leakage inductance due to the rotor anisotropy (for saturation anisotropy the modulation occurs at twice the electrical angle). This modulation of the stator leakage inductances is reflected in the transient response of the motor line currents to the test vectors imposed by the inverter. By using the fundamental PWM waveform and measuring the transient current response to the active vectors, it is therefore possible to detect the inductance variation and track the rotor position for a three-leg multi-level inverter. After obtaining the scalar quantities p_a, p_b and p_c, the position of the saliency can be constructed as shown in the equation below:

$$
\vec{p} = p_\alpha + j\,p_\beta = p_a + a\,p_b + a^2 p_c \qquad (5)
$$

where a = e^{j2π/3}. Fig 9 shows simulation results for tracking the saturation saliency (2fe) in an SMPM machine under faulted as well as healthy conditions; the motor is driven by a four-leg multi-level inverter. The algorithm used in this test to track the saliency was proposed in [12], where it tracks the saliency of an SMPM motor driven by a three-leg multi-level inverter under healthy operation. The results show that, under healthy operating conditions, the algorithm of [12] for the three-leg multi-level inverter drive could track the saturation saliency efficiently at different speeds, whereas it could not track the saturation saliency under faulted operating conditions, i.e., in the time intervals (2.5 s to 3.5 s), (4.5 s to 5.5 s) and (6.5 s to 7.5 s). These results can be explained as follows. Under healthy operating conditions, the switches in the fourth leg are not activated, so the four-leg multi-level inverter operates as a three-leg multi-level inverter and the algorithm proposed in [12] can track the saliency. In the interval from 2.5 s to 3.5 s, i.e., with an open-phase fault in phase a, the measured current response (dia/dt) becomes zero as ia = 0, so the position estimation algorithm cannot track the saliency in this interval. Likewise, between 4.5 s and 5.5 s (open-phase fault in phase b) and between 6.5 s and 7.5 s (open-phase fault in phase c), the measured current responses (dib/dt) and (dic/dt) become zero, and hence the algorithm cannot track the saliency in those intervals, as shown in Figure 9.
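The construction of the saliency position vector in equation (5) can be illustrated in a few lines of Python; the scalar values below are synthetic placeholders modulated at twice the electrical angle, standing in for the quantities derived from the measured di/dt responses.

```python
import cmath
import math

A = cmath.exp(1j * 2 * math.pi / 3)  # spatial operator a

def saliency_vector(p_a, p_b, p_c):
    """Equation (5): p = p_alpha + j*p_beta = p_a + a*p_b + a^2*p_c."""
    return p_a + A * p_b + A**2 * p_c

# Synthetic position scalars modulated at twice the electrical angle,
# as for saturation saliency, each displaced by 120 electrical degrees.
theta_e = math.radians(40.0)
p = saliency_vector(math.cos(2 * theta_e),
                    math.cos(2 * theta_e - 2 * math.pi / 3),
                    math.cos(2 * theta_e + 2 * math.pi / 3))
theta_est = cmath.phase(p) / 2  # the angle of p is 2*theta_e
print(math.degrees(theta_est))  # ~40 degrees
```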
E. Tracking the Saliency in the Multi-level Inverter under Faulted Conditions
As seen in the previous section, the algorithm presented in [12] cannot track the saliency in the case of an open-circuit fault. In this section, a modified algorithm is introduced to track the saliency under such a fault. The algorithm makes use of the switching action of the IGBTs in the fourth leg of the multi-level inverter under faulted conditions. It uses the current response to the application of the fundamental PWM waveform (no modification is applied to the PWM waveform). The new algorithm uses only the current responses of the healthy phases to track the saliency; it does not use the current response of the open-circuit phase, as it will be zero. After measuring the current responses of the two healthy phases, and according to the sector number and the type of the space-vector-modulation state diagram in which the reference voltage lies, the three position scalar quantities can be deduced and hence the saliency position obtained. The stator circuits when the vectors V1, V2 and V0 are applied are shown in Fig 11.a, 11.b and 11.c, respectively.
From Fig 11.b, voltage loop equations can be written for the application of V2, expressing the DC-link voltage as the sum of the inductive voltage drops (terms of the form L·di/dt) in the conducting phases, giving equations (6)–(9). Finally, when V0 is applied as shown in Fig 11.c, the loop contains no source voltage, so the inductive voltage drops sum to zero, giving equations (10) and (11). Assuming that the voltage drops across the stator resistances are small and can be neglected, and that the back-EMF terms cancel if the time separation between the vectors is small, subtracting equation (8) from equation (6) and equation (11) from equation (9) yields expressions (12) and (13) for V_DC in terms of the measured current derivatives under the active and null vectors.
F. Fully sensorless speed control of the 4-leg multi-level inverter under faulted conditions
The speed control for a PM machine has been implemented in simulation in the SABER modelling environment. The estimated position signals pαβ from the selected equations are used as the input to a mechanical observer [25] to obtain the estimated speed ω̂ and a cleaned position θ̂. Note that the simulation includes a minimum pulse width of 10 µs when di/dt measurements are made, similar to the experimental setup of [6]. The estimated speed ω̂ and position θ̂ are used to achieve fully sensorless speed control, as shown in Figure 13.
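As a minimal sketch of such an observer stage, the PLL-style tracking loop below extracts a cleaned angle and a speed estimate from noisy pα/pβ signals; the gains and structure are illustrative assumptions, not the tuned mechanical observer of [25].

```python
import math

def track_position(p_alpha, p_beta, dt, kp=200.0, ki=5000.0):
    """PLL-style tracking loop: drives the error between the measured
    saliency angle and its estimate to zero, yielding a cleaned position
    theta_hat and a speed estimate omega_hat at every sample."""
    theta_hat, omega_int = 0.0, 0.0
    estimates = []
    for pa, pb in zip(p_alpha, p_beta):
        # Phase error via the cross/dot product form of atan2; this is
        # insensitive to the amplitude of the saliency signal.
        err = math.atan2(pb * math.cos(theta_hat) - pa * math.sin(theta_hat),
                         pa * math.cos(theta_hat) + pb * math.sin(theta_hat))
        omega_int += ki * err * dt          # integral path
        omega_hat = omega_int + kp * err    # PI output = speed estimate
        theta_hat = (theta_hat + omega_hat * dt) % (2 * math.pi)
        estimates.append((theta_hat, omega_hat))
    return estimates
```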
Fig 12 shows the algorithm used to track the saliency of the multi-level inverter: the fundamental PWM algorithm given in [12] for healthy mode and the algorithm presented above for faulted conditions.

Figure 14 shows the results of fully sensorless speed control of a PMSM motor driven by the 4-leg multi-level inverter under load, using the algorithm proposed in [12] for the healthy case and the method proposed above in the case of an open-circuit fault. The motor was operating in sensorless healthy mode at a speed of 0.5 Hz; then, at time t = 4 s, an open-circuit fault in phase 'a' was introduced to the system. The motor maintained its performance after the fault. At t = 6 s, a speed step change from 0.5 Hz to 0 Hz was applied while the motor was under the open-circuit fault in phase 'a'. Figure 14 shows that the motor responded to the speed step with good transient and steady-state response. At t = 8 s, the fault in phase 'a' was removed and introduced in phase 'b'; Figure 14 shows the motor tracking the zero reference speed during this time. At t = 12 s, the fault was removed from phase 'b' and introduced in phase 'c'. Then, at t = 14 s, a speed step change from 0 Hz to −0.5 Hz was applied while the motor was operating under the open-circuit fault in phase 'c'. Figure 14 shows that the motor responded to the speed step with good transient and steady-state response. Finally, at t = 16 s, all faults were removed and the motor returned to the healthy condition.

Figure 15 shows similar results to those of Figure 14 but at higher speed steps (16.67 Hz). Figure 15 shows that the motor responded to the speed step with good transient and steady-state response. At t = 8 s, the fault in phase 'a' was removed and introduced in phase 'b'; Figure 15 shows the motor tracking the zero reference speed during this time. At t = 12 s, the fault was removed from phase 'b' and introduced in phase 'c'. Then, at t = 14 s, a speed step change from 0 Hz to −16.67 Hz was applied while the motor was operating under the open-circuit fault in phase 'c'. Figure 15 shows that the motor responded to the speed step with good transient and steady-state response. Finally, at t = 16 s, all faults were removed and the motor returned to the healthy condition.

III. CONCLUSION

This paper has outlined a new scheme for tracking the saliency of a motor fed by a 4-leg multi-level inverter in the case of a single-phase open-circuit fault, through measuring the dynamic current response of the motor line currents to the IGBT switching actions. The proposed method involves a software modification of the method proposed in [12], which tracks the saliency of the motor under healthy conditions, to make it applicable under an open-circuit phase condition. The new strategy can be used to track the saturation saliency in PM motors (2 fe) and the rotor slotting saliency in IMs (14·fr), similar to the method used in a healthy motor drive; the only difference between the PM and IM cases is the tracked harmonic number. The results have shown the effectiveness of the new method in increasing the safety measures of critical systems that require continuous operation. The drawbacks of this method are an increase in the total harmonic distortion of the motor currents, especially at very low speed, due to the minimum pulse width, in addition to the need for three di/dt sensors.
"year": 2018,
"sha1": "3f3e01b468008f7c310c11025bb25c7b9ed142fc",
"oa_license": "CCBY",
"oa_url": "https://nottingham-repository.worktribe.com/preview/1351022/Sensorless%20Control%20of%20a%20Fault%20Tolerant%20Multi-level%20Inverter%20PMSM%20Drives%20in%20Case%20of%20an%20Open%20Circuit%20Fault.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "04e3325c9ea7fae11fd3cebfeb125cc8d8e31582",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Social sustainability in an evolving circular fashion industry: identifying and triangulating concepts across different publication groups
Sustainability and the concept of circular economy are two of the most prominent approaches in the fashion industry to meet global challenges. Advocated by different interest groups, these concepts primarily follow an environmental and economic perspective on sustainability. In turn, the social dimension of sustainability has not been extensively explored. Performing a comparative discourse analysis, this study triangulates data from three different perspectives and unveils social sustainability-related aspects in documents related to two specific companies as well as in academic and stakeholder publications in the fashion context. We use Leximancer™ to reveal and visualize the scope and frequency of socially relevant concepts in more than 550 publications. Based on this, results show that the two fashion companies have gradually been communicating more about social sustainability-related aspects as opposed to academic and stakeholder publications. Overall, some social sustainability-related values appear exclusively in each of the publication groups, whereas others seem to reflect a mutual influence among the different players. Yet, pivotal social sustainability-related issues are missing. This corroborates scholars assuming a neglected role of the social dimension of sustainability in general and calling for a greater elaboration on social aspects in the conceptualization of a circular economy. Our results also call for a deeper follow-up analysis of communications, practices and strategies of different actors in their respective social contexts.
Introduction
The global fashion industry causes massive environmental impact and disastrous social mismanagement (e.g., violating several labor laws) in its supply chains (Pedersen and Gwozdz 2014). Various stakeholder groups including consumers, non-governmental organizations (NGOs), or institutional investors have been exerting pressure on major companies to encourage them to implement proper sustainability and risk management (Jaegler and Goessling 2020). Key drivers to implement circularity in the fashion industry particularly involve recycling and upcycling activities as well as vegan manufacturing principles (Todeschini et al. 2017). Despite differences in the concepts of sustainable development and circular economy (Ellen MacArthur Foundation 2022) regarding objectives, approaches, and impacts, both emphasize intra-and intergenerational commitment, non-economic drivers for societal change and the pivotal role of companies to advance industry transitions (Bocken et al. 2017;Geissdoerfer et al. 2018;Lahti et al. 2018).
Change is also influenced by multiple stakeholder groups, including industry players, business consultancies and associations, policy-makers, academic scholarship, and media, as well as an increasing environmental awareness in society (Korhonen et al. 2018). Stakeholder activism and an increasingly complex environment of multi-stakeholder governance and bodies affect the corporate management of transitions towards circularity and sustainability (Arenas et al. 2009; Baumgartner et al. 2020; Dawkins 2014). In turn, a company's ability and capacity to respond to external pressures and demands by creating sustainable value have become a core prerequisite for corporate success and for becoming a socially sustainable organization (Chiappetta Jabbour et al. 2019; Galuppo et al. 2014). Thus, analyzing leading retailers and fashion brands, such as C&A or H&M, can deepen insights into the transition towards social circularity.
Both companies have been adopting strategies and business model innovations towards a circular economy to promote environmental and society-wide benefits of sustainable fashion production and consumption practices (GFA 2018; Saha et al. 2021). For example, while C&A has been pioneering product innovations based on the cradle-to-cradle concept (C&A 2018), H&M proclaimed in April 2017 its strategic turnaround of becoming 100% circular and renewable by 2030 (H&M 2018). Since governance and legitimization processes are characterized by underlying complex, uncertain and multiple values, embracing a plurality of perspectives relevant to tackling sustainability issues is considered crucial to enable a standardization and institutionalization of norms and practices in business-society relations and responsible business conduct (Grosser 2016).
These aspects also affect social sustainability, which is concerned with the identification and management of positive and negative impacts, evoked by corporate activity, on people (United Nations Global Compact 2019). According to Roca-Puig (2019), "social sustainability is a quality of a human system based on a series of values or essential ethical principles (e.g., fairness, trust, equity, justice, cooperation, engagement) that foster lasting conditions for human wellbeing, particularly for the most vulnerable individuals or groups" (p. 917). However, in the circular economy, the social dimension has been less investigated and elaborated in terms of meaning, objectives and relevant issues (cf. Ajmal et al. 2017; Moreau et al. 2017; Sauvé et al. 2016). It is still unclear how circular economy-related strategies and concepts will promote social equity and provide benefits for society such as improvements in human rights and social justice (Millar et al. 2019).
Previous literature has failed to address the potential relevance of aspects associated with social sustainability and social compliance in a circular fashion industry and their real-world representation by fashion companies. Moreover, a consolidated view on which themes and aspects reflecting social sustainability are embedded in a circular economy is missing in the existing literature. Considering social aspects could also help to spur the further legitimization and institutionalization of the circular economy concept. Indeed, as Murray et al. (2017) stress, there is the imperative of further developing existing concepts from a holistic and integrative perspective. Moreover, the roles of stakeholders, such as NGOs and their engagement in circular transition processes, are also underrepresented (Lüdeke-Freund et al. 2018), as is the pressure of external stakeholders and interest groups on fashion companies to implement circular economy principles and strategies in a socially sustainable way. Therefore, a synthesis of the current state of themes and concepts relating to social sustainability, as seen by selected companies and stakeholders, is essential, since it is unclear how these views differ or overlap in a fashion industry increasingly moving towards implementing sustainability and circularity. Our study responds to this gap by identifying, visualizing and elaborating themes and concepts relating to social sustainability in publications across three different groups of actors: business, represented by two specific companies, academia, and the public. We further compare these concepts by discovering similarities and differences across the coverage by corporate players, academic outlets and public interest groups, thereby presenting a multi-perspective and balanced approach.
The research approach chosen in our study appears to be highly relevant since discussions about sustainable development and the circular economy involve discourses between multiple actors with many co-existing motives and interests, perspectives and interpretations, representing 'constructive ambiguity' (Hugé et al. 2013, p. 188). While society and knowledge about it are actively constructed by discourses as structured ways of representation, discourses also filter particular social understandings and actions (Hugé et al. 2013). This may imply opportunities and enablers, but also constraints and obstacles, influenced by languages, meanings and interpretations, existing assumptions and perceptions, social structures and interactions, or even levels of power. On a meta-level, identifying particular features of discourses among different actors helps to uncover categories and structures that are central and congruent in the representation of social constructs and thus mirror their significance. It also detects neglected and less important aspects that could nevertheless influence the acceptability and further development of such constructs and discourses in society.
Our study contributes to this by shedding light on different discourses in an increasingly circular fashion industry to identify key socially related concepts in publications issued by diverse actors. By emphasizing a comparative and comprehensive approach, the study departs from mainstream research on the circular economy that focuses on the examination of corporate case examples or academic viewpoints. Indeed, studies on stakeholder perspectives in the circular economy for fashion are missing; this gap needs to be addressed.
First, we collect and analyze academic, corporate and stakeholder publications concerning concepts, patterns and relations. Second, we aggregate the findings across all groups examined to arrive at comparative conclusions. The intention of this article is not to delineate between sustainability and circular strategies in the fashion industry or to elaborate on single fashion companies implementing sustainability approaches (e.g., in the context of fast fashion). Furthermore, it is not the study's intention to analyze the roles and spheres of influence among stakeholders in the fashion industry independently. The study contrasts themes of circularity and social sustainability in the corporate publications of two specific fashion companies with those in academic and stakeholder publications. The results of this triangulation advance further research and practical implementation towards maturing and harmonizing multi-perspectival notions of social sustainability in an industry sector's movement towards greater sustainability and circularity.
Theoretical-conceptual background
This section focuses on the core theoretical streams of our research. We stress key features of the circular economy and existing as well as missing links to social sustainability, both in general and in relation to the fashion industry.
The fashion industry's movement towards implementing circular economy
Sustainability aims at the establishment of resilient systems respecting the limits of ecological viability and capacity and balancing the environmental, economic and social dimensions of sustainability (the triple-bottom-line approach; Arnold 2018). Sustainable development further requires changes in multiple spheres, including individual, organizational, institutional, and systemic levels, as well as business model innovations for sustainability that consider the interests of multiple stakeholders (Tolkamp et al. 2018). The textile, fashion and apparel industry is responsible for causing huge sustainability challenges throughout its value and supply chains, given the prevailing principles of a linear economy model. This includes environmental harm such as rising global carbon emissions, enormous water use, non-recoverable materials, soil pollution or increasing landfill. Social impacts concern issues such as equitable and dignified working and living conditions in production countries as well as consumption patterns, relationships with garments or prevailing design standards (Brydges 2021; European Parliament 2021; Goworek et al. 2020; Ki et al. 2020; McFall-Johnsen 2020; Provin and de Aguiar Dutra 2021).
The circular economy promotes new, cyclic ways of using and treating resources from an environmental and economic point of view and forms part of holistically adapting existing or creating new business models (Bocken et al. 2016; Ferasso et al. 2020; Ellen MacArthur Foundation 2022). Based on closed-loop flows of materials and energy, it aims at setting up a green, restorative and regenerative economy including the creation of new business models and employment opportunities (Geissdoerfer et al. 2018). Based on a reformulation of core principles in the circular economy, previous research has identified values, attributes, and enablers relevant for the design of circular, closed-loop supply chains, reverse logistics and product recovery. These include cascades orientation, waste elimination, economic optimization, maximization of retained value, environmental consciousness, leakage minimization, systems thinking, circularity, built-in resilience, collaborative network, shift to renewable energy, optimization of change, technology-driven, market availability and innovation (Ripanti and Tjahjono 2019).
Thus, circular economy is a concept that promotes alternative strategies and tools to tackle global sustainability challenges as mentioned above. Albeit closely related to the concept of sustainability, a precise delineation between both ideals is still subject to scientific debate (Suárez-Eiroa et al. 2019). At its core, a circular textile and fashion economy represents "an industrial system which produces neither waste nor pollution by redesigning fibres to circulate at a high quality within the production and consumption system for as long as possible and/or feeding them back into the bio- or technosphere to restore natural capital or providing secondary resources at the end of use" (GIZ 2019, p. 3). Principles for circularity should cover the entire lifecycle of textile and fashion items and involve responsibilities among both producers and consumers (Brismar 2015). Amongst other issues, this includes criteria for fashion design and production (e.g., disassembly and separation, non-toxic and high-quality materials) as well as consumption (e.g., multiple users) and end-of-life (e.g., recycling stations) (e.g., Hvass 2015; Machado et al. 2019; Corvellec and Stål 2019; Hvass and Pedersen 2019; Paras et al. 2019; Sandvik and Stubbs 2019; Jia et al. 2020). Against this background, the circular economy concept has particularly been considered an important driver to facilitate transformation towards sustainability in this industry (Thorisdottir and Johannsdottir 2019; Goldsworthy et al. 2018; Todeschini et al. 2017; Ki et al. 2020).
Yet, most scholarly publications have primarily considered environmental and economic dimensions (e.g., Chouinard and Brown 1997; Kjaer et al. 2019; Siderius and Poldner 2021), political and legal issues (e.g., Jacometti 2019) or technological and manufacturing aspects (e.g., Shirvanimoghaddam et al. 2020). The social side of sustainability appears as a secondary and indirect effect (Ranta et al. 2018) rather than being equally embedded or deliberately taken as a starting point for inquiry.
The missing social pillar of sustainability in circular economy
Circular economy and related business models (e.g., product-service systems) are considered to offer social values and to realize social sustainability, social progress and social growth. This is achieved by means of job opportunities, new work procedures and relationships, consumer comfort, skills development and promotion, corporate reputation and image appreciation or social cohesion and integration (Chiappetta Jabbour et al. 2019; Korhonen et al. 2018; Leder et al. 2020; Moktadir et al. 2020; Schwanholz and Leipold 2020). Similarly, the solidarity and sharing economy involves aspects such as deeper changes of social values, practices and paradigms underlying the economic system and activity. It implies fostering shifts in prevailing cultural categories and social practices and the promotion of human-embedded, inclusive thinking and sufficiency-oriented mind-sets and behaviors, non-ownership, low consumerism, social awareness and emotional attachment, a sense of community and shared responsibility, a reframing of product care and stewardship, a stronger engagement in partnerships and cooperation, participation and empowerment of different stakeholders or an increase of labor-intensive activities based on diverse and dignified work activities (Bauwens et al. 2020; Hobson 2019; Ki et al. 2020; Korhonen et al. 2018; Lofthouse and Prendeville 2018; Moreau et al. 2017; Schröder et al. 2020; Schulz et al. 2019). Hirscher et al. (2018) explicitly focus on the role of social manufacturing in fashion and emphasize the importance of empowering consumers in alternative design strategies. Investigating digital sharing platforms in the clothing context, the study by Schwanholz and Leipold (2020) underlines the importance of an integrated view on (social) sustainability in strategies and innovations promoting a circular fashion economy. Other case studies have investigated social enterprises or social businesses in a fashion industry integrating principles of sustainability and circularity (e.g., Fischer and Pascucci 2017; Plieth et al. 2012). Further research on social and cultural aspects in a circular textiles and fashion industry has investigated specific premises and practices, such as human perceptions of recycled garments, upcycling or soul-shopping, that could encourage changes in consumption behaviors and patterns (e.g., Hudson-Miles 2021; McEachern et al. 2020; Wagner and Heinzel 2020).
Yet, experts repeatedly criticize a subordinated and under-theorized role of social aspects in the conceptualization of the circular economy (e.g., Galvão et al. 2020; Millar et al. 2019; Velenturf et al. 2019; Hobson and Lynch 2016). Lüdeke-Freund et al. (2018) point to the limited diffusion of socially oriented concepts, beliefs, behaviors, and ideals such as sufficiency or slowing consumption. Millar et al. (2019) describe the lack of clarity and consensus concerning the exact nature and extent of impacts on a societal level in the circular economy. They miss a suitable indicator accounting for social aspects (namely, social equity) while also encompassing the other sustainability dimensions in the circular economy (Millar et al. 2019). A marginalized role of moral and ethical aspects such as diversity, inter- and intra-generational equality, financial equality or equality of social opportunity thus also hampers an explicit representation of the circular economy in line with the three-pillar conception of sustainability (Kirchherr et al. 2017; Murray et al. 2017). Reflecting the negative impacts of fast fashion production and consumption, changes in the human relationship to raw materials and garments, a rethinking of sustainability communication at the business-consumer interface, and the adoption of new approaches, processes and standards in design and respective education programs need to be encouraged to enhance product longevity and to reduce absolute resource consumption levels and textile waste (Brydges 2021; Dissanayake and Weerasinghe 2021; Ki et al. 2020; Mostaghel and Chirumalla 2021; Saha et al. 2021). Therefore, the social dimension in a circular economy requires more differentiated theoretical and conceptual academic investigation.
Social and circular transformation
In addition, a wide range of studies, reports and policy documents by external stakeholders call for circularity to foster transformation in the fashion industry (e.g., Business of Fashion and McKinsey & Company 2019; Global Fashion Agenda (GFA) 2019b). They point to single social sustainability-related values and aspects, such as the industry's potential of empowering and engaging consumers to promote sustainable consumption behavior based on reusing and sharing networks, swapping, Do-It-Together and collaborative consumption platforms (GFA 2019a). Further social aspects addressed relate to the role of citizens both as designers and community members as well as the characteristics of the circular fashion consumer (Brismar 2015), sufficiency and servitization as two types of circular business models (Circle Economy 2015), or the necessity of system-level change (Ellen MacArthur Foundation 2017). Incorporating the business perspective, the two globally operating fashion companies H&M and C&A represent the business part of our analysis due to their transitional activities. Despite grounding their initial business models in fast fashion, both companies have continually been communicating about their efforts to foster the implementation of sustainability and circular concepts in the fashion industry.
Therefore, the identification and critical comparative juxtaposition of social themes communicated by different actors related to fashion can cluster perspectives and push integrative transformation. Using a triangulation approach, the primary objective here is to gather and blend different information to enable a more complete and holistic understanding of the integration of a social pillar in the concept of the circular economy by contrasting practice-oriented with more theoretical viewpoints (cf. Jick 1979). In particular, we shed light on these different discourses by investigating the following three research questions:
1. What themes and patterns of social sustainability-related concepts are included in the sustainability reports of global fashion companies, represented by C&A and H&M?
2. In what way do these themes and patterns deviate from discussions in the academic world and the public?
3. To what extent do these themes and patterns reflect the movement towards sustainability and circular economy in the fashion industry from a theoretical perspective?
Research design
Following the research design of document and discourse analysis, this study adopts the procedure of finding 'verbal patterns' on social sustainability in circular fashion, using data triangulation (Bryman 2015), as communicated by three distinct groups of actors intricately linked to the fashion industry. The first group of publications (A) embraces scientific articles. Corporate sustainability reports of two fashion retailers are included in the second group of publications (B), while the third group (C) involves media coverage and other stakeholder publications. We performed the data analysis for the three years 2014, 2015, and 2018 using the automated content analysis tool Leximancer™. The selection of the final periods for data analysis is largely attributable to data limitations in the group of corporate publications of the two specific companies and the publishing years of sustainability reports. Both companies selected for this study, C&A and H&M, pursue different ways of sustainability reporting. Specifically, C&A has been publishing global sustainability reports only since 2015. Prior to that, the company issued its sustainability reports every second year. Furthermore, several sustainability reports of C&A are, if at all, only available as a summary. In contrast, H&M has been publishing annual sustainability reports since 2009. Before that, from 2002 to 2008, H&M published reports on corporate social responsibility (CSR). All reports are available in full and downloadable from the company's corporate website. In order to ensure consistency in our methodological approach and subsequent extraction, only those reports available and downloadable for both companies were included. In total, we collected nine corporate sustainability reports between 2014 and 2018: four reports of C&A and five reports issued by H&M. In line with this study's approach of performing a comparative analysis, the results presented below reflect the analysis of the sustainability reports of each company in 2014, 2015 and 2018 and hence allow cross-sectional and longitudinal analysis (Bryman 2015). The other two groups of publications (academic and stakeholder publications) also include data for these three periods, establishing internal validity.
Material collection
This section describes the respective processes of identifying, collecting and evaluating relevant publications for the three groups (A), (B), and (C); for details, see Fig. 1.
Academic publications (A)
Scientific articles were retrieved following the methodological procedure and workflow of rigorous material collection used in systematic literature reviews (e.g., Buzzao and Rizzi 2020; De Giacomo and Bleischwitz 2020). Thereby, we followed the general procedure of collecting relevant material as presented in Beyer and Arnold (2021), comprising several steps and criteria. Amongst others, this included database selection, the sampling procedure as well as inclusion and exclusion criteria concerning aspects such as date and type of publication or language (Rajeev et al. 2017). These criteria were adapted to this study's thematic focus and set up in advance. Illustrative details for the main criteria can be found in Fig. 1. In general, our procedure aimed at rigorously depicting and selecting publications in a transparent and comprehensive manner without strictly following the rules of a systematic literature review. Subsequently, the search was expanded using further article search strategies. These included the snowballing technique (Gerritsen-van Leeuwenkamp et al. 2017) as well as the screening of single reference and publication lists to expand the document base with further potentially relevant articles. Each article was individually checked for appropriateness by screening the abstract as well as the entire study in case of doubt. Articles not related to sustainable development and/or circular economy, for instance, or empirical studies focusing on multiple industries were removed from the search hits.
Corporate publications (B)
In line with previous research on the integration of circular economy in corporate sustainability strategies in the context of fast-moving consumer goods such as textiles and fashion (Feng and Ngai 2020; Garcia-Torres et al. 2017; Stewart and Niero 2018), we decided to focus on the analysis of corporate sustainability reports. Sustainability reports are acknowledged and extensively used in corporate disclosure (Isenmann et al. 2007). They supply information concerning issues, measures, and projects towards implementing social and environmental sustainability as well as strategies relating to circular economy. We selected the two fashion companies based on the criteria of (1) economic scope of activity; (2) coverage in the media and broader public; (3) efforts concerning the integration of sustainability measures and circular economy; as well as (4) occurrence in previous articles and studies. C&A and H&M are two of the top-ten European fashion brands (Hansen and Schaltegger 2013). Both retailers can be considered to encourage a certain culture of consumerism and to support the conventional linear model of fashion production as well as consumption (e.g., Hansen and Schaltegger 2016). Despite grounding their initial business models in fast fashion, both companies are steadily communicating about their increasing efforts to foster the implementation of sustainability and circular concepts. Moreover, both companies have been in the spotlight of previous scholarly publications (e.g., Shen 2014).
Stakeholder publications (C)
Stakeholders exert pressure on companies to alter their strategies, policies, and operations along entire supply chains (Vasi and King 2012; Cordeiro and Tewari 2015; Neville 2020). The fashion industry involves a considerable number of stakeholder groups across various levels and domains (e.g., media, policy). For example, a company's stakeholder groups include consumers, community groups, environmental advocates, labor rights groups, product health and safety associations, associations representing suppliers and entrepreneurs, governments and intergovernmental organizations as well as animal welfare groups (Camilleri 2020; Ki et al. 2020; Jakhar et al. 2019). Further stakeholders are employees, shareholders or major retail markets (C&A 2018). Stakeholder groups issue a wide range of materials such as reports, policy briefs or newspaper articles.
Given our research approach of triangulation, we considered this supplementary secondary material useful since it represents broad public awareness, external relations and impacts, and supports the internal validity of our study. Furthermore, this was considered important since the circular economy concept is partly conceived and developed in practitioner and stakeholder reports (e.g., Kirchherr et al. 2017; Chiappetta Jabbour et al. 2019; Salvioni and Almici 2020). The stakeholders included were selected based on criteria such as closeness to the fashion industry as well as to the topics of sustainability, social responsibility and circular economy, degree of independence, scope of activities or occurrence in previous academic studies and reputation (Diekamp and Koch 2010; Gwilt 2018; Brand Eins 2018). Over 100 stakeholders were considered to reflect the great diversity of interest groups in this industry sector. Table 1 supplies the list of all stakeholders included.
Data analysis and interpretation
The research employs a content analysis based on the computer-assisted software tool Leximancer™ (version 4.50, www.leximancer.com) as it allows for a grounded approach (Zawacki-Richter et al. 2017). Important aspects are uncovered directly as they emerge from the text, without an 'a priori' established set of factors derived from the literature to be coded up (Sotiriadou et al. 2014). This alleviates disadvantages of manual coding, including human subjectivity and inter-coder reliability (Bryman 2015; Young et al. 2015; Zawacki-Richter and Naidu 2016). Main concepts and themes are inductively and iteratively discovered by a statistical examination of the frequency, (co-)occurrences and interrelations of words (Angus et al. 2013; Harwood et al. 2015). The software differentiates between a thematic or conceptual analysis and a semantic or relational analysis. While the first type of analysis detects core concepts, the second type explores linkages between all concepts (Bigi et al. 2016; Thomas and Maddux 2009). The study builds on both approaches. Themes represent clusters of commonly co-occurring concepts. Overall, the analysis revealed 477 concepts among all three groups of publications investigated. To enable adequate handling and to address the research questions posed, the overall group of concepts was organized into several subgroups representing different thematic foci. These foci also include concepts of social and environmental sustainability that may inform the debate around further developing the circular economy concept in the fashion industry. In total, 12 concept maps were generated for the publication groups and the three final years of investigation (2014, 2015 and 2018).
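To illustrate the kind of statistics such an analysis rests on, the Python sketch below counts word frequencies and sentence-level co-occurrences in a toy corpus; it is a simplified stand-in for Leximancer™'s actual concept-seeding algorithm, which is proprietary.

```python
from collections import Counter
from itertools import combinations

def concept_statistics(documents):
    """Count word frequencies and sentence-level co-occurrences: the raw
    material from which concepts, themes and map distances are derived."""
    freq, cooc = Counter(), Counter()
    for doc in documents:
        for sentence in doc.lower().split("."):
            words = {w.strip(',;()"') for w in sentence.split() if len(w) > 3}
            freq.update(words)
            cooc.update(frozenset(pair) for pair in combinations(sorted(words), 2))
    return freq, cooc

docs = ["Social sustainability shapes circular fashion. "
        "Circular fashion requires social change."]
freq, cooc = concept_statistics(docs)
print(freq.most_common(3))   # most frequent candidate concepts
print(cooc.most_common(3))   # strongest co-occurrence pairs
```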
A two-dimensional concept map then visualizes clusters of similar concepts occurring in close proximity (Harwood et al. 2015; Thomas and Maddux 2009; Zawacki-Richter and Naidu 2016). In this regard, concepts are captured as groups of words that appear and are linked together throughout all the underlying sources. The concept maps also illustrate themes, which are presented as circles and include frequently co-occurring concepts. The themes are named after the most prominent concept in the respective cluster of concepts (Buzova et al. 2016; Harwood et al. 2015), and the concept map also displays the concepts' relative positions. The closer concepts are related, the stronger their semantic relation (Buzova et al. 2016); this relational analysis is indicated by larger or smaller dots as well as by thicker or thinner lines between concepts. At the same time, closer proximity and overlaps among themes on the map indicate a closer relationship in the underlying textual documents (Bigi et al. 2016).
Limitations
Given the fragmented field of circular economy research and its multilayered perspectives, our study enables novel, objective insights and consolidates a common ground of understanding across diverse groups of publications with regard to the relevance of social aspects in circular fashion. This study lays the groundwork for more substantive analyses in subsequent studies employing other techniques such as manual content analyses (Mayring 2015). Here, the social sustainability-related themes and concepts identified in our study could provide a fundamental source for the formation of categories (cf. Fig. 2). At the same time, the 'verbal patterns' identified and visualized in the concept maps could be re-assessed, and the respective relations between single concepts or themes detected and investigated in a more comprehensive way. Our explorative research builds on document and discourse analysis, delineating themes and concepts related to the social dimension of sustainability and circular economy by stressing similarities and differences in the communication among various interest groups in the fashion context. Specifically, the study comparatively assesses 'verbal patterns' as represented and visualized in publications from the business, academic and wider public spheres, indicating separate roles of and mutual influences among various actors. Based on statistical and descriptive analyses of more than 550 documents, we consolidate which social and ethical aspects and values are communicated across these different interest groups' publications. The interpretation of data and the sense-making of concept maps, relationships and patterns could be biased by an implicit or explicit knowledge of the publications and subject matter under investigation (Harwood et al. 2015; Zawacki-Richter et al. 2017).
Similarly, aligning concepts to certain thematic foci (e.g., social sustainability) is partly a matter of researchers' subjectivity and "effectively suppressed by the manner in which Leximancer™ analyses the data" (Sotiriadou et al. 2014, p. 220). Corporate data is clearly limited and should be extended by including other fashion companies adopting circular business models and strategies as well as by collecting additional data (e.g., interviews with corporate representatives).

Table 2 Concept map analysis for 2014 (visible concepts 100% and theme size 33%)

Corporate publications (C&A)
Main trends and patterns concerning themes:
- Largest themes "c&a" and "sustainability"
- Theme "sustainability" has overlaps with two other themes, namely "supply" and "business"; yet, there is no interference with themes such as "working" or "training"; the theme includes concepts such as "strategy" and "need"
- Theme "safety" includes concepts such as "building", "bangladesh" and "fire"
- Themes "textile", "report" and "cotton" are located at the edge of the concept map
Main trends and patterns concerning concepts:
- Most concepts are included in the theme "sustainability" (15 concepts)
- Concept "social" is located in the theme "working", with some overlapping in the theme "c&a", yet without any direct link to the themes "sustainability" and "sustainable"
- Several spiders can be retraced in the concept map, exemplarily from the concept "sustainable" to the concepts "everyone", "products", "supply", "management" and "lives"
- There are several concept paths, exemplarily from the concept "c&a" to the concept "training" via the concepts "foundation", "support" and "programme", or from the concept "progress" to the concept "customer" via the concepts "sustainability", "global", "apparel" and "industry", thereby linking the themes "sustainability" and "working"

Corporate publications (H&M)
Main trends and patterns concerning themes:
- Largest themes "industry", "chain" and "suppliers"
- Theme "commitment" is without any intersection with other themes; yet, it is directly linked with the concept "responsible" in the theme "focus"; together with the theme "actions" it is located at the edge of the concept map
- Theme "rights" has some overlapping with the two themes "suppliers" and "management"
- Theme "suppliers" comprises concepts such as "fair", "wages", "workers", "labour" and "bangladesh"
- Theme "chain" includes concepts such as "communities", "change", "impact", "value", "customers" and "climate"
- There is a direct link between the two themes "increase" and "stores"
Main trends and patterns concerning concepts:
- Most concepts are located in the theme "industry" (20 concepts)
- Concept "change" lies in the intersection between the themes "industry" and "chain"; it is embedded in a concept path from the concept "conscious" to "positive" via the concepts "fashion", "value", "chain", "impact", "create", "change" and "work"
- Several spiders can be retraced in the concept map, exemplarily from the concept "use" to the concepts "cotton", "reduce" and "resources"

Academic publications
Main trends and patterns concerning themes:
- Largest themes "fashion", "companies", "consumer" and "different"
- The concept map included a great number of themes and concepts, thereby visualizing a differentiating and holistic perspective; moreover, all themes in the concept map have an overlapping with at least two other themes
- Theme "responsibility" entails concepts such as "business", "corporate", "model" and "social"
- Theme "sustainability" includes concepts such as "systems", "strategies", "perspective", "world", "impacts", "changes", "practices" and "environmental"; it has some overlapping with four other themes, namely "management", "chain", "market" and "fashion"; furthermore, it is very close to the theme "standards"
- Theme "used" is overlapping with the themes "different", "market", "products", "clothing" and "water"
- Themes "clothing" and "fashion" appear as two separate themes in the concept map; whereas the theme "clothing" entails concepts such as "use", "waste" and "recycling", the theme "fashion" comprises concepts such as "sustainable", "marketing", "ethical", "impact" and "discussed"
- Theme "consumer" is presented between the two themes "clothing" and "fashion"; yet, it is diametrically opposed to the theme "responsibility"
- Theme "management" is in the outer circle of the concept map; it has overlaps with two other themes, namely "sustainability" and "chain"
- Theme "different" comprises concepts such as "system", "organic" and "working"
Main trends and patterns concerning concepts:
- Most concepts are included in the theme "market" (31 concepts)
- Concept map includes concepts such as "recycling", "used", "services", "waste", "cycle", "design" and "reduce"
- Several spiders can be retraced in the concept map, exemplarily from the concept "fashion" to the concepts "fast", "marketing", "slow", "ethical", "design", "impact", "issues" and "sustainable"
- Several concept paths are encompassed, exemplarily from the concept "management" to the concept "value" via the concepts "supply", "chain", "social", "responsibility", "csr", "data" and "brand", thereby linking the themes "management", "chain", "responsibility" and "companies"

Stakeholder publications
Main trends and patterns concerning themes:
- Largest themes "industry", "brands", "fashion", "rights" and "factory"
- Theme "hours" appears at the edge of the concept map without any intersection with other themes
- Theme "industry" has overlaps with 5 other themes, namely "fashion", "brands", "local", "countries" and "china"
- Themes "companies" and "factory" are visualized as two separate themes; whereas the theme "companies" has only one intersection with the theme "brands", the theme "factory" has some overlapping with four other themes, namely "local", "wage", "countries" and "day"
- Theme "wage" is diametrically opposed to the theme "fashion"
- Theme "rights" includes concepts such as "social", "responsibility", "human", "better", "fair", "standards", "support", "important" and "ensure"
- Theme "factory" encompasses concepts such as "government", "significant", "costs", "labour", "conditions" and "health"
- Themes "hours" and "wage" are directly linked by the two concepts "migrant" and "workers"
Main trends and patterns concerning concepts:
- Most concepts are to be found in the theme "rights" (32 concepts)
- Concept "poverty" is located in the intersection between the themes "factory" and "wage"
- Concept "unions" is situated in the overlap between the themes "wage" and "rights"
- Several spiders are presented in the concept map, exemplarily from the concept "brands" to the concepts "ethical", "responsible", "sustainable", "resources", "retailers", "study" and "leading"
- Another spider concerns the concept "living", which is related to the concepts "wage", "level", "form", "fair", "allow", "paying", "national" and "trade"
- Concepts "conditions", "low", "long", "factory", "terms" and "asia" form the spider related to the concept "living"
- Several concept paths can be found, exemplarily from the concept "change" to the concept "economic" or the concept "society" via the concepts "use", "global" and "development"
- Another concept path follows from the concept "world" to the concept "impact" via the concepts "market", "clothing", "consumers", "chemicals", "fashion", "products", "water", "cotton", "design", "environmental", "h&m", "sustainable", "brands", "retailers" and "supply"
Findings
The analysis results include concept maps generated by Leximancer™ for each group of publications and year of investigation. These are presented in Tables 2, 3 and 4 and stress the themes and concepts addressed in research questions one and two, thereby supplying a descriptive picture of which aspects are relevant in the publications of each interest group. Answering research question one, 'What themes and patterns of social sustainability-related concepts are included in the sustainability reports of global fashion companies, represented by C&A and H&M?', relevant topics provide insights from a primarily corporate and practice perspective and are shown in Tables 2, 3, 4, 5 and 6 and Fig. 2. The concept maps of both companies analyzed here illustrate a quite differentiated presentation of social sustainability-related concepts. In the case of C&A this involves concepts such as "responsibility"/"responsible", "rights", "share", "commitment"/"committed", "awareness", "standards" and "fair", while for H&M concepts such as "social", "diversity" and "transparency" can be traced. Moreover, as shown in Table 5 in particular, C&A provides a more differentiated and holistic perspective on social sustainability.
Concerning research question two, 'In what way do these themes and patterns deviate from discussions in the academic world and the public?', our analysis aims at extending the practice view by comparing these themes with academic and stakeholder perspectives. Based on this triangulation, the results show that several concepts such as "migrants", "poverty", "unions" or "health" arise exclusively in the respective concept maps of stakeholder publications as visualized by Leximancer™. This is also presented in Table 5. They symbolize fundamental aspects relevant to the structural and institutional antecedents and conditions of social sustainability in the global fashion industry, as well as the consequences of their proper implementation along global fashion value and supply chains.
Illustrating this comparison, the concept maps of C&A were visually contrasted with those of H&M. In a similar vein, the concept maps of academic publications were juxtaposed with those of stakeholder publications. Each graphic is complemented by a brief textual analysis of general patterns and themes deduced from the concept map. As in previous research using the tool Leximancer™, particular attention is given to the peculiar locations of single themes and concepts as well as to the tracing of concept spiders and concept paths.

Corporate publications

C&A

Main trends and patterns concerning themes:
- Largest themes: "c&a" and "sustainability"
- Theme "future" includes the concept "consumer" and intersects with the theme "sustainable"
- Theme "impact" involves the concept "consumption"
- There is a close link between the two themes "approach" and "global", further emphasized by the relation between the two concepts "goals" and "sustainability"
- Themes "performance" and "certified" are located at the edge of the concept map; the theme "performance" has no direct relation to social aspects
- Theme "c&a" intersects with the theme "impacts"; via concept paths through this theme it is also linked to the theme "sustainable"

Main trends and patterns concerning concepts:
- Most concepts are included in the theme "c&a" (22 concepts)
- Concept "circular" appears for the first time compared to the concept map of C&A for 2014; it is located in the intersection between the themes "sustainable" and "supply"; it is directly linked with the concept "materials", yet not with any social-related theme or concept
- Several spiders can be depicted in the concept map, for example from the concept "materials" to the concepts "products", "raw", "use", "sustainable" and "circular"
- There are several concept paths, for example in the theme "c&a" from the concept "c&a" to the concept "improving" via the concepts "fashion", "strategy" and "key"
- Another concept path links the themes "supply", "operations" and "performance"; it follows from "carbon" to "performance" via the concepts "chain", "environmental", "footprint", "fair", "operations", "conditions", "working" and "suppliers"

H&M

Main trends and patterns concerning themes:
- Largest themes: "impact", "customers" and "change"
- Theme "h&m" is situated at the edge of the concept map, without any intersections with the other themes; however, it has a concept path to the theme "customers"
- There is a big intersection between the themes "change" and "living"
- Theme "impact" comprises several concepts relating to the environmental dimension of sustainability, such as "emissions", "climate", "energy" and "water"
- Theme "change" includes concepts such as "need", "local", "working" and "people"
- Theme "focus" embraces concepts such as "commitment", "responsible", "communities", "chain" and "partners"
- Themes "management" and "change" are connected via the link between the concepts "relations" and "markets"

Main trends and patterns concerning concepts:
- Most concepts are included in the theme "change" (30 concepts)
- Concept "sustainable" is located in the theme "customers"; yet it has links to the concept "global" via the concepts "better" and "initiative", which are presented in the intersections between the themes "change" and "cotton"
- Several spiders can be retraced in the concept map, for example from the concept "workers" to the concepts "social", "labour", "fair" and "wage"
- Several concept paths are displayed, for example from the concept "management" to the concept "impact" via the concepts "supply", "chain", "focus", "responsible", "partners", "production" and "water"
Academic publications

Main trends and patterns concerning themes:
- Largest themes: "industry" and "business"
- A theme "change" appears in the concept map; it overlaps with the themes "need" and "industry"; it is linked to the theme "fashion" by the concept path from "change" to "fashion"
- "clothing" and "fashion" are visualized as separate themes in the concept map; yet, whereas "clothing" intersects with the themes "need", "product" and "time", the theme "fashion" intersects with the theme "industry" only
- Theme "key" overlaps with five themes, namely "sustainability", "industry", "need", "value" and "chain"
- Theme "sustainability" is located between the themes "management" and "marketing"
- Theme "business" includes concepts such as "green", "model" and "transparency"
- Theme "product" comprises concepts such as "recycling", "use", "reuse", "waste", "quality" and "different"
- Theme "system" involves concepts such as "action" and "development"; yet, it is located at the edge of the concept map, having no direct overlaps with any other theme; however, it can be linked via concept paths

Main trends and patterns concerning concepts:
- Most concepts are included in the theme "industry" (30 concepts)
- Single spiders can be found in the concept map, for example from the concept "brand" to the concepts "marketing", "social", "issues", "fair" and "apparel"
- Several concept paths are displayed, for example from the concept "change" to the concept "economic" via the concepts "fashion", "consumption", "ethical", "consumers", "responsibility" and "social", thereby linking the themes "change", "fashion", "marketing", "sustainability" and "industry"
- Another concept path links the concepts "available" and "relationship" via the concepts "data", "analysis", "model", "business", "transparency", "information", "related" and "chains"
- Furthermore, there are concept paths connecting the concepts "life" and "time" via the concepts "cycle", "design" and "quality", or linking the concepts "action" and "used" via the concepts "system", "development", "process", "production", "product" and "different"

Stakeholder publications

Main trends and patterns concerning themes:
- Largest themes: "working", "fashion", "bangladesh", "workers" and "company"
- All themes intersect with at least two other themes, reflecting their close connection
- Theme "legal" overlaps with six other themes, namely "company", "production", "working", "bangladesh", "workers" and "information"; together with the theme "production" it provides the center of the map, enclosed by all other themes
- Theme "information" appears as a theme between the two themes "company" and "workers"
- Theme "workers" includes concepts such as "unions", "law", "association", "international", "action", "management" and "human"
- Theme "bangladesh" embraces concepts such as "health", "fire", "rana", "safety", "workers", "law", "government" and "ilo"
- Theme "fashion" consists of concepts such as "impact", "future", "value", "society", "sustainable", "change" and "people"

Main trends and patterns concerning concepts:
- Most concepts are presented in the theme "legal" (29 concepts)
- Concepts "responsible" and "responsibility" are visualized in the theme "company"
- Concepts "education", "benefits", "issue" and "trade" are displayed in the overlap between the themes "bangladesh" and "legal"
- Several spiders are to be found in the concept map, for example from the concepts "working" and "conditions" to the concepts "hours", "day", "sector", "better", "garment", "improve", "wage" and "important"
- Another spider concerns the concept "brands", which is linked to the concepts "responsible", "supply", "public", "apparel", "order", "manufacturer", "policy", "retailers", "approach" and "company"
- Several concept paths can be depicted, for example from the concept "initiative" to the concept "campaign" via the concepts "industry", "global", "development", "economic" and "fair"
- Another concept path links the concept "local" to "buyers" via the concepts "legal", "benefits", "least" and "standards"

Visible concepts 100% and theme size 33%

(... 2018), 114 social sustainability-related concepts within and across all three groups of publications were identified. This initial list was further refined and narrowed down to 83 concepts selected for detailed examination in order to ease handling and comprehensibility. In doing so, we consolidated linguistically similar concepts into one category (e.g., "working" and "work") and erased those that were either not very informative (e.g., "feel") or that also have connotations and meanings other than social ones (e.g., "group", representing the company structure of H&M). The authors limited the overall number of concepts based on their prior knowledge of both the literature and standards in the field of social sustainability (e.g., ISO 26000; SA 8000). Ultimately, those terms and categories were selected that showed a high degree of conformity between the different theoretical sources. The concepts considered relevant for subsequent analysis principally represent concepts that are people-centered at their core and that are essentially related to human, social and societal activities, artifacts, interactions or relations. Yet, we acknowledge that, because there is no single conclusive definition of social sustainability in the literature and many approaches co-exist, the list of concepts concerning social sustainability selected for in-depth analysis in our study, as well as our procedure of narrowing them down, may also be subject to debate. An overview of the concepts is given in Fig. 2. Fig. 2 also illustrates the temporal distribution of social sustainability-related concepts. Some concepts appear in only one year of investigation, either in one group of publications (e.g., "awareness", "children" and "diversity") or in two groups (e.g., "relations"/"relationships" and "share"). There are several concepts that occur consecutively, yet to a different extent, among the three groups of publications and years of investigation. For example, while the concept "consumption" can be retraced consecutively in the group of academic publications, the concept "partners"/"partnership" appears consecutively only in the concept maps of H&M. The concept "social" is visualized consecutively only in the two groups of academic and stakeholder publications. On the contrary, the concept "responsible"/"responsibility" is visualized in all groups of publications, yet solely in 2014 and 2015.
Similarly, the overall annual number of social sustainability-related concepts for each group of publications is presented. For example, the number of concepts visualized for C&A rose from 20 concepts in 2014 to 28 concepts in 2018. For the academic publications, a decrease in the number of concepts can be retraced, from 22 concepts in 2014 to 18 concepts in 2018. A steeper decline can be seen in the concept maps of the group of stakeholder publications, falling from 28 concepts in 2014 to 17 concepts in 2018. Table 5 aggregates the results within and across all three groups of publications and thus contrasts practice, academic and stakeholder perspectives. While several concepts play a role in each of the three groups (e.g., "circular", "local", "responsible", "standards" and "management"), other concepts appear in two groups or exclusively in one group of publications. The concept "women", for instance, is presented in the concept maps of stakeholder publications only. Table 6 in the Appendix illustrates the concepts reflecting the ecological dimension of a sustainable and circular fashion industry. Fig. 3 displays a summary of those concepts that are integrated across all three groups of publications in each year of investigation. In total, this includes 12 concepts, which are distributed differently per year. Across the three years, the overall number of social sustainability-related concepts that are relevant to all three groups of publications rises from seven concepts in 2014 to nine in 2018.

Corporate publications

C&A

Main trends and patterns concerning themes:
- Largest themes: "global", "c&a", "suppliers" and "chain"
- More themes and concepts than in the previous concept maps for 2014 and 2015, reflecting a more differentiating perspective
- Theme "global" embraces concepts such as "share", "needs", "partners", "responsible" and "waste"
- Theme "circular" is positioned between the themes "fashion" and "sustainable"; furthermore, it has a large intersection with the theme "global"; the concept "circular" is directly linked with the concept "fashion"
- Theme "suppliers" includes concepts such as "safety", "workers" and "committed"
- Theme "circular" encompasses concepts such as "goal", "change", "products" and "future"

Main trends and patterns concerning concepts:
- Most concepts are included in the theme "global" (29 concepts)
- Concepts "human" and "rights" are situated in the overlap between the themes "action" and "suppliers"
- Concept "responsible" is located in the intersection between the two themes "sourcing" and "global"
- Concept "awareness" is situated in the intersection between the themes "training" and "support"
- Several spiders can be found in the concept map, for example from the concept "industry" to the concepts "impact", "important", "access", "environmental" and "apparel"
- There are several concept paths in the concept map, for example from the concept "waste" to the concept "customer" via the concepts "impact", "business", "global" and "sustainability"

H&M

Main trends and patterns concerning themes:
- Largest themes: "chain", "key" and "h&m"
- Theme "chain" is located in the center of the concept map, with intersections with six other themes, namely "performance", "global", "sustainable", "circular", "partners" and "support"
- Theme "performance" overlaps with five other themes, namely "products", "recycled", "recycling", "h&m" and "chain"
- Theme "key" includes concepts such as "wage", "fair", "rights", "diversity", "climate" and "goals"
- Theme "partners" embraces concepts such as "labour", "policy", "standards" and "improve"
- Theme "sustainable" involves concepts such as "vision", "change", "need", "long-term" and "action"
- There appears to be a chain-like link of the themes "key", "circular", "sustainable" and "global"
- There is a large intersection between the themes "h&m", "recycling" and "performance"

Main trends and patterns concerning concepts:
- Most concepts are included in the theme "chain" (26 concepts)
- Several spiders can be retraced in the concept map, for example from the concept "materials" to the concepts "use", "goal", "better", "water" and "products"
- Several concept paths are included, for example from the concept "sustainable" to the concept "climate" via the concepts "change", "vision", "strategy", "circular" and "positive"
- Another concept path follows from the concept "recycling" to the concept "increase" via the concepts "h&m", "brands", "textile", "chain", "value", "supply", "impact" and "environmental"

Academic publications

Main trends and patterns concerning themes:
- Large themes: "sustainability", "products", "materials", "fashion", "models" and "research"
- "consumers" and "customers" are visualized as two separate themes
- Theme "work" is located at the edge of the concept map; it overlaps with the themes "value" and "models"
- Theme "management" is displayed in the outer circle of the concept map; yet, it has direct overlaps with the theme "sustainability"
- Theme "key" has a central position in the map, surrounded by the themes "sustainability", "fashion", "become", "study" and "values" with direct overlaps; it entails concepts such as "solutions", "alternative", "aspects", "strategies", "approach", "goal", "understanding", "development" and "sharing"
- Theme "models" is located far away from the themes "sustainability", "fashion", "consumers" and "customers"; yet, it is close to the theme "key" and involves direct overlaps with the themes "work", "value" and "research"
- Themes "fashion" and "apparel" can be found as two separate themes in the concept map with a large intersection
- Theme "become" includes concepts such as "changes", "share", "innovative", "lifecycle" and "alternative"
- Theme "materials" embraces concepts such as "recycling", "waste", "quality", "cycle", "increase", "use" and "cost"

Main trends and patterns concerning concepts:
- Most concepts are to be found in the theme "values" (27 concepts)
- Concepts "goal", "role" and "development" are located in the intersection between the two themes "key" and "sustainability"
- Concept "circular" appears near the concept "economy"; it can be depicted between the themes "products" and "customers"
- Several spiders can be retraced in the concept map, for example from the concept "consumers" to the concepts "clothing", "apparel", "consumption", "environmental" and "brands"
- Other spiders can be found with regard to the concept "fashion", which is linked to the concepts "fast", "consumption", "global", "retail", "sustainable", "sector" and "industry", or with regard to the concept "design", which is related to the concepts "create", "several", "change", "required", "services", "products" and "economy"
- Several concept paths are displayed, for example from the concept "green" to the concept "future" via the concepts "marketing", "international", "management", "chains", "supply", "practices", "social", "sustainability", "study" and "research"

Stakeholder publications

Main trends and patterns concerning themes:
- Largest themes: "fashion", "brands", "management" and "workers"
- Themes "fashion", "textiles" and "garment" appear as three separate themes
- Theme "circular" is located between the themes "textiles" and "fashion"
- Theme "brands" overlaps with six other themes, namely "data", "management", "supply", "fashion", "making" and "use"
- Theme "fashion" includes concepts such as "shift", "change", "development", "sustainability", "value", "transparency", "need" and "environmental"
- Theme "textiles" comprises concepts such as "recycling", "waste", "materials", "collection" and "used"
- Themes "workers" and "organic" are situated in the outer circle of the concept map

Main trends and patterns concerning concepts:
- Most concepts are visualized in the theme "brands" (29 concepts)
- Concept "social" forms part of the theme "supply"
- Concepts "sustainable", "market", "create" and "launched" are located in the intersections between the three themes "fashion", "circular" and "making", respectively
- Several spiders can be retraced in the concept map, for example from the concept "use" to the concepts "terms", "year", "water", "process", "clothes" and "range", thereby establishing links to the themes "organic" and "brands"
- Several concept paths are to be found, for example from the concept "collection" to the concept "change" via the concepts "used", "material", "waste", "recycling", "textiles", "materials", "products", "design", "circular" and "fashion"

Visible concepts 100% and theme size 33%

The absolute decrease of social concepts shown for 2015 coincides with the temporary emergence of new concepts (e.g., "fair").
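The cross-group tallies summarized above (and in Fig. 3) amount to per-year set intersections of the concepts visualized for the three groups. The following sketch illustrates this aggregation; the concept sets are invented placeholders rather than the study's data.

concepts = {
    2014: {"corporate":   {"fair", "responsibility", "work", "training"},
           "academic":    {"fair", "responsibility", "work", "consumption"},
           "stakeholder": {"fair", "responsibility", "work", "women"}},
    2018: {"corporate":   {"fair", "global", "circular", "local"},
           "academic":    {"fair", "global", "circular", "values"},
           "stakeholder": {"fair", "global", "circular", "unions"}},
}

for year, groups in concepts.items():
    # Concepts shared by all three groups of publications in that year
    shared = set.intersection(*groups.values())
    print(year, sorted(shared))
# 2014 ['fair', 'responsibility', 'work']
# 2018 ['circular', 'fair', 'global']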
Discussion
Our analysis of major similarities and differences of social sustainability-related concepts as communicated in three groups of publications related to the fashion industry has revealed that the concepts "fair", "global", "life/living", "local", "management", "needs", "responsibility", "values" and "work" are constantly present in all years of investigation. They hence appear to be of prime concern to all groups of actors closely linked to a circular fashion industry (see Figs. 2 and 3 and Table 5). These concepts represent the core, the shared understanding or the basic beliefs of social issues in the textile and fashion context. In the following, we do not elaborate on each theme and concept revealed by our analysis, but summarize and highlight core and peculiar findings pertaining to each research question. The concepts "communities" and "commitment"/"committed" as well as "partners"/"partnership" only occur in the group of corporate publications. Looking at collaboration through a social capital lens, Leder et al. (2020) underline the role of collaborative activities, new relationships, linkages and social networks in advancing circular business models and innovation processes. Here, the concept "women", solely addressed by the stakeholders, is of importance. Women represent a comparatively large group of consumers and thus may accept and promote alternative consumption patterns such as the donation of used clothing (Norum 2017). Women also form the major part of the workforce in the global fashion industry, bearing great responsibilities in terms of income generation and social care. Employment opportunities and ethical, safe working conditions in less-affluent production countries therefore need to be addressed.
Stakeholders, external organizations and other key target groups also need to be actively involved in circular strategies to create sustainable value. However, our findings show that even if stakeholders appear to influence corporate activities in the fashion industry, the companies develop and change towards more social sustainability and circularity in their very own, idiosyncratic way. This finding is supported by Jakhar et al. (2019), who found that even under similar stakeholder pressure, corporate responses differed, reflecting great heterogeneity. Therefore, broadly discussed social sustainability requirements need to be linked to already existing norms and guidelines as well as to planned regulatory initiatives that address minimum standards concerning CSR and ethical business conduct along supply chains (e.g., ISO 26000, the German Supply Chain Act). These would require critical examination of whether their provisions are compatible with a socially oriented circular fashion economy. Vice versa, national, European and international standard-setting bodies and institutions, as well as other secondary stakeholders with a stake in the circular fashion economy, would be well advised to shape and promote specific provisions that go beyond already existing standards and guidelines in the context of social sustainability.
Referring to research question three, 'To what extent do these themes and patterns reflect the movement towards sustainability and circular economy in the fashion industry from a theoretical perspective?', our explorative study shows that several social sustainability-related concepts have been evolving concurrently between H&M and C&A, as well as between the two other groups of publications. This indicates a mutual influence among all three groups examined. However, as shown in Fig. 2, a substantial number of aspects symbolizing social sustainability are repeatedly not displayed as themes or concepts in all three groups of publications (e.g., "solidarity", "participation", "empowerment", "engagement", "appreciation", "creativity", "sufficiency", "inclusivity", "carefulness", "culture" and "justice") and need to be systematically integrated. This, however, would require a different understanding of (economic) concepts and approaches such as value, citizen, time, market or exchange. It further necessitates considering the interrelatedness and embeddedness of such alternative approaches and actors within other contexts (e.g., cultural, political, ethical or material) in order to facilitate a different realization through daily social practices and routines, habits, norms and institutions (Hobson 2019; Hobson and Lynch 2016; Schröder et al. 2020). This also includes sufficiency, collaborative consumption, sharing, non-ownership and solidarity (Iran and Schrader 2017). The simultaneous occurrence of the concept "share" in the concept maps of academic publications and C&A's sustainability report in 2018 could also reflect the shift towards discussions on business model innovations for sustainability in both theory and practice.
Social innovations in a circular textile and fashion economy are disregarded in current debates among the various interest groups. While this mirrors deficits in the conceptualization of the circular economy, it also illustrates both a pressing need and an immense potential for scholars to undertake studies that focus more on social and cultural sustainability-related aspects in circular fashion, such as slow design, degrowth, down-scaling and localized economies (Goldsworthy et al. 2018; Hobson and Lynch 2016). Moreover, the perspectives of all stakeholder groups need to be elaborated and evaluated with a view to triple-bottom-line considerations in the circular economy concept. The importance of collaboration, cooperation and the exchange of information and knowledge among diverse stakeholders is obvious for advancing a sustainable and circular fashion industry (Todeschini et al. 2017).
Indeed, as highlighted in previous research, an only "curtained and impoverished view of the role of citizens" (Hobson and Lynch 2016, p. 3) prevails in current circular economy discourses. This largely de-politicized, passive role is at its core restricted to the acceptance or rejection of new business models within existing norms of greater economic efficiencies, sustained material throughputs and "an unquestioned reliance upon, and uptake of, technologically-mediated forms of social engagement" (p. 3). Thus, by fostering only weak and micro-level approaches to sustainability, with only a limited idea of people and their behavior in social contexts, the current framings and narratives of the circular economy lack a debate on reducing absolute consumption levels as well as a critical exploration of the antecedents, mechanisms and implications of radical changes and socio-ecological transformations enabling post-capitalist, diverse and localized economies and de-growth.
At the same time, while this could allow for a proactive presentation of changes towards sustainability and circularity, it also implies risks of social- and greenwashing in sustainability reporting and communication. Against the background of the particular institutional complexity and multiple stakeholders' interests in the fast fashion context, companies such as retailers are increasingly using online communication tools to raise public and other stakeholders' awareness of their sustainability practices and to implement reputation-enhancing measures (Naidoo and Gasparatos 2018; Da Giau et al. 2016; Gazzola et al. 2020; Islam and Deegan 2010). While this may legitimize their business operations and promote stakeholder engagement, the success of such measures depends on altruistic motives, accountability and transparency in their sustainability communication practices (Miotto and Youn 2020). Yet, it raises questions regarding the moral implications and boundaries of an increasing adoption of (social media and online) communication and related technologies. Here, tensions between corporate disclosure and practice become obvious, eventually inhibiting an effective amelioration of sustainability impacts (Cho et al. 2015; Garcia-Sanchez et al. 2014). Nevertheless, promoting a circular fashion economy in a socially sustainable way should imply the development of alternative and innovative business models and corporate strategies based on a holistic and extended view of value proposition and value creation (Bocken et al. 2015; Evans et al. 2017).
Based on our results, we suggest the following three propositions for further research on the integration of social sustainability in a circular fashion economy:
1. The more the concepts of circular economy are linked to notions of social sustainability, the more sustainable the concepts become.
2. The more diverse stakeholder analyses are in terms of their social requirements, the more profound are the recommendations for social norms and international standards guiding responsible business conduct in a circular economy.
3. The more broadly the role of consumption, consumer awareness, attitudes and behavior as well as values is reflected and rethought, the better the social dimension of sustainability is integrated into concepts of circular economy and their practical implementation by companies.
Conclusions and limitations
The present study investigated the evolution and integration of social sustainability-related concepts in a fashion industry progressing towards a circular economy. The results contribute to the extant literature and research on the circular economy in general, and on the circular textiles, fashion and apparel industry in particular, by employing a grounded theory approach and seminally identifying, sorting and consolidating themes and concepts with particular regard to social sustainability across three groups of publications. The shared concepts "fair", "global", "life/living", "local", "management", "needs", "responsibility", "values" and "work", as well as underrepresented ones, for example "empowerment", "solidarity", "sufficiency", "justice", "accountability", "culture" or "well-being", became obvious by employing Leximancer™. This outcome confirms previous studies demanding an integration of social sustainability in the circular economy and calling for a more holistic framework. The findings emphasize the necessity of a mutual exchange of different beliefs and views on social sustainability and circular economy, thus resolving the plurality of understandings and questioning undefined or neglected issues.
From a theoretical-conceptual perspective, all interest groups reveal an inadequate systemic consideration of social, cultural and ethical norms, principles, beliefs and worldviews. Further research should examine the social aspects neglected in the concept maps as discussed above (e.g., "stewardship", "de-growth") as well as explore the general role(s), engagement and commitment of the wider public, including the salience of their interests and objectives (Thijssens et al. 2015), in a sustained and enduring development towards a socially sustainable and circular fashion industry.
Concerning practical implications, the results imply a greater need for considering social aspects in further developing circular strategies in the fashion industry. Long-term perspectives, philanthropic and social-impact business models and strategies are rarely found in the fashion industry and need to be further explored and linked to economic values. Here, clear progress in content-wise convergence and reconciliation with already existing and newly established international norms and standards is needed. Corporate discourses, however, may imply risks of deception and social- and greenwashing that companies need to counteract in order to enhance the legitimacy of circular transitions. Thus, further research on corporate communications could critically question specific narratives or concepts such as "wages", "policies", "commitment", "partnership" or "fair" with regard to real-world transformations towards socially sustainable circular fashion practices and operations.
From a methodological viewpoint, the research is limited in terms of the number of companies and years investigated. The study only provides information on communication, themes, patterns and relations, but cannot show cause-effect relations or causalities. Further research may employ other qualitative approaches, such as interviews with representatives and experts from each of the three groups of publications (corporate, academic and broader public), e.g. to scrutinize gaps between communication or representation and real-world corporate activities in order to uncover blue- and greenwashing of corporate communication strategies.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Appendix
Table 6 (totals row): Total 13 (24) | 7 (15) | 8 (15) | 8 (13)

Several concepts are represented in the two groups of scholarly and stakeholder publications (e.g., "disposal", "environment", "organic", "resources" and "waste"). Single concepts are visualized across all groups of publications (e.g., "cotton", "resources" and "water") or in a single group (e.g., "climate", "disposal" and "energy"). The concept "water" appears consecutively in the concept maps of corporate and stakeholder publications, yet not in the group of academic publications.

a Number in parentheses indicates the frequency of concept occurrence across all years of investigation.

Funding Open Access funding enabled and organized by Projekt DEAL.
Declarations
Conflict of interest K. Beyer and M.G. Arnold declare that they have no competing interests.
Ethical standards This article does not contain any studies with human participants or animals performed by any of the authors. Informed consent was not required since the present research does not include individuals. | 2022-04-08T15:26:22.241Z | 2022-04-06T00:00:00.000 | {
"year": 2022,
"sha1": "da1f1f621921d5b7a4551017ab59f5ea66a2c766",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00550-022-00527-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "b71ec7be1bbe8d7ebe6609cc5c1864d722efaef0",
"s2fieldsofstudy": [
"Environmental Science",
"Business",
"Sociology"
],
"extfieldsofstudy": []
} |
58010970 | pes2o/s2orc | v3-fos-license | Correlation of insulin-resistance with blood fat and glucose in elder patients after surgery for hepatic carcinoma
The present study was designed to analyze variations in blood fat, blood glucose and insulin resistance in elderly patients following surgery for hepatic carcinoma. It also investigated the correlation of insulin with the levels of serum leptin and blood fat. A total of 80 patients with primary hepatic cancer who were admitted to The First Hospital of Lanzhou University for treatment between October 2014 and June 2016 were enrolled in the study. At the 1-year follow-up, the patients were divided into two groups based on the recurrence of hepatic cancer after surgery. The levels of serum leptin were detected prior to, one month after and one year after surgery; the changes in blood fat, body mass index (BMI), waistline and hipline were measured at one year after surgery; and the fasting blood glucose and blood glucose at 2 h after a meal were measured. The fasting insulin (FINS) level and homeostasis model assessment-insulin resistance (HOMA-IR) index were also measured. Correlations between serum leptin and total cholesterol, FINS and fasting blood glucose were analyzed. In the recurrence group, the serum leptin and FINS levels were significantly reduced, while waistline and hipline were increased, compared with the non-recurrence group (P<0.05). The BMI and fasting blood glucose in the recurrence group were significantly elevated in comparison with the non-recurrence group (P<0.05). The HOMA-IR index was significantly increased in the recurrence group compared with the non-recurrence group (P<0.05). These results indicated that following surgery for hepatic cancer, the level of serum leptin in patients with recurrence was decreased, with an increase in susceptibility to abnormal metabolism of blood fat and glucose. In addition, serum leptin was negatively correlated with the total cholesterol level and fasting blood glucose, and positively correlated with the FINS level, in these patients. It was concluded that leptin levels decreased in patients with postoperative recurrence, in whom accumulation of visceral adipose tissue and abnormal blood glucose metabolism were also observed.
Introduction
Hepatic malignant tumors are among the most common malignant tumors in the world. Among males, their morbidity and mortality rates rank 5th and 2nd worldwide, respectively; among females, 9th and 6th (1). Hepatic malignant tumors comprise primary and secondary hepatic carcinoma. The common clinical manifestations include liver pain, abdominal distension, anorexia, fatigue, wasting, and progressive liver enlargement or an upper abdominal mass; some patients have symptoms including low fever, jaundice, diarrhea and upper gastrointestinal bleeding (2). The etiology and exact molecular mechanisms of hepatic malignant tumors are not completely clear. At present, the pathogenesis of hepatic cancer is considered a complex, multi-factor, multi-step process. Epidemiological and experimental data have shown that many factors, including hepatitis B virus (HBV) and hepatitis C virus (HCV) infection, aflatoxin, drinking water pollution, alcohol, cirrhosis, sex hormones, nitrosamines and trace elements, are related to the incidence of hepatic malignant tumors (3). Individualized comprehensive therapy according to the stage of hepatic carcinoma is key to improving the curative effect. Therapeutic methods include surgery, hepatic artery ligation, hepatic artery chemoembolization, radiofrequency, cryotherapy, laser, microwave, chemotherapy and radiotherapy. Biological therapy and traditional Chinese medicine are also useful in the treatment of hepatic malignant tumors (4).
Hepatic malignant tumors are among the most common solid tumors, and surgical treatment has been regarded as the preferred and most efficient method (5). Surgical resection, which can effectively prolong survival and improve the quality of life of patients, is still the main treatment for hepatic cancer. However, abdominal pain, jaundice, fatigue, weight loss and unexplained fever are often observed in patients after treatment. Both increased AFP levels and imaging findings can indicate the recurrence of hepatic cancer. In addition, more than 30% of patients show distant metastasis within 1 year after surgery. Studies have shown that there are 3 major causes of postoperative recurrence of hepatic cancer: the existence of metastases before surgery, preoperative assessment errors leading to incomplete resection, and reduced immune function (6).
A previous study confirmed that patients may experience abnormal metabolism of blood fat and glucose after surgery for hepatic carcinoma (7). However, few studies have focused on the levels of blood fat, blood glucose and insulin, or on the correlations among them. Serum leptin, a protein hormone secreted by adipocytes (8), is expressed in multiple organs and systems and is involved in metabolic disorders; for example, it promotes weight loss by regulating the metabolism of fat, carbohydrate and protein, thereby suppressing appetite, reducing the intake of energetic substances (9) and increasing energy efficiency (10). A study (11) has confirmed that serum leptin is significantly associated with clinical efficacy and recurrence in hepatic carcinoma patients. To better explore the variations in the metabolism of blood fat and glucose and in insulin function in hepatic carcinoma patients with recurrence after surgery, we analyzed the alterations in these indicators and investigated the correlations of serum leptin with blood fat, fasting blood glucose and insulin levels.
Patients and methods
Patients. We enrolled a total of 80 patients with primary hepatocellular carcinoma who were admitted to The First Hospital of Lanzhou University (Lanzhou, China) between October 2014 and June 2016 for open surgery. These patients were diagnosed by upper abdominal computed tomography and by biopsy of samples collected during or before surgery. All participants underwent surgical treatment and had an estimated survival time of over 1 year. Before enrollment, participants signed informed consent, and this study was approved by the Ethics Committee of The First Hospital of Lanzhou University. Patients with severe liver or kidney dysfunction, hyperlipidemia before surgery, cachexia, diabetes mellitus, decreased insulin function or insulin resistance, mental diseases, or a BMI >28 kg/m2 before surgery, and those refusing to be enrolled in this study, were excluded. After the 1-year follow-up, patients were divided into two groups according to the recurrence of hepatic carcinoma after surgery, i.e., the recurrence group and the non-recurrence group. The basic information of the patients is shown in Table I. Differences in sex, age, the proportion of patients with hepatitis B, hepatitis B disease course, alcohol-intake history and duration, and staging of hepatic carcinoma showed no statistical significance (P>0.05).
Methods. All enrolled participants underwent surgical treatment followed by regular outpatient follow-up with abdominal ultrasound and CT examinations to identify postoperative recurrence. We compared the levels of serum leptin before surgery and at one month and one year after surgery; the changes in blood fat, body mass index (BMI), waistline and hipline at one year after surgery; the fasting blood glucose and blood glucose at 2 h after a meal; and the fasting insulin (FINS) level and homeostasis model assessment-insulin resistance (HOMA-IR) index. Finally, we analyzed the correlations of serum leptin with total cholesterol, FINS and fasting blood glucose.
Evaluation methods. Serum leptin was detected by enzyme-linked immunosorbent assay (ELISA) (cat. no. E-EL-H0113c; Elabscience, Wuhan, China). The procedure comprised dilution, loading, incubation, dosing, washing, enzyme addition, incubation, washing, color development, termination and determination (8). The normal reference range was 0.69-11.46 µg/l. For the detection of indicators associated with blood fat, the levels of total cholesterol (TC), triglyceride (TG), low-density lipoprotein cholesterol (LDL-C) and high-density lipoprotein cholesterol (HDL-C) were measured with an automatic biochemical analyzer (Abbott AEROSET; Diamond Diagnostics Inc., Holliston, MA, USA). Specimens of fasting elbow venous blood were collected in the morning at enrollment and at 1 year after intervention. BMI was calculated as body weight (kg) divided by the square of height (m); waistline was measured in the supine position with stable breathing at 1 cm above the umbilicus, while hipline was measured at the fullest part of the hips. Blood glucose, including fasting blood glucose and blood glucose at 2 h after a meal, was measured with the HITACHI 7080 Automatic Biochemical Analyzer (Beckman Coulter, Inc., Brea, CA, USA) (normal range of fasting blood glucose, 3.9-6.1 mmol/l; normal range of blood glucose at 2 h after a meal, <7.8 mmol/l). HOMA-IR was calculated with the formula HOMA-IR = [fasting blood glucose (mmol/l) x FINS (mU/l)]/22.5, in which the normal range of FINS was 3.0-24.9 µU/ml and the normal value of HOMA-IR was 1. The assay of FINS was carried out with the Beckman Access DXI 800 spectrometer (Beckman Coulter, Inc.).
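As a quick, self-contained illustration of the two index formulas above, the following Python sketch computes BMI and HOMA-IR; the patient values are invented for the example and do not come from the study.

def bmi(weight_kg: float, height_m: float) -> float:
    # Body mass index: weight divided by the square of height.
    return weight_kg / (height_m ** 2)

def homa_ir(fasting_glucose_mmol_l: float, fins_mu_l: float) -> float:
    # HOMA-IR = [fasting blood glucose (mmol/l) x FINS (mU/l)] / 22.5
    return fasting_glucose_mmol_l * fins_mu_l / 22.5

# Hypothetical patient: 70 kg, 1.70 m tall, fasting glucose 5.6 mmol/l,
# fasting insulin 10 mU/l.
print(round(bmi(70, 1.70), 1))       # 24.2
print(round(homa_ir(5.6, 10.0), 2))  # 2.49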
Statistical analysis. Statistical Product and Service Solutions (SPSS) v21.0 (IBM Corp., Armonk, NY, USA) was used for statistical processing. Measurement data are presented as the mean ± standard deviation (SD), and Student's t-test was adopted for comparisons of means between the two groups. Enumeration data are presented as percentages, and the chi-square test was performed for intergroup comparisons of rates. Correlations were assessed using Pearson's correlation analysis and visualized with scatter diagrams. P<0.05 was considered to indicate a statistically significant difference.
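The analyses above were run in SPSS; purely to illustrate the same three tests, here is a minimal sketch in Python with SciPy. All arrays are invented placeholder numbers, not the study's measurements.

import numpy as np
from scipy import stats

# Placeholder measurements for two groups (not the study's data)
recurrence = np.array([3.1, 2.8, 3.5, 2.9, 3.3])
non_recurrence = np.array([5.0, 4.6, 5.4, 4.9, 5.2])

# Student's t-test for the comparison of means between two groups
t_stat, p_t = stats.ttest_ind(recurrence, non_recurrence)

# Chi-square test for an intergroup comparison of rates (2x2 table)
table = np.array([[12, 28], [5, 35]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Pearson's correlation, e.g., serum leptin vs. total cholesterol
leptin = np.array([4.2, 3.8, 5.1, 2.9, 3.5])
tc = np.array([5.9, 6.2, 5.1, 6.8, 6.4])
r, p_r = stats.pearsonr(leptin, tc)

print(f"t = {t_stat:.2f} (P = {p_t:.4f})")
print(f"chi2 = {chi2:.2f} (P = {p_chi2:.4f})")
print(f"r = {r:.2f} (P = {p_r:.4f})")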
Results
Comparison of the serum leptin levels before, 1 month and 1 year after surgery between the two groups. In the recurrence group, the serum leptin levels before, 1 month and 1 year after surgery were significantly lower than those in the non-recurrence group (P<0.05; Fig. 1).
Comparison of the blood fat at 1 year after surgery between the two groups. There was no significant difference in the levels of TC, TG, LDL-C and HDL-C between the two groups before operation (P>0.05). One year after operation, LDL-C, TC and TG in both groups were lower than before operation (P<0.05), and HDL-C was higher than before operation (P<0.05). After the 1-year follow-up, we found that the levels of TC, TG and LDL-C (the indicators of blood fat) in the recurrence group were significantly higher than those in the non-recurrence group (P<0.05), while the level of HDL-C was significantly lower than that in the non-recurrence group (P<0.05; Table II).

Table II notes: a, comparison between before operation and 1 year after operation in the recurrence group; b, comparison between before operation and 1 year after operation in the non-recurrence group; c, comparison between the non-recurrence group and the recurrence group at 1 year after operation. TC, total cholesterol; TG, triglyceride; LDL-C, low-density lipoprotein cholesterol; HDL-C, high-density lipoprotein cholesterol.

Figure 1. Comparison of the serum leptin levels prior to surgery and at 1 month and 1 year after surgery between the two groups. In the recurrence group, the serum leptin levels before surgery were significantly reduced compared with those in the non-recurrence group. P<0.05.
Comparison of the BMI, waistline and hipline between the two groups. There was no significant difference in BMI, waist circumference and hip circumference between the two groups before operation (P>0.05). One year after operation, the BMI, waist circumference and hip circumference of the two groups were lower than those before operation (P<0.05). In the recurrence group, the BMI was significantly higher than that in the non-recurrence group (P<0.05), and similar results were also found in comparison of waistline and hipline (P<0.05; Table III).
Comparison of the fasting blood glucose and blood glucose at 2 h after meal.
There was no significant difference in fasting blood glucose and blood glucose at 2 h after meal between the two groups before operation (P>0.05). At one year after operation, the fasting blood glucose and blood glucose at 2 h after meal in the two groups were lower than those before operation (P<0.05). In the recurrence group, the fasting blood glucose was significantly higher than that in the non-recurrence group (P<0.05), and the blood glucose at 2 h after meal was also higher than that in the non-recurrence group (P<0.05; Table IV).
Comparison of the levels of FINS and HOMA-IR index between the two groups.
There was no significant difference in the levels of FINS and HOMA-IR between the two groups before operation (P>0.05), and the levels of FINS and HOMA-IR at 1 year after operation were lower than those before operation (P<0.05). In the recurrence group, the level of FINS was significantly lower than that in the non-recurrence group, while the HOMA-IR index was significantly higher than that in the non-recurrence group (P<0.05; Table V).

Table III notes: a, comparison between before operation and 1 year after operation in the recurrence group; b, comparison between before operation and 1 year after operation in the non-recurrence group; c, comparison between the non-recurrence group and the recurrence group at 1 year after operation. BMI, body mass index.
Correlation between the serum leptin and TC levels in the recurrence group.
In the recurrence group, the level of serum leptin was negatively correlated with that of TC (r=-0.8910, P<0.001; Fig. 2).
Correlation between the serum leptin level and FINS level in the recurrence group.
In the recurrence group, the level of serum leptin was positively correlated with the FINS level (r=0.9547, P<0.001; Fig. 3).
Correlation between the serum leptin level and the fasting blood glucose in the recurrence group. In the recurrence group, the level of serum leptin was negatively correlated with the fasting blood glucose level (r=-0.9562, P<0.001; Fig. 4).

Table V notes: a, comparison between before operation and 1 year after operation in the recurrence group; b, comparison between before operation and 1 year after operation in the non-recurrence group; c, comparison between the non-recurrence group and the recurrence group at 1 year after operation. FINS, fasting insulin; HOMA-IR, homeostasis model assessment-insulin resistance.
Discussion
The liver is a major organ for energy metabolism. Once hepatic function is damaged, abnormalities emerge in the metabolism of blood fat and glucose (12). Moreover, elderly patients are more susceptible to hyperlipidemia, hyperglycemia, hypertension and chronic obstructive pulmonary disease. Particularly in elderly hepatic carcinoma patients, liver function is significantly damaged, with an obvious decline in compensatory capacity; the adverse effects of surgical treatment, anesthesia and postoperative chemotherapy further compromise the normal functions of the liver (13).
Previous studies have confirmed that the level of serum leptin is decreased in over 90% of hepatic carcinoma patients (14), together with abnormalities in the metabolism of blood fat and glucose (15). However, for hepatic carcinoma patients who experience recurrence after surgery, few studies have reported the variations in the levels of serum leptin, blood glucose and blood fat. Thus, more studies are required to establish whether the changes in these indicators are significant.
In this study, all participants were divided into the recurrence group and the non-recurrence group based on postoperative recurrence. We found that in the recurrence group, the levels of serum leptin before surgery and at one month and one year after surgery were significantly lower than those in the non-recurrence group (P<0.05), suggesting that in hepatic carcinoma patients with postoperative recurrence, the serum leptin level is significantly decreased, thereby remarkably attenuating its regulatory effect on energy metabolism. In addition, comparisons of the indicators of blood fat, BMI, waistline and hipline at one year after surgery showed that the levels of TC, TG and LDL-C in the recurrence group were significantly higher than those in the non-recurrence group, while the level of HDL-C was significantly lower; furthermore, the BMI, waistline and hipline in the recurrence group were significantly larger than those in the non-recurrence group. These results suggest that hepatic carcinoma patients with postoperative recurrence are more susceptible to abnormalities of blood fat, thereby affecting weight and fat distribution. Furthermore, we compared the fasting blood glucose, blood glucose at 2 h after a meal, FINS and HOMA-IR index, and found that compared with the non-recurrence group, the levels of fasting blood glucose, blood glucose at 2 h after a meal and the HOMA-IR index were significantly higher, while the FINS level was decreased. These results reveal that in hepatic carcinoma patients with postoperative recurrence, the abnormal regulation of blood glucose is further exacerbated, making patients more susceptible to simultaneous increases in fasting and postprandial blood glucose, frequently accompanied by abnormal insulin function and the emergence of insulin resistance. The analysis of the correlation of serum leptin with TC, FINS and fasting blood glucose showed that in the recurrence group, the level of serum leptin was negatively correlated with the TC and fasting blood glucose levels, and positively correlated with the FINS level.
In hepatic carcinoma patients, the serum leptin level is significantly correlated with the metabolism of blood fat and glucose. Adipose tissue accumulation (central-visceral, reflected by waistline, and peripheral, reflected by hipline) was negatively correlated with the serum leptin level, and leptin receptor sensitivity is significantly reduced in the pathological state, resulting in an imbalance of the normal fat-islet axis feedback mechanism and the accumulation of fat, especially visceral fat (16,17). A decrease in serum leptin (18) results in a decline in appetite with weight loss. However, sufficient nutrition is usually provided for hepatic carcinoma patients with postoperative recurrence, and the massive intake of nutrients cannot participate in metabolism in a timely manner (19,20), which further leads to abnormal blood fat metabolism, an increase in BMI and the accumulation of fat in the abdomen and hips (21), thereby inducing an increase in blood glucose. Moreover, long-term hyperglycemia can further stimulate hepatic carcinoma cells, which can induce the accumulation of nutrients (22), thicken the capillary basement membrane and decrease membrane permeability (23), thus facilitating cell proliferation (24-26) and further giving rise to postoperative recurrence.
In conclusion, leptin levels decreased in patients with postoperative recurrence, and accumulation of visceral adipose tissue and abnormal blood glucose metabolism also occurred. The serum leptin level is negatively correlated with the total cholesterol level and fasting blood glucose, and positively correlated with FINS levels, in patients with postoperative recurrence. However, more studies are needed to investigate whether serum leptin is a relevant factor for hepatic cancer recurrence, and to explore the prognostic value of serum leptin in hepatic cancer. | 2019-01-22T22:34:04.875Z | 2018-11-15T00:00:00.000 | {
"year": 2018,
"sha1": "3627b865ac07d631858f0b344bda76660440d7f6",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.3892/etm.2018.6973",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3627b865ac07d631858f0b344bda76660440d7f6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
128877958 | pes2o/s2orc | v3-fos-license | Recent status of bowhead whales, Balaena mysticetus, in the wintering grounds off West Greenland
Bowhead whales, Balaena mysticetus, belonging to the Davis Strait/Baffin Bay stock, have historically wintered in Baffin Bay and Davis Strait, including waters along the west coast of Greenland in and near the entrance of Disko Bay. Aerial surveys of the Disko Bay region during late winter (1981, 1982, 1990, 1991, 1993 and 1994) showed that it was still visited regularly by a few tens of whales. Commercial whaling on bowheads in Baffin Bay and Davis Strait ended in about 1915, but occasional killing continued until as recently as the 1970s. The low numbers of bowheads observed off West Greenland in recent years are consistent with the results of surveys of the summering grounds in the eastern Canadian Arctic, indicating that any recovery has been exceedingly slow. The only conclusion supported by the data is that the current stock size is a small fraction of what it was prior to commercial whaling.
Introduction
The stock of bowhead whales, Balaena mysticetus, centred in Davis Strait and Baffin Bay (the Davis Strait/Baffin Bay stock) consisted of at least 12,000 whales in 1820 (Woodby & Botkin 1993). Since the whaling grounds off West Greenland had already been depleted by then (Ross 1993), the abundance prior to the start of commercial whaling in this region (early 1700s) must have been considerably greater than 12,000. The current stock size has been estimated at about 250 (Finley 1990) or 350 whales (Zeh et al. 1993).
By far the largest concentrations of bowheads in the eastern North American Arctic occur in Isabella Bay, northeastern Baffin Island, Canada, the site of the only dedicated field study ever conducted on this stock (Finley 1990). One individual photographed in Isabella Bay in late September 1986 was re-photographed in the pack ice near the entrance of Disko Bay in early April 1990 (Heide-Jørgensen & Finley 1991).
Eschricht & Reinhardt (1866) summarised information on the bowhead's occurrence off West Greenland, deriving data mainly from journals kept at whaling stations on shore. Since then, little has been published about this species in Greenland apart from reports of opportunistic observations. Born & Heide-Jørgensen (1983) summarised observations near Qeqertarsuaq (Godhavn; see Fig. 1 for localities), and Kapel (1985) reported the capture of a young bowhead in a net set for white whales, Delphinapterus leucas, near Upernavik. During a 29-day icebreaker cruise in Davis Strait and Baffin Bay, February-March 1976, Turl (1987) saw one bowhead off the mouth of Disko Bay (ca. 68°N, ...).

Systematic aerial surveys of marine mammals in West Greenland waters have provided some quantitative data on bowheads. We have analysed these data and evaluated the current distribution and abundance of bowheads in this part of the Davis Strait/Baffin Bay stock's range.
Material and methods
Systematic aerial surveys of West Greenland waters were conducted during March 1981 and 1982 with the principal objective of documenting the distribution and abundance of marine mammals along a proposed liquid natural gas tanker route (McLaren & Davis 1981, 1983). Another series of aerial surveys, stratified to provide intense coverage of the core winter distribution of the Baffin Bay stock of white whales, was conducted during March 1991, 1993 and 1994 and April 1990 (Heide-Jørgensen et al. 1993; Heide-Jørgensen & Reeves 1996). Some of the effort in these latter surveys, especially to the north of the main white whale strata, was intended to cover the winter distribution of walruses, Odobenus rosmarus, off central West Greenland (Born et al. 1994).
The present study uses sightings and effort data from all six sets of surveys to assess bowhead distribution and abundance. For the 1981 and 1982 surveys we extracted data from the figures presented by McLaren & Davis (1981, 1983), whereas for the other four surveys we used the original field data. It was assumed that the criteria and methods for surveying white whales in the 1990s surveys, e.g. low wind speeds (sea states ≤ 3 on the Beaufort scale), good visibility (>1 km), slow flying speed (160-170 km/hr) and a target survey altitude of 750 ft (228 m) above sea level, also ensured optimal conditions for detecting bowheads. The 1981-82 surveys were flown at a lower altitude (150 m) and higher speed (about 220 km/hr), probably rendering them less efficient for detecting animals in the surveyed strips.
From the positions of 23 (92 %) of the 25 bowheads sighted in the aggregate data set, a stratum for bowhead late-winter (March-April) distribution off West Greenland was constructed (Fig. 1). This stratum has an area of 21,260 km² and is bounded to the east by land. The two sightings outside the bowhead stratum were within about 75 and 25 km of its borders, respectively.
The low number of sightings precluded any detailed statistical scrutiny of the abundance estimation technique. Instead, simple strip-census estimates were calculated for each year. An effective strip width of 700 m on either side of the aircraft was assumed for all years. This strip width is reasonable for the distribution of sighting distances obtained in the 1990s surveys (Fig. 2).
McLaren & Davis (1981, 1983) limited their search to an area within 800 m on either side of the flight path. Two sightings (3 animals) in 1994 were at distances greater than 700 m and therefore were not used for calculating that year's abundance estimate.
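The strip-census calculation itself is straightforward: abundance is the observed on-strip density scaled up to the stratum area. A minimal sketch in Python, using the paper's stratum area (21,260 km²) and 700 m strip half-width; the sighting count and trackline length below are purely illustrative, since per-year effort is not tabulated in the text:

```python
# Strip-census abundance estimate: N = A * n / (2 * w * L)
# A: stratum area (km^2); n: animals sighted within the strip;
# w: effective strip half-width (km); L: trackline length flown (km).

def strip_census_estimate(area_km2, n_sighted, strip_half_width_km, trackline_km):
    """Point estimate of abundance from a strip-transect survey."""
    surveyed_area = 2.0 * strip_half_width_km * trackline_km  # strip on both sides of aircraft
    density = n_sighted / surveyed_area                        # animals per km^2
    return density * area_km2

# Hypothetical example: 5 on-strip sightings over 2,500 km of trackline
# yields an estimate of roughly 30 whales, consistent in magnitude with
# the "few tens" reported in the text.
print(strip_census_estimate(area_km2=21_260, n_sighted=5,
                            strip_half_width_km=0.7, trackline_km=2_500))
```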
Results
The observed winter distribution of bowheads was fairly consistent off West Greenland from year to year. Sightings in March and early April tended to be clustered in an area off the mouth of Disko Bay south to Kangaatsiaq (68°00′N).
Bowheads were sighted at least once in all of the surveys. Point estimates of relative abundance ranged from as few as 6 in 1993 to 51 in 1991, but all of the estimates had low precision (Table 1). The only conclusion supported by these data is that at least a few tens of bowheads were present in West Greenland waters in the years with surveys.
Most of the bowhead sightings were of solitary individuals, although groups of up to 3 or 4 closely associated animals have been reported occasionally (Born & Heide-Jørgensen 1983). A group of 4 was seen in the 1982 survey, and a closely associated pair of large animals was seen in 1994.
Distribution and seasonal movements
Historically, bowheads were found in winter as far south as Labrador in eastern Canada and about 64°N in West Greenland (Eschricht & Reinhardt 1866; Reeves et al. 1983). The area surrounding the mouth of Disko Bay, where the tongue of permanently open water along the east side of Davis Strait ends, was well known for its winter and spring concentration of bowheads (Fig. 1). According to journals of the whaling station at Sisimiut (Holsteinsborg; 66°56′N) from 1780 to 1839, bowheads usually arrived there in mid December and began migrating north by mid March (Eschricht & Reinhardt 1866). Somewhat farther north at Qeqertarsuaq (69°14′N) the whales were usually present until mid June (Eschricht & Reinhardt 1866). Assuming that the present-day migratory schedule is similar to the historical one, the timing of the 1990s surveys reported here, as well as those of McLaren & Davis (1981, 1983), coincided with the whales' expected presence in West Greenland waters.
[Fig. 2 caption: distribution of sighting distances from the trackline during the 1990, 1991, 1993 and 1994 surveys. The two sightings more than 700 m from the trackline were not used for abundance estimation. Counts are sightings, not individuals; some sightings were of more than one individual. A total of 23 observations fell within the bowhead stratum, 3 of which were outside the strip; 2 additional sightings of single bowheads were made outside the stratum, one in 1981 (Fig. 3) and one in 1982 (Fig. 4).]
-70" ' ; . .Winnipeg, pers. comm.).Two bowheads engaged in courtship behaviour were seen along the ice edge in Smith Sound at 77"N, 72"W on 10 May 1996 (A.Mosbech, pers.comm.).Bowheads have also been seen during February and March in heavy ice well away from the Greenland coast to as far north as 6T30'-69"OO' N (e.g.Duncan 1827; M'Clintock 1860).The stock affinities of bowheads that winter off southeastern Baffin Island and in the Labrador Sea are uncertain, as some or all of them may be migrants from the summering areas in northern Hudson Bay and Foxe Basin (cf.Reeves and Mitchell 1990).Kapel (1985) interpreted the capture in autumn at Upernavik (mentioned above) as demonstrating the continued use of a traditional southward migration route along the west coast of Greenland.While some whales may use this route, the paucity of bowhead observations by Greenlandic hunters, who are regularly at sea hunting white whales, narwhals, Monodon monoceros, and seals during the autumn (cf.Heide-Jgrgensen 1994), is in striking contrast to the situation off northeastern Baffin Island, where a nearshore autumn migration is well documented from scientific investigations (Davis & Koski 1980;Finley 1990) and observations by local hunters (Anonymous 1995).
Numbers and trends
It is important to bear in mind that bowheads can dive for periods of at least 40 min and that they may spend 80 % or more of their time below the surface and thus be unavailable for sighting (Dorsey et al. 1989; Würsig & Clark 1993; Finley & Goodyear 1993). The availability bias in the survey results presented here is not measurable, as the percentage of time at the surface depends on activity state (e.g. feeding, socialising, travelling, resting) and other confounding variables (Richardson et al. 1995).
The settlement Kitsissuarsuit (Hunde Ejland) is situated in the southern part of the mouth of Disko Bay. The Inuit there derive a large part of their annual sustenance and income from hunting narwhals. During February-April they keep daily look-outs for whales, but bowheads have been observed only occasionally (e.g. one to the west of the island on 27 March 1990; Heide-Jørgensen, unpubl. data; also see Born & Heide-Jørgensen 1983). The hunters have not noted any increase in sightings despite the fact that more dinghies are being used each winter to search a larger and larger area for whales.
The fishing banks off West Greenland are visited by shrimp trawlers and other types of vessels, e.g. cutters and dinghies, at all seasons, including periods in late winter and spring when the pack ice is navigable. In addition, the land-fast ice along the mouth of Disko Bay is used for hunting trips by dog sledge. The combined effort of all these types of traffic has increased enormously during the post-war period, especially during the past 30 or 40 years with the advent and proliferation of mechanical transportation (Mattox 1973; Horsted 1978; Rask 1993), in combination with rapid human population growth (Lyck & Taagholt 1987). In Qeqertarsuaq municipality alone, the vessel fleet in 1989 consisted of four large multi-purpose trawlers and 19 smaller fishing cutters, as well as several hundred dinghies (= skiffs) with 40+-horsepower outboard motors (Caulfield 1993). Yet in spite of the regular and widespread human presence at sea, few reports of bowheads have been received from this or any other area of West Greenland.

Some bowheads presumably follow the shore lead (the commercial whalers called it the "landwater") along the coast of West Greenland during their northward spring migration, much as the 19th century British whalers did (cf. Scoresby 1820; Barron 1970). The shore lead along the west coast of Disko Island during March and April is extremely narrow (usually <2 km wide), bordered by land or land-fast ice to the east and by the dense pack ice of Baffin Bay to the west. To a certain extent, the shore lead can be perceived as a bottleneck for whales travelling north from the relatively extended wintering grounds in Davis Strait and the Labrador Sea. The same lead west of Disko Island is also used by hunters as they pursue white whales, narwhals, walruses and seals during the late winter and early spring. Although these hunters do observe bowheads, the numbers are few (<10 sightings per year; e.g. Born & Heide-Jørgensen 1983). If a major increase in local bowhead abundance had occurred over the last few decades, it definitely would have been noticed by the Greenlanders. It is important to recognise, however, that ice cover does not restrict bowhead movements to nearly the same degree as it does vessel passage, so it should not be assumed that all, or even most, of the whales migrate northwards through the shore lead off Disko.
Group size, group composition and behaviour
It is unlikely that the small mean group size from the aerial survey data (1.6 ± SD 1.2) is an artifact of survey procedures, because the routine practice (at least during the four sets of surveys in the 1990s) was to interrupt level flight and circle above bowhead sightings for photography. In these circumstances, additional animals in the vicinity should have been detected.
Bowheads in other stock areas are known to migrate in waves or pulses, with young (weaned but immature) animals often travelling separately from adults (Braham et al. 1984; Moore & Reeves 1993). Similar segregation has been reported on the summering grounds in the Beaufort Sea (Cubbage & Calambokidis 1987). Finley (1990) found that the late-summer "resident" whales in Isabella Bay consist mainly of large adults, with few calves and small subadults present. Whaling manuscripts from the late 19th and early 20th centuries, along with recent observations of large adults accompanied by small calves off northern Baffin Island, reinforce the idea that the Davis Strait stock is segregated to some degree during the summer and autumn (Finley 1990; Finley & Darling 1990; Reeves & Mitchell 1991). Relatively few observations of bowheads have been made on any of the wintering grounds (Reeves et al. 1983; Moore & Reeves 1993). The observations reported here include animals subjectively classified as small subadults (<10 m) as well as sexually mature adults (>13 m; see Koski et al. (1993) and Schell & Saupe (1993) for discussions of size at age in bowheads). No calves were seen during the aerial surveys, but several sightings of calves have been reported by local hunters in West Greenland (Anonymous 1981; Born & Heide-Jørgensen 1983).
Although sexual activity in bowheads can be observed at various times of the year, most conceptions are believed to occur in the late winter or spring (Koski et al. 1993). Eschricht & Reinhardt (1866) reported that boisterous sexual activity was typical of bowheads near Sisimiut during January and February. Two large animals that we observed off Kangaatsiaq on 14 March 1994 were interacting vigorously at the surface, rubbing bodies and rolling in apparent courtship.
Exploitation, protection and prospects for recovery
In a reconstruction of the Davis Strait stock's catch history, Mitchell & Reeves (1982) concluded that no bowheads had been killed by Greenlanders after 1956. However, a published summary of Greenland hunting statistics (Anonymous 1977), supplemented by unpublished correspondence (letters between L. Vesterbirk, Ministry for Greenland, Copenhagen, and E. Vangstein, International Whaling Statistics, Sandefjord, Norway, 1973-74), indicates that one bowhead was taken at Qeqertarsuaq in April 1973. Thus, although commercial whaling for bowheads had ended in both Canada and Greenland by about 1915, some killing continued in both countries into the 1970s (Mitchell & Reeves 1982). Documented post-1915 kills in Greenland include one at Qeqertarsuaq in April 1928 (Rosendahl 1957) and one in the Atammik-Napasoq area (approx. 64°30′-65°00′N) in March 1956 (Freuchen & Salomonsen 1958), in addition to the one at Qeqertarsuaq in 1973.
Bowheads have been legally protected by international agreements for more than 50 years, although the International Whaling Commission (IWC) sanctions continued hunting by Alaskan Eskimos for subsistence (Gambell 1993). The Greenlanders' hunting of minke whales, Balaenoptera acutorostrata, and fin whales, Balaenoptera physalus, is managed by a quota system under the IWC's aboriginal subsistence whaling scheme (Gambell 1993; Caulfield 1993). The bowhead is fully protected in Greenland waters.
Canada withdrew from the IWC in 1982, and bowhead whaling was initiated on a small scale in the western Canadian Arctic (Bering-Beaufort-Chukchi Seas stock) in 1991 (Freeman et al. 1992). A similar initiative in the eastern Canadian Arctic began prematurely in 1994, when a small bowhead was taken illegally in northern Foxe Basin (Anonymous 1994; George 1995). The establishment of a "total allowable harvest" of at least one bowhead in the eastern Canadian Arctic was mandated under the recent Nunavut land-claim agreement (Indian & Northern Affairs Canada 1993). Initiation of whaling was to be preceded by the commencement of "an Inuit knowledge study to record sightings, location and concentrations of bowhead whales in the Nunavut Settlement Area" (Indian & Northern Affairs Canada 1993, p. 38). Early results from the study reflect the firm belief by Inuit that bowhead numbers have increased substantially since the end of commercial whaling (Anonymous 1995). Aerial surveys in northern Foxe Basin and northwestern Hudson Bay in 1994 and 1995 have been interpreted as suggesting that at least 200-300 bowheads summer there (Anonymous 1996). A large bowhead was taken in a legal hunt at Repulse Bay, northwestern Hudson Bay, on 15 August 1996 (Rassel 1996).
Intensive scientific monitoring of some other severely depleted stocks of baleen whales has demonstrated that they are recovering under protection or conservative hunt management (Best 1993). The data presently available from surveys are clearly inadequate for demonstrating recent or current trends in the Davis Strait/Baffin Bay stock of bowheads. They do indicate, however, that the total population size remains a small fraction of what it was prior to commercial whaling. Substantial recovery seems not to have occurred in the Greenland portion of this stock's range. Hunters and fishermen in West Greenland have reported few encounters with bowheads and have not suggested that the stock is increasing. The paucity of sightings cannot be explained by too little time spent at sea in late winter and early spring. In fact, much boating and shipping activity takes place in the open water and loose pack ice along the west coast of Greenland, where bowheads were once seasonally plentiful. The rarity of sightings during aerial surveys is consistent with the shortage of reports by local people.
If the Davis Strait/Baffin Bay bowhead stock were increasing, it would appear to be doing so mainly in the western part of its range. Our data from West Greenland corroborate the conclusions of Davis & Koski (1980), Finley (1990) and Zeh et al. (1993) that the stock is at a small fraction of its pre-whaling level of abundance and that any recovery since the early 1900s has been very slow.
Initiation of whaling on bowheads in the eastern North American Arctic must be considered in an international context. Some whales move back and forth between Canada and Greenland, possibly on an annual basis. Others may occur during the winter in the Labrador Sea outside any coastal state's Exclusive Economic Zone. If an objective of management is to allow for substantial population recovery, and we believe that it should be, then complete or nearly complete protection from hunting will be necessary for decades into the future. Also, we believe that Canada has an obligation to seek international cooperation on bowhead conservation and to subject its programmes of bowhead population assessment and hunt management to independent scrutiny.
Acknowledgements
We are grateful to the Fisheries Directorate of the Greenland Home Rule Government for sponsoring the white whale and walrus surveys during 1990-94. Our fellow whale spotters, J. Teilmann, J. Jensen and L. Petersen (who also piloted the aircraft), deserve special thanks. The surveys in 1981 and 1982 were conducted by LGL Ltd. and funded by the Arctic Pilot Project. J. Jensen kindly compiled the information on post-1915 bowhead hunting in Greenland. D. Boertmann and A. Mosbech of the Danish National Environmental Research Institute, Department of Arctic Environment, Copenhagen, very generously allowed us to cite their unpublished bowhead observations. P. Barry helped prepare the map figures. Data analysis and manuscript preparation were made possible by the support of the Greenland Institute of Natural Resources, Nuuk, and by a grant from the Whale & Dolphin Conservation Society, Bath, Avon, U.K.
[Figure captions (Figs. 3 and 4): survey maps; differences among the flight lines reflect the actual routes flown. The bowhead stratum is bounded by broken lines. Two of the animals shown lie outside the stratum.]
Bowheads in the West Greenland wintering area at any one time represent only a part of the Davis Strait/Baffin Bay stock. According to present understanding, this stock includes all of the whales that summer in the eastern Canadian High Arctic or Northwest Greenland and migrate southwards in autumn along either the east coast of Baffin Island or West Greenland. Finley & Renaud (1980) found no bowheads in the Baffin Bay "North Water", a flaw lead system (or polynya) in northern Baffin Bay, during aerial surveys in March-April 1978 and March 1979, but two sightings were made during a similar survey in March 1993: one animal at the mouth of Jones Sound and another northwest of Coburg Island (P. R. Richard, Canadian Department of Fisheries & Oceans, Winnipeg, pers. comm.).
"year": 1996,
"sha1": "50871c6ac2b5e9fd7da7476d33df7dc8e6341a21",
"oa_license": "CCBYNC",
"oa_url": "https://polarresearch.net/index.php/polar/article/download/1924/5173",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "50871c6ac2b5e9fd7da7476d33df7dc8e6341a21",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
Satellite-based, top-down approach for the adjustment of aerosol precursor emissions over East Asia: the TROPOspheric Monitoring Instrument (TROPOMI) NO 2 product and the Geostationary Environment Monitoring Spectrometer (GEMS) aerosol optical depth (AOD) data fusion product and its proxy
In response to the need for an up-to-date emissions inventory and the recent achievement of geostationary observations afforded by the Geostationary Environment Monitoring Spectrometer (GEMS) and its sister instruments, this study aims to establish a top-down approach for adjusting aerosol precursor emissions over East Asia. This study involves a series of the TROPOspheric Monitoring Instrument (TROPOMI) NO 2 product, the GEMS aerosol optical depth (AOD) data fusion product, and its proxy. The emissions adjustments, supported by the TROPOMI- and GEMS-involved data fusion products, performed in this study are generally effective at reducing model biases in simulations of aerosol loading over East Asia; in particular, the model performance tends to improve to a greater extent on the condition that spatiotemporally more continuous and frequent observational references are used to capture variations in bottom-up estimates of emissions. In addition to reconfirming the close association between aerosol precursor emissions and AOD as well as surface PM 2.5 concentrations, the findings of this study could provide a useful basis for how to most effectively exploit multisource top-down information for capturing highly varying anthropogenic emissions.
Introduction
In East Asia, atmospheric aerosols, such as particulate matter (PM), have been a focus of great concern because of their adverse impact on public health and safety, accompanied by rapid urban and industrial growth that has elevated levels of anthropogenic emissions over time (Hatakeyama et al., 2001; Ohara et al., 2007). In response to the growing interest in airborne hazards, many research entities, using ground-based networks of monitoring sites in many industrial regions and megacities over East Asia, have devoted considerable effort to systematically monitoring local and regional air quality. Unfortunately, the limited number of stations often impedes efforts to secure efficient sampling coverage and data availability for aerosol studies (Tian and Chen, 2010).
To overcome this limitation, many research entities have substantially improved the collection and, thus, the availability of satellite observational data, which enables them to estimate the spatiotemporal distributions of aerosols over vast areas that are not in close proximity to monitoring sites (Levy et al., 2013). A variety of aerosol products derived from sun-synchronous low Earth orbit (LEO) satellite instruments, such as the Advanced Very High Resolution Radiometer, the Visible Infrared Imaging Radiometer (VIIRS), the MODerate-resolution Imaging Spectroradiometer (MODIS), and the Multiangle Imaging SpectroRadiometer (MISR), have been available for many years (Chan et al., 2013; Ahn et al., 2014; Levy et al., 2015; Garay et al., 2020). For example, researchers have conducted a number of comprehensive air quality assessments on local to global scales using the aerosol optical depth (AOD), an essential property of aerosols that represents columnar aerosol loadings in the atmosphere (Bellouin et al., 2005; Remer et al., 2008; Filonchyk et al., 2019; Jung et al., 2019, 2021). Since the recent advent of satellite products, researchers have increased their efforts to use top-down observational data to improve the performance of chemical transport models (CTMs) such as the Community Multiscale Air Quality (CMAQ) model (Byun and Schere, 2006). A number of studies have applied satellite data in CTM-based numerical approaches, such as inverse modeling and data assimilation, to reduce the uncertainties in bottom-up estimates of air pollutant emissions and perform more accurate air quality simulations (Ku and Park, 2013; Koo et al., 2015; Pang et al., 2018; Xia et al., 2019; Wang et al., 2020; Li et al., 2021); most of these studies, however, share a common challenge in resolving uncertainties originating from retrieval discontinuity (i.e., coarse orbiting cycles of satellite instruments and cloud contamination). Using Ozone Mapping and Profiler Suite products, Wang et al. (2020) performed top-down optimizations of nitrogen dioxide (NO 2 ) and sulfur dioxide (SO 2 ) emissions and examined the sensitivity of AOD to concentrations of secondary inorganic aerosols over East Asia. Their results suggested a need for spatiotemporally more continuous satellite data. To improve model estimates of the AOD over East Asia, Li et al. (2021) used Ozone Monitoring Instrument data to perform a top-down inversion of SO 2 emissions. Their results emphasized the need for satellite data at finer temporal scales, which would allow one to capture highly variable SO 2 emissions over East Asia.
To address such instrument-inherent challenges, researchers have developed a number of approaches to applying more continuous and frequent observational data afforded by geostationary Earth orbit (GEO) satellite instruments; temporal resolutions of GEO satellite instruments (e.g., from a few minutes to an hour) are relatively finer than those of LEO satellite instruments on a 12 h orbit cycle at best over given geographic locations (Vijayaraghavan et al., 2008). Leveraging aerosol product data derived from GEO satellite instruments such as the Geostationary Ocean Color Imager (GOCI) and the Advanced Himawari Imager (AHI), several CTM-based studies have shown substantial improvements in model performances in estimating aerosol loadings in East Asia (Jeon et al., 2016; Lee et al., 2016; Yumimoto et al., 2016; Jin et al., 2019). In addition, in response to the increasing demand for satellite data available at finer temporal resolutions, the Committee on Earth Observation Satellites has led an international effort to coordinate a new constellation of GEO satellite instruments for monitoring the behaviors of atmospheric constituents over the globe at faster sampling rates. For example, the Geostationary Environment Monitoring Spectrometer (GEMS), jointly developed by the Korea Aerospace Research Institute and Ball Aerospace, was launched on board the Geostationary KOrea Multi-Purpose SATellite 2B (GEO-KOMPSAT-2B) satellite in 2020 as the first ultraviolet-visible (UV-Vis) instrument of its kind that can measure the columnar loadings of both trace gases and aerosols over the Asia-Pacific region in a geostationary manner up to eight times during daytime; before the advent of the GEMS mission, all UV-Vis instruments had been operating on LEO platforms. Furthermore, equipped with similar observational capabilities, a series of GEO satellite instruments are planned to be launched in 2023 to finish building the future constellation, which includes NASA's Tropospheric Emissions: Monitoring of Pollution (TEMPO) above North America (Zoogman et al., 2011) and the European Space Agency Sentinel-4 above Europe and northern Africa, ultimately serving the needs of more detailed and frequent air quality measurements over the Northern Hemisphere.
In addition to taking advantage of such finer spatiotemporal resolutions afforded by GEO satellite instruments, researchers have developed numerous data fusion approaches to integrating atmospheric properties retrieved by multiple individual instruments in order to further improve the quality of satellite products; the products derived from multiple instruments can be spatiotemporally complementary in terms of the completeness of observational data (Zou et al., 2020). For example, several studies have fused multisource satellite products to yield more accurate estimates of air quality over East Asia (Choi et al., 2019; Go et al., 2020; Lim et al., 2021) in response to the upcoming releases of GEMS products. Choi et al. (2019) fused multiple aerosol products afforded by three LEO satellite instruments (i.e., MODIS, MISR, and VIIRS) and two GEO satellite instruments (i.e., GOCI and AHI) to examine how effective the data fusion approach was at improving the accuracy of AOD estimates over East Asia. Their results showed that the multisource aerosol product could substantially improve observational coverage and frequency, and its AOD estimates showed closer spatiotemporal agreement with in situ ground-based measurements at AErosol RObotic NETwork (AERONET) sites (Holben et al., 1998) than the AOD estimates provided by each of the individual satellite instruments. As a follow-up study over East Asia, Lim et al. (2021) fused the GOCI AOD with the AHI AOD (hereafter referred to as GOCI-AHI AOD) by using multisource aerosol properties and land surface parameters in ensemble-mean and maximum-likelihood-estimation (MLE) methods in order to reduce observational and systematic biases occurring during the retrieval process. Their multisource AOD estimates showed substantially improved agreement with AERONET AOD measurements over East Asia, which they considered to be the result of complementary retrievals that reduced the number of pixels with missing values and ensured more cloud-free pixels (Lim et al., 2021). Note that their study aimed to develop and examine data fusion algorithms for near-future use, which would be applied to producing synergistic satellite products after the full product releases of GEMS and its sister instruments, including the Advanced Meteorological Imager (AMI) on board the GEO-KOMPSAT-2A satellite and the Geostationary Ocean Color Imager 2 (GOCI-2) on board the GEO-KOMPSAT-2B satellite.
Despite the availability of the many numerical approaches and data fusion techniques for reducing the uncertainties in the model and observations, efforts to couple them have not been sufficiently rigorous over East Asia; therefore, this study aimed to examine the utility of synergistic satellite observation data in improving the performance of CTM-based simulations of aerosol loadings over East Asia. Hypothesizing that finer spatiotemporal resolutions of multisource data fusion products would provide more observational references available for use, we employed the GEMS data fusion product and its proxy data in adjusting the emissions inventory in East Asia in a top-down manner. This study largely consists of two phases: (1) the implementation and evaluation of emissions adjustments using the TROPOspheric Monitoring Instrument (TROPOMI) tropospheric NO 2 columns (hereafter referred to as TROPOMI NO 2 columns) and the AHI AOD and GOCI-AHI fused AOD (the proxies of the GEMS AOD and GEMS-AMI-GOCI-2 fused AOD, respectively) for the simulation year 2019 and (2) the application of the emissions adjustment approach using the TROPOMI NO 2 columns and GEMS-AMI-GOCI-2 fused AOD for the spring of 2022. For the former period, which represents the most recent year before the COVID-19 outbreak in this study, we first performed inverse modeling to constrain bottom-up estimates of nitrogen oxide (NO x ) emissions using TROPOMI NO 2 columns, and then we constrained bottom-up estimates of primary PM emissions using each of the AHI AOD and GOCI-AHI fused AOD. Prior to proceeding with the second phase, we compared the model performances from using the single-instrument- and multisource-derived AOD products in constraining primary PM emissions. For the latter period, which was considered to be severely affected by the resumptions of city- and province-wide lockdowns in China (Dyer, 2022), we used the TROPOMI NO 2 columns and GEMS-AMI-GOCI-2 fused AOD to sequentially constrain NO x and primary PM emissions based on the earlier top-down approach. Note that we did not focus on gaseous air pollutants other than NO x , considering the future application of the GEMS tropospheric NO 2 product, which was recently released (as of 23 November 2022) by the Environmental Satellite Center of the Korean National Institute of Environmental Research (NIER) (https://nesc.nier.go.kr/, last access: 15 July 2022). Then, using a series of a posteriori emissions (i.e., NO x -constrained emissions and NO x - and primary PM-constrained emissions) in CMAQ, we simulated AOD and PM 2.5 concentrations over East Asia to examine the utility of the GEMS-involved synergistic product in inverse modeling and ultimately to improve model performances in estimating aerosol loadings over East Asia.
Modeling setup and preparation of base emissions
Using the 2016 KORUS-AQ emissions inventory version 5.0 developed by Konkuk University (Woo et al., 2020), we prepared CMAQ-ready anthropogenic emissions inputs over the modeling domain, shown in Fig. 1, which encloses the eastern half of China, the Korean Peninsula, the southern Russian Far East, and Japan. The KORUS-AQ emissions inventory consists of multiple individual emissions inventories, including the Comprehensive Regional Emissions for Atmospheric Transport Experiments version 2.3 (Jang et al., 2019), Clean Air Policy Support System 2015 (Yeo et al., 2019), and Studies of Emissions and Atmospheric Composition, Clouds, and Climate Coupling by Regional Surveys (Toon et al., 2016). To prepare biogenic emissions inputs, we employed the Model of Emissions of Gases and Aerosols from Nature (MEGAN) version 3.0 (Guenther et al., 2012), which can speciate, quantify, and regrid biogenic emissions from terrestrial ecosystems based on a series of input data (e.g., meteorological fields and land surface parameters) (Guenther et al., 2006, 2020). We used reprocessed MODIS version 6 leaf area index (LAI) products (Yuan et al., 2011) and VIIRS global green vegetation fraction (GVF) products (Jiang et al., 2016) as input data for MEGAN. We merged the anthropogenic and biogenic emissions to prepare the a priori emissions inputs (hereafter referred to as base emissions).
To simulate the meteorological fields and ambient concentrations of gaseous air pollutants and aerosols for each of the study periods, we used the Weather Research and Forecasting (WRF) model version 3.8 developed by the National Center for Atmospheric Research (NCAR) (Skamarock et al., 2008) and CMAQ version 5.2 developed by the U.S. Environmental Protection Agency (EPA) (Byun and Schere, 2006). Employing the same modeling setups and initial conditions used in our previous studies over East Asia (Jung et al., 2019, 2021; Pouyaei et al., 2020, 2021; Park et al., 2022), we configured WRF and CMAQ to cover the modeling domain at a horizontal resolution of 27 km with 35 vertical variable-thickness layers from the surface up to 100 hPa. Detailed model configurations are listed in Table S1 in the Supplement. Then, using the WRF-simulated meteorological fields and base emissions in CMAQ, we simulated NO 2 columns and concentrations, AOD, and PM 2.5 concentrations over the modeling domain for the entire year 2019 and the period from March to May 2022. For each of these two study periods, we initiated both WRF and CMAQ simulations with a 10 d spin-up time.
TROPOMI NO 2 product
TROPOMI, a LEO satellite instrument launched on board the Copernicus Sentinel-5 Precursor satellite in 2017, provides global observations of trace gases and aerosols (Veefkind et al., 2012). To obtain daily tropospheric NO 2 and SO 2 column densities observed during the study periods, we used TROPOMI Level-2 NO 2 and SO 2 products. The spatial resolution of TROPOMI was initially 3.5 km × 7 km and was improved to 3.5 km × 5.5 km in early August 2019. The daily acquisition time of the column data was approximately 04:30 UTC, when the instrument overpassed the modeling domain during the study period. For the NO 2 columns, we used pixels with quality assurance values (qa_values) larger than 0.75 and cloud fractions smaller than 0.3. To ensure consistency in the horizontal spacings between the TROPOMI NO 2 columns and CMAQ's modeling grids, we regridded the TROPOMI NO 2 columns into 27 km × 27 km grids by using distance-weighted means of those observation references within a radius of 0.25° (approximately 27 km).
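As a minimal sketch of this regridding step (not the authors' code), the Python function below averages quality-screened satellite pixels around a grid-cell centre; the paper specifies distance-weighted means within 0.25° but not the weighting kernel, so inverse distance is assumed here:

```python
import numpy as np

def distance_weighted_mean(obs_lat, obs_lon, obs_val, cell_lat, cell_lon,
                           radius_deg=0.25):
    """Distance-weighted mean of satellite pixels near a model grid-cell centre.

    obs_lat, obs_lon, obs_val: 1-D arrays of quality-screened pixel centres/values.
    Returns NaN when no pixel falls within the search radius.
    """
    obs_lat, obs_lon = np.asarray(obs_lat), np.asarray(obs_lon)
    obs_val = np.asarray(obs_val)
    d = np.hypot(obs_lat - cell_lat, obs_lon - cell_lon)  # planar distance in degrees
    inside = d < radius_deg
    if not inside.any():
        return np.nan
    w = 1.0 / np.maximum(d[inside], 1e-6)  # inverse-distance weights; guard d = 0
    return float(np.sum(w * obs_val[inside]) / np.sum(w))
```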
AHI AOD and GOCI-AHI fused AOD products
The AHI, a GEO satellite instrument launched on board the Himawari-8 geostationary meteorological satellite in 2014, provides regional observations of aerosol properties over the East Asia and western Pacific regions in a spatiotemporally continuous manner (Okuyama et al., 2015; Bessho et al., 2016). For the study period 2019, we used the Japan Aerospace Exploration Agency (JAXA) AHI Level-3 aerosol product to obtain the hourly estimates of AOD over the modeling domain, the spatiotemporal resolutions of which are 0.05° × 0.05° and 1 h for eight consecutive daytime (00:30 to 07:30 UTC) retrievals per day. To ensure consistency between the observed AOD and modeled AOD, the latter of which was estimated based on the light extinction of aerosols at a wavelength of 550 nm (Pitchford et al., 2007), we converted the AHI AOD retrieved at a 500 nm wavelength to that at a 550 nm wavelength following Eq. (1) (Ångström, 1961):

AOD_550nm = AOD_500nm × (550/500)^(−AE),   (1)

where AOD_550nm and AOD_500nm are the AODs at 550 and 500 nm wavelengths, respectively, and AE is the Ångström exponent at 400-600 nm wavelengths provided in the AHI aerosol product. To ensure the retrieval quality, we used pixels with quality assurance values (AOT_merged_uncertainty) smaller than 1 (very good and good retrievals).
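The spectral conversion in Eq. (1) is a one-line application of the Ångström power law; a small Python sketch (illustrative values, not from the paper):

```python
def aod_500_to_550(aod_500, angstrom_exponent):
    """Convert AOD at 500 nm to 550 nm via the Angstrom power law (Eq. 1)."""
    return aod_500 * (550.0 / 500.0) ** (-angstrom_exponent)

# Example: AOD(500 nm) = 0.40 with AE = 1.3 gives AOD(550 nm) of about 0.353.
print(aod_500_to_550(0.40, 1.3))
```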
To explore the utility of the synergistic observational data in the emissions adjustments, we employed the GOCI-AHI fused AOD product developed by Lim et al. (2021), which provides near-real-time bias-corrected AOD estimates over East Asia, taking advantage of multisource retrievals of aerosol optical properties that complement each other. GOCI, a GEO satellite instrument launched on board the Communication, Ocean and Meteorological Satellite (COMS-1) in 2010, provides regional observations of ocean environments (i.e., sea surface albedo and reflectance) and aerosol properties (i.e., the AOD) over the East Asia and western Pacific regions (Lee et al., 2010). The GOCI-AHI AOD product affords the best compromise among four individual retrievals postprocessed based on Yonsei Aerosol Retrieval (YAER) algorithms (M. Choi et al., 2016); the data fusion process comprises a series of postprocessing and data fusion techniques to complement the error characteristics of each other (i.e., spatiotemporal collocation, the cloud removal process, the ensemble-mean method, the MLE method, and systematic bias correction based on the long-term validation of AERONET AOD measurements) (Lim et al., 2021). For the study period 2019, we used the GOCI-AHI fused AOD product to obtain hourly estimates of the AOD (at a 550 nm wavelength) over the modeling domain, the spatial resolution of which was initially 6 km × 6 km and regridded into 0.05° × 0.05° and the temporal resolution of which is identical to that of the AHI AOD product described above. The consistency in the grid spacings among the AHI AOD, GOCI-AHI AOD, and CMAQ's modeling grids was ensured by the same approach described in Sect. 2.2 above.
GEMS-AMI-GOCI-2 fused AOD product
The GEMS-AMI-GOCI-2 fused AOD product is a synergistic science product jointly developed by Yonsei University, Chungnam National University, and the Korean NIER based on their earlier data fusion approach applied to the GOCI-AHI fused AOD product (the proxy of the GEMS-AMI-GOCI-2 fused AOD product in this study) described in Sect. 2.3. GEMS provides hourly daytime observations of the columnar loadings of gaseous air pollutants (i.e., ozone, NO 2 , SO 2 , formaldehyde, and glyoxal) and aerosols (i.e., the AOD). The AMI, a meteorological satellite instrument, provides regional observations of meteorology (i.e., cloud mask) and terrestrial environments (i.e., vegetation indices, surface reflectivity, albedo, and turbid water) as well as aerosol optical properties (i.e., fine-mode fraction (FMF) and AOD) every 10 min at spatial resolutions of 0.5-1.0 km for visible channels and of 2 km for near-infrared and infrared channels (Chung et al., 2020; Kim et al., 2021). GOCI-2, an advanced ocean color imager that succeeded the mission of GOCI, provides hourly observations of ocean environments (i.e., ocean current, green tide, and red tide) and aerosol optical properties (i.e., FMF and AOD) over the ocean surface at a full-domain spatial resolution of 1 km. Note that these three individual instruments are in operation on board two sister GEO platforms (i.e., the AMI on board GEO-KOMPSAT-2A, and GEMS and GOCI-2 on board GEO-KOMPSAT-2B) over the Asia-Pacific region. To create the best synergy from the superiorities of these instruments over each other (i.e., GEMS' retrieval accuracy over bright surfaces and the AMI's and GOCI-2's sampling performances over cloud-free pixels at finer spatiotemporal resolutions), the data fusion process utilizes the GEMS Level-2 aerosol product version 1 and the AMI and GOCI-2 aerosol products postprocessed based on the YAER algorithm to produce the GEMS-AMI-GOCI-2 AOD product. For the study period 2022, we used the GEMS-AMI-GOCI-2 AOD product to obtain the hourly estimates of AOD (at a 550 nm wavelength) collocated into spatiotemporal resolutions identical to those of the AHI AOD and GOCI-AHI AOD products described earlier. Detailed information about the data fusion process is provided by M. Choi et al.
2.5 Top-down approaches for NO x and primary PM emissions adjustments
Emissions adjustments for the study period 2019
To constrain the NO x and primary PM emissions based on top-down information provided by satellite instruments for the study period 2019, we employed a series of inverse modeling techniques. To adjust the a priori NO x emissions, we performed analytical (or Bayesian) inverse modeling towards mathematically minimizing the difference between the TROPOMI NO 2 and CMAQ-simulated NO 2 columns based on the cost function in Eq. (2), under the assumptions that (1) the relationship between the changes in NO 2 columns and NO x emissions is not rigorously nonlinear, (2) observation and emission error covariances are described by zero-bias Gaussian probability density functions, and (3) observation and emission error covariances are independent of each other (Rodgers, 2000):

J(x) = (x − x_a)^T S_e^{−1} (x − x_a) + (F(x) − y)^T S_o^{−1} (F(x) − y),   (2)

where x is the a posteriori NO x emissions, x_a the a priori NO x emissions, y the observed TROPOMI NO 2 columns, S_o the observational error covariance provided in the TROPOMI NO 2 product, and S_e the error covariance of the a priori NO x emissions, the uncertainty of which was calculated by combining the error covariances of anthropogenic (50 %) and biogenic (200 %) NO x emissions (Souri et al., 2020). F is the first-order sensitivity coefficient that correlates NO x emissions with tropospheric NO 2 columns. We used the CMAQ decoupled direct method in three dimensions (CMAQ DDM-3D) version 5.2 (Napelenok et al., 2006) to compute the initial sensitivity coefficient, a measure of the responses of modeled NO 2 columns to changes in NO x emissions. We used the same model configurations in CMAQ DDM-3D as those used in CMAQ described in Sect. 2.1. To infer the a posteriori emissions, we used the Gauss-Newton method in Eq. (3) (Rodgers, 2000):

x_{i+1} = x_i + (K_i^T S_o^{−1} K_i + S_e^{−1})^{−1} [K_i^T S_o^{−1} (y − F(x_i)) − S_e^{−1} (x_i − x_a)],   (3)

where i is the number of iterations and K is the Jacobian matrix calculated in CMAQ DDM-3D. We iterated Eq. (3) two times within each month to attain convergence, and F and K were updated after each iteration. It should be noted that we derived log(x) instead of x to constrain negative a posteriori values, the details of which are described in Souri et al. (2018). Then, we applied the monthly emissions adjustment ratios derived from Eqs. (2) and (3) to the base emissions to update the bottom-up estimates of NO x emissions over the modeling domain (hereafter referred to as 2019 NO x -constrained emissions). Further details about the analytical inverse modeling approach employed in this study are provided by Souri et al. (2020).
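For concreteness, a minimal numerical sketch of one Gauss-Newton update (Eq. 3) is given below. It assumes diagonal error covariances and a generic state vector (which may hold log-emissions, as in the change of variables described above); this illustrates the update formula only and is not the authors' implementation:

```python
import numpy as np

def gauss_newton_step(x, x_a, y_obs, F_x, K, s_o_var, s_e_var):
    """One Gauss-Newton update of the emissions state vector (Eq. 3).

    x, x_a           : current and a priori state vectors (e.g., log-emissions)
    y_obs            : observed NO2 columns; F_x: modelled columns at x
    K                : Jacobian d(columns)/d(emissions), e.g. from CMAQ DDM-3D
    s_o_var, s_e_var : diagonal observation / emission error variances
    """
    So_inv = np.diag(1.0 / np.asarray(s_o_var))
    Se_inv = np.diag(1.0 / np.asarray(s_e_var))
    hessian = K.T @ So_inv @ K + Se_inv
    gradient = K.T @ So_inv @ (y_obs - F_x) - Se_inv @ (x - x_a)
    return x + np.linalg.solve(hessian, gradient)
```

In practice the step would be repeated (here, twice per month) with K and the modelled columns refreshed after each iteration, mirroring the procedure described in the text.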
To adjust the primary PM emissions, we applied the analytical inversion described in Eqs. (2) and (3) to the emissions of the 19 primary PM species predefined as contributors to the AOD in the sixth-generation CMAQ aerosol module (AERO6) (Simon, 2015), listed in Table S2. Note that the primary PM emissions, hereafter, refer to the summation of the emissions of all 19 individual primary PM species. In Eq. (2), x is the a posteriori primary PM emissions, x_a the a priori primary PM emissions (in the NO x -constrained emissions inventory obtained earlier), and S_e the error covariance of the a priori primary PM emissions, the uncertainty of which was set to 100 % (Crippa et al., 2019). For S_o, we employed ±(0.1 + 0.3 × AOD) (Zhang et al., 2018) and ±(0.043 + 0.178 × AOD) (Lim et al., 2021) as the observational error covariances of the AHI AOD and GOCI-AHI AOD, respectively. To compute F, since CMAQ DDM-3D is not available for aerosols, we employed the brute-force method (BFM) described in Eq. (4) (Napelenok et al., 2006):

F_bfm = (C_{+10%} − C_{−10%}) / (2 × 0.1 × E),   (4)

where F_bfm is the approximate first-order sensitivity coefficient that correlates primary PM emissions with the AOD, C_{+10%} the CMAQ-simulated AOD with the primary PM emissions perturbed by +10 %, C_{−10%} the CMAQ-simulated AOD with the primary PM emissions perturbed by −10 %, and E the unperturbed primary PM emissions. In this approach, F_bfm represents the sensitivity of the AOD with regard to changes in the total primary PM emissions; therefore, the resultant adjustment ratio was applied to the emissions of each of the primary PM species equally, not in a selective manner, due to the limited data availability. Note that no routine observations have been made to date of the loadings of such species over vast areas in East Asia in a top-down manner. We applied the daily emissions adjustment ratios derived from Eqs. (2), (3), and (4) to the 2019 NO x -constrained emissions to update the bottom-up estimates of primary PM emissions over the modeling domain (hereafter referred to as 2019 NO x - and PM-constrained emissions). To evaluate the model performance before and after the application of the sequential emissions adjustments, we used the series of a priori and a posteriori emissions (i.e., the base emissions, the 2019 NO x -constrained emissions, and a pair of 2019 NO x - and PM-constrained emissions using the AHI AOD and GOCI-AHI AOD) to perform CMAQ simulations for the study period 2019. It should be noted that NO x emissions were adjusted monthly due to the relatively coarse temporal resolution of the TROPOMI NO 2 columns (providing zero to one valid snapshot of columnar NO 2 per day over the modeling domain), while primary PM emissions were adjusted daily by using the AOD products at the sufficiently fine temporal resolutions afforded by geostationary platforms.
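The BFM sensitivity of Eq. (4) reduces to a central difference between two perturbed CMAQ runs. A sketch, assuming ±10 % perturbations as in the text (function name hypothetical):

```python
def bfm_sensitivity(c_plus_10pct, c_minus_10pct, emissions):
    """Central-difference sensitivity of AOD to primary PM emissions (Eq. 4):
    F_bfm = (C_+10% - C_-10%) / (2 * 0.1 * E)."""
    return (c_plus_10pct - c_minus_10pct) / (2.0 * 0.1 * emissions)
```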
Emissions adjustments for the study period 2022
Similar to the approach described in Sect. 2.5.1, we first adjusted NO x emissions by using the TROPOMI NO 2 columns obtained for the study period 2022 prior to proceeding with the primary PM emissions adjustment. To adjust the a priori NO x emissions, we employed the basic mass balance method described by Martin et al. (2003) and Cooper et al. (2017). Assuming a direct linear relationship between changes in the NO 2 columns and the NO x emissions, we adjusted the a priori NO x emissions based on the ratios between the TROPOMI NO 2 columns of the two study periods following Eq. (5):

E_2022 = E_2019 × (Ω_2022 / Ω_2019),   (5)

where E_2022 represents the a posteriori NO x emissions, E_2019 the a priori NO x emissions (from the 2019 NO x -constrained emissions described in Sect. 2.5.1), and Ω_2019 and Ω_2022 the TROPOMI NO 2 columns obtained for the study periods 2019 and 2022, respectively. We then applied the monthly emissions adjustment ratios derived from Eq. (5) to the 2019 NO x -constrained emissions to update the NO x emissions for the study period 2022 (hereafter referred to as 2022 NO x -constrained emissions). Then, to adjust the primary PM emissions, we used the GEMS-AMI-GOCI-2 AOD obtained for the study period 2022 to perform the analytical inversion and BFM described in Sect. 2.5.1, using an S_o of ±(−0.001 + 0.48 × AOD) provided in the GEMS-AMI-GOCI-2 AOD product and the perturbed (±10 %) primary PM emissions. The daily emissions adjustment ratios were applied to the 2022 NO x -constrained emissions to update the primary PM emissions (hereafter referred to as 2022 NO x - and PM-constrained emissions). Using the base emissions, the 2022 NO x -constrained emissions, and the 2022 NO x - and PM-constrained emissions to perform CMAQ simulations for the study period 2022, we evaluated the performance of the model.
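The mass-balance scaling of Eq. (5) is likewise a one-line operation; a sketch (hypothetical function name, grid-wise arrays assumed):

```python
def mass_balance_scaling(e_2019, omega_2019, omega_2022):
    """Mass-balance update of NOx emissions (Eq. 5):
    E_2022 = E_2019 * (Omega_2022 / Omega_2019), applied grid cell by grid cell."""
    return e_2019 * (omega_2022 / omega_2019)
```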
Ground-based measurements for model evaluation
To evaluate the model performance, we used ground-based in situ observations across South Korea (hereafter referred to as Korea) and the North China Plain (NCP) region. To validate the accuracy of the WRF-simulated meteorological fields, we obtained hourly measurements of the 2 m air temperature and 10 m wind U and V components from the Korean Meteorological Administration database (132 sites for 2019 and 95 for 2022). The WRF-simulated hourly meteorological fields showed fair agreement with the in situ measurements (Figs. S1 and S2 in the Supplement; Table S3), which we considered sufficient for further use as meteorological inputs for CMAQ.
To evaluate the performance of the CMAQ model, we obtained the hourly measurements of surface NO 2 and PM 2.5 concentrations from the AirKorea website (https://www.airkorea.or.kr, last access: 15 July 2022) (346 sites for 2019 and 425 for 2022) and from the Chinese Ministry of Ecology and Environment (MEE) database (235 sites for 2019 and 312 sites for 2022), as well as hourly sun-photometer measurements of the AOD (at a 550 nm wavelength) (85 AERONET sites for 2019). To ensure the quality of the validation sets, we excluded observation sites in which the frequency of missing values exceeded 50 % of all observations made during the study period. For the measurements collected from the MEE sites, we applied the quality assurance processes (e.g., elimination of negative values) that we used in our previous study over mainland China (Mousavinezhad et al., 2021). To quantify the extent of model overestimation and underestimation, we employed the normalized mean bias (NMB) following Eq. (6):

NMB = [Σ_{i=1}^{n} (M_i − O_i) / Σ_{i=1}^{n} O_i] × 100 %,   (6)

where M represents the model predictions, O the observations, and n the total number of pairs. To discuss the success of the sequential NO x and primary PM emissions adjustments described in Sect. 2.5, we obtained the seasonal compositions of surface PM 2.5 assessed at six ground-based supersites in Korea, the constituents of which include secondary inorganic aerosols (i.e., nitrate, sulfate, and ammonium aerosols), organic carbon (the total mass of both primary and secondary organic carbon), elemental carbon, the lumped summation of other PM species listed in Table S2, and the rest remaining undefined (the lumped summation of all unidentified species 2.5 µm or less in diameter, which still constitute the total PM 2.5 mass).
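The NMB of Eq. (6) can be computed directly from paired model-observation vectors; a short sketch with an illustrative input:

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """Normalized mean bias in percent (Eq. 6): sum(M - O) / sum(O) * 100."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 100.0 * np.sum(model - obs) / np.sum(obs)

# Example: a uniform 20 % underestimate yields NMB = -20 %.
print(normalized_mean_bias([0.8, 1.6], [1.0, 2.0]))
```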
Evaluation of the top-down approach using TROPOMI NO 2 , AHI AOD, and GOCI-AHI AOD
We performed a series of emissions adjustments by using the TROPOMI NO 2 columns, AHI AOD, and GOCI-AHI AOD as the constraints to update the bottom-up estimates of NO x and primary PM emissions over the modeling domain. Prior to proceeding with the primary PM emissions adjustments, we examined the model performance in simulating NO 2 columns during the study period 2019 on a seasonal basis. The model using the base emissions tended to underestimate NO 2 columns over the major portion of the modeling domain during the entire study period (Fig. S3). After the NO x emissions adjustment, which resulted in overall increases in NO x emissions by 71.64 %-174.16 % (Fig. S4; Table S4), the modeled NO 2 columns showed closer spatial agreement with the observed NO 2 columns (Fig. S5).

In addition to the NO x emissions adjustment, we performed primary PM emissions adjustments, followed by evaluating the model performance in simulating AOD during the study period 2019. To compare the use of the single-instrument- and multisource-derived AOD products for constraining primary PM emissions, we performed two separate emissions adjustments, using each of the AHI AOD and GOCI-AHI AOD as a constraint to update the primary PM emissions over the modeling domain. To ensure consistency in the comparisons, we spatially collocated the CMAQ-simulated AOD to each of the AHI AOD and GOCI-AHI AOD. We found that, during the entire study period, the model using the base emissions tended to underestimate AODs over a major portion of the modeling domain except for a few inland regions in China (Figs. 2a, b and 3a, b). After the NO x emissions adjustment, the modeled AODs showed closer spatial agreement with the observed AODs in Korea and the NCP region (Figs. 2c and 3c); the model, however, tended to overestimate the AODs in some inland regions such as southeastern China and the Sichuan Basin region. We consider this tendency to be the result of uncertainty in the bottom-up estimates of air pollutant emissions arising from the unique basin landform, which often encloses highly concentrated anthropogenic emissions (Chen et al., 2021). After the primary PM emissions adjustments using the AHI AOD and GOCI-AHI AOD, which resulted in overall increases in primary PM emissions by 19.55 %-31.79 % (Fig. S7; Table S6) and 87.54 %-142.96 % (Fig. S8; Table S6), respectively, the modeled AODs showed even closer spatial agreement with the observed AODs (Figs. 2d and 3d).
We then evaluated the performance of the model in simulating daily mean AODs at AERONET sites in time series and found that, overall, the series of emissions adjustments resulted in improvements in the performance of the model during the entire study period 2019. In brief, the model's initial underestimation of AOD was mitigated by the NO x emissions adjustment, which increased NO x emissions, and then by the subsequent primary PM emissions adjustment, which resulted in overall increases in primary PM emissions. While the model using the base emissions showed an average NMB of −50.73 % (Fig. 4a; Table 1a), the model using the 2019 NO x -constrained emissions showed an average NMB of −42.52 % (Fig. 4b; Table 1b). The model using the 2019 NO x - and PM-constrained emissions showed average NMBs of −33.84 % (using the AHI AOD) and −19.60 % (using the GOCI-AHI AOD) (Fig. 4c and d; Table 1c and d). These results indicate that the sequential adjustments of NO x and primary PM emissions were generally effective at improving model performance in simulating the AOD; in particular, the use of the multisource AOD product led to a greater reduction in model biases than that of the single-instrument AOD product. Despite the success of the sequential adjustments of NO x and primary PM emissions in improving the model's AOD simulations, uncertainties remain regarding the accuracy of the NO x emissions. For example, in the NCP region, the NO x emissions adjustment caused the model to overestimate surface NO 2 concentrations in some seasons and consequently increased the model biases. Nevertheless, this overestimation was shown to help the model to reduce its AOD underestimation. Addressing this issue requires the development of region-specific tactics for adjusting the bottom-up estimates of gas-phase air pollutant emissions in future studies.
Merits and limitations of the sequential emissions adjustments and the use of the data fusion product
Despite the many top-down approaches to achieving more up-to-date emissions inventories, questions still remain about the extent to which each of the aerosol components contributes to aerosol loadings. To ascertain the possible implications for our understanding of the sequential improvements in the performance of the model in simulating the AOD, we examined the chemical compositions of surface PM 2.5 in Korea during the study period 2019 on a seasonal basis. While a slightly larger portion (53.26 % on average) of surface PM 2.5 loadings was comprised of secondary inorganic aerosols such as nitrate, sulfate, and ammonium aerosols (20.90 %, 18.56 %, and 13.81 %, respectively), the remaining portion (46.74 % on average) was mostly comprised of primary PM and some secondary aerosols such as the organic carbon category used in this study (Table 2).
As both the contributions of primary and secondary aerosols to aerosol loadings were significant, we considered the sequential adjustments of NO x and primary PM emissions effective at improving model performance. However, setting aside the earlier improvements in the model performance achieved for the study period 2019, the observed chemical makeup of PM 2.5 implies that the adjustment of solely NO x emissions performed in this study might not have sufficiently reduced the uncertainty underlying the emissions of the precursors of other secondary inorganic aerosols, such as sulfate and ammonium aerosols. Considering the impending availability of the GEMS tropospheric NO 2 product in its mature stage, this study mainly focused on examining the utility of NO 2 columns, not the other gas-phase precursors, which could have been beneficial for constraining the remaining secondary inorganic aerosols. This limitation presents a need for follow-up research that employs more comprehensive sets of top-down constraints (e.g., observational references for SO 2 and ammonia loadings in the troposphere).
[Figure 3 caption: Spatial distributions of GOCI-AHI fused and CMAQ-simulated AODs before and after the NO x emissions adjustment (based on TROPOMI NO 2 columns) and primary PM emissions adjustment (based on GOCI-AHI AOD) during the study period 2019: (a) the GOCI-AHI AOD, (b) the CMAQ-simulated AOD using base emissions, (c) the CMAQ-simulated AOD using 2019 NO x -constrained emissions, and (d) the CMAQ-simulated AOD using 2019 NO x - and PM-constrained emissions. Note that CMAQ-simulated AODs were temporally collocated to the GOCI-AHI AOD.]
To explain why the model using emissions adjusted with the data fusion product performed better, we quantified the number of observational references available from each of the AHI AOD and GOCI-AHI AOD products. The benefit of securing more continuous and frequent observations, which provide more data for constraining model biases, has often been highlighted in satellite-based inverse modeling and data assimilation studies (Jeon et al., 2016; Lee et al., 2016; Yumimoto et al., 2016; Choi et al., 2019). We compared the numbers of valid AOD retrievals made for each of the modeling grids (hereafter referred to as AOD records) obtained from the AHI AOD and GOCI-AHI AOD products. Note that the grid-specific number of AOD records does not necessarily indicate the instrumental sampling frequency of the satellite instrument in this comparison. While the AHI AOD product showed clusters of missing values over several inland regions (i.e., southeastern and northeastern China, the Sichuan Basin, and some areas in Primorye in Russia, North Korea, and Japan), the GOCI-AHI AOD product produced more spatially complete domain-wide observations during the study period 2019 (Fig. 5).
In addition to the improvement in observational coverage, the GOCI-AHI AOD product showed a noticeable improvement in the amount of available data. Compared to the AHI AOD product, the GOCI-AHI AOD product increased the number of AOD records by 132.23 % on average during the entire study period, with seasonal extents ranging from 90.20 % to 198.01 % (Fig. 5; Table 3). In other words, even though the AHI AOD and GOCI-AHI AOD were provided over the modeling domain at identical spatiotemporal resolutions in the first place, there was a substantial difference in the volume of information available in the end. We attributed the greater improvement in model performance afforded by the use of GOCI-AHI AOD (Figs. 3 and 4; Table 1) to the constituent instruments supplementing pixels that other instruments had failed to detect or had discarded owing to their different aerosol retrieval algorithms, and to the additional bias correction approaches (Choi et al., 2019; Lim et al., 2021).
Table 1. Summary statistics of the daily mean AERONET AOD (85 sites) and the CMAQ-simulated daily mean AOD before and after the NO x emissions adjustment (based on TROPOMI NO 2 columns) and primary PM emissions adjustment (based on AHI AOD and GOCI-AHI AOD) during the study period 2019. (a) The CMAQ-simulated AOD using the base emissions, (b) the CMAQ-simulated AOD using 2019 NO x -constrained emissions, (c) the CMAQ-simulated AOD using 2019 NO x -and PM-constrained emissions using the AHI AOD, and (d) the CMAQ-simulated AOD using 2019 NO x -and PM-constrained emissions using GOCI-AHI fused AOD. R: Pearson's correlation coefficient; NMB (%): normalized mean bias.
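The record-count comparison above amounts to counting valid retrievals per modeling grid cell. A minimal sketch follows, assuming AOD stacks in which NaN marks undetected or discarded pixels; the array layout and names are assumptions, not the paper's processing code.

```python
import numpy as np

def record_counts(aod_stack):
    """Valid (non-NaN) AOD retrievals per modeling grid cell.
    aod_stack: array of shape (n_times, ny, nx), NaN = missing pixel."""
    return np.sum(~np.isnan(aod_stack), axis=0)

def percent_increase(counts_single, counts_fused):
    """Domain-wide relative increase in available AOD records, in percent."""
    return 100.0 * (counts_fused.sum() - counts_single.sum()) / counts_single.sum()

# counts_ahi = record_counts(ahi_aod)          # single-instrument product
# counts_fused = record_counts(goci_ahi_aod)   # multisource fused product
# print(percent_increase(counts_ahi, counts_fused))
```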
Such an improvement in the quantity of observational references appeared to be beneficial for improving the model's AOD estimation. For example, in March-April-May (MAM) 2019, emissions constrained with the GOCI-AHI AOD reduced the model bias more effectively than those constrained with the AHI AOD. This improvement (or the difference in the extent of the emissions adjustment) was considered to depend on whether the high AOD peaks along southeastern China were captured (Fig. 3a) or not (Fig. 2a). Throughout the entire year 2019, MAM showed the most frequent occurrences of high AOD peaks over AERONET sites compared to the other seasons (Fig. 5). Considering the locations of those ground-based sites (Fig. 1 above), many of which cover southeastern China, we first presumed that the GOCI-AHI AOD would represent the aerosol loadings more realistically. This was then supported by the grid-specific numbers of AOD records afforded by the AHI AOD and GOCI-AHI AOD (Fig. 5), the former of which provided noticeably less usable information. We therefore concluded that the use of emissions constrained with the GOCI-AHI AOD, which better captured the high AOD peaks across southeastern China in a spatiotemporally more frequent and continuous manner, was more effective at resolving the model's initial AOD underestimation.
Figure 5. The number of AOD records (in each of the modeling grids) obtained from the AHI AOD product and the GOCI-AHI fused AOD product during the study period 2019.
3.3 Application of the top-down approach using the GEMS-AMI-GOCI-2 fused AOD product
Building on the earlier success of using the proxy of the GEMS-AMI-GOCI-2 AOD (the GOCI-AHI AOD described in Sect. 3.1) to update the emissions inventory, which reduced model biases further than using the proxy of the GEMS AOD (the AHI AOD described in Sect. 3.1), we employed the GEMS-AMI-GOCI-2 AOD product to proceed with the emissions adjustment for the study period 2022. Note that the GOCI-AHI AOD product used earlier served as a prototype for the development of the GEMS-AMI-GOCI-2 AOD product; the production of the GOCI-AHI AOD product has been discontinued, and it is currently available only for research purposes for the year 2019. To explore the utility of the multisource data fusion product in constraining the temporal variations of aerosol precursor emissions, and ultimately to leverage the up-to-date emissions inventory to improve model performance in simulating AOD and PM 2.5 concentrations, we used the GEMS-AMI-GOCI-2 fused AODs as top-down constraints to adjust primary PM emissions over the modeling domain. As in the earlier top-down approach, we used TROPOMI NO 2 columns to constrain NO x emissions in advance of the primary PM emissions adjustments.
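The excerpt does not restate the adjustment algorithm itself, so the sketch below shows, purely as an illustration, a generic grid-wise mass-balance scaling of the kind often used for such top-down constraints; the function, the variable names, and the linearity assumption (beta = 1) are all assumptions for illustration, not the authors' actual scheme.

```python
import numpy as np

def mass_balance_adjust(prior_emis, obs_column, model_column, beta=1.0):
    """Generic top-down scaling: multiply prior emissions by the ratio of
    observed to modeled columns (or AODs), raised to a sensitivity factor
    beta; beta = 1 assumes a locally linear column-to-emissions response."""
    ratio = np.ones_like(model_column, dtype=float)
    valid = model_column > 0
    ratio[valid] = obs_column[valid] / model_column[valid]
    return prior_emis * ratio ** beta

# Hypothetical two-step use mirroring the sequence described above:
# nox_new = mass_balance_adjust(nox_prior, tropomi_no2, cmaq_no2)  # step 1
# ppm_new = mass_balance_adjust(ppm_prior, fused_aod, cmaq_aod)    # step 2
```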
Using the base emissions, the model tended to overestimate AODs over a major portion of the modeling domain, particularly across the NCP region, during the study period 2022 (Fig. 6a and b). Reflecting the relative decreases in monthly mean NO 2 columns in March, April, and May 2022 compared to the corresponding months in 2019 (Fig. S9), the NO x emissions adjustment reduced NO x emissions overall by 2.83 %-13.40 % (Fig. S10; Table S7), which appeared to be effective at reducing the discrepancy between the observed and modeled AODs (Fig. 6c). After the primary PM emissions adjustment, which decreased primary PM emissions overall by 9.03 % (Fig. S11; Table S8), the modeled AODs showed even closer spatial agreement with the observed AODs (Fig. 6d). We then evaluated the model's performance in simulating daily mean surface PM 2.5 concentrations in Korea and the NCP region in time series. Similar to the earlier results for the study period 2019, the NO x and primary PM emissions adjustments led to overall improvements in model performance during the study period 2022. Using the base emissions, the model overestimated PM 2.5 concentrations by 20.60 % in Korea and 47.58 % in the NCP region (Fig. 7a; Table 4a); with the NO x emissions adjustment, the overestimation decreased to 15.74 % in Korea and 39.80 % in the NCP region (Fig. 7b; Table 4b), and the primary PM emissions adjustment further reduced it to 6.81 % and 19.58 %, respectively (Fig. 7c; Table 4c).
Unlike the top-down constraints used during the study period 2019, those used during the study period 2022 led to overall reductions in NO x and primary PM emissions, particularly over the highly industrialized regions (e.g., the NCP region and other major metropolitan areas in China) (Figs. S10 and S11). An explanation for the noticeable decreases in NO 2 columns and AODs observed across these regions in March, April, and May 2022, compared to those observed in the corresponding months during the pre-COVID-19 period (the year 2019 in this study) (Figs. 2a, 3a, 6a, and S9), was the strict city- and province-wide lockdown regulations (the so-called "zero-COVID strategy" that resumed in March 2022) (Dyer, 2022), which led to substantial reductions in anthropogenic emissions (e.g., vehicular, industrial, and agricultural emissions) in China (Caporale et al., 2022). In addition, the updated emissions inventory yielded more accurate representations of the aerosol loadings over the sea surface (i.e., the Yellow Sea), which could benefit other studies that involve the long-range transport of aerosols emitted from inland sources (Hatakeyama et al., 2001; Carmichael et al., 2002; Pouyaei et al., 2020, 2021; Jung et al., 2021).
Summary and conclusion
In summary, this study attempted to sequentially adjust bottom-up estimates of NO x and primary PM emissions over East Asia by employing observational references afforded by multiple satellite instruments retrofitted on various platforms and the synergistic science product. During the study period 2019, we reconfirmed the utility of LEO and GEO satellite products in emissions adjustments and then explored that of the multisource data fusion product, whose enhanced observational quantity and quality appeared to reduce model biases in AOD simulations to a great extent. During the study period 2022, which experienced noticeable reductions in the amounts of anthropogenic emissions primarily resulting from severe lockdowns across major urban regions in China, the earlier top-down approach to constraining aerosol precursor emissions was also effective at reducing spatiotemporal discrepancies between the modeled and observed loadings of aerosols and their precursors; in particular, the emissions adjustments were effective at improving the model performances in simulating surface PM 2.5 concentrations during the lockdown period.
Table 4. Summary statistics of CMAQ-simulated daily mean PM 2.5 concentrations before and after the NO x and primary PM emissions adjustments and ground-based in situ measurements in Korea (425 sites) and the NCP region (312 sites) during the study period 2022. (a) CMAQ-simulated PM 2.5 using base emissions, (b) CMAQ-simulated PM 2.5 using NO x -constrained emissions, and (c) CMAQ-simulated PM 2.5 using 2022 NO x -and PM-constrained emissions.
In light of these findings, we conclude that the series of emissions adjustments in this study, which closely captured variations in the emissions of both primary aerosols and the precursors of secondary aerosols in a top-down manner, was generally effective at improving the model's performance in estimating aerosol loadings over East Asia. The enhanced observation quality and quantity afforded by the GEMS-involved synergistic product and its proxy appeared to be beneficial for capturing the spatiotemporal variations in the emissions of the aerosol precursors. Regarding possible uncertainties originating from other aerosol precursor species, which were outside the scope of this study, the methodology leaves room for further improvement; nonetheless, this study reconfirmed the significant association between emissions of aerosol precursors and both the AOD and surface PM 2.5 concentrations, and it underscored the benefit of using multisource, top-down information to best exploit the available observational references. Given the improvements in data availability expected in the near future (e.g., tropospheric SO 2 columns and the operational version of the data fusion product), afforded by GEMS and its sister instruments, we conclude that the findings of this study provide a useful basis for using the new data more effectively to produce more up-to-date emissions inventories, which in turn could offer more precise insight into the spatiotemporal behaviors of air pollutants in pandemic situations.
Data availability. AHI aerosol products can be accessed at https://www.eorc.jaxa.jp/ptree/index.html (Japan Aerospace Exploration Agency Himawari Monitor (P-Tree system) database, 2022). TROPOMI tropospheric NO 2 columns sampled along the study area are available at https://cophub.copernicus.eu/ (European Space Agency Copernicus Services Data Hub, 2022). Data fusion products may be available upon request to the authors. Ground-based in situ measurements of NO 2 and PM 2.5 concentrations are available from https://www.airkorea.or.kr (Korean Ministry of Environment AirKorea database, 2022) and http://www.cnemc.cn/en/ (China National Environmental Monitoring Center database, 2022).
Author contributions. JP took the lead in drafting the original manuscript. JP, JJ, YC, and KL set up the experimental design. JP and JJ set up the modeling system and conducted emissions adjustments and model simulations. HL, MK, YL, and JK developed and provided the multisource data fusion products (GOCI-AHI and GEMS-AMI-GOCI-2 aerosol products). KL provided AirKorea datasets for model evaluation. YC and KL provided overall context as a principal investigator and project manager, respectively, and supervised the entire research. All the authors discussed the results, exchanged comprehensive feedback on the original manuscript draft, and contributed to preparing the final version of the manuscript.
Competing interests. At least one of the (co-)authors is a guest member of the editorial board of Atmospheric Measurement Techniques for the special issue "GEMS: first year in operation (AMT/ACP inter-journal SI)". The peer-review process was guided by an independent editor, and the authors also have no other competing interests to declare.
Disclaimer. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Special issue statement. This article is part of the special issue "GEMS: first year in operation (AMT/ACP inter-journal SI)". It is not associated with a conference. | 2023-07-11T00:16:39.509Z | 2023-06-19T00:00:00.000 | {
"year": 2023,
"sha1": "f1958366d036dd2e3217185089371b26c905a958",
"oa_license": "CCBY",
"oa_url": "https://amt.copernicus.org/articles/16/3039/2023/amt-16-3039-2023.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f823bd058b7b71bedd15e402869c995abd97e850",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
53231153 | pes2o/s2orc | v3-fos-license | Efficacy of second-look endoscopy in preventing delayed bleeding after endoscopic submucosal dissection of early gastric cancer
The present study aimed to evaluate whether second-look endoscopy (SLE) is able to prevent delayed bleeding after endoscopic submucosal dissection (ESD) of gastric carcinoma and to identify which types of lesion require SLE. ESD of early-stage gastric cancer was performed in 210 patients between October 2014 and September 2016. Mucosal damage-associated bleeding within 24 h after ESD was considered delayed bleeding. Associations of patient characteristics, lesion characteristics and surgical factors with the incidence of bleeding as an outcome measure were analyzed. A total of 110 patients underwent SLE on the second day following gastric ESD. Within the entire cohort (n=210), late delayed bleeding (LDB) was defined as hematemesis or melena occurring after second-look endoscopy, and early delayed bleeding (EDB) was defined as hematemesis or melena occurring between the end of ESD and second-look endoscopy, or as active or possible bleeding observed at the time of second-look endoscopy; LDB and EDB were reported in 17 (8.1%) and 20 (9.5%) patients, respectively. The median interval between late delayed bleeding and ESD was one day (range, 1–10 days). The incidence of late delayed bleeding was significantly decreased in the SLE group compared with that in the non-SLE group (4.5 vs. 12%, P=0.028). Multivariate analyses revealed that ulcer, flat gross type, lesion diameter (>2 cm), resected tumor size of >40 mm and Helicobacter pylori infection were independently associated with late delayed bleeding after ESD, while flat gross type, ulcer, resected tumor size of >40 mm and artificial ulcer diameter >3 cm were independently associated with early delayed bleeding. Thus, the data of the present study indicate that second-look endoscopy following gastric ESD may be useful in preventing post-ESD delayed bleeding and should be performed on the second day.
Introduction
Gastric carcinoma usually arises from the gastric mucosa (1). In the US, the incidence of gastric carcinoma is 8.7-17.2 per 100,000 men and 9.7-43.1 per 100,000 women (2). In China, the incidence of gastric carcinoma is high, with an age-standardized incidence of 37.1 per 100,000 men and 17.4 per 100,000 women (3). This high incidence is probably due to the high prevalence in China of Helicobacter pylori infection, a definite gastric carcinogen according to the World Health Organization (3,4).
Endoscopic mucosal resection (EMR) and endoscopic submucosal dissection (ESD) are widely used to treat early gastric cancer (EGC) and gastric adenocarcinoma (5). In patients with EGC, the outcome of endoscopic procedures with regard to survival is the same as that of gastrectomy, but endoscopic procedures are associated with a shorter hospital stay and decreased risk of post-operative morbidity (6). Furthermore, ESD is significantly better than EMR for removal of large lesions (7)(8)(9).
One concern regarding ESD is the generation of artificial ulcers, and studies describe delayed bleeding after ESD as life-threatening with an incidence of ~5% (10,11). Endoscopic hemostasis is effective during emergency endoscopy. Therefore, it is necessary to determine the nature of delayed bleeding and administer appropriate treatments. Previous studies have suggested that the tumor site (middle and lower third of the stomach), tumor size and ulcer formation are independent risk factors for delayed bleeding (12)(13)(14), but there is no consensus.
To further reduce the bleeding rate after ESD, numerous hospitals in Europe and the US routinely perform second-look endoscopy (SLE) to prevent delayed bleeding (15). The major purpose of SLE after ESD is to inspect the non-bleeding visible blood vessels of the mucosal defect that had bled recently or may eventually bleed (11,16). When a bleeding or non-bleeding visible blood vessel is identified by SLE, preventive hemostasis should be performed; hemostatic clipping or thermocoagulation may be applied. However, it is controversial whether SLE is able to prevent delayed bleeding. A multicenter, prospective, randomized controlled non-inferiority trial did not recommend SLE to prevent delayed bleeding after gastric ESD (17), as supported by certain other previous studies (16,(18)(19)(20).
Therefore, the present study aimed to evaluate whether SLE is able to prevent delayed bleeding, to assess the clinical and pathological characteristics of patients with delayed bleeding, and determine which specific lesions may require SLE. The results of the present study may lead to the establishment of improved guidelines to manage patients with EGC.
Materials and methods
Study design and patients. The present study was a retrospective analysis of a prospective database of patients who were histologically diagnosed with EGC and treated with ESD at the Center for Digestive Medicine (key clinical entity of Jiangsu Province) of the Second Affiliated Hospital of Nanjing Medical University (Nanjing, China) and at the Department of Gastroenterology (key clinical entity of the Ministry of Health, China) of the Qilu Hospital of Shandong University (Jinan, China) between October 2014 and September 2016.
According to the guidelines of the Japan Gastroenterological Endoscopy Society and the Japanese Gastric Cancer Association (7), the indications for ESD were lymph node-negative EGC, including the following: i) differentiated intramucosal carcinoma with a diameter of ≥2 cm without ulcer; ii) differentiated intramucosal carcinoma with a diameter of <3 cm with ulcer; and iii) undifferentiated intramucosal carcinoma with a diameter of <2 cm without ulcer. The diagnosis was made based on lesions identified on endoscopy, chromoendoscopic biopsy or endoscopic ultrasonography. The exclusion criteria were as follows: i) digestive tract perforation or ii) surgical specimens exhibiting submucosal invasion of ≥500 µm. The 3 treating gastroenterologists were all senior resident physicians or deputy chief physicians, each with >3 years of experience in ESD and >100 ESDs performed. All operators had received training in narrow-band imaging for the detection of abnormal tumor vessels.
Of the 217 gastric neoplasm patients, 3 were excluded due to perforation during ESD, and 4 were excluded as an additional surgery was required for submucosal invasion. Finally, 210 patients were randomly divided into 2 groups: The non-SLE group (n=100) and the SLE group (n=110). Of the 210 patients, 172 were diagnosed with gastric high-grade intraepithelial neoplasia and 38 were diagnosed with gastric low-grade intraepithelial neoplasia. The present study was approved by the ethics committee of the Second Affiliated Hospital of Nanjing Medical University and Qilu Hospital of Shandong University. All patients provided written informed consent for inclusion in the database.
ESD strategy. All patients were required to provide written informed consent prior to treatment. ESD was performed as previously described (21). The patients fasted from the morning of the operation day and underwent surgery under conscious sedation. Argon plasma coagulation (PSD-60; Olympus, Tokyo, Japan) was used for marking, with marking points placed 5 mm from the tumor edge. A total of 10.0-15.0 ml of 1:10,000 epinephrine (0.01 mg/ml) in saline solution was injected submucosally around the lesion. The mucosa was cut 5 mm outside the marking. After mucosal incision, the lesion was dissected using an IT knife (KD-612L) or Dual knife (KD-650Q; both Olympus). Electrocoagulation of all visible vessels on the ulcer surface was performed using hot biopsy forceps (FD-410LR; Olympus). Sodium hyaluronate was used when the saline-epinephrine (1:100,000) solution could not completely lift the tumor. After the lesion was dissected from the stomach, conventional electrocoagulation of non-bleeding visible vessels and infiltration was performed using hot biopsy forceps.
SLE or emergency endoscopy. SLE was performed on the second day after ESD. Delayed bleeding was characterized by the presence of melena, hematochezia or hematemesis within 24 h after ESD, with mucosal defects and bleeding observed during emergency endoscopy (22). Delayed bleeding was classified as early (hematemesis or melena occurring in the interval between ESD and SLE, or active or possible bleeding at the time of SLE) or late (hematemesis, hematochezia or melena occurring after SLE). If there was significant bleeding, hemostasis of the bleeding points or non-bleeding visible vessels was performed under emergency endoscopy, mainly using hemostatic clips or thermocoagulation. Patients who had hematochezia, hematemesis or hypotension and met the criteria were given component blood transfusion. After ESD, continuous intravenous esomeprazole (40 mg/day) was administered for 2 days. On the third day, administration was changed to oral esomeprazole (20 mg twice per day). Most patients started eating after SLE. The patients were discharged from the hospital 6 days after surgery unless bleeding complications were noted. If hematochezia or hematemesis occurred after discharge, the patients were required to contact their physicians. When perforation or delayed bleeding occurred, food intake and discharge plans were changed depending on the patient's condition. The patients were routinely followed up for 60 days, in the first, second, fourth and eighth week after discharge. The results of routine blood tests and the fecal occult blood test were recorded. The resection was considered curative when the lesion was resected en bloc, was <2 cm in diameter, was predominantly of the differentiated type, was a macroscopically intramucosal differentiated carcinoma (pT1a), and showed no ulceration (UL-), lymphatic invasion (ly-) or venous invasion (v-) (7). Expanded criteria for curative resection were en bloc resection of the lesion and a diameter of ≥2 cm, a predominantly differentiated type, pT1a and UL(-); a diameter of <3 cm, a predominantly differentiated type, pT1a and UL(+); a diameter of <2 cm, a predominantly undifferentiated type, pT1a and UL(-); or a diameter of <3 cm, a predominantly differentiated type, pT1b (SM1), ly(-) and v(-); negative surgical margins were required in all of the above (7).
Data collection. The following information was recorded: Age, sex, comorbidities (hypertension, heart disease, type 2 diabetes and acute cerebrovascular disease), use of anti-coagulants or anti-platelet drugs (patient-associated factors), H. pylori infection, longitudinal axis position (upper, middle or lower third of the stomach), cross-sectional position (anterior gastric wall, posterior gastric wall, lesser curvature or greater curvature), gross type of EGC, lesion diameter (cm), diameter of the resected specimen (cm), histological type (differentiation degree), ESD time, bleeding condition under emergency endoscopy (pulsatile bleeding, active permeating bleeding, vessel exposure, bloodstain or blood clot) and post-operative blood transfusion. The rates of delayed bleeding with and without SLE were used as the endpoints to determine the effectiveness of SLE.
Statistical analysis. SPSS 18.0 (SPSS, Inc., Chicago, IL, USA) was used for statistical analysis. The Student's t-test or Fisher's exact test was used to analyze differences in patient age, tumor size, specimen size and ESD operative time between the two groups. The chi-square test was used to analyze differences in sex, complications, use of anti-coagulation or anti-platelet drugs, longitudinal axis position, cross-sectional position, gross type and degree of differentiation. If more than one predictive index was significantly different on univariate analysis, the Cox proportional hazards model was used to determine the independent risk factors. Optimum cut-off values for risk factors were determined using receiver operating characteristic analysis. A two-sided P<0.05 was considered to indicate a statistically significant difference.
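As an illustration of the categorical comparison described above, the group-wise late delayed bleeding rates can be compared with a Pearson chi-square test. In the sketch below, the counts are inferred from the reported percentages (4.5% of 110 is approximately 5, and 12.0% of 100 is 12; 5 + 12 = 17 matches the 17 late delayed bleeding cases in the cohort), and the exact P-value depends on the software settings (e.g., continuity correction), so this is a sketch rather than a reproduction of the SPSS output.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: SLE and non-SLE groups; columns: late delayed bleeding yes / no.
# Counts inferred from the reported rates, not taken directly from the paper.
table = np.array([[5, 105],
                  [12, 88]])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"Pearson chi-square = {chi2:.2f}, P = {p:.3f}")
```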
Results
Characteristics of the surgeries. Fig. 1 presents the patient flow chart, and Table I presents the baseline characteristics of the 2 groups. The en bloc resection rate was 100%, and all resection margins were negative. No gastrointestinal perforation, death or severe complication occurred. The median interval between ESD and SLE was 2 days (range, 1-3 days). For patients in the SLE group (n=110) and the non-SLE group (n=100), the mean operative time was 69±38 and 89±35 min, the mean lesion diameter was 2.8±0.9 and 2.5±1.2 cm, and the number of specimens sized >40 mm was 8 and 10, respectively. There were no significant differences between the 2 groups with regard to these parameters (P>0.05). In the non-SLE group, late delayed bleeding was observed from 2 days after ESD. The incidence of late delayed bleeding was significantly lower in the SLE group than in the non-SLE group (4.5 vs. 12.0%; P<0.05; Table I). Table I also provides information on the occurrence of late delayed bleeding.
Early delayed bleeding. Among the 210 patients, 20 (9.5%) demonstrated early delayed bleeding following ESD; however, no statistically significant difference in its incidence was identified between the SLE and non-SLE groups (P>0.05). Flat gross type (P<0.01), ulcer (P<0.01) and specimen size >40 mm (4.6 vs. 30%; P<0.001) were associated with an increased risk of early delayed bleeding. Furthermore, a larger artificial ulcer diameter (2.61±1.20 vs. 4.06±1.73 cm; P<0.001) was associated with significantly higher early delayed bleeding rates (Table II). Univariate analysis (Table III) revealed that early delayed bleeding was associated with ulcer, flat gross type, artificial ulcer diameter (>3 cm) and resected tumor size (>40 mm). Multivariate analysis confirmed that these factors were independently associated with early delayed bleeding (Table V). Bleeding was successfully stopped in all patients during SLE and none of the patients required re-operation. No re-bleeding occurred during follow-up in the 173 patients without delayed bleeding. Among the 17 patients with delayed bleeding, 2 (11.8%) required blood transfusion.
Seventeen cases of late delayed bleeding were divided into 3 types: Pulsatile bleeding (n=8; Forrest grade I), active permeating bleeding (n=6; Forrest grade IIa) and vessel exposure (n=3; Forrest grade IIb). One patient underwent SLE on the second day after ESD, but delayed bleeding occurred on the tenth day after ESD. The patient had a remnant stomach with the lesion located on the anterior wall of the gastric antrum, and the size of the excised lesion was 1.5x1.5 cm.
Univariate and multivariate analyses revealed that ulcer, flat gross type, lesion diameter (>2 cm), resected tumor size of >40 mm, H. pylori infection and operative time (>60 min) were independently associated with the occurrence of late delayed bleeding.
Discussion
It is controversial whether SLE is able to prevent delayed bleeding after ESD for gastric cancer. Therefore, the present study aimed to evaluate whether SLE is able to prevent delayed bleeding after ESD and clarified the types of lesions that require SLE. The results suggest that SLE was effective in preventing delayed bleeding after ESD, particularly within 48 h after ESD. Lesion diameter (>2 cm), ulcer, flat gross type, the resected tumor >40 mm, H. pylori infection and operative time (>60 min) were independently associated with late delayed bleeding after ESD, while flat gross type, ulcer, the resected tumor >40 mm and artificial ulcer diameter (>3 cm) were independently associated with early delayed bleeding.
Certain studies have indicated that procedure-associated bleeding is not associated with age, sex, tumor size and tumor location (21). In addition, it has been suggested that preventive coagulation of non-bleeding visible vessels during SLE following gastric ESD may do little to prevent late delayed bleeding (20), and that SLE for preventing delayed bleeding after ESD may currently be performed excessively and be unnecessary in certain patients (16,18,19). However, the present study suggested that SLE has an important role after gastric ESD, as it is able to identify and treat potential bleeding foci. Takizawa et al (23) suggested that coagulation of visible vessels during ESD prevented delayed bleeding. Therefore, different approaches may be used to prevent bleeding and avoid a second endoscopy.
In the present study, the lesions were completely excised during ESD in all 210 patients who met the criteria for SLE after ESD. The results indicated that late delayed bleeding was markedly more common in the non-SLE group, and numerous patients had H. pylori infection, which may cause a greater local inflammatory response and further influence the gastric mucosal blood flow during the healing of the ESD-induced ulcer, resulting in injury to the vessel walls. Flat gross type is another risk factor, as such lesions are frequently rich in vascularity and are mostly reddish due to the presence of more vessels in the submucosal layer compared with the elevated or depressed types. The presence of more vessels may increase the risk of post-ESD bleeding (11). There was no significant association between age and ESD-associated hemorrhage, which is inconsistent with the results reported by Takahashi et al (18). Regarding the post-operative complications of ESD, the rates of perforation, bleeding, and lymphatic vessel invasion were lower than those reported in previous studies (11,14,(16)(17)(18)(19)(20)(24)(25)(26), which may be due to improvements in therapeutic instruments and techniques, as well as the absence of positive margins in the 210 patients.
Certain studies have examined the risk factors for delayed bleeding after ESD. Choi et al (24) determined that surface erosion, location of the lesion and high-risk ulcer were independently associated with the risk of delayed bleeding. In a study by Kim et al (16), a large tumor size (>20 mm) was the only independent risk factor for delayed bleeding. Nakamura et al (27) reported that low platelets and positive lateral margins were associated with delayed bleeding. Other risk factors include wide resection (14,18,22), no post-ESD coagulation (23), tumor located in the lower third of the stomach (22,23), tumor located in the L segment (18), large tumor size (18), histological ulcer (14), long ESD procedure (14), age of <65 years (26) and use of anti-thrombotic drugs (26). Ryu et al (19) reported that no specific factor was associated with delayed bleeding after ESD. In the present study, ulcer, flat gross type, lesion diameter (>2 cm), the resected tumor size of >40 mm and H. pylori infection were independently associated with late delayed bleeding after ESD, while flat gross type, ulcer, the resected tumor size of >40 mm and artificial ulcer diameter (>3 cm) were independently associated with early delayed bleeding. Of note, the present study is not without limitations. It was a retrospective study, with all of the inherent limitations, and the sample size was small.
In conclusion, based on the present retrospective study, SLE has value in the prevention of delayed bleeding in patients with gastric cancer treated with ESD, particularly within 48 h after surgery. Ulcer, flat gross type, lesion diameter (>2 cm), resected tumor size of >40 mm and H. pylori infection may be used to identify high-risk patients who should ideally undergo SLE to prevent late delayed bleeding.
Funding
No funding received.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Authors' contributions
ZG wrote the manuscript. LM and HH contributed to project development and data collection. LC and ZG recorded and analyzed the results. LC and YX performed the statistical analysis. All authors have read and approved the final manuscript.
Ethics approval and consent to participate
The present study was approved by the ethics committee of the Second Affiliated Hospital of Nanjing Medical University and Qilu Hospital of Shandong University. All patients provided written informed consent for inclusion in the database.
Patient consent for publication
Not applicable. | 2018-11-15T17:36:51.749Z | 2018-09-12T00:00:00.000 | {
"year": 2018,
"sha1": "ce1f82d81228e272c6dcab65c2111c8981015fab",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/etm.2018.6729/download",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ce1f82d81228e272c6dcab65c2111c8981015fab",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259144786 | pes2o/s2orc | v3-fos-license | Ethical Aspects of ChatGPT in Software Engineering Research
ChatGPT can improve Software Engineering (SE) research practices by offering efficient, accessible information analysis and synthesis based on natural language interactions. However, ChatGPT could bring ethical challenges, encompassing plagiarism, privacy, data security, and the risk of generating biased or potentially detrimental data. This research aims to fill this gap by elaborating on the key elements: motivators, demotivators, and ethical principles of using ChatGPT in SE research. To achieve this objective, we conducted a literature survey, identified the mentioned elements, and presented their relationships by developing a taxonomy. Further, the identified literature-based elements (motivators, demotivators, and ethical principles) were empirically evaluated by conducting a comprehensive questionnaire-based survey involving SE researchers. Additionally, we employed the Interpretive Structure Modeling (ISM) approach to analyze the relationships between the ethical principles of using ChatGPT in SE research and to develop a level-based decision model. We further conducted a Cross-Impact Matrix Multiplication Applied to Classification (MICMAC) analysis to create a cluster-based decision model. These models aim to help SE researchers devise effective strategies for ethically integrating ChatGPT into SE research by following the identified principles through adopting the motivators and addressing the demotivators. The findings of this study will establish a benchmark for incorporating ChatGPT services in SE research with an emphasis on ethical considerations.
I. INTRODUCTION
ChatGPT is a cutting-edge language model created by OpenAI [1], designed to generate human-like responses to various prompts. The model employs deep learning algorithms, utilizing the latest techniques in Natural Language Processing (NLP) to generate relevant and coherent responses. GPT, or "Generative Pre-trained Transformer" refers to the model's architecture based on the transformer architecture and pretrained on a vast corpus of textual data [2]. ChatGPT has been fine-tuned on conversational data, allowing it to generate appropriate and engaging responses in a dialogue context [1], [3]. The model's versatility means that it can be applied to numerous applications, including chatbots, virtual assistants, customer service, and automated content creation. The OpenAI team continues to update and improve the model with the latest data and training techniques, ensuring it remains at the forefront of NLP research and development [4].
ChatGPT has significant potential for use in academic research [5], particularly for performing SE activities [6]. Researchers can utilize ChatGPT to generate realistic and high-quality text for various applications, including language generation, language understanding, dialogue systems, and experts' opinion transcripts [7]. ChatGPT can also be finetuned for specific domains or tasks, making it a flexible tool for researchers to create customized language models [8]. In addition, ChatGPT can be used to generate synthetic data for training other models, and its performance can be evaluated against human-generated data. Moreover, ChatGPT can be used for research on social and cultural phenomena related to language use. For example, researchers can use ChatGPT to simulate conversations and interactions between people with different cultural backgrounds or to investigate the impact of linguistic factors such as dialect, jargon, or slang on language understanding and generation [9].
ChatGPT significantly impacts research, particularly in qualitative research using NLP tools. Its ability to generate high-quality responses has made it a valuable tool for language generation, understanding, and dialogue systems [10]. Researchers can leverage ChatGPT to save time and resources, create customized language models, and fine-tune for specific domains or tasks [10]. ChatGPT's simulation capabilities also allow researchers to understand natural language in different contexts and develop more nuanced language models [9], [11]. Overall, ChatGPT has advanced the field of NLP and paved the way for more advanced language models and applications [12].
ChatGPT can serve as an intelligent and effective tool for SE research [13]-[15]. For instance, ChatGPT can be used in literature review-based research to extract data in response to specific queries accompanied by the relevant text in quotation marks. Similarly, we noticed that ChatGPT is an effective tool for generating codes, concepts, and categories from transcripts in qualitative research [16]. Considering the effectiveness and usability of ChatGPT in academic research, we conducted this study (1) to explore and understand the motivators (positive factors) and demotivators (negatively influencing factors) across the ethical aspects (principles) of ChatGPT in SE research and (2) to develop Interpretive Structure Modeling (ISM) and Cross-Impact Matrix Multiplication Applied to Classification (MICMAC) based decision-making models in order to understand the relationships between the ethical principles for using ChatGPT in SE research. We believe that the outcomes of this research will benefit the academic research community by providing a body of knowledge and serving as guidelines for considering ChatGPT in SE research.
The rest of the paper is organized as follows: the research methodology is presented in Section II, the results are discussed in Section III, and the implications of the study findings are reported in Section IV. The threats to the validity of the study findings are highlighted in Section V, and finally, we conclude the study with future avenues in Section VI.
II. SETTING THE STAGE
The research aims to comprehensively understand the ethical implications and potential threats associated with using ChatGPT in SE research and develop guidelines and recommendations for responsible research practices to mitigate these issues and threats [17]. By promoting the responsible and ethical use of ChatGPT [13], our research aims to help the SE research community benefit from the important motivators, demotivators, and ethical principles of using ChatGPT in SE research.
In order to establish the objective of this study, we began with a literature survey to examine the factors that motivate and demotivate SE researchers, as well as the ethical principles of using ChatGPT in SE research. Subsequently, we sought validation of our literature findings by engaging expert researchers through a questionnaire survey study. Finally, we employed Interpretive Structure Modeling (ISM) to develop a decision-making model based on the complex relationships among the ethical principles of using ChatGPT in SE research. A visual representation of our methodology can be found in Figure 1, with a concise discussion of each step provided in the following sections.
A. Literature Survey
To identify the motivators, demotivators, and principles associated with the ethical use of ChatGPT in SE research, we conducted a literature survey examining both peer-reviewed published articles and grey literature [18], [19]. Using common keywords, we explored grey literature through general Google searches and peer-reviewed studies through Google Scholar. Furthermore, we employed the snowballing sampling approach to collect additional literature related to the study objective [20]. This involved examining the reference sections of selected studies (backward snowballing) and their citations (forward snowballing), which increased the sample size by including more relevant studies [20].
B. Questionnaire Survey Study
The questionnaire survey is an appropriate approach to collect data from a large, targeted population [21]. In this study, we designed a survey questionnaire to validate the identified motivators, demotivators, and principles for evaluating the ethical implications of ChatGPT in SE research. We divided the questionnaire into two parts. The first part focuses on the demographics of the survey participants, while the second part consists of the identified motivators, demotivators, and principles. We used a five-point Likert scale (strongly agree, agree, neutral, disagree, and strongly disagree) to capture the opinions of the targeted population. The second part of the questionnaire also includes an open-ended question, enabling participants to suggest any additional motivators, demotivators, or principles overlooked during the literature survey.
To reach the target population, we developed an online questionnaire using Google Forms and sent invitations via personal email, organizational email, and LinkedIn. We employed the snowball sampling approach to collect a representative data sample by encouraging participants to share the questionnaire across their research network. Snowball sampling is efficient, cost-effective, and suitable for large, dispersed target populations [20]. Data collection took place from 15 January to 25 April 2023, returning 121 responses, of which 113 were used for further analysis after removing eight incomplete responses.
We used the frequency analysis approach to analyze the collected data, which is appropriate for descriptive data analysis [22]. This approach compares survey variables and computes the agreement level among participants based on the selected Likert scale. Frequency analysis has also been used in other software engineering studies [23], [24].
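A minimal sketch of such a frequency analysis follows; the data layout, with one column per surveyed motivator, demotivator, or principle, is an assumption for illustration.

```python
import pandas as pd

LIKERT = ["strongly agree", "agree", "neutral", "disagree", "strongly disagree"]

def frequency_analysis(responses: pd.DataFrame) -> pd.DataFrame:
    """Percentage of participants selecting each Likert category, per item."""
    freq = responses.apply(
        lambda item: item.value_counts(normalize=True).reindex(LIKERT, fill_value=0)
    )
    return freq * 100

def agreement_level(responses: pd.DataFrame) -> pd.Series:
    """Combined 'strongly agree' + 'agree' percentage, per item."""
    freq = frequency_analysis(responses)
    return freq.loc[["strongly agree", "agree"]].sum()

# Example with two hypothetical survey items and three respondents
df = pd.DataFrame({"M1": ["agree", "strongly agree", "neutral"],
                   "P2": ["agree", "agree", "disagree"]})
print(agreement_level(df).round(1))  # M1 and P2 both ~66.7 % agreement
```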
Furthermore, in order to address the ethical considerations pertaining to our survey respondents and the gathered data, we incorporated a detailed consent form at the outset of the questionnaire. This form thoroughly addresses all relevant aspects of data privacy and the confidentiality of participant identities. Every respondent must agree to the terms and conditions of the survey questionnaire prior to offering their feedback. A sample of the survey questionnaire can be found at the link 1 .
C. ISM Approach
Interpretive Structure Modeling (ISM) is an interactive learning process approach introduced by Forrester [25] that establishes a map of the complex relationships among various factors, resulting in a comprehensive system model. The model provides a clear, conceptual representation in graphical format [26]. ISM simplifies the complexities associated with relationships among different aspects, offering a better understanding of such factor relationships. Several relevant software engineering studies have employed this approach to develop conceptual models that clarify the relationships between principles [27]-[29]. We used the ISM approach to identify the interactions between the ethical principles (variables) of adopting ChatGPT in SE research. Figure 1 illustrates the detailed steps involved in the ISM approach, which are elaborated as follows [30].
• Define a contextual relationship that captures the dependencies among the research ethics principles, for example, "Principle A influences Principle B".
• Create an initial reachability matrix that captures the direct relationships between the principles based on the contextual relationship defined in Step 1.
• Compute the final reachability matrix by considering both direct and indirect relationships among the principles.
• Using the final reachability matrix, partition the principles into different levels based on their relationships, starting from the least influential principles at the bottom level and moving up to the most influential principles at the top level (see the sketch after this list).
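To make Steps 3 and 4 concrete, the sketch below computes the final reachability matrix via a Warshall-style transitive closure and then performs the standard iterative level partitioning. The 3-principle adjacency matrix is purely hypothetical, and the closing row and column sums correspond to the driving power and dependence used in the MICMAC analysis.

```python
import numpy as np

def final_reachability(initial):
    """Step 3: add indirect links via a Warshall-style transitive closure."""
    reach = initial.astype(bool)
    np.fill_diagonal(reach, True)       # each principle reaches itself
    n = len(reach)
    for k in range(n):
        for i in range(n):
            if reach[i, k]:
                reach[i] |= reach[k]    # i also reaches whatever k reaches
    return reach

def level_partition(reach, labels):
    """Step 4: iteratively peel off principles whose reachability set,
    restricted to the remaining principles, is a subset of their antecedent
    set; the least driving (most dependent) principles are removed first."""
    remaining, levels = set(range(len(labels))), []
    while remaining:
        level = [i for i in sorted(remaining)
                 if {j for j in remaining if reach[i, j]}
                 <= {j for j in remaining if reach[j, i]}]
        levels.append([labels[i] for i in level])
        remaining -= set(level)
    return levels

labels = ["P1", "P2", "P3"]             # three hypothetical principles
initial = np.array([[0, 1, 0],          # P1 influences P2
                    [0, 0, 1],          # P2 influences P3
                    [0, 0, 0]])
reach = final_reachability(initial)
print(level_partition(reach, labels))   # [['P3'], ['P2'], ['P1']]

# MICMAC inputs: driving power = row sums, dependence = column sums
print(reach.sum(axis=1), reach.sum(axis=0))
```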
The ISM approach was applied by inviting respondents from the first survey to participate in an ISM decision-making survey; 23 experts agreed to join. Their insights were collected using a separate questionnaire, a sample of which is provided at the given link 2. The collected data were then used to develop the Structural Self-Interaction Matrix (SSIM), although the sample size could potentially limit the generalizability of the study. Nonetheless, previous research has shown that studies with as few as five experts can be effective for similar decision-making processes.
For instance, Kannan et al. [27] used the opinions of five experts for the selection of reverse logistic providers. Similarly, Soni et al. [31] established a nine-member group to analyze factors contributing to complexities in an urban rail transit system. Furthermore, Attri et al. [32] used the input from five experts to make a decision regarding success factors for total productive maintenance. Thus, in light of existing research, we concluded that the sample of twenty-three experts in our study was adequate for the ISM-based analysis.
III. RESULTS AND DISCUSSIONS
In this section, we provide the results and discussions, which are based on the mutual agreement of all authors. Section III-A presents the results of the literature survey, defining the identified motivators, demotivators, and ethical principles of using ChatGPT in SE research. In Section III-B, the participants' perceptions regarding the identified motivators, demotivators, and ethical principles are discussed. Finally, Section III-C details the results of an ISM-based analysis of the identified principles.
A. Literature Survey Findings
This section presents the findings derived from both grey literature and peer-reviewed studies. The study reveals the significant motivators, demotivators, and their association with relevant ethical principles of using ChatGPT in SE research (see the subsequent sections).
[Figure: research methodology flow, showing the research questions, Step 1 survey protocol (identify common keywords, literature search, extract relevant literature), and Step 2 data reporting.]
1) Motivators of using ChatGPT in SE research:
ChatGPT offers a valuable tool for SE research in several ways. Firstly, ChatGPT can generate synthetic data for software testing, which is essential in SE [33]. This can save time and resources by automating the process of generating test cases, allowing for rapid iteration and software performance evaluation [33]. Secondly, ChatGPT can be fine-tuned for specific SE domains, such as requirements engineering or software quality assurance [12]. This can help researchers to create customized language models that can be used to study different facets of SE.
Thirdly, ChatGPT can be used to simulate user interactions with software systems, allowing researchers to test and evaluate software usability and user experience. By simulating user interactions, researchers can identify potential issues and improve the overall design and functionality of the software system [33]. Therefore, by leveraging ChatGPT, researchers can elevate the level of SE research and develop more sophisticated and effective software systems. Ultimately, based on the unique characteristics of ChatGPT, we identified 14 key motivators from the existing literature that are essential to consider when utilizing ChatGPT in SE research [8], [10], [15], [34]-[41]. Motivators are factors or features that encourage or drive a person or organization to take a particular action or make a decision. In the context of SE research, motivators can refer to the benefits or advantages of using ChatGPT to achieve specific research goals. The identified 14 motivators are briefly elaborated as follows: M1 (Synthetic data generation): ChatGPT can generate synthetic data for software testing, which can save time and resources in SE research.
M2 (Domain-specific fine-tuning): ChatGPT can be finetuned for specific SE domains, such as requirements engineering and software quality assurance.
M3 (Usability simulation evaluation): ChatGPT can simulate user interactions with software systems, allowing researchers to test and evaluate software usability and user experience.
M4 (Generate requirements description): ChatGPT can generate natural language descriptions of software requirements, making it easier for stakeholders to understand the software system.
M5 (Documentation generation improvement): ChatGPT can be used to generate code comments and documentation, which can improve software quality and maintainability.
M6 (Bug reporting assistance): ChatGPT can assist in software bug reporting by generating high-quality, natural language bug descriptions.
M7 (Test case generation): ChatGPT can be used to generate test cases, enabling researchers to evaluate software performance and identify potential issues.
M8 (Automated code generation): ChatGPT can help in automated software code generation, making it easier to build software systems and reducing the potential for human errors.
M9 (Summarize code): ChatGPT can generate natural language summaries of software code, making it easier for developers to understand the codebase.
M10 (Maintenance assistance): ChatGPT can help in software maintenance by generating high-quality documentation and code comments, making it easier for developers to maintain and update the software system. M11 (Performance explanation): ChatGPT can generate natural language explanations of software performance issues, making it easier for developers to diagnose and fix software bugs.
M12 (Generate automated report): ChatGPT can be used to generate automated software reports, providing stakeholders with up-to-date information on software performance and quality.
M13 (Testing assistance): ChatGPT can assist in software testing by generating test scenarios and test data, enabling researchers to evaluate software functionality and performance.
M14 (Develop user manual): ChatGPT can generate natural language user manuals and documentation, making it easier for end-users to understand and use the software system.
Thus, ChatGPT provides a powerful and flexible tool for SE research, offering numerous motivators for researchers to incorporate it into their projects. By leveraging the power of ChatGPT, researchers can advance state-of-the-art research in SE and create more sophisticated and effective software systems.
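As a concrete illustration of motivators such as M7 (test case generation) and M13 (testing assistance), the sketch below queries the OpenAI chat API for draft unit tests. The model name, prompt wording, and the classic pre-1.0 openai client interface are assumptions for illustration (newer library versions expose a different client object), and any generated tests should be reviewed by a human, in line with the demotivators discussed next.

```python
import openai  # classic openai-python (<1.0) interface; newer versions use OpenAI()

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_test_cases(function_source: str) -> str:
    """Ask the model to draft pytest unit tests for a given function."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are an assistant that writes software tests."},
            {"role": "user",
             "content": "Write pytest unit tests, including edge cases, "
                        "for the following function:\n" + function_source},
        ],
        temperature=0.2,  # low temperature for more reproducible output
    )
    return response["choices"][0]["message"]["content"]

# draft_tests = generate_test_cases("def add(a, b):\n    return a + b")
```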
2) Demotivators of using ChatGPT in SE research: While ChatGPT has several motivators for its use in SE research, there are also some demotivators to consider. ChatGPT may not always generate accurate or relevant responses, requires significant training data, and may not be suitable for certain SE tasks [42]. Its responses can be repetitive, too complex, too simple for the intended audience, and may not align with industry or domain-specific conventions. ChatGPT's responses may also reflect bias in the training data and require manual editing or correction, reducing the efficiency gains provided by automation and natural language processing [10], [43]. Therefore, after reviewing the relevant literature studies [10], [34], [38], [44]- [52], we uncovered the following demotivators (contextually, factors that can potentially limit the effectiveness of utilizing ChatGPT), which must be taken into account when using ChatGPT in SE research.
DM1 (Model limitations acknowledged):
ChatGPT is not a perfect language model, and its responses may not always be accurate or relevant to the task at hand.
DM2 (Data-intensive fine-tuning): ChatGPT requires a significant amount of training data to fine-tune it for specific SE tasks, which can be time-consuming and resource-intensive.
DM3 (Limited task scope): ChatGPT may not be suitable for specific SE tasks requiring specialized knowledge or expertise outside natural language processing. DM4 (Repetitive response issue): ChatGPT's responses can be repetitive, which may not provide sufficient variety in generated data.
DM5 (Response complexity mismatch): ChatGPT may generate responses that are too complex or too simple for the intended audience, making it difficult to communicate with stakeholders or end-users.
DM6 (Convention misalignment issue): ChatGPT's responses may not always align with industry or domain-specific conventions, leading to inconsistencies and inaccuracies in generated data.
DM7 (Bias reflection issue): ChatGPT may generate responses that are biased or reflect the bias in the training data, leading to ethical concerns and potential negative impacts on software development.
DM8 (Multilingual limitations identified):
ChatGPT is able to generate responses only in 50 languages [53], which may limit its usability in international software development projects.
DM9 (Integration challenges anticipated): ChatGPT may generate responses in a format or structure incompatible with certain software development tools or platforms used in an organization's existing workflows. This incompatibility may lead to technical difficulties in integrating ChatGPT into the development process, resulting in delays and additional costs. For example, suppose ChatGPT generates code snippets in a programming language that is not supported by the development platform. In that case, developers may need to convert the code to the correct format manually. As a result, careful consideration and testing are necessary when integrating Chat-GPT into an organization's software development workflow.
DM10 (Misalignment Conflicts):
ChatGPT's responses may not always match the preferences or expectations of stakeholders involved in a software development project. For example, a stakeholder may have a particular vision for a software application user interface or functionality, but ChatGPT's responses may suggest something different. This misalignment may lead to disagreements and conflicts among the project team and stakeholders, impacting the project's progress and success. Project teams need to consider the input of all stakeholders and carefully evaluate ChatGPT's suggestions to ensure they align with the project's goals and objectives. Additionally, stakeholders should be educated on the capabilities and limitations of ChatGPT to manage their expectations and ensure they are not relying solely on the tool for decision-making.
DM11 (Unrealistic responses):
ChatGPT's responses may not always align with the technical constraints of the software development environment, leading to unrealistic or impractical suggestions or recommendations.
DM12 (Demand manual editing): ChatGPT's responses may require significant manual editing or correction, reducing the efficiency gains provided by automation and natural language processing.
Ultimately, the limitations of ChatGPT in SE research include its dependence on large amounts of training data, potential inaccuracies and biases in generated responses, limitations in language and compatibility with specific tools and platforms, potential conflicts with stakeholder preferences, and the need for significant manual editing or correction.
3) Ethical principles of using ChatGPT in SE research: Based on the reviewed literature, the following ethical principles were identified.
P1 (Bias): ChatGPT's responses may reflect the biases present in the training data, which can perpetuate existing biases and lead to unfair or discriminatory outcomes.
P2 (Privacy): ChatGPT may generate responses that contain sensitive or personally identifiable information, potentially violating individuals' privacy rights.
P3 (Accountability): ChatGPT's responses may not always be transparent or explainable, making it difficult to determine who is responsible for errors or biases in generated data.
P4 (Reliability): ChatGPT may generate inaccurate or misleading responses, potentially negatively impacting software development or end-users.
P5 (Intellectual property): ChatGPT may generate responses that infringe upon intellectual property rights, such as copyright or patent law.
P6 (Security): ChatGPT's responses may contain sensitive information that could be exploited by malicious actors, leading to potential security breaches or cyberattacks.
P7 (Manipulation): ChatGPT may be used to generate fake news, propaganda, or other forms of misinformation, leading to potential harm to individuals or society as a whole.
P8 (Legal compliance): ChatGPT's responses may violate legal and regulatory requirements, such as data protection laws or accessibility standards.
P9 (Ethical governance): The use of ChatGPT in SE research requires appropriate ethical governance, including informed consent, privacy protection, and transparency.
4) Relationship of Motivators and Demotivators with Ethical Principles: As a preliminary step, we develop a taxonomy by considering the identified motivators, demotivators, and their possible impacts on ChatGPT ethical principles. Motivators are factors that encourage or inspire researchers to consider ChatGPT as a tool and to take certain actions, while demotivators discourage or hinder them from taking those actions. In this context, motivators and demotivators refer to factors that influence the use of ChatGPT in SE research. The taxonomy considers the possible impact of these motivators and demotivators on the ethical aspects of using ChatGPT in SE research. Ethical aspects (principles) refer to the moral standards that guide behavior and decision-making in SE research. The proposed taxonomy (see Figure 2) provides a roadmap for academic researchers to evaluate both the motivators (positive factors) and demotivators (negative factors) related to the ethical aspects of using ChatGPT in SE research. By considering these factors, researchers can gain a comprehensive understanding of the ethical considerations associated with using ChatGPT in SE research. This taxonomy can serve as a valuable tool for researchers to ensure that they use ChatGPT ethically and responsibly. It can also contribute to developing ethical guidelines and best practices for using ChatGPT in SE research.
B. Empirical Study Findings
The results of the questionnaire survey study are presented in this section. Specifically, we cover (i) the demographic details of survey participants and (ii) survey participants' perceptions of motivators, demotivators, and ethical principles of using ChatGPT in SE research.
1) Demographic Details: We conducted a frequency analysis to systematically organize the descriptive data, which is well-suited for examining a group of variables for numeric and ordinal data. Our study included 113 respondents from 19 countries across 5 continents, representing 9 professional roles, 15 distinct research domains, and 3 different types of research (see Figure 3(a-d)).
Through applying thematic mapping, we categorized the respondents' roles into nine different categories (see Figure 3(b)). The results indicate that 20% of the respondents were primarily distributed between research assistants and research directors. Additionally, the participants' research teams are conceptually organized across 15 key research domains (see Figure 3(c)). We found that 13% of participants were engaged in telecommunications research, while 12% worked within the healthcare context (see Figure 3(c)).
Regarding demographic information, 69% of the survey participants were male (see Figure 3(e)). As measured by the number of researchers, the research team size predominantly ranges from 11 to 20, accounting for 32% of total responses (see Figure 3(f)). Among all respondents, the majority (35%) reported having 3-5 years of research experience (see Figure 3(g)).
2) Empirical Insights on Ethical Principles and Related Motivators and Demotivators: The survey responses are classified as average agree, neutral and average disagree (see Figure 4(a-c)). We observed that approximately 80% of the respondents positively confirmed the findings of the literature survey, i.e., the ethical principles of using ChatGPT in SE research and their relevant motivators and demotivators.
The frequency analysis results show that a significant majority (86%) of the survey participants consider P1 (Bias) an important ethical principle when using ChatGPT in SE research (Figure 4(a)). The high percentage of participants who emphasize the importance of addressing bias in ChatGPT demonstrates a strong consensus among the SE research community. Addressing bias in AI language models like ChatGPT is crucial for ensuring reliable, valid, and generalizable SE research outcomes while promoting more inclusive, fair, and trustworthy AI tools [57], [58]. Furthermore, P2 (Privacy) and P14 (Fairness) are considered the second most important (85%) principles for the ethical alignment of ChatGPT in SE research. Privacy and fairness are considered vital to foster responsible, transparent, and accountable research while addressing the potential consequences of inadequate attention to data privacy and equitable treatment of individuals and groups [59], [60].
P7 (Manipulation) is considered the least significant principle (72%) by the survey participants (Figure 4(a)), potentially due to the nature of SE research and the context of ChatGPT usage. SE research focuses on software development and processes, which may make manipulation less relevant or critical compared to principles like bias, privacy, and fairness [8], [61]. ChatGPT's usage in SE research may not involve as much end-user interaction as other AI applications, reducing the perceived potential for manipulation. Despite this, researchers should remain vigilant and address any manipulative effects or unintended consequences associated with ChatGPT use [8], [62].
The survey results reveal that an average of 75% of participants confirmed the significance of the identified motivators in supporting the ethical principles of using ChatGPT in SE research (see Figure 4(b)). The highest-frequency motivators, M9 (Summarize code, 90%), M4 (Generate requirements description, 86%), and M1 (Synthetic data generation, 85%), demonstrate the potential value and usefulness of ChatGPT in SE research when implemented responsibly. M9 streamlines code comprehension and maintenance, saving time and effort while maintaining code accuracy and efficiency [63]. M4 addresses the need for clear requirements descriptions, as ChatGPT can generate precise and coherent descriptions, reducing miscommunication and fostering better understanding among stakeholders [64]. M1 enhances the research process by providing realistic, anonymized data for extensive testing and validation without compromising privacy and ethics [65]. These motivators emphasize the potential benefits of responsibly incorporating ChatGPT in SE research to improve various aspects of the software development life cycle.
Additionally, 80% of participants on average believe that the identified demotivators could negatively impact the ethical principles of using ChatGPT in SE research (see Figure 4(c)). Specifically, DM6 (Convention misalignment issue), DM1 (Model limitations acknowledged), and DM10 (Misalignment conflicts) were considered significant by 87%, 86%, and 85% of survey participants, respectively. DM6 emphasizes the need to address inconsistencies in ChatGPT's output in terms of coding conventions or best practices, which can hinder the development process and affect code quality [36], [66]. DM1 emphasizes the importance of acknowledging ChatGPT's limitations, such as biases or inaccuracies, to prevent over-reliance on the model and maintain ethical research standards [67]. DM10 stresses the need to resolve conflicts between ChatGPT's output and a project's goals, values, or ethical principles to maintain trust and collaboration among stakeholders [68]. Addressing these demotivators is crucial for ensuring the ethical and responsible use of AI language models such as ChatGPT in SE research.
C. ISM Based Modeling of Ethical Principles
By employing the ISM approach to categorize the identified ethical principles into five distinct levels, researchers and organizations can gain a deeper understanding of the relationships between these principles. This structure promotes more responsible, ethical, and socially conscious use of ChatGPT in SE research. The ISM analysis was conducted using the experts' opinions collected through a questionnaire survey (as discussed in Section II-C). The collected responses were summarized using the following symbols, which indicate the direction of the relationship between a pair of ChatGPT principles (principle m and principle n).
• "V" indicates the relationship of principle m to n. • "A" indicates the relationship of principle n to m. • "X" when both principals m and n reach each other. • "O" presents the situation when there is no relationship between principal m and principal n.
Using the experts' opinions and the above-discussed symbols, we developed the Structural Self-Interaction Matrix (SSIM) as presented in Table I. The data shown in Table I indicates no relationship between P1 (Bias) and P17 (Ethical implications of automation), represented by an 'O'. In contrast, a significant relationship is observed between P13 (Informed consent) and P1 (Bias), where 'V' signifies this relationship. Notably, we discovered no instances of 'A' or 'X' type relationships among the principles.
In the next step of the ISM analysis, the SSIM data are transformed into a binary reachability matrix. This entails converting the values 'V', 'A', 'X', and 'O' into binary digits (0, 1) using the following conversion rules: if the (m, n) entry of the SSIM is V, the (m, n) entry of the reachability matrix is set to 1 and the (n, m) entry to 0; if it is A, the (m, n) entry is set to 0 and the (n, m) entry to 1; if it is X, both the (m, n) and (n, m) entries are set to 1; and if it is O, both entries are set to 0. These rules ensure that the binary reachability matrix accurately reflects the relationships outlined in the SSIM. Once this transformation is completed, we enhance the reachability matrix by applying a transitivity check, as discussed in Section II-C. This introduces a '1*' value to capture transitivity, helping to fill potential gaps in the expert data collected during the SSIM development stage. The application of this transitivity check is further detailed in Table II. This rigorous process ensures our ISM analysis is robust, comprehensive, and reflective of the expert insights gathered.
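To make these conversion rules concrete, the following minimal Python sketch (an illustration on a toy matrix, not the authors' implementation) builds the initial reachability matrix from an upper-triangular SSIM and then applies the transitivity check via Warshall's algorithm; entries added by transitivity, which the paper marks as '1*', simply become 1 here.

```python
import numpy as np

def ssim_to_reachability(ssim):
    """Convert an upper-triangular SSIM (entries 'V', 'A', 'X', 'O')
    into the initial binary reachability matrix."""
    n = len(ssim)
    r = np.eye(n, dtype=int)              # each principle reaches itself
    for i in range(n):
        for j in range(i + 1, n):
            s = ssim[i][j]
            if s == 'V':                  # principle i leads to principle j
                r[i, j] = 1
            elif s == 'A':                # principle j leads to principle i
                r[j, i] = 1
            elif s == 'X':                # mutual relationship
                r[i, j] = r[j, i] = 1
            # 'O': no relationship; both entries stay 0
    return r

def transitive_closure(r):
    """Transitivity check (Warshall): if i reaches k and k reaches j,
    then i reaches j (the paper's '1*' entries become plain 1s)."""
    r = r.copy()
    n = len(r)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if r[i, k] and r[k, j]:
                    r[i, j] = 1
    return r

# toy 3-principle example: P1 V P2, P1 O P3, P2 V P3
ssim = [[None, 'V', 'O'],
        [None, None, 'V'],
        [None, None, None]]
print(transitive_closure(ssim_to_reachability(ssim)))
```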
Warfield [69] stated that the reachability set consists of the variable itself and the other variables it may help to achieve, whereas the antecedent set consists of the variable itself and the other variables that may help in achieving it. The intersection of these sets is then derived for all variables. Variables whose reachability and intersection sets are identical are placed at the top level of the ISM hierarchy; these top-tier variables do not contribute to achieving any other variables above their level. Once such an apex element is identified, it is isolated from the rest, and the methodology is repeated to pinpoint the variables for each subsequent level until every variable's level is determined. These levels are instrumental in constructing the digraph and the ISM model. In our study, we have identified seventeen ethical principles for using ChatGPT in SE research. The finalized levels, as determined by our analysis, are illustrated in Figure 5. A comprehensive analysis is provided at the following link 3 .
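This level-partitioning procedure can be sketched in a few lines of Python (again purely illustrative, taking a final reachability matrix such as the toy closure from the previous sketch as input):

```python
def ism_levels(reach):
    """Iteratively extract ISM levels from a final reachability matrix
    (a list of 0/1 rows). Variables whose reachability set equals the
    intersection of their reachability and antecedent sets form the
    current (top) level and are then removed."""
    n = len(reach)
    remaining = set(range(n))
    levels = []
    while remaining:
        level = []
        for v in remaining:
            reach_set = {j for j in remaining if reach[v][j]}
            ante_set = {j for j in remaining if reach[j][v]}
            if reach_set == reach_set & ante_set:
                level.append(v)
        levels.append(sorted(level))
        remaining -= set(level)
    return levels

# toy closure from the previous sketch: P1 -> P2 -> P3 (plus transitivity)
reach = [[1, 1, 1],
         [0, 1, 1],
         [0, 0, 1]]
print(ism_levels(reach))  # [[2], [1], [0]]: P3 extracted first, P1 last
```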
Using large language model systems like ChatGPT requires a multi-level approach to ensure ethical research and responsible innovation. Level 1 (Foundations of Ethical Research) focuses on P11 (Ethical decision-making) and P13 (Informed consent), safeguarding the rights and dignity of participants. Level 4 (Societal Impact & Responsibility) highlights the importance of considering broader societal implications, such as misinformation, biased outcomes, and potentially harmful applications. Lastly, Level 5 (Accountability) underscores the significance of fostering a collaborative environment and shared responsibility among researchers, organizations, and AI systems, bolstering P15 (Transparency) and P10 (Trust) among stakeholders. By addressing these levels, researchers can ensure responsible development, deployment, and use of ChatGPT while upholding ethical standards and safeguarding stakeholder interests.
D. MICMAC analysis
MICMAC is an abbreviation of cross-impact matrix multiplication applied to classification. The MICMAC analysis assists in examining the key principles that drive the ethical aspects of using ChatGPT in SE research. According to Attri et al. [32], the MICMAC approach is an analysis used to examine the driving power and dependence power of the principles. The principles are classified into four clusters based on their driving and dependence power.
• Autonomous cluster: the principles belonging to this cluster have weak driving and dependence power. They are mostly disconnected from the ethical scope due to weak links; hence, these principles have a weak influence on the whole system (the other principles).
• Linkage cluster: the principles belonging to this cluster have strong driving and dependence power and affect other principles due to strong linkage.
• Dependent cluster: the principles belonging to this cluster have strong dependence power but weak driving power.
• Independent cluster: the principles belonging to this cluster have weak dependence power but strong driving power; they are also known as key principles (see the sketch after this list).
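To illustrate the four-quadrant classification, the following Python sketch (an assumption-laden illustration, not the classification procedure of Kannan et al. [27] that the study actually uses) derives driving power as a row sum and dependence power as a column sum of the final reachability matrix, then splits the quadrants at a hypothetical midpoint cut-off.

```python
def micmac_clusters(reach, labels):
    """Assign each principle to a MICMAC quadrant based on its
    driving power (row sum) and dependence power (column sum)."""
    n = len(reach)
    driving = [sum(reach[i]) for i in range(n)]
    dependence = [sum(reach[i][j] for i in range(n)) for j in range(n)]
    cut = n / 2  # hypothetical midpoint between "weak" and "strong"
    clusters = {"independent": [], "linkage": [], "dependent": [], "autonomous": []}
    for i in range(n):
        strong_drive = driving[i] > cut
        strong_dep = dependence[i] > cut
        if strong_drive and strong_dep:
            clusters["linkage"].append(labels[i])
        elif strong_drive:
            clusters["independent"].append(labels[i])
        elif strong_dep:
            clusters["dependent"].append(labels[i])
        else:
            clusters["autonomous"].append(labels[i])
    return clusters

reach = [[1, 1, 1],
         [0, 1, 1],
         [0, 0, 1]]
print(micmac_clusters(reach, ["P1", "P2", "P3"]))
# toy output: P1 independent, P2 linkage, P3 dependent
```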
After developing the hierarchical ISM model for the ethical principles of ChatGPT using the ISM analysis, we conducted a MICMAC analysis based on the conical matrix provided at the following link: https://tinyurl.com/39k3xhex. We employed the classification approach proposed by Kannan et al. [27] for the MICMAC-based categorization and present the results in Figure 6. We identified and organized the ethical principles for conducting SE research using ChatGPT into four distinct clusters, as determined by the MICMAC analysis.
First, the independent cluster includes principles such as addressing P1 (Bias), ensuring P2 (Privacy), maintaining P4 (Reliability), implementing P6 (Security), and obtaining P13 (Informed consent). These principles form the foundation of ethical research practice and influence other aspects of ChatGPT development, characterized by weak dependence power and strong driving power. Second, the dependent principles, such as P10 (Trust), P11 (Ethical decision-making), P14 (Fairness), P15 (Transparency), and P16 (Long-term consequences), signify the outcomes of ethical research. These principles serve as indicators of effective practice and success measurement in SE research using ChatGPT, having strong dependence power but weak driving power.
Third, the linkage variables connect independent and dependent principles, including P3 (Accountability), P5 (Intellectual property), P7 (Manipulation), P9 (Ethical governance) and P12 (Social responsibility). This cluster ensures a comprehensive and ethically sound approach to ChatGPT development, with principles exhibiting strong driving and dependence power due to their robust linkage. Finally, autonomous variables, like P8 (Legal compliance) and P17 (Ethical implications of automation), hold unique positions within the analysis. These principles play crucial roles in understanding the broader implications of ChatGPT development and fostering a responsible, sustainable AI ecosystem that aligns with societal expectations and legal requirements. However, with weak driving and dependence power, these principles are mostly disconnected from the ethical scope and have a minor impact on the overall system.
In conclusion, our MICMAC analysis evaluates the relationships between ethical principles, taking into account their driving and dependence power. This insight enables the research community to understand the varying dynamics of these principles, such as those with driving power over others, those that are independent yet influential, those that are completely autonomous without any driving or dependence power, and those wholly dependent on other principles. By recognizing these distinct characteristics, researchers can devise more effective strategies for the ethical adoption of ChatGPT in SE research. The findings of this study can be leveraged by the research community to establish best practices for adopting an ethically aligned ChatGPT within the realm of SE research. Here are the key implications derived from the study's findings:
V. THREATS TO VALIDITY
Several threats could potentially affect the validity of the findings of this study. Accordingly, we identified and categorized potential threats, aligning them with internal validity, external validity, construct validity, and conclusion validity as per the guidelines defined by Easterbrook et al. [70].
A. Internal Validity
Internal validity is the degree to which the results of observation -namely, the causal relationships -are trustworthy and not influenced by other factors or biases. The potential internal validity threat in this study is the understandability and interpretation of the survey content. The survey respondents may have a different understanding of the survey questions, which could bias the responses. To mitigate this threat, we piloted the instrument, seeking feedback from SE researchers to enhance the clarity and readability of the survey content prior to its final distribution.
B. External Validity
External validity is the extent to which the results of a study can be generalized or applied to other situations, populations, or settings. In this study, the questionnaire data were collected from 113 researchers, which may not be representative of the broader SE research community. This could limit the generalizability of the findings. Nonetheless, we gathered 113 valid responses from 19 countries across five different continents. The survey participants had a diverse range of experience, fulfilled various roles in different projects, and worked in research teams of differing sizes (see Figure 3). We acknowledge that the study findings may not generalize to a larger scale; however, based on the detailed demographics of the survey participants, the overall results could be generalized to some extent.
C. Construct Validity
Construct validity refers to the degree to which a test or experiment measures what it claims to be measuring. In this study, the constructs such as "motivators," "demotivators," and "ethical principles" may not have been defined clearly enough, leading to potential misinterpretation. However, we mitigated this threat by defining and elaborating on the mentioned constructs based on the literature survey. The identified "motivators", "demotivators", and "ethical principles" are comprehensively discussed in Section II-A. Moreover, the survey questionnaire was piloted based on the expert's opinion to improve the interpretations of the survey variables (constructs).
D. Conclusion Validity
Conclusion validity is concerned with the relationship between the treatment and the outcome and whether any observed effect in the data is real or not. One possible threat to the conclusion validity is that, with only 113 respondents, the statistical power may be insufficient to detect meaningful differences or relationships. However, based on the existing relevant studies and the novelty of the research field, the sample size is sufficient to draw the study's conclusions. Moreover, we plan to extend this study by widening the pool of potential respondents, extending the data collection period, and using different methods to reach the potential population (see Section VI-B). Finally, all the authors were invited to participate in the brainstorming sessions to collaboratively dissect the primary findings and formulate definitive conclusions.
VI. CONCLUSIONS AND FUTURE PLANS
We will now present a summary of the conclusions drawn from the study findings, along with a detailed roadmap outlining potential avenues for future exploration.
A. Conclusions
ChatGPT enhances efficiency in knowledge extraction and collaboration within SE research. Its capacity to produce realistic and contextually appropriate language renders it an attractive tool for use in this research field. However, ethical concerns such as plagiarism, privacy, data security, and the risk of generating biased or harmful data must be addressed. This study explores the motivators, demotivators, and ethical principles associated with using ChatGPT in SE research.
We conducted a literature survey and identified 17 ethical principles, along with 14 corresponding motivators and 12 demotivators, for using ChatGPT in SE research, as detailed in Section III-A. These motivators and demotivators were subsequently mapped to the 17 identified principles. The principles highlight crucial areas that the SE research community must consider in order to conduct ethically responsible research. The associated motivators represent factors that can support adherence to these principles. Conversely, demotivators are factors that may obstruct the consideration of ethical principles when using ChatGPT in SE research.
To empirically evaluate the significance of the identified principles and their associated motivators and demotivators, we conducted a questionnaire survey involving SE researchers. The frequency analysis highlights a strong consensus among survey participants, with 86% identifying P1 (Bias) as a critical ethical principle in using ChatGPT for SE research. Furthermore, 85% deemed P2 (Privacy) and P14 (Fairness) as equally important principles. Interestingly, P7 (Manipulation) was considered less significant at 72%. The results also demonstrated that an average of 75% of participants acknowledged the importance of specific motivators, such as M9 (Summarizing code), M4 (Generating requirements descriptions), and M1 (Synthetic data generation), in supporting ethical principles. Meanwhile, an average of 80% agreed that identified demotivators, including DM6 (Convention misalignment issues), DM1 (Model limitations acknowledged), and DM10 (Misalignment conflicts), could negatively impact ethical principles. Addressing these concerns is important for ensuring the responsible use of ChatGPT in SE research.
We used the Interpretive Structural Modeling (ISM) approach to create level-based decision models and performed the MICMAC analysis to develop cluster-based decision models. The ISM-based decision model comprises five levels: Level 1 (Foundations of Ethical Research), Level 2 (Research Design & Methodology), Level 3 (Governance & Compliance), Level 4 (Societal Impact & Responsibility), and Level 5 (Accountability). These levels address various aspects, from ethical decision-making and informed consent to broader societal implications and shared responsibility among stakeholders. This level-based decision model illustrates the relationships among different principles, guiding the research community in addressing the ethical principles of using ChatGPT in SE research while considering the dependence and driving power of these principles on others.
The MICMAC analysis evaluated the relationships between ethical principles, categorizing them into four distinct clusters: independent, dependent, linkage, and autonomous. The MICMAC-based decision model assists the SE research community in comprehending the diverse dynamics of the identified principles, including those that exert driving power over others, those that are independent yet carry significant influence, those that are completely autonomous devoid of any driving or dependence power, and those entirely reliant on other principles. By acknowledging these unique characteristics, researchers can formulate more effective strategies for the ethical implementation of ChatGPT in SE research.
B. Future Plans
The ultimate aim of this research project is the development of comprehensive guidelines for using ChatGPT in SE research. This study presents a taxonomy (Figure 2), developed based on the preliminarily investigated motivators, demotivators, and their possible impact on ChatGPT ethics aspects. The next steps are conducting a comprehensive multivocal literature review and a questionnaire survey study to refine the findings and identify additional motivators and demotivators across the ethical aspects of using ChatGPT in SE research. Below are the steps we will follow for the development of the guidelines:
1) Perform an extensive multivocal literature review to identify a broader range of motivators, demotivators, and their potential impact on ethical aspects.
2) Design a survey questionnaire targeting SE researchers to validate the findings of the multivocal literature review and explore additional motivators, demotivators, and ethical aspects of using ChatGPT in SE research.
3) Comparatively analyze the findings of the multivocal literature review and the questionnaire survey. Identify common themes, trends, and patterns emerging from these data sources to create a taxonomy of motivators, demotivators, and their relationship to the ethical aspects.
4) Draft a set of guidelines based on the revised taxonomy addressing the ethical concerns of using ChatGPT in SE research. The guidelines should consider the motivators, demotivators, and their possible impact on ethical principles.
5) Invite AI ethics, software engineering, and ChatGPT application experts to review the draft guidelines. Collect their feedback and insights to refine and validate the guidelines, and incorporate their feedback to ensure the guidelines are robust, relevant, and effective in addressing ethical concerns.
6) Conduct academic case studies to evaluate the set of guidelines in SE research and incorporate potential changes based on the case studies' findings.
7) Disseminate the final guidelines to the SE research community through conferences, workshops, and academic journals. Engage with the community to promote adoption and use of the guidelines in practice.
8) Regularly review and update the guidelines in light of new developments in ChatGPT technology, ethical concerns, and feedback from the research community. Ensure these guidelines remain relevant and effective in addressing the ethical challenges of using ChatGPT in SE research. | 2023-06-14T01:15:56.711Z | 2023-06-13T00:00:00.000 | {
"year": 2023,
"sha1": "0a5f818915de233b1ce8262a47e78290c84af866",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0a5f818915de233b1ce8262a47e78290c84af866",
"s2fieldsofstudy": [
"Computer Science",
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
201970122 | pes2o/s2orc | v3-fos-license | NKAP functions as an oncogene in Ewing sarcoma cells partly through the AKT signaling pathway
NF-κB activating protein (NKAP) is a highly conserved protein involved in transcriptional repression, immune cell development, maturation, acquisition of functional competency and maintenance of hematopoiesis. In the present study, the function of NKAP in the progress of Ewing sarcoma (ES) was investigated. It was identified that NKAP is highly expressed in ES cells when compared with human mesenchymal stem cells (MSCs). NKAP was knocked-down in human ES cell lines A673 and RD-ES using small interfering (si)RNA transfection. The effectiveness of transfection was then verified using reverse transcription-quantitative PCR and western blot analysis to determine mRNA and protein levels, respectively. The results of the proliferation assays indicated that the knockdown of NKAP inhibited the proliferation and clonogenic abilities of human ES cells. Transwell assays further indicated that cell invasion and migration were significantly inhibited by NKAP knockdown, which may be mediated by downregulation of matrix metalloproteinase (MMP)-9 activity. Gain-of-function analysis also demonstrated the positive role NKAP played in the proliferation, invasion and migration of ES cells. Cell apoptosis was evaluated by flow cytometry, which identified that apoptotic cells were significantly increased when NKAP was silenced. In addition, downregulation of NKAP increased the levels of Bax and cleaved caspase 3, but decreased Bcl2 levels, which suggested that the mitochondrial apoptosis pathway was activated. To explore the action mechanism of NKAP, the status of the AKT signaling pathway in NKAP-silenced A673 and RD-ES cells was investigated. Results indicated that NKAP knockdown led to decreased phosphorylation of AKT and expression of cyclin D1, a down-stream effector of the AKT signaling pathway, suggesting inactivation of the AKT signaling pathway. In conclusion, the present study revealed that NKAP promoted the proliferation, migration and invasion of ES cells, at least partly, through the AKT signaling pathway, providing new approaches for the therapeutic application of NKAP in ES.
Introduction
Ewing sarcoma (ES) is a highly aggressive, small, round cell, malignant neoplasm of bone and soft tissue that typically manifests in children and young adults (1). It demonstrates an aggressive clinical behavior with a high rate of local recurrence and distant metastasis, with ~25% of patients presenting metastases of the bone marrow and lungs at the time of diagnosis, which contributes to the high mortality rate (2). Primary treatment options for ES are traditional methods such as surgical resection, radiotherapy and chemotherapy, curing ~60% of patients with localized disease (3). By contrast, the overall survival rate for patients with distant metastases treated with traditional methods is <30% (4,5). Therefore, the pathogenesis of ES, especially the molecular mechanisms involved in metastasis, needs to be elucidated to produce novel treatments.
NF-κB activating protein (NKAP) was originally identified as a nuclear localized protein that promotes tumor necrosis factor- and interleukin-1-induced NF-κB activation (6). Its protein structure contains three domains: serine and arginine repeats at the N-terminus (RS domain), followed by a basic domain and a C-terminal DUF926 (domain with unknown function) (7). More specifically, NKAP has been identified to interact with RNA-binding proteins through its RS domain in order to regulate RNA splicing and processing, while its basic domain is essential for nuclear localization (7). Functional investigations have found that NKAP is involved in the development, maturation and acquisition of functional competencies of multiple immune cells, including T cells, invariant natural killer T (iNKT) cells and regulatory T cells (8-12). Furthermore, two other studies have demonstrated that NKAP is required for the maintenance and survival of adult hematopoietic stem cells and may serve an important role in mouse neurogenesis (13,14). However, whether NKAP plays a role in tumor progression remains unclear. Li et al (15) reported that SUMOylated NKAP is required for chromosome alignment in mitosis, and that its dysregulation causes chromosomal instability, potentially contributing to tumorigenesis. Liu et al (16) reported that NKAP functions as an oncogene and that its expression is induced by CoCl2 treatment in breast cancer via the AKT/mTOR signaling pathway.
In the present study, the role of NKAP in the proliferation, migration and invasion of ES cells was investigated using RNA interference technology and pcDNA transfection. In addition, the potential action mechanisms of NKAP were determined using signaling pathway investigation. The present study identified that NKAP has the potential to serve as a promising therapeutic target for ES.
Cell transfection. In order to knock down or overexpress NKAP, ES cells were transfected with either 5 nmol of NKAP-targeted siRNA (siNKAP) or pcDNA-NKAP (Shanghai GeneChem Co., Ltd.) when cells reached 70% confluence, using Lipofectamine™ 2000 (Invitrogen; Thermo Fisher Scientific, Inc.) for 5 h in DMEM. Cells were then transferred into DMEM supplemented with 10% FBS and cultured normally for 24 h before subsequent experimentation. The sequences of the siRNAs were as follows: siNKAP, 5'-GAGAAGAGAGCCCTTGCAT-3'; non-targeting negative control siRNA (siNC; Shanghai GeneChem Co., Ltd.), 5'-UUCUCCGAACGUGUCACGUTT-3'. siNC was transfected into ES cells as the control for siNKAP.
Reverse transcription-quantitative PCR (RT-qPCR).
Following transfection for 24 h, total RNA was isolated from the A673 and RD-ES cells using the Ultrapure RNA kit (Beijing CWBio). A total of 1 µg of RNA was reverse transcribed into cDNA with oligo (dT) primers using the HiFiScript cDNA Synthesis kit (Beijing CWBio). The reaction mixture was incubated at 42˚C for 30-50 min, and then at 85˚C for 5 min. Then, 1 ng of first-strand cDNA was used as a template for qPCR using the SYBR Premix Ex Taq II kit (Takara Bio, Inc.; 95˚C for 10 min, followed by 40 cycles of 95˚C for 15 sec and 60˚C for 1 min). The relative expression of the target gene was analyzed using the 2^-ΔΔCt method (17). β-actin was used as an internal control. Sequences of the primers were as follows: NKAP forward, 5'-CGG CAG AAG AGA TTA AGT GAG-3' and reverse, 5'-CGT TCA TAC CCC CAG AGG TTT AG-3', and β-actin forward, 5'-CCC GAG CCG TGT TTC CT-3' and reverse, 5'-GTC CCA GTT GGT GAC GAT GC-3'.
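As a worked illustration of the 2^-ΔΔCt calculation (with made-up Ct values, not data from the present study):

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Livak 2^-ddCt method: normalize the target gene to the reference
    gene (here beta-actin), then to the control sample."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# if the NKAP Ct rises from 24.0 to 26.5 after siNKAP while beta-actin
# stays at 17.0, expression falls to 2^(-2.5), about 0.18 of the control
print(relative_expression(26.5, 17.0, 24.0, 17.0))
```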
Proliferation assays. For the Cell Counting Kit-8 (CCK-8) assay, A673 and RD-ES cells transfected with siNKAP or pcDNA-NKAP were seeded into a 96-well plate at a density of 3,000 cells per well, with three wells per group. Cells were cultured for 24, 48 and 72 h and then treated with 10 µl of CCK-8 solution (Dojindo Molecular Technologies, Inc.) at 37˚C for 2 h. Optical density values at 450 nm were detected using a microplate reader (iMark; Bio-Rad Laboratories, Inc.).
For the colony formation assay, 500 cells transfected with siRNA for 24 h were seeded in 6-cm petri dishes and cultured in DMEM medium supplemented with 10% FBS at 37˚C for two weeks. The colonies were thereafter dried in the air for 1 h prior to fixing with 4% paraformaldehyde at room temperature for 15 min and staining with 0.1% crystal violet solution at room temperature for 20 min. The number of colonies was subsequently counted using a light microscope (magnification, x4).
Transwell invasion and migration assays. Transwell chambers (pore size, 8 µm; EMD Millipore) were used to detect the invasion and migration of ES cells transfected with siNKAP or pcDNA-NKAP. For the cell invasion assay, the chambers were pre-coated with Matrigel (BD Biosciences) at 4˚C for 30 min. Following transfection, A673 or RD-ES cells were seeded in the upper chambers at a density of 1x10^5 in 200 µl of serum-free DMEM. The lower chambers were then filled with 500 µl of DMEM with 20% FBS as the chemoattractant. Following 24 h of incubation at 37˚C, the non-invaded cells on the upper Transwell membrane were removed by scraping, while the invaded cells were fixed with 4% paraformaldehyde at room temperature for 30 min and stained with 0.1% crystal violet at room temperature for 20 min. The stained cells were photographed using a light microscope at magnification x100 and counted in three random view fields.
The cell migration experiment followed the aforementioned protocol; however, no Matrigel was used in the Transwell chambers.
Gelatin zymography. The influence of NKAP knockdown on the proteolytic activity of MMP-9 was detected using gelatin zymography. First, the A673 and RD-ES cells were transfected with siNC or siNKAP for 24 h. The cells were then harvested and seeded in a 6-cm petri dish (1x10^5 cells/dish) with DMEM with 10% FBS. Following culture for 24 h at 37˚C, the supernatants were collected and centrifuged at 4˚C and 2,066 x g for 5 min to remove cell debris. Protein concentrations were quantified with the BCA protein assay kit (Sangon Biotech Co., Ltd.). A total of 20 µg of protein from each sample was then loaded into a 10% gel with 1 mg/ml gelatin A (Sigma-Aldrich; Merck KGaA) and separated by SDS-PAGE for 1.5 h. Finally, the gels were incubated with 0.1% Coomassie Brilliant Blue at room temperature for 3 h and then destained with 45% methanol and 10% (v/v) acetic acid until clear bands suggestive of gelatin digestion were present. Bands were photographed using a gel imager (Bio-Rad Laboratories, Inc.) and analyzed with Quantity One software v4.5 (Bio-Rad Laboratories, Inc.).
Flow cytometry detection of apoptosis. After transfection with siNC or siNKAP for 48 h, the apoptosis of ES cells was evaluated using flow cytometry. A total of 1x10^6 ES cells were centrifuged at 4˚C and 1,033 x g for 5 min and resuspended in 1 ml PBS. A total of 5 µl Annexin V (1 µg/ml; Aposcreen annexin v-biot; Beckman Coulter, Inc.) was then added and incubated at room temperature for 15 min. Subsequently, propidium iodide (PI; 1 µg/ml; Beckman Coulter, Inc.) was added for 5 min at room temperature. All staining incubation steps were performed in the dark. The apoptosis rate was analyzed on a flow cytometer and calculated using BD FACSDiva software v4.1 (BD Biosciences).
Statistical analysis. All experiments were repeated three times, with the data expressed as the mean ± standard deviation. Comparisons between groups were performed using Student's t-test for two groups or one-way analysis of variance followed by Tukey's post-hoc test for multiple groups. Statistical analysis was performed using GraphPad Prism 7.0 (GraphPad Software, Inc.). P<0.05 was considered to indicate statistical significance.
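A minimal sketch of the two comparison procedures described above, using hypothetical OD450 readings (SciPy/statsmodels for illustration only; the study's analyses were run in GraphPad Prism):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# hypothetical OD450 readings from three repeats per group
siNC = np.array([0.82, 0.79, 0.85])
siNKAP = np.array([0.55, 0.60, 0.58])
control = np.array([0.80, 0.83, 0.81])

# two groups: Student's t-test
t, p = stats.ttest_ind(siNC, siNKAP)
print(f"t = {t:.2f}, p = {p:.4f}")

# multiple groups: one-way ANOVA followed by Tukey's post-hoc test
f, p_anova = stats.f_oneway(siNC, siNKAP, control)
print(f"F = {f:.2f}, p = {p_anova:.4f}")
values = np.concatenate([siNC, siNKAP, control])
groups = ["siNC"] * 3 + ["siNKAP"] * 3 + ["control"] * 3
print(pairwise_tukeyhsd(values, groups))
```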
Results
Downregulation of NKAP induces growth inhibition of ES cells.
To investigate the biological functions of NKAP in ES, the expression of NKAP in five different ES cell lines (A673, SK-ES-1, RD-ES, TC-71 and A4573 cells) and MSCs was analyzed using western blot analysis. Results indicated that NKAP was markedly upregulated in ES cell lines when compared with MSCs (Fig. 1A). Due to the high expression of NKAP in A673 and RD-ES cell lines, NKAP was knocked down in human A673 and RD-ES cell lines via siRNA transfection to investigate the biological functions of NKAP. The interference effects of siNKAP were evaluated using RT-qPCR and western blot assays. As demonstrated in Fig. 1B, NKAP mRNA expression was significantly decreased in A673 and RD-ES cells transfected with siNKAP compared with siNC transfection (P<0.05). Consistent with the mRNA level, NKAP protein expression was also downregulated when NKAP was silenced (P<0.05; Fig. 1C). Effects of downregulation of NKAP on cell proliferation were investigated using CCK-8 and colony formation assays. Findings determined that siNKAP-transfected A673 and RD-ES cells exhibited significantly decreased cell proliferation and colony formation efficiency when compared with cells transfected with siNC (P<0.05; Fig. 2).
Downregulation of NKAP inhibits the invasion and migration of ES cells by regulating MMP-9 activity.
Invasion and migration are important characteristics of cancer cells, frequently initiating tumor metastasis in vivo (18). Therefore, it was investigated whether NKAP was involved in the invasion and migration of ES cells. As shown by the Transwell assays, the number of invasive and migratory A673 cells were significantly decreased when NKAP was knocked down when compared with the NC group (P<0.05; Fig. 3A and B). Similar results were observed in the RD-ES cells (P<0.05; Fig. 3C and D). In order to determine whether NKAP knockdown induced a decrease in the activity of secreted MMP-9, which might explain the reduced invasion and migration, the medium of ES cells was analyzed for MMP-9 activity by gelatin zymography. As shown in Fig. 3E and F, the degradation of secreted MMP-9 was significantly decreased in both A673 and RD-ES cells following NKAP silencing compared with the control (P<0.05).
Overexpression of NKAP promotes the viability, invasion and migration of ES cells.
Gain-of-function analysis in ES cells was used to investigate the function of NKAP more comprehensively. Overexpression efficiencies of NKAP in A673 and RD-ES cells were confirmed, as NKAP expression in NKAP-overexpressing vector-transfected cells was significantly greater compared with the control group (P<0.05; Fig. 4A). A673 and RD-ES cell viability significantly increased when NKAP was overexpressed following pcDNA transfection compared with the control at 72 h (P<0.05; Fig. 4B and C). In addition, Transwell assays determined that NKAP overexpression had the opposite effect to NKAP knockdown, promoting the invasion and migration of ES cells.
Downregulation of NKAP promotes ES cell apoptosis and activates the mitochondrial apoptosis pathway. In order to investigate whether cell apoptosis contributed to the inhibitory effect of NKAP knockdown on ES cells, flow cytometry analysis was performed. As demonstrated in Fig. 5, the apoptosis percentage (Q2+Q4) significantly increased when NKAP was silenced in A673 and RD-ES cells compared with the control (P<0.05). Whether the pro-apoptosis effect of NKAP knockdown was mediated by the mitochondrial apoptosis pathway was investigated next. Western blot analysis suggested that NKAP knockdown led to a significant increase in the expression of pro-apoptosis factors, including Bax and cleaved caspase 3, and a decrease in the anti-apoptotic member Bcl2 compared with the control group (P<0.05; Fig. 6A and B). Taken together, NKAP knockdown resulted in the activation of the mitochondrial apoptosis pathway, increasing the apoptosis of ES cells.
Downregulation of NKAP inhibits the activation of the AKT signaling pathway. Finally, the mechanism of action of NKAP in ES cells was investigated by focusing on the status of the AKT signaling pathway as it has an important role in tumorigenesis and progression, being involved in cell proliferation, apoptosis and metastasis (19,20). Western blot analysis demonstrated that silencing of NKAP downregulated both the phosphorylation level of AKT and the expression of its down-stream effector cyclin D1 in both A673 and RD-ES cells compared with the control (P<0.05; Fig. 6C and D). These findings suggested NKAP knockdown led to inactivation of the AKT signaling pathway.
Discussion
ES is the second-most frequent, primary malignant bone tumor in children and adolescents (1). Though great improvements have been achieved in ES therapy, the prognosis for patients with metastasized ES still remains poor (4). Scientists have thus been committed to identifying effective prognostic markers or therapeutic targets. The present study hypothesized that NKAP has the potential to serve an important role in tumor related functions. Previous studies have demonstrated that NKAP is a highly conserved protein with various roles in transcription repression, immune development and maturation, the maintenance and survival of adult hematopoietic stem cells, chromosome alignment in mitosis, and RNA splicing and processing (7,11,14,15). Furthermore, NKAP deficiency has been identified in soft tissue sarcomas as well as several other types of human cancer (15). However, the functions and mechanisms of action of NKAP in tumorigenesis and progression need to be further elucidated. Therefore, the present study investigated whether NKAP functions as an important regulator in the proliferation, migration and invasion of ES cells.
In order to determine this, NKAP was knocked down in the ES cell lines A673 and RD-ES. The effects of NKAP silencing on the tumor-related phenotypes of ES cells were investigated. Results determined that NKAP knockdown induced the inhibition of both cell proliferation and clonogenic abilities in ES cells, whereas NKAP overexpression promoted cell viability. Flow cytometry indicated that cell apoptosis was promoted in ES cells following NKAP silencing. Previous studies demonstrated that NKAP depletion inhibits the proliferation of iNKT cells in mice, leading to severe reductions in thymic and peripheral iNKT cell numbers (8), and decreased proliferation and increased apoptosis of hematopoietic stem cells (14). This suggests that NKAP functions as an important regulator in cell proliferation. The present study identified that NKAP knockdown inhibits migration and invasion in ES cell lines, most likely mediated by downregulation of MMP-9 secretion. The MMP family has an essential role in regulating cell mobility by degrading the extracellular matrix to promote tumor metastasis (21,22). Taken together, the present findings identified NKAP as a potential novel therapeutic target for tumor metastasis.
Emerging evidence has identified that NKAP is involved in the transcriptional repression of Notch (9). In general, NKAP binds the CBF1-interacting corepressor and recruits histone deacetylase in T-cell development, also inducing the activation of the NF-κB signaling pathway (13). The present study identified that NKAP knockdown led to the activation of the mitochondrial apoptosis pathway, as evidenced by the increased levels of Bax and cleaved caspase-3 as well as decreased levels of Bcl2 detected by western blot analysis. Bax is an established key apoptosis initiation protein that promotes the permeability of the mitochondrial outer membrane and triggers the release of cytochrome C into the cytoplasm (23,24). In turn, cytochrome C induces cell apoptosis via activation of the caspase cascade (23,24). In addition, Bcl2 functions as an anti-apoptotic protein by antagonizing Bax (25,26). The present study determined that the AKT signaling pathway was inactivated in NKAP-silenced ES cells, as evidenced by the decreased levels of p-AKT and cyclin D1. The AKT signaling pathway is a key pathway in the regulation of multiple biological processes, such as promoting the proliferation, survival and migration of tumors (27). When the AKT pathway is activated, the phosphorylation level of AKT is elevated, which promotes the expression of a number of its downstream effectors important in cellular growth, such as cyclin D1 (28). The activation of the Akt/mTOR signaling pathway results in enhanced cell proliferation, finally leading to tumorigenesis (27).
The present study has some limitations. The involvement of the mitochondrial apoptosis pathway should be further confirmed by assessing additional specific markers (including JC-1), following knockdown of NKAP. In addition, no rescue experiments were performed to verify that the oncogenic effects of NKAP on ES cells were indeed mediated by AKT. These areas of focus should be investigated further in the future studies.
In conclusion, the present study revealed that NKAP functions as an oncogenic gene in the progression of ES partly via the mitochondrial apoptosis and AKT pathways. The findings suggested that NKAP could be a novel therapeutic target for ES. | 2019-09-03T17:13:20.330Z | 2019-08-20T00:00:00.000 | {
"year": 2019,
"sha1": "c829ea12775e1dc7fd8ba1261ea17ad34e30719f",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/etm.2019.7925/download",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c829ea12775e1dc7fd8ba1261ea17ad34e30719f",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247677454 | pes2o/s2orc | v3-fos-license | Multimorbidity Patterns in US Adults with Subjective Cognitive Decline and Their Relationship with Functional Difficulties
Objectives This study identified different multimorbidity patterns among adults with subjective cognitive decline (SCD) and examined their association with SCD-related functional difficulties. Methods Data were obtained from the 2019 Behavioral Risk Factor Surveillance System. Latent class analysis was applied to identify different patterns of chronic conditions. Logistic regression was implemented to examine relationships between multimorbidity patterns and risk of SCD-related functional difficulties. Results Five multimorbidity patterns were identified: severely impaired (14.6%), respiratory/depression (18.2%), obesity/diabetes (18.6%), age-associated (22.3%), and minimal chronic conditions group (26.3%). Compared with minimal chronic conditions group, severely impaired group was most likely to report SCD-related functional difficulties, followed by respiratory/depression and obesity/diabetes group. Discussions Individuals in the three multimorbidity groups had elevated risk of SCD-related functional difficulties compared with minimal chronic conditions group. Characteristics of the high-risk groups identified in this study may help in development and implementation of interventions to prevent serious consequences of having multiple chronic conditions.
Background
The percentage of US residents living with at least one chronic disease increased from 40% to 60%, and the percentage of those with two or more chronic diseases (i.e., multimorbidity) increased from one third to 40%, between 2009 and 2021 (CDC, 2009, 2021). Individuals with chronic diseases have an increased risk of death (Nunesa et al., 2016), disability (Anesetti-Rothermel & Sambamoorthi, 2011), adverse events (Barile et al., 2015; Laires et al., 2021; Salive, 2013), and impaired functional status (Caracciolo et al., 2013), as well as lower health-related quality of life (Barile et al., 2015). Beyond these individual consequences, multimorbidity can lead to increased medical service utilization via more emergency department visits, hospitalizations, etc. (Zheng et al., 2020a).
Previous studies examining the impact of multimorbidity on health outcomes considered chronic diseases as multiple covariates, measured multimorbidity as a cumulative count of diseases, or treated multimorbidity as a binary indicator (Barile et al., 2015; Caracciolo et al., 2013; Laires et al., 2021; Nunesa et al., 2016; Salive, 2013). However, more recently there has been a call for studies examining the synergistic effects of multimorbidity (Barile et al., 2015). One such approach treats multimorbidity as unique subgroups rather than using a binary or cumulative measure.
Patterns of multimorbidity vary by subpopulations and disease groups. Among US residents aged 65 years and older, Zheng et al. (2021) identified five patterns of multimorbidity (healthy, age-associated chronic conditions, respiratory conditions, cognitively impaired, and complex cardiometabolic group) and found the cognitive impaired group had significantly higher mortality than the respiratory group, despite the number of reported chronic disease/conditions being similar between the two groups. Among patients diagnosed with multiple myeloma, Fillmore et al. (2021) identified five multimorbidity groups (minimal comorbidity, cardiovascular and metabolic, psychiatric and substance use, chronic lung disease and multisystem impairment group), and found that chronic lung disease group had higher risk of mortality than the cardiovascular and metabolic group, though the median number of chronic conditions were the same. Among acutely hospitalized patients, eight multimorbidity patterns were found (Juul-Larsen et al., 2020), and the pattern with the highest number of chronic conditions did not show the highest healthcare utilization.
Subjective cognitive decline (SCD) is a self-reported experience of confusion or memory loss that is happening more often or getting worse (CDC, 2019b). It is one form of cognitive impairment and can be considered one of the earliest noticeable symptoms of Alzheimer's disease (AD) (Neto & Nitrini, 2016; Murphy et al., 2013). Individuals with SCD may unconsciously give up daily activities such as cooking and cleaning, or be unable to manage medical appointments or medication regimens, resulting in poor health outcomes (Taylor et al., 2018). In the US, one in nine adults aged 45 years and older reported having SCD, half (50.6%) of whom reported SCD-related functional difficulties (Taylor et al., 2018). Among those with SCD, 66.2% had two or more chronic diseases (CDC, 2019b). To date, only one study has examined multimorbidity patterns within a population with subjective memory complaints (i.e., a measure similar to SCD) (Yap et al., 2020). However, this study was not nationwide, with only 6179 participants and eight chronic diseases considered. Other studies examining the relationship between multimorbidity and cognitive function found that adults with multimorbidity had a higher risk of SCD-related, or more broadly, cognitive-related functional impairment (Caracciolo et al., 2013; Jindai et al., 2016; Melis et al., 2013; Taylor et al., 2020), but the synergistic effect of chronic diseases was not considered. In addition to chronic diseases/conditions, health-related risk behaviors including sedentary lifestyle, excessive alcohol consumption, smoking, and unhealthy diet are associated with cognitive-related functional impairment (Alzheimer's Association International Conference, 2019; National Institute on Aging, 2020). Therefore, the purpose of this study was to identify different multimorbidity patterns among adults with SCD and examine their association with SCD-related functional difficulties.
Study Data and Study Population
Data from the Behavioral Risk Factor Surveillance System (BRFSS) 2019 survey were used for this study. The BRFSS, a premier national system of health-related telephone surveys, is administered and supported by the Centers for Disease Control and Prevention (CDC) (CDC, 2014). BRFSS collects data about noninstitutionalized US adults (≥18) regarding their health-related risk behaviors, chronic diseases and health conditions, access to health care, and use of preventive services (CDC, 2019a). Data were obtained from the Core Component and the Cognitive Decline Optional Module. The core component contains a standard set of questions that all states use, inquiring about demographic information, health-related perceptions, conditions and health-related behaviors. The cognitive decline module targets residents aged 45 years and older and was developed in 2011 to understand the effect and burden of SCD in the population (CDC, 2018).
Participants who responded to the BRFSS 2019 survey were eligible to be included in this study. The inclusion criteria were (a) residence in one of the 32 states that implemented the cognitive decline module in 2019, (b) age 45 years or older, and (c) a response of "yes" to the SCD question. After applying the inclusion criteria, 15,621 observations were retained for this study.
Study Variables
SCD was obtained from the cognitive decline optional module and is assessed using the question "During the past 12 months, have you experienced confusion or memory loss that is happening more often or is getting worse?" If participants answered "yes," further questions regarding SCD-related functional difficulties were asked, including "During the past 12 months, as a result of confusion or memory loss, how often have you given up day-to-day household activities or chores you used to do, such as cooking, cleaning, taking medications, driving or paying bills?" and "During the past 12 months, how often has confusion or memory loss interfered with your ability to work, volunteer, or engage in social activities outside the home?" An individual was considered to be experiencing SCD-related functional difficulties if the answer to at least one of the two questions was "always," "usually," "sometimes," or "rarely," or as not experiencing SCD-related functional difficulties if the answer to both questions was "never." Chronic diseases/conditions were obtained from the core component of the BRFSS survey. A total of 17 chronic diseases/conditions were used to identify different patterns of multimorbidity: hypertension, diabetes, high cholesterol, myocardial infarction/coronary heart diseases (MI/CHD), stroke, asthma, arthritis, chronic obstructive pulmonary disease (COPD)/emphysema/chronic bronchitis, depression, kidney disease, hepatitis B, hepatitis C, deaf/serious difficulty hearing, blind/serious difficulty seeing, obesity, skin cancer, and other cancer. All conditions except obesity were determined via the questions: "Have you ever been told you have … by a doctor, nurse, or other health professionals?" Obesity was defined as body mass index (BMI) equal to or larger than 30 (WHO, 2021), where BMI was calculated by dividing body weight in kilograms by the square of height in meters.
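As a simple illustration of the BMI-based obesity indicator (hypothetical height and weight values, not BRFSS data):

```python
def is_obese(weight_kg, height_m):
    """BMI = weight (kg) / height (m)^2; obesity is defined as BMI >= 30."""
    bmi = weight_kg / height_m ** 2
    return bmi >= 30

print(is_obese(95, 1.70))  # BMI ~= 32.9, so True
```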
Twelve variables were used to examine the relationships of the different patterns of multimorbidity with SCD-related functional difficulties: seven demographic variables (age group, sex, urban/rural residence, educational level, marital status, employment, and household income) and five health-related risk behaviors (heavy drinking, physical exercise outside of work, fruit consumption at least one time per day, vegetable consumption at least one time per day, and smoking status). Observations coded as "refused," "not sure," "not asked," or "don't know" were recoded as missing.
Statistical Analyses
All statistical analyses were conducted in R; the main packages were poLCA for latent class analysis (LCA), mice for multiple imputation, twang for propensity score weighting, and survey for logistic regression with weights.
LCA was applied to classify individuals into homogeneous subgroups of multimorbidity patterns, with the reported chronic diseases or conditions as indicators (Lazarsfeld, 1950). The large number of variables in the LCA resulted in about 97% of cells being small (i.e., containing fewer than five observations) in the joint tabulation of the 17 chronic diseases, which leads to low power of chi-square goodness-of-fit tests and numerical problems in estimating parameters and the asymptotic variance-covariance matrix (Galindo Garre & Vermunt, 2006; Langeheine et al., 1996; Wurpts & Geiser, 2014). Therefore, composite variables including cancer (35%), hepatitis (5.5%), blind/deaf (35.8%), and asthma/COPD (33.5%) were created from variables with similar traits: skin/other cancer (17.2%/18.7%), hepatitis B/C (2.2%/4.1%), blind or serious difficulty seeing/deaf or serious difficulty hearing (17.2%/25%), and asthma/COPD or emphysema or chronic bronchitis (19.2%/23.4%). Hepatitis was removed because of a large percentage of missing values (85.3%) (see Supplementary Material Appendix 1). Finally, 12 chronic conditions were included in the LCA, with the percentage of small cells decreased to 73%. The missing percentages for the 12 variables ranged from 0.3% to 8.9%. Further details of the frequency table and missing percentage for all the variables in this study can be found in Supplementary Material Appendix 1. The correlation matrix for the variables in the LCA was examined using the tetrachoric method (Pearson, 1900); no extremely high correlations were observed (see Supplementary Material Appendix 2). A series of LCA models with one to eight subgroups was fitted to the data, and the optimal number of subgroups was determined based on the Bayesian information criterion (BIC), consistent Akaike information criterion (CAIC), adjusted BIC (ABIC), and clinical significance. Smaller BIC, CAIC, and ABIC values indicate better goodness-of-fit. Average posterior class probabilities (AvePP) and the odds of correct classification ratio (OCC) were calculated to assess the degree of subgroup separation of the selected LCA model. AvePP values larger than 0.7 and OCC values of 5 or larger were used to define an acceptable degree of subgroup separation (Nagin, 2005). Meaningful labels were given according to the characteristics of the combination of chronic diseases and the existing literature on clustering of multimorbidity using population-based data. The predicted subgroup membership for each individual was estimated based on the probability of having chronic conditions; it was then treated as a predictor in the subsequent logistic regression analyses.
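A minimal sketch of this model-selection step with poLCA follows; it is not the authors' script, the indicator names are hypothetical, and the CAIC/ABIC formulas are computed by hand from the log-likelihood since poLCA only reports AIC/BIC directly:

```r
library(poLCA)

# 12 binary condition indicators, coded 1 = absent, 2 = present
# (poLCA requires manifest variables as positive integers).
conds <- c("hypertension", "diabetes", "high_chol", "mi_chd", "stroke",
           "arthritis", "depression", "kidney", "obesity",
           "cancer", "blind_deaf", "asthma_copd")   # hypothetical names
f <- as.formula(paste("cbind(", paste(conds, collapse = ","), ") ~ 1"))

# Fit 1- to 8-class models; nrep restarts guard against local maxima.
fits <- lapply(1:8, function(k)
  poLCA(f, data = dat, nclass = k, maxiter = 5000, nrep = 10, verbose = FALSE))

# Compare information criteria across the candidate models.
ic <- t(sapply(fits, function(m) {
  n <- m$Nobs; p <- m$npar
  c(BIC  = m$bic,
    CAIC = -2 * m$llik + p * (log(n) + 1),
    ABIC = -2 * m$llik + p * log((n + 2) / 24)))   # sample-size-adjusted BIC
}))
print(ic)

# Predicted class membership for the chosen (here 5-class) model,
# later used as a predictor in the logistic regressions.
best <- fits[[5]]
dat$class <- best$predclass

# AvePP per class: mean of the maximum posterior probability.
tapply(apply(best$posterior, 1, max), best$predclass, mean)
```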
Descriptive statistics, including the percentage of observations per multimorbidity subgroup by demographic factors and the percentage of observations per SCD-related functional status by multimorbidity group and health-related risk factors, were generated taking the sample weights into account. Chi-square tests adjusting for sample weights were used to compare the distributions of demographic factors across multimorbidity groups and to compare the distributions of multimorbidity groups and the five health-related risk factors across SCD-related functional status (Kariya, 1984; Lumley, 2020).
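A design-based chi-square test of this kind can be run with the survey package, as in the brief sketch below; the weight column name is an assumption (BRFSS ships its final weight as _LLCPWT, often renamed on import):

```r
library(survey)

# Survey design using the BRFSS final weight (hypothetical column name).
des <- svydesign(ids = ~1, weights = ~llcpwt, data = dat)

# Rao-Scott-adjusted chi-square test of multimorbidity class vs.
# SCD-related functional difficulty status.
svychisq(~ class + scd_difficulty, design = des)
```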
After classifying participants into subgroups, multiple imputation (MI) was implemented for missing data, including age, race, educational level, household income, employment status, and the five health-related risk factors (Sterne et al., 2009). Twenty imputations were generated to ensure accurate standard errors (Von Hippel, 2020). Logistic regression was then used to examine the association of the defined multimorbidity subgroups with SCD-related functional difficulties. For each imputed data set, inverse probability of treatment weighting (IPTW) with the propensity score (PS) method was used to control for the confounding effects of age, sex, race, urban/rural residence, education, income, and employment status (Chen & Moskowitz, 2016; GW, 2000; Leite, 2019b). Generalized boosted modeling (GBM), a data mining method that does not require the specification of a statistical model and automatically takes interactions and nonlinear effects into account, was applied to estimate the IPTWs (Leite, 2020; Strobl et al., 2009). The distribution of propensity scores and the overlap between groups (i.e., common support) were evaluated using box-and-whisker plots (Leite, 2019a). Covariate balance was evaluated by comparing the absolute standardized difference between unweighted and weighted means or proportions across groups (Leite, 2019b); a value below 0.1 indicates adequate covariate balance (Austin, 2011). Summary statistics (means, standard deviations, and ranges of the estimated PSs) were generated to assess the existence of extreme values. Final weights were derived by multiplying the IPTWs with the CDC sample weights, which incorporate the complex survey design and adjust for characteristics of the population to make the collected data more representative (CDC, 2020). Logistic regression results were then combined across all 20 imputations.
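The following sketch shows how one imputation-analysis cycle of this pipeline could be wired together; it is an illustration under stated assumptions (hypothetical variable names, default twang settings), not the authors' code, and the pooling step relies on mice's support for lists of fitted models:

```r
library(mice); library(twang); library(survey)

imp <- mice(dat, m = 20, seed = 1)   # 20 imputations, as in the text

fit_one <- function(d) {
  d$class <- factor(d$class)         # mnps needs a factor treatment
  # Multinomial propensity scores via GBM for the 5-class "treatment".
  ps <- mnps(class ~ age + sex + race + urban + educ + income + employ,
             data = d, estimand = "ATE", stop.method = "es.mean",
             verbose = FALSE)
  d$iptw <- get.weights(ps, stop.method = "es.mean")
  d$w    <- d$iptw * d$cdc_weight    # multiply IPTW by the BRFSS weight
  des <- svydesign(ids = ~1, weights = ~w, data = d)
  svyglm(scd_difficulty ~ class + exercise + fruit + veg + smoke + drink,
         design = des, family = quasibinomial())
}

fits   <- lapply(1:20, function(i) fit_one(complete(imp, i)))
pooled <- summary(mice::pool(fits))  # combine estimates by Rubin's rules
pooled
```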
Patterns of Multimorbidity Among People with SCD
The LCA model with five subgroups was selected, as minimal model improvement (i.e., lower BIC, CAIC, and ABIC values) was observed beyond this model (Figure 1). In addition, results from this model were the most interpretable and clinically meaningful according to the relevant literature (Collerton et al., 2016; Marengoni et al., 2020; Zheng et al., 2021).
The five subgroups were labeled according to their specific multimorbidity patterns (Table 1). Classification of the SCD population is shown in Table 1 and Figure 2. Twenty-six percent of the study population were classified into the minimal chronic conditions group, with low probabilities of having any of the chronic diseases; 22.3% were classified into the age-associated group, with high probability of having hypertension, high cholesterol, and arthritis; 18.6% were classified into the obesity or diabetes group, with high probabilities of having obesity, diabetes, and the age-associated chronic conditions; 18.2% were classified into the respiratory or depression group, with high probabilities of having asthma or COPD, depression, and arthritis; and 14.6% were classified into the severely impaired group, with high probability of having the majority of the chronic diseases.
Factors Associated with Multimorbidity Patterns
Demographic factors across groups are shown in Table 2.
The eight demographic factors were distributed differently across the five groups. The age-associated group had the lowest percentage of participants in the 45-49 age range; more than 60% of this group was aged 65 years and older and mostly retired. Compared with the other groups, the minimal chronic conditions group was mostly non-White, lived in urban counties, was better educated, had higher incomes, and was mostly working and married or a member of an unmarried couple.
Association Between Multimorbidity Patterns and Functional Difficulties
Physical exercise, vegetable consumption, smoking status, and multimorbidity patterns were distributed differently between those with and without SCD-related functional difficulties (Table 3). A larger proportion of participants in the minimal chronic conditions group reported no SCD-related functional difficulties compared to the other groups. A larger proportion of people without SCD-related functional difficulties reported doing physical exercise, consuming fruit or vegetables at least one time per day, and having never smoked or formerly smoked compared with people with SCD-related functional difficulties. After implementing the IPTW propensity score method, the absolute standardized differences of unweighted and weighted means and proportions for each pair of groups on all imputed data sets were below 0.1 (Supplementary Material Appendix 3, Figures 1-20), indicating that covariate balance was achieved for all measured covariates. Extreme propensity scores were not observed (Supplementary Material Appendix 3, Table 1).
Results from the logistic regression models showed that the multimorbidity pattern groups had higher odds of reporting SCD-related functional difficulties when compared to the minimal chronic conditions group, except for the age-associated group (Table 4). Compared with the minimal chronic conditions group, the severely impaired group was most likely to report SCD-related functional difficulties (OR = 2.43, 95% CI: 1.92-3.08), followed by the respiratory/depression group (OR = 1.72, 95% CI: 1.39-2.12) and the obesity/diabetes group (OR = 1.57, 95% CI: 1.29-1.92), controlling for other factors. Further, paired comparisons found that the odds of reporting SCD-related functional difficulties were highest in the severely impaired group compared with the age-associated (OR = 2.08, 95% CI: 1.60-2.69), obesity/diabetes (OR = 1.55, 95% CI: 1.24-1.93), and respiratory/depression groups (OR = 1.42, 95% CI: 1.13-1.78). The odds of reporting SCD-related functional difficulties in the obesity/diabetes and respiratory/depression groups did not differ significantly (OR = 1.09, 95% CI: 0.90-1.32), but both were significantly higher than in the age-associated group, with ORs of 1.34 (95% CI: 1.07-1.68) and 1.47 (95% CI: 1.17-1.84), respectively. The odds of reporting SCD-related functional difficulties among people who reported not engaging in physical exercise were 47% (95% CI: 27%-69%) higher than among those who did. Not consuming vegetables at least one time per day was significantly associated with a 32% (95% CI: 10%-57%) increase in the odds of reporting SCD-related functional difficulties. Compared to participants who never smoked, current smokers had 111% (95% CI: 75%-154%) higher odds of reporting SCD-related functional difficulties. Heavy drinking and fruit consumption were not significantly related to the odds of reporting SCD-related functional difficulties. The difference in odds between former smokers and never smokers was also not significant.
Conclusions and Discussion
This research is the first nationwide study to investigate multimorbidity patterns among US noninstitutionalized residents aged 45 years and older who reported experiencing SCD. Five multimorbidity patterns were revealed by latent class analysis: age-associated (22.3%), severely impaired (14.6%), minimal chronic conditions (26.3%), obesity or diabetes (18.6%), and respiratory or depression (18.2%).
Previous research identified five groups among the noninstitutionalized general population aged 50 years and older in the US (Zheng et al., 2021). There are several differences between the multimorbidity patterns observed in the SCD population in this study and those in the general population in the previous literature. First, the five groups in our study had more chronic conditions on average compared with the groups of similar rank by mean number of chronic conditions in that study. For example, both studies found a group with the smallest probability of having almost all the chronic diseases, but the group in this study (minimal chronic conditions group) had an average of 1.8 chronic conditions, whereas the group in the other study (healthy group) had an average of 0.6. Both studies also found a group with the highest probability of having almost all the chronic diseases; however, in this study, that group (severely impaired group) had an average of 7.7 chronic conditions, compared to an average of 1.2 in the other study's complex cardiometabolic group. Second, 85.1% of the general population were classified into the two groups with the smallest and second smallest mean numbers of chronic conditions (healthy and age-associated), while only 40.8% of people with SCD were classified into the minimal chronic conditions and age-associated groups. Third, 14.6% of people with SCD were in the most severe group (severely impaired group), while only 3.2% of the general population were classified into the most severe group (cardiometabolic group). Moreover, the most severe group among the SCD population was more severe than that in the general population.
Compared with the minimal chronic conditions group, the severely impaired group had the greatest increase in the risk of reporting SCD-related functional difficulties, followed by the respiratory/depression group and the obesity/diabetes group. Previous studies examining the association of chronic diseases and SCD-related functional difficulties found that a greater number of chronic diseases was related to more severe SCD-related functional difficulties (Dunlop et al., 2002; Jindai et al., 2016; Taylor et al., 2020; Yap et al., 2020; Yokota et al., 2017); however, these studies treated chronic diseases as a cumulative count, a categorical indicator, or multiple covariates. Our results found that the age-associated and respiratory/depression groups reported similar mean numbers of chronic diseases, but the odds of having SCD-related functional difficulties in the respiratory/depression group were 1.47 (95% CI: 1.17, 1.84) times the odds in the age-associated group. Moreover, the mean number of reported chronic diseases in the age-associated group (4.1) was much higher than in the minimal chronic conditions group (1.8), but the odds of reporting SCD-related functional difficulties in these two groups did not differ significantly (OR = 1.17, 95% CI: 0.92, 1.49).
Our findings suggest that people can benefit from engaging in physical exercise and eating vegetables at least one time per day, as these behaviors were associated with decreased odds of reporting SCD-related functional difficulties regardless of multimorbidity group. Current smokers had a higher risk of SCD-related functional difficulties compared to never smokers.
This study has several limitations. First, only noninstitutionalized adults from the 32 states that included the cognitive decline optional module were included. Therefore, caution is needed when attempting to generalize findings to the whole US population with SCD. Second, the measurements of cognitive decline and SCD-related functional difficulties in this study were self-reported at a single time point and relied on only one or two questions. As such, recall bias and misclassification may be present. However, it has been shown that SCD occurs at the preclinical stage of AD, which cannot be detected by objective tests (Jessen et al., 2014, 2020). Future studies could consider other validated multiple-item questionnaires to measure SCD-related functional difficulties, such as those in Diaz-Galvan et al. (2021) and Rabin et al. (2020). Third, the sample weights used to reflect the complex survey design were not considered in the LCA models. It is recommended to use sample weights when the sampling weights are related to class membership (Vermunt, 2002). Demographic factors such as age have been shown to be associated with chronic disease status (Anesetti-Rothermel & Sambamoorthi, 2011; Barile et al., 2015) and thus could affect the class to which participants belong; however, the poLCA package used in this study does not allow the use of weights (Linzer & Lewis, 2013). Incorporating sample weights in the LCA should be considered in future research. Fourth, measurement invariance for age group and sex was not tested, which could affect the estimated item response probabilities (Vermunt, 2002). However, 73% of cells remained small after combining conditions with low prevalence, and the chi-square test for measurement invariance is not advised when more than 20% of cells are small (Linzer & Lewis, 2013). Fifth, the conclusions based on the two discrimination ability indices of the selected LCA model are not consistent. Only two out of five AvePPs were above 0.7, indicating that the subgroups were not well separated (Supplementary Material Appendix 4, Table 1). In contrast, all the OCCs for the five subgroups were above 5, indicating good latent class separation and high assignment accuracy. This may indicate overlap among the five latent subgroups, potentially due to the high heterogeneity among older adults within groups (Zheng et al., 2020a). Nevertheless, there were enough differences among the five multimorbidity groups to distinguish them. For example, the probability of having obesity was highest in the obesity/diabetes group and low in the other groups, and the probability of having depression was highest in the respiratory/depression group and low in the minimal chronic conditions and age-associated groups. Sixth, the estimated PSs may not have adequate common support across the five multimorbidity classes, as the boxplots did not overlap well (Supplementary Material Appendix 4, Figure 1). However, a lack of common support may or may not result in covariate imbalance in propensity score weighting analyses (Leite, 2019a). Moreover, the absolute standardized differences in the means and proportions for each pair of subgroups were below 0.1, indicating that covariate balance was achieved. Lastly, the predicted classes from the LCA were used as a predictor in the subsequent analysis; misclassification may introduce bias in the estimates of that analysis (Croon, 2002).
Advanced methods such as bias-adjusted three-step methods (Asparouhov & Muthén, 2014; Bakk et al., 2013) and the two-step method (Bakk & Kuha, 2018) have been proposed in recent years and could be explored in the future.
Missing observations were assumed to be missing at random, which is likely a reasonable assumption according to Supplementary Material Appendix 5. Multiple imputation was not applied to the variables in the LCA because the Expectation-Maximization algorithm used in the poLCA package allows missing data in the observed variables (Linzer & Lewis, 2013). At most 8.9% of observations were missing for the variables used in the LCA.
This study may provide valuable information to help understand multimorbidity patterns among the SCD population and contributes evidence on populations vulnerable to SCD-related functional difficulties. The results may provide insights for clinicians and others to improve health resource allocation and the management of patient complexity. The characteristics of the high-risk groups identified by this study may help relevant organizations develop and implement interventions to prevent more serious consequences of multimorbidity. Relevant organizations and the media could promote healthy lifestyles, especially physical exercise, vegetable consumption, and smoking cessation, by educating the public about the decreased risk of SCD-related functional difficulties associated with these behaviors.
"year": 2022,
"sha1": "151b7ef2e894d6aadf87d47e5a982e2b2be2743c",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/08982643221080287",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "ae3f4a07a4f186242b9ab9e1b84074297901e048",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Effects of the Food Additive Titanium Dioxide (E171) on Tumor Formation and Gene Expression in the Colon of a Transgenic Mouse Model for Colorectal Cancer
Titanium dioxide (TiO2) is present in many different food products as the food additive E171, which is currently scrutinized due to its potential adverse effects, including the stimulation of tumor formation in the gastrointestinal tract. We developed a transgenic mouse model to examine the effects of E171 on colorectal cancer (CRC), using the Cre-LoxP system to create an Apc-gene-knockout model which spontaneously develops colorectal tumors. A pilot study showed that E171-exposed mice developed colorectal adenocarcinomas, accompanied by enhanced hyperplasia in epithelial cells and increased tumor size. In the main study, tumor formation was studied following exposure to 5 mg/kg bw/day of E171 for 9 weeks (Phase I). E171 exposure showed a statistically nonsignificant increase in the number of colorectal tumors in these transgenic mice, as well as a statistically nonsignificant increase in the average number of mice with tumors. Gene expression changes in the colon were analyzed after exposure to 1, 2, and 5 mg/kg bw/day of E171 for 2, 7, 14, and 21 days (Phase II). Whole-genome mRNA analysis revealed the modulation of genes in pathways involved in the regulation of gene expression, cell cycle, post-translational modification, nuclear receptor signaling, and circadian rhythm. The processes associated with these genes might be involved in the enhanced tumor formation and suggest that E171 may contribute to tumor formation and progression by modulating events related to inflammation, activation of immune responses, cell cycle, and cancer signaling.
Introduction
Colorectal cancer (CRC) is a complex disease with high morbidity and mortality that is persistent in Western countries and displays an increasing risk to the younger population [1,2]. Dietary factors are known to be involved in the development of CRC [3], and small particles, such as nanoparticles (<100 nm), in food are suspected to induce adverse effects due to their size-dependent reactivity [4,5].
Titanium dioxide (TiO2) is a food additive that has been approved for use in food products by the European Union (EU), coded E171. It consists of at least 10-40% nanoparticles in number-size distribution, with one or more external dimensions in the size range of 1-100 nm [5-7]. E171 is used for its bright white color, high refractive index, and resistance to UV light, which makes it a very stable pigment over time. Consequently, E171 is used in a wide variety of foods, such as dairy products, sweets, cookies, and sauces [5,8,9]. The adverse effects of TiO2 were first observed after inhalation. As a consequence of these findings, the International Agency for Research on Cancer (IARC) has classified TiO2 as "possibly carcinogenic to humans after inhalation" [10]. In 2020, the EU also classified TiO2 as a suspected carcinogen (category 2) by inhalation in powder form with at least one particle dimension with an aerodynamic diameter ≤10 µm under the Classification, Labelling, and Packaging (CLP) Regulation (EC No 1272/2008) [11]. This classification was mandatory from 1 October 2021 [11]. The findings of inhalation studies, as well as the change of classification to group 2B, raised concern about other routes of exposure, such as ingestion [10,11].
Therefore, many studies have been performed to identify the potential carcinogenicity of E171 after ingestion, as well as other related mechanisms. Based on the outcome of these studies, as of May 2021, E171 is "no longer considered to be safe when used as a food additive" by the European Food Safety Authority (EFSA), particularly due to concerns about its genotoxic potential [5].
The potential of TiO2 to enhance CRC has been shown in different rodent models. In a chemically induced CRC model, based on the combination of azoxymethane (AOM) and dextran sodium sulfate (DSS), BALB/c mice were orally exposed to 5 mg/kg bw/day of E171 for 10 weeks [12]. The expression of tumor progression markers, such as COX2, Ki67, and ß-catenin, was increased [12]. In the same study, no development of tumors was observed after exposure to E171 alone, although a decrease in goblet cell numbers and the induction of dysplastic changes in the colonic epithelium were detected [12]. Another study in a chemically induced CRC model showed the growth of aberrant crypt foci in the colon after oral exposure to 10 mg/kg bw/day of E171 for 100 days [13]. Bettini et al. (2017) also showed the development of preneoplastic lesions in normal rats after oral exposure to 10 mg/kg bw/day of E171 for 100 days [13]. They found a shift in the T-helper cell 1/T-helper cell 17 (Th1/Th17) balance in the immune system and observed impaired intestinal homeostasis in rats exposed to E171 for 7 days [13]. Gene expression analysis after oral exposure to 5 mg/kg bw/day of E171 for 2, 7, 14, and 21 days in the colon of normal BALB/c mice showed changes in olfactory/G-protein-coupled receptor (GPCR) signaling genes, the immune system, oxidative stress responses, and cancer-related genes [14]. Similar results were observed in a colitis-induced mouse model (AOM/DSS) with the same experimental design. Gene expression changes in the colon of these mice indicated modulation of immune-related genes, olfactory/GPCR signaling genes, oxidative stress, extracellular matrix modulation, and the biotransformation of xenobiotics [15].
In the current study, we used a transgenic (Tg) mouse model based on the Cre-LoxP system to produce a tissue-specific knockout model for CRC, which does not require chemical induction of inflammatory colorectal cancer [16]. The Cre-LoxP system is a tissue-specific gene-editing technology, which allows researchers to carry out deletions, insertions, and translocations in a site-specific manner [17]. A knockout of the floxed Apc gene in the colon by Cre-recombinase under the Car1 promoter induced 26% of the mice to spontaneously develop colorectal tumors, which were limited to the epithelium of the large bowel [16]. The model is representative of the human situation, since the loss of function of the Apc gene after mutation, and the resulting transformation of the normal epithelium to early adenoma/dysplastic crypts, is suspected to be the primary cause of sporadic and hereditary CRC and is found in around 80% of human colorectal tumors [18]. To the best of our knowledge, this study is the first to utilize this transgenic mouse model for CRC to screen for effects on tumorigenesis and transcriptome changes following exposure to the food additive E171.
We investigated the effects of intragastric exposure to E171 in the distal colon of this Tg mouse model by monitoring tumor formation and gene expression profiling, following an exposure schedule similar to that previously reported [14,15]. We hypothesize that E171 induces gene expression changes that may lead to altered signal transduction, oxidative stress responses, inflammation, impairment of the immune system, and the activation of cancer-related genes, which may stimulate colorectal tumor formation.
E171 Characterization
E171 was kindly donated by the Sensient Technologies Company in Mexico. Electron microscopy was used to evaluate the size and morphology of E171. In short, E171 was dispersed in sterile water with a bath sonicator (Branson 2200, Branson Ultrasonic SA, Danbury, CT, USA) at 40 kHz for 30 min (unless stated otherwise). The vehicle control, consisting of sterile water, was also bath-sonicated for 30 min at 40 kHz. The E171 primary particle size was analyzed by transmission electron microscopy (TEM), with a FEI Tecnai G2 Spirit BioTWIN electron microscope at 60,000× magnification (Thermo Fisher Scientific, Hillsboro, OR, USA). A measure of 20 µL of E171 dispersion in sterile water was added to an EMS CF200-Cu-50 support film for EM (Electron Microscopy Science, Hatfield, PA, USA). For each E171 concentration, 50 pictures were taken and analyzed using the ellipse fitting mode of the ParticleSizer plug-in for ImageJ, which was developed by the NanoDefine project [19,20]. At least 1500 particles were analyzed for each sample. A Malvern NanoZS (Malvern Instruments, Malvern, UK) dynamic light scattering instrument was used to determine the hydrodynamic diameter (Z-average) and zeta-potential. The stock dispersions of 1, 2, and 5 mg/mL were diluted 1:100 (v/v). Measurements were performed as biological duplicates at 25 °C, with an equilibration time of 0 s and a viscosity of 0.8872 cP, and the refractive index was set at 1.330.
Animals
All animal experiments were reviewed and approved by the Animal Experimental Committee, which included an ethical testing framework, of Maastricht University, Maastricht, The Netherlands, or the ethics committee of Facultad de Estudios Superiores Iztacala of Universidad Nacional Autónoma de México, Mexico City, Mexico. The pilot study was approved under the number CE/FESI/102021/1430, the phase I tumor formation study under the number 2014-080, and the phase II gene expression study under the number 2014-079. All mice were housed in polycarbonate cages and kept in a housing room (21 °C, 50-60% relative humidity, 12 h light/dark cycles, air filtered to remove particles above 5 µm and exchanged 18 times/h). Mice received a standard chow diet and water ad libitum. Cardboard was removed from the cages to avoid potential contamination with TiO2 [21]. The doses of 1, 2, and 5 mg/kg bw/day of E171 were based on the EFSA exposure assessment estimation of the mean dietary intake of adults (18-64 years) [5]. CAC5 Tg/Tg mice expressing the Cre recombinase under the control of the Car1 promoter and APC 580S/+ mice carrying the floxed Apc gene were crossed to obtain CAC Tg/Tg;APC 580S/+ mice, based on the model created by Xue et al. [16]. The combination of LoxP and Cre-recombinase, expressed in the colon, induced the spontaneous development of colon carcinomas in 26% of the mice within the first 10 weeks of age [16]. Heterozygosity of the Apc gene and homozygosity of Cre-recombinase with the Car1 promoter were confirmed by Charles River Biopharmaceutical Service GmbH, Germany, by extracting DNA from the tips of the tails of the mice to perform a polymerase chain reaction (PCR) before the start of the phase I and phase II experiments.
Pilot Study: Histopathology
A pilot study was performed to investigate whether the Tg mice developed tumors and whether E171 would stimulate the development of colorectal tumors. In this pilot experiment, 3 heterozygous CAC Tg/Tg;APC 580S/+ mice were treated with ~5 mg/kg bw/day of E171 via drinking water for 4 weeks, 7 days a week. Another 2 mice served as controls, receiving only sterile water. The mice were exposed from 35 weeks of age. The water container was filled with 200 mL of water or 200 mL of a 22 µg/mL E171 dispersion (bath-sonicated at 60 kHz for 30 min at room temperature) every 2 days and was shaken at least 2 times per day. Water consumption was monitored. Euthanasia was performed under light anesthesia (xylazine/ketamine 10/80 mg/kg bw) followed by cervical dislocation. Colorectal tumors were counted and measured, and histological samples were taken. The whole colon and the cecum were isolated and photographed (data not shown) before the cecum was removed and the colon was washed with a cold saline solution. Then, the colon was dissected lengthwise and photographed (data not shown). The length and width of the adenomas were measured using digital calipers, and the volume (mm³) was calculated as (L × W²)/2, where L indicates length and W indicates width. Then, the colon was fixed in 100% alcohol for 24 h, dehydrated, and embedded in paraffin. Tissue sections of 4 µm were obtained from the paraffin-embedded samples and stained. Briefly, slides were hydrated by washes with xylene, xylene/ethanol, 100% ethanol, 96% ethanol, and water. The slides were routinely stained with hematoxylin and eosin (HE). The samples were dehydrated and preserved with Entellan®. The histopathology was determined by a certified histopathologist.
Phase I: Tumor Formation Study
The 80 CAC Tg/Tg;APC 580S/+ pups were weaned at 3 weeks of age. Mice were randomly divided into two groups: the E171 exposure group and the control group. From 5 weeks of age, the CAC Tg/Tg;APC 580S/+ mice were treated with 5 mg/kg bw/day of E171 (n = 40) or sterile water (n = 40) for a maximum of 9 weeks. Mice were given 250 µL of E171 dispersion or sterile water by intragastric administration. The number of particles in 250 µL of a 1 mg/mL stock dispersion was ~2.3 × 10^11 (based on a median of ~79 nm). Exposure was performed 5 days a week. Weight gain or loss was monitored weekly. A total of 16 mice, 8 in the E171 group and 8 in the control group (4 males and 4 females in each group), were euthanized bi-weekly. Euthanasia was performed under light anesthesia (4% isoflurane) followed by cervical dislocation. At week 4 of exposure, one mouse in the E171 group (9-week exposure group) was euthanized ahead of schedule due to severe rectal prolapse. At week 7 of exposure, one mouse in the control group had tumors all over the colon from the caecum to the distal colon, which is unusual in this model [16]. Therefore, this mouse was considered an outlier and was not included in the assessment of tumor formation. Colon, liver, and spleen were removed and weighed. Colons were cleaned with a sterile swab before they were weighed and checked for the presence of tumors; the number of visible tumors was registered.
Phase II: Gene Expression Study
For the gene expression study, 112 CAC Tg/Tg;APC 580S/+ mice were randomly divided into 4 groups: 3 exposure groups with different concentrations of E171 (1, 2, and 5 mg/kg bw/day) and 1 control group exposed to sterile water. The mice, at 5 weeks of age, were exposed 5 days a week, as indicated in the tumor formation study. After 2, 7, 14, and 21 days of exposure, 28 mice were euthanized: 7 mice per group, with 3 females and 4 males in each group. Three mice were euthanized ahead of schedule due to severe rectal prolapse and were not included in further analysis: 1 mouse in the 1 mg/kg bw/day E171 group at 21 days of exposure, and 2 mice in the control group, 1 from the 14-day group and 1 from the 21-day group. Colon, liver, and spleen were sampled from every mouse. The presence of tumors in the colon was registered. Colons were cleaned with a sterile swab before they were weighed and dissected into the distal, mid, and proximal colon. One segment of each dissection was stored at −80 °C in RNAlater (Thermo Fisher, Breda, The Netherlands).
mRNA Extraction from Colonic Tissues
As tumor formation in this mouse model was mainly found in the distal colon [16], mRNA was extracted from this part of the colon as previously reported [14,15]. Briefly, before RNA isolation, the distal colon was disrupted and homogenized by submerging it in Qiazol (Qiagen, Venlo, The Netherlands) and subsequently using a Mini Bead Beater (BioSpec Products, Bartlesville, OK, USA) at a speed of 48 beats per second for 30 s. mRNA isolation followed the manufacturer's protocol for "Animal Cells and Animal Tissues" in the RNeasy Mini Kit (Qiagen, Venlo, The Netherlands), with DNase treatment (Qiagen, Venlo, The Netherlands) [22]. The quality and yield of the mRNA were verified on a Nanodrop® ND-1000 spectrophotometer (Thermo Fisher, Breda, The Netherlands). Samples passing the quality check, with a 260/230 ratio between 1.8 and 2.0 and a 260/280 ratio between 1.9 and 2.1, were checked for RNA integrity on a 2100 Bioanalyzer using the manufacturer's protocol (Agilent Technologies, Amstelveen, The Netherlands). All samples with an RNA integrity number (RIN) above 6 were used for the microarray analysis. The average RIN value of the 99 samples that were used was 7.8 ± 0.8.
cRNA Synthesis, Labeling, and Hybridization
Samples were prepared for microarray analysis by synthesizing the mRNA into cRNA, labeling it with Cy3, and amplifying and purifying it using the RNeasy Mini Kit (Qiagen, Venlo, The Netherlands) according to the One-Color Microarray-Based Gene Expression Analysis Protocol Version 6.6 [23]. Quantification of the Cy3 label incorporated into the cRNA was performed using a Nanodrop® ND-1000 spectrophotometer with the microarray measurement setting. For hybridization, 600 ng of labeled cRNA was used on SurePrint G3 Mouse Gene Expression v2 8 × 60K Microarray slides (Agilent Technologies, Amstelveen, The Netherlands). Microarray slides were scanned using an Agilent DNA Microarray Scanner with SureScan High-Resolution Technology (Agilent Technologies, Amstelveen, The Netherlands). The scanner was set to Dye Channel: G, Profile: AgilentG3_GX_1Color, Scan region: Agilent HD (61 × 21.3 mm), scan resolution of 3 µm, Tiff file dynamic range of 20 bit, red PMT gain of 100%, and green PMT gain of 100%.
Preprocessing and Data Analysis of Microarrays
Preprocessing of the microarray raw data was performed as previously described [14]. Briefly, gene expression values were obtained from the microarray scans using Agilent's Feature Extraction software (FES, v10.7.3.1). Next, the samples were checked for quality and normalized with ArrayQC, an in-house developed pipeline (https://github.com/arrayanalysis/arrayQC_Module, accessed on 1 February 2021), using the following steps: local background correction, flagging of bad spots and of control spots or spots with too low intensity, log2 transformation, and quantile normalization. Bad spots were removed from the normalized data (normalized data can be accessed on GEO under the number GSE186727). A total of 16 groups were defined based on exposure and timepoint: 4 different exposure groups for 4 different timepoints. Spot identifiers were deleted when more than 40% of samples in each group had a missing value and when the average expression in each group was <4. Missing values were imputed by k-nearest neighbors using the GenePattern ImputeMissingValues.KNN module v13 (k = 15) [24]. Repeated Agilent probe identifiers were merged (setting: median) with Babelomics 5. Next, Agilent probe identifiers were reannotated to EntrezGeneIDs and again merged (setting: median) with Babelomics 5 [25]. Differentially expressed genes (DEGs) were identified by performing a moderated t-test using LIMMA (v1.0), comparing the exposure samples with their time-matched controls. Cutoff values of fold change (FC) ≥ 1.5 or ≤ −1.5 (absolute FC ≥ 1.5) and p-value < 0.05 were used [26]. Additionally, a false discovery rate (FDR) correction was applied according to the Benjamini-Hochberg method with a threshold q-value < 0.05 [26].
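A minimal sketch of such a moderated t-test in R with the limma package follows; it is not the authors' pipeline, and the objects `expr` (a normalized, log2-scale genes-by-samples matrix) and `pheno` (sample annotation with a `group` factor such as "ctrl_d2" and "e171_d2") are hypothetical placeholders:

```r
library(limma)

# Design matrix with one column per exposure/timepoint group.
design <- model.matrix(~ 0 + group, data = pheno)
colnames(design) <- levels(pheno$group)

# Fit the linear model and test one time-matched contrast (day 2 shown).
fit   <- lmFit(expr, design)
contr <- makeContrasts(e171_d2 - ctrl_d2, levels = design)
fit2  <- eBayes(contrasts.fit(fit, contr))

# DEGs at |FC| >= 1.5 and Benjamini-Hochberg-adjusted p (q) < 0.05,
# mirroring the cutoffs used in the paper.
tab  <- topTable(fit2, number = Inf, adjust.method = "BH")
degs <- subset(tab, abs(logFC) >= log2(1.5) & adj.P.Val < 0.05)
```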
Pathway Analysis
To identify biological processes and pathways associated with the identified DEGs, an over-representation analysis (ORA) was performed. Each timepoint and dose was analyzed with ConsensusPathDB (CPDB, release 34, accessed on 1 January 2021) [27,28]. A p-value was calculated for each annotation set, and within each set a correction for multiple testing was performed (q-value). Cutoff values for the ORA were set at a minimum overlap of 2 genes with the input list and a p-value < 0.01 for each pathway. All available mouse databases from CPDB were used.
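The statistic underlying this kind of ORA is a hypergeometric test on the overlap between the DEG list and each gene set. The sketch below illustrates the calculation in base R with purely illustrative numbers (none are taken from the study):

```r
universe <- 20000   # genes measured on the array (illustrative)
pw_size  <- 150     # genes annotated to one pathway
deg_n    <- 300     # differentially expressed genes
overlap  <- 12      # DEGs that fall in the pathway

# P(X >= overlap) for X ~ Hypergeometric: the ORA enrichment p-value.
p <- phyper(overlap - 1, pw_size, universe - pw_size, deg_n,
            lower.tail = FALSE)
p
```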
STEM Analysis
The Short Time-series Expression Miner (STEM) analysis was performed with the STEM tool developed by Ernst and Bar-Joseph [29]. STEM is an algorithm that compares short time-series gene expression data and clusters gene expression patterns over time, which helps to visualize and analyze microarray data with regard to the directionality of genes [29]. The analysis was performed with the log2FC expression values of all genes processed by the LIMMA analysis and was grouped per dosage (1, 2, and 5 mg/kg bw/day of E171) over the 4 timepoints (2, 7, 14, and 21 days). The gene annotation source was set to Mus musculus. No additional cross-references, no gene locations, and no normalization settings were used. The STEM clustering method was set to a maximum of 50 model profiles per analysis. The expression of each gene was compared to the previous timepoint, and the maximum unit change in the model profiles between timepoints was set to 2. Significance (p-value < 0.05) was calculated by comparing the actual number of genes per cluster to the expected number of genes per cluster, using Bonferroni correction [29]. The clustered genes that were assigned to statistically significant profiles were grouped by biological function (color) and combined into a single graph. The genes in these graphs/clusters were analyzed with ORA via CPDB, as described in the previous section.
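As a conceptual illustration of the idea behind STEM (this is not the STEM tool itself, only a simplified analogue of its profile discretization), each gene's trajectory can be reduced to capped unit changes between consecutive timepoints and genes sharing the same discrete profile can be tallied; `lfc` is a hypothetical genes-by-4 matrix of log2FC values at days 2, 7, 14, and 21:

```r
# Discretize a trajectory into unit steps, capped at +/-2 as in the
# settings described above.
step_code <- function(x, cap = 2) {
  d <- round(diff(x))
  paste(pmin(pmax(d, -cap), cap), collapse = ",")
}

profiles <- apply(lfc, 1, step_code)
sort(table(profiles), decreasing = TRUE)[1:10]   # most populated profiles
```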
Network Analysis
Enrichment and network analysis were performed using the web-based online tool Metascape (v20210801, updated 20 July 2021). Metascape provides gene annotation, membership analyses, and multi-gene list meta-analyses, which are based on well-adapted hypergeometric tests and Benjamini-Hochberg p-value correction algorithms to classify ontological parameters that contain a substantially larger set of genes common to an input list than expected [30]. Pathway enrichment analysis accessed Gene Ontology, KEGG, Reactome, and MSigDB. Based on a kappa test score, pairwise similarities between any two enriched terms were computed. These similarity matrices were clustered hierarchically, and a 0.3 threshold was applied to create clusters. Metascape chose the most significant (lowest p-value, Bonferroni correction < 0.05) terms within each cluster to represent the cluster in a heatmap [30]. Interactome/network analysis utilized protein-protein interactions captured in BioGrid, with the additional integration of InWEB_IM and OmniPath. Each network complex was analyzed via functional enrichment analysis, and the top three enriched terms were annotated as biological associations. All network visualizations were generated with Cytoscape [30]. Network analysis was performed with all 5% FDR-corrected DEGs after exposure to 1 mg/kg bw/day of E171 for 2, 7, 14, and 21 days, with the standard settings of the express analysis function and both "input as species" and "analysis as species" set to Mus musculus.
Particle Characterization
Food-grade titanium dioxide (E171) was analyzed using quantitative TEM analysis and DLS measurements to determine the Z-average and zeta-potential. TEM analysis showed that E171 comprised two size fractions: ~64% nanoparticles (<100 nm) and ~36% microparticles (>100 nm). The median particle size (Fmin, short axis; Fmax, long axis) is displayed in Table 1. TEM pictures showed that the E171 particles tended to agglomerate (Figure 1). Similar effects were observed in the DLS analysis. E171 formed larger clusters, which increased in Z-average with increasing particle concentration. The zeta-potential of the three different particle concentrations showed a slight decrease with increasing particle concentration, indicating a small reduction in the stability of the particle dispersion.
Table 1. Summary of particle characterization of food-grade E171 dispersed in sterile water, including the median particle size and the percentage of particles >100 nm obtained from the quantitative TEM analysis, and the Z-average, PDI, and zeta-potential obtained from the DLS measurements.
Pilot Study: Histopathology
In the pilot study, CAC Tg/Tg;APC 580S/+ mice were exposed to 5 mg/kg bw/day of E171 or sterile water via drinking water for four weeks. E171 intake ranged from 3.5 to 5.5 mg/kg bw/day. No E171 particles were observed in the colon specimens. The two control mice harbored 6 and 15 tumors in the colon, with average volumes of 15.3 mm³ and 11.9 mm³, respectively. The mice exposed to E171 harbored 1, 10, and 9 tumors in their colons, with average volumes of 3.32, 48.4, and 99.1 mm³, respectively (Supplementary Figure S1). Histological analysis of these tumors in the control and E171-treated mice (Figure 2) showed well-differentiated adenocarcinomas. Mice exposed to E171 showed adenocarcinomas with enhanced hyperplasia in epithelial cells as well as epithelial cell infiltration in the muscle layer.
Phase I: Tumor Formation Study
The CAC Tg/Tg;APC 580S/+ Tg mouse model was used to study tumor formation in the colon after exposure to 5 mg/kg bw/day of E171 by intragastric administration for 9 weeks. Our experiment showed no effect on the bodyweight of these Tg mice following the exposure to E171 (Figure 3). Similarly, no significant effect on the organ weight of the colon, liver, or spleen was detected (Figure 3). However, Figure 4A shows a tendency for the average number of tumors per mouse to increase after 7 and 9 weeks of exposure. Additionally, a statistically nonsignificant increase in the number of mice with tumors was registered after E171 exposure, when compared to the controls (Figure 4B).
Figure 3. Average bodyweight and organ weight of Tg mice following intragastric exposure to 5 mg/kg bw/day of E171 over time (n = 78). One mouse was euthanized ahead of schedule due to rectal prolapse; a second mouse from the control group was ruled an outlier due to severe tumor formation. These mice were not included in the graphs. After euthanasia, colon, liver, and spleen were weighed. The black data series corresponds to the control mice exposed to sterile water and the grey data series to the mice exposed to 5 mg/kg bw/day of E171 via intragastric administration. Data are presented as the mean ± standard deviation at each timepoint.
Figure 4. Tumor formation in the colon of Tg mice exposed to 5 mg/kg bw/day of E171 via intragastric gavage for 7 and 9 weeks. (A) The average number of tumors per mouse. (B) The number of mice bearing tumors. Data in (A) are presented as the mean of each group (n = 8 per group for 7 and 9 weeks; n = 7 for the control group at 7 weeks and the 5 mg/kg bw/day E171 group at 9 weeks). E171 exposure showed a statistically nonsignificant increase in the number of tumors per mouse as well as in the number of mice with tumors.
Phase II: Gene Expression Study
To study gene expression, CAC Tg/Tg;APC 580S/+ Tg mice were exposed to 1, 2, and 5 mg/kg bw/day of E171 for 2, 7, 14, and 21 days. One tumor was found in the colon of one mouse after exposure to 1 mg/kg bw/day of E171 for 21 days (data not shown) and was registered. No other tumors were detected at any other timepoint or concentration. DEGs were identified with LIMMA and corrected for multiple testing (FDR 5%). Table 2 shows the complete output of the LIMMA analysis, including the number of up- and downregulated genes at the different timepoints and doses. No clear dose-response relationship was observed over time in the number of DEGs (Figure 5), except for a time-dependent increase in DEGs following exposure to 1 mg/kg bw/day of E171. The number of DEGs following exposure to 2 mg/kg bw/day was consistently the lowest compared with the other two dosages. The majority of 5% FDR-corrected DEGs were detected following exposure to 1 mg/kg bw/day of E171; all further analyses therefore focused only on this dosage. Figure 6 shows the Venn diagram of 5% FDR-corrected DEGs (absolute FC ≥ 1.5 and q-value < 0.05) for mice exposed to 1 mg/kg bw/day of E171 per timepoint and their overlap. A total of three genes were differentially expressed throughout all four timepoints: D site albumin promoter-binding protein (Dbp) and nuclear receptor subfamily 1, group D, members 1/2 (Nr1d1, Nr1d2). After 2, 7, and 14 days of E171 exposure, the genes aryl hydrocarbon receptor nuclear translocator-like (Arntl/Bmal1) and nuclear factor, interleukin 3 regulated (Nfil3) were modulated. Nuclear factor of kappa light polypeptide gene enhancer in B-cells inhibitor, zeta (Nfkbiz) was modulated at 2, 14, and 21 days after exposure to E171. After 7, 14, and 21 days of E171 exposure, the gene RAR-related orphan receptor gamma (Rorc) was modulated. Ring finger protein 125 (Rnf125) and protein yippee-like 2 (Ypel2) were modulated at 2 and 14 days, while neuronal PAS domain protein 2 (Npas2), apolipoprotein L7a (Apol7a), and tensin 4 (Tns4) were modulated at 2 and 7 days following E171 exposure. Figure 7 shows a heatmap of all significantly expressed genes (absolute FC ≥ 1.5, q-value < 0.05) that were modulated at one or more timepoints following exposure to 1 mg/kg bw/day of E171.
Table 2. DEGs after LIMMA analysis of the microarray data obtained from Tg mice exposed to E171 at 1, 2, and 5 mg/kg bw/day, including absolute FC, the number of up- and downregulated genes, p-value, and q-value, as well as the combination of absolute FC and p/q-values. The DEGs in bold (absolute FC ≥ 1.5 and q-value < 0.05) were used for the ORA pathway analysis.
Figure 7. Heatmap of all genes that were differentially expressed (absolute FC ≥ 1.5, q-value < 0.05) at one or more timepoints following exposure to 1 mg/kg bw/day of E171. The rows represent the differentially expressed genes, while the columns represent the expression values at 2, 7, 14, and 21 days. Red and blue colors indicate the log2FC: red stands for upregulation and blue for downregulation of the gene in comparison with its time-matched control. The order of the genes is based on the number of timepoints at which a gene was significantly differentially expressed (marked by *).
Pathway Analysis
The pathway analysis was performed with the corresponding DEGs that passed absolute FC ≥ 1.5 and q-value < 0.05 (FDR 5%) for each timepoint and dose. The resulting pathways are shown in Table 3. Only DEGs following exposure to 1 mg/kg bw/day of E171 resulted in the detection of significantly altered genes and their consequent involvement in genetic pathways. Pathways related to cancer, cell cycle, circadian rhythm, metabolism, post-translational modification, and gene expression (transcription) were identified. The pathways of the circadian rhythm, exercise-induced circadian rhythm, nuclear receptor, and nuclear receptor transcription were modulated at all timepoints. On day 2, additional pathways relating to the cell cycle, disease, cancer, metabolism, and post-translational modification were identified. On day 21, additional pathways related to gene expression were detected. Supplementary Tables S1-S3 (Supplementary Materials) show the ORA pathway analysis of DEGs (absolute FC ≥ 1.5, p-value < 0.05) without FDR correction after exposure to 1, 2, and 5 mg/kg bw/day of E171. The ORA analysis via CPDB showed that on day 2 of exposure to 1 mg/kg bw/day of E171, pathways related to cell cycle and cell proliferation were modulated; more specifically, ERBB-2-, ERBB-4-, and PI3K-related pathways, as well as pathways associated with the circadian rhythm, post-translational modification, and nuclear receptor (transcription)-related pathways, were identified. After 7 days of exposure to 1 mg/kg bw/day of E171, pathways related to cholesterol and lipid metabolism, modulation of the cell cycle, and signaling (e.g., p53 signaling) were found. Exposure to 1 and 5 mg/kg bw/day of E171 for 14 days showed colorectal-cancer-related gene modulation in pathways associated with cell cycle and proliferation (PLK1, TP53 pathways), as well as the DNA damage response. After 21 days of exposure to 1 and 5 mg/kg bw/day of E171, our analysis revealed genes modulated in pathways associated with lipid and cholesterol metabolism as well as cell proliferation (FGFR1). This analysis of non-FDR-corrected DEGs showed additional modulation of genes involved in pathways that are strongly associated with the development of CRC and the general development and progression of tumors in the intestinal tract.
STEM Analysis
Figure 8 shows the output of the STEM analysis for all genes that were identified as significant. Genes are grouped by dosage and over time and were combined into clusters according to their biological function. The pathways associated with each cluster and dosage can be found in Table 4. Following the exposure to 1 mg/kg bw/day of E171, cluster 1 showed an upregulation of pathways related to G alpha signaling events and olfactory signaling. Genes assigned to cluster 2 were associated with the pathways of olfactory transduction/signaling and neuroactive ligand-receptor interactions. Cluster 3 showed the downregulation of genes involved in pathways related to circadian rhythm, cytokine-cytokine receptor interactions, O-glycosylation, and the RAF/MAPK signaling cascade. Following the exposure to 2 mg/kg bw/day of E171, two clusters were significantly altered, which correlates with the overall lower number of DEGs observed at this dose. Cluster 1 showed an increased expression of genes involved in pathways related to immune responses, including cell adhesion molecules (CAM), the complement cascade, hematopoietic cell lineage, immunoregulatory interactions, and B-cell receptor signaling. The genes present in cluster 2 did not correlate with any known pathways. Tg mice exposed to 5 mg/kg bw/day of E171 showed the highest number of genes and pathways that were significantly modulated. Genes combined in cluster 1 were related to signaling, inflammation, immunoregulatory responses, and disease. Cluster 2 showed modulation of genes involved in the cell cycle (e.g., TP53- and APC-related pathways), tumor suppression, and signaling. Cluster 3 showed an upregulation of genes related to olfactory and GPCR signaling, as well as extracellular matrix organization. The genes combined in cluster 4 showed modulation of the inflammatory mediator regulation of transient receptor potential (TRP) channels. Overall, the STEM analysis showed the temporal development of genes related to pathways involved in signaling, circadian rhythm, inflammation/immune responses, and cell cycle and tumor development (Table 4). It highlighted the significance of pathways identified via LIMMA/ORA, such as circadian rhythm, glycosylation, nuclear receptor signaling, inflammation, and cell-cycle-related events.
Figure 8. STEM analysis was performed with all genes that passed the preprocessing step to examine their temporality. All significant genes that were assigned to a profile were clustered by biological function and are represented in this figure. The x-axis corresponds to the timepoints 2, 7, 14, and 21 days, and the y-axis indicates the expression based on log2FC. The original STEM output profiles and the genes assigned per profile can be found in the supplementary data (Supplementary Tables S4-S6).
Functional Enrichment Analysis and Network Analysis
Metascape heatmap enrichment analysis identified alterations of genetic pathways similar to those described above by the ORA and STEM analyses. Figure 9 shows the enrichment heatmap of the significant (p-value < 0.05) pathways identified when analyzing all DEGs (absolute FC ≥ 1.5 and q-value < 0.05) after the exposure of Tg mice to 1 mg/kg bw/day of E171 for 2, 7, 14, and 21 days. These pathways included rhythmic processes, hormone-mediated signaling pathways, nuclear receptor transcription pathways, retinol metabolism, negative regulation of the inflammatory response, platelet-derived growth factor receptor signaling pathways, small cell lung cancer, muscle tissue development, O-linked glycosylation, and regulation of T-cell differentiation. Figure 10 shows the functional network relationships between the identified biological processes and their interactions, indicating the connections between genes and the pathways they are involved in through nodes.
Figure 10. Network showing the interconnection of enriched terms. Clustered genes are typically close to each other and colored the same way when belonging to the same biological process, as indicated by the legend in the bottom right corner. Edges link similar terms, with thicker edges indicating higher similarity. The functional network shows genes according to their function and interaction. All 5% FDR-corrected DEGs (absolute FC ≥ 1.5, q-value < 0.05) following the exposure to 1 mg/kg bw/day of E171 from all timepoints were used to construct the network via Metascape.
Discussion
In this study, we examined the effects of food-grade E171 exposure in a CAC Tg/Tg; APC 580S/+ transgenic mouse model. The pilot study showed histopathological changes in the colon of this Tg mouse model, including epithelial cell hyperplasia and the invasion of epithelial cells into the muscle layer. In phase I of this experimental setup, exposure to E171 increased the average number of colonic tumors per mouse, as well as the number of mice bearing tumors, compared with the controls, albeit without statistical significance. Furthermore, this study showed transcriptome changes in the distal colon implicating genes involved in genetic pathways that may contribute to the onset of CRC within the same model. This study supports the observed effects of E171 on the development of colorectal tumors, as previously reported in a chemically induced CRC murine model (AOM/DSS), although the effects of E171 in the Tg mouse model were less pronounced [12,15]. Histological examination of colonic specimens from the pilot study revealed an increase in tumor size and enhanced hyperplasia at the bottom of the adenocarcinomas. We observed large hyperplastic areas showing loss of tissue architecture, nuclear enlargement, and an increased nucleus-to-cytoplasm ratio (anaplasia), but also infiltrated epithelial cells in the muscle layer, which denotes malignancy. Although infiltrated epithelial cells were observed in both control and treated mice, the amount of cell infiltration was higher in E171-exposed mice [31].
The tumor formation study indicates an earlier onset of tumor formation and differences in tumor size following the exposure to E171. The occurrence of these changes in tissue architecture and tumor size might be explained by the findings of our gene expression study. The gene expression experiment in this Tg mouse model showed that the exposure to 1, 2, and 5 mg/kg bw/day of E171 did not result in a clear dose-time response in the expression changes, except for a tendency toward a time-dependent increase in the number of DEGs following the exposure to 1 mg/kg bw/day. The absence of a linear dose-response curve over time might be a consequence of the aggregation and agglomeration of the TiO2 particles at higher concentrations, as observed by us previously [32]. Based on earlier studies, we hypothesized that E171 exposure enhances colorectal cancer development by modulating gene expression changes in signal transduction, oxidative stress, inflammation, DNA damage/repair, and interferences with the immune system [14,15]. This Tg mouse model revealed consistent modulation of genes in pathways related to the circadian rhythm, namely "circadian rhythm" and "exercise-induced circadian rhythm", following ORA via CPDB. The modulation of genes in these pathways is linked to CRC through their involvement in transcriptional and translational networks and nuclear signaling [33]. Our study revealed the modulation of several genes in these pathways, including Per3, Cry1, Nr1d1, Nr1d2, Arntl (Bmal1), Npas2, and Rorc. Alterations of genes in the Npas and Per families have been associated with a dysfunctional cell cycle, resulting in a higher susceptibility to DNA-damage-induced cancer development, and with overall survival in CRC patients [34-36]. Epidemiological studies provide an additional link: circadian disruption in shift workers has been classified as probably carcinogenic to humans [37]. Cell cycle, cell signaling, and proliferation are also rhythmically expressed and hence partially regulated by the expression of circadian clock genes, e.g., the Per and Cry families, making them a potential target for the modulation of circadian-rhythm-related pathways [36,38]. Furthermore, a comparison of mRNA expression levels in tumor tissue and non-tumor mucosa of human colorectal specimens showed a decrease in Cry1 and Cry2 [39], whereas a different study reported Cry1 overexpression in CRC cell lines and specimens, suggesting an association with a poor prognosis in patients with CRC [38].
Another potential driver in the development of CRC is the redox-oscillating Arntl (Bmal1) gene [40]. It has been shown to inhibit tumor proliferation through G2-M phase cell cycle arrest and is known as an important regulator of the p53/p21 tumor suppressor pathway [39,41,42]. Downregulation of Arntl (Bmal1) reduces the cells' ability to induce cell cycle arrest upon p53 activation in response to cellular stress signals or DNA damage [39]. The formation of heterodimers between Npas2 and Arntl (Bmal1) may regulate the transcription of tumor cell growth and survival genes, further indicating a potential tumor-suppressing effect associated with these genes [36,42]. Arntl (Bmal1) has further been associated with the control of extracellular matrix organization through the dysregulation of matrix metalloproteinases 2 and 9 (MMP-2, -9) [43,44]. MMPs, especially the gelatinases MMP-2/-9, are strongly related to the development of metastasis and secondary tumors in CRC [44]. Our STEM analysis revealed modulation of extracellular matrix degradation, including genes in pathways related to the activation of matrix metalloproteinases.
Additional pathways modulated following the oral exposure to 1 mg/kg bw/day of E171 for 2 days were linked to glycosylation, O-linked glycosylation, and the O-linked glycosylation of TSR-containing proteins. The genes Adamts4, Adamts14, and Spon2 were significantly modulated in these pathways, further indicating a correlation between E171 exposure and extracellular matrix organization related to the development of CRC. The Adamts gene family encodes proteinases that are responsible for extracellular matrix degradation and regulation. Adamts4 has been shown to contribute to the development and progression of CRC, especially through its effects on tumor growth, the regulation of macrophages, and its influence on the inflammatory microenvironment in cancer [45]. A correlation between Adamts4 and cancer progression was also reported by Filou et al. (2015), indicating the involvement of these collagen-processing proteases in CRC [46]. Overexpression of Spon2 is highly associated with colon cancer metastasis, which represents one of the most feared complications of CRC [47]. Contrary to these findings, this Tg mouse model showed downregulation of Spon2 expression after 2 days of exposure to 1 mg/kg bw/day of E171, which might result from the short exposure time and potential compensatory mechanisms of the organism. The regulation of the extracellular matrix environment and its involvement in the expression of proteins, growth factors, chemokines, and cell adhesion molecules contribute to the high risk of metastasis in CRC patients and showcase an important hallmark of cancer [44].
Genes relating to pathways of nuclear receptor signaling were consistently modulated after exposure to 1 mg/kg bw/day of E171. Nuclear receptors, functioning as sensors for dietary or endogenous stimuli, are responsible for translating nutritional or hormonal signals into transcriptional modifications [48]. These are commonly regulated by hormones and metabolites of steroid retinoids (vitamin A metabolites), vitamin D, fatty acids, bile acids, and other dietary derived lipids [48]. Our study demonstrated the modulation of nuclear receptors over time, involving the genes Rarb (day 2), Nr1d1/Nr1d2 (days 2, 7, 14, and 21), and Rorc (days 7, 14, and 21). Rarb (RAR-β) plays a critical role in the progression of several human cancers, including CRC, where it is responsible for the transcription of genes involved in cellular differentiation and acts as a potential tumor suppressor through the subsequent modulation of the retinoic acid response element (RARE) [49,50]. Furthermore, Rarb and Rorc are involved in the regulation of β-catenin/WNT signaling [51]. While a direct modulation of transcripts relating to β-catenin/WNT signaling was not found in this Tg mouse model, many studies suggest a link between the dysregulation of this pathway and CRC [52,53]. Rorc is dysregulated in a multitude of cancers and is likely to participate in carcinogenesis through the modulation of IL-17, androgen receptors, and protein arginine N-methyltransferase 2 (Prmt2), leading to a ligand-dependent interaction and co-regulation of mechanisms associated with the development of inflammatory diseases, homeostasis, and the circadian rhythm [54,55]. Furthermore, Rorc functions as a transcription factor for RORγt, which is involved in the maturation of thymocytes and T-helper (Th) cells, particularly Th17, whose main function is the production of IL-17 [56]. Modulation of Th1/Th17 was shown by Bettini et al. (2017) in rats exposed to 10 mg/kg bw/day of E171 [13]. Changes in the regulation of T-cell differentiation were also identified in our functional enrichment analysis. Nr1d1/Nr1d2 are additional nuclear receptor-associated core-clock genes that were modulated after exposure to E171. Nr1d1 especially impacts the circadian rhythm phenotype of Myc, Wee1, and Tp53; hence, it is involved in the processes of cell proliferation, apoptosis, and cell migration [57]. Another potential tumor suppressor gene is Dhrs3, which, together with Rarb, was modulated in pathways relating to retinol metabolism. Dhrs3 and Rarb have been correlated with various types of cancer, including CRC and gastric cancer. Dhrs3, as a tumor suppressor gene, has been suggested to play a critical role in connecting retinol metabolism with the circadian rhythm, leading to an alteration of immune responses and cellular metabolism [58-60].
Altered expression of the cell-cycle checkpoint genes Skp2, Mcm6, and E2f2 was also identified. Skp2 modulation and the loss of its substrate p27, together with Mcm6 and the E2f family, are clinical markers for a poor outcome in CRC, its malignancy, and CRC tumor growth [61-63].
Previously, we studied gene expression responses after exposure to 5 mg/kg bw/day of E171 via intragastric gavage in various mouse models, including normal BALB/c mice and colitis-induced AOM/DSS mice [14,15]. Across these three different models, similarities were found in the modulation of pathways related to signal transduction, cell cycle events, metabolism, inflammation, and tumor development. More specifically, the modulation of olfactory/GPCR-related pathways, inflammation-related pathways, modulation of extracellular spaces, activation of the immune system, metabolic changes, DNA damage/repair, and cancer-related signaling pathways (e.g., MAPK, PI3K) were found in common with this study [14,15].
Conclusions
In this comprehensive study, using a CAC Tg/Tg; APC 580S/+ transgenic mouse model for colorectal cancer, we showed increased tumor growth and progression but found no statistically significant increase in tumor formation induced by E171. We used ORA and STEM analyses to identify significantly modulated genes in genetic pathways and their temporality, to define mechanisms that are potentially involved in the increased tumor formation observed after exposure to E171. Our data suggest that E171 may contribute to tumor formation and progression by modulating pathways related to inflammation, activation of immune responses, cell cycle events, and cancer signaling. These findings are of relevance to the ongoing debate on the safety evaluation of E171 and contribute to the identification of molecular mechanisms related to E171-induced genotoxicity and carcinogenicity. Further toxicity studies are needed to evaluate the safety of E171 and other metal-based nanomaterials that are used as food additives or in food packaging materials.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nano12081256/s1. Figure S1: Increase in the average volume of tumors found in the homozygous and heterozygous transgenic mice after exposure to 5 mg/kg bw/day of E171 via drinking water (Kruskal-Wallis test, * p < 0.05, ** p < 0.005). Table S1: Pathway over-representation analysis (ORA) resulting from LIMMA analysis after exposure to 1 mg/kg bw/day of E171 (absolute FC ≥ 1.5, p-value < 0.05). Table S2: Pathway over-representation analysis (ORA) resulting from LIMMA analysis after exposure to 2 mg/kg bw/day of E171 (absolute FC ≥ 1.5, p-value < 0.05). Table S3: Pathway over-representation analysis (ORA) resulting from LIMMA analysis after exposure to 5 mg/kg bw/day of E171 (absolute FC ≥ 1.5, p-value < 0.05). Table S4: List of genes assigned to significantly altered profiles, following STEM analysis of 1 mg/kg bw/day of E171. Table S5: List of genes assigned to significantly altered profiles, following STEM analysis of 2 mg/kg bw/day of E171. Table S6: List of genes assigned to significantly altered profiles, following STEM analysis of 5 mg/kg bw/day of E171.
Figure 2. Histopathological analysis of tumors in control and E171-treated mice showed well-differentiated adenocarcinomas. The mice exposed to E171 additionally showed enhanced hyperplasia in the epithelial cells as well as epithelial cell infiltration in the muscle layer of the adenocarcinomas. Red squares: epithelial cell hyperplasia and anaplasia; blue squares: epithelial cell infiltration into the muscle layer.
Figure 3. Average body weight and organ weight of Tg mice following intragastric exposure to 5 mg/kg bw/day of E171 over time (n = 78). One mouse was euthanized ahead of schedule due to rectal prolapse. A second mouse, from the control group, was ruled an outlier due to severe tumor formation. These mice were not included in the graphs. After euthanasia, the colon, liver, and spleen were weighed. The black data series corresponds to the control mice exposed to sterile water, and the grey data series to the mice exposed to 5 mg/kg bw/day of E171 via intragastric administration. Data are presented as the mean ± standard deviation at each timepoint.
Figure 9. Metascape functional enrichment heatmap following the analysis of all 5% FDR-corrected DEGs (absolute FC ≥ 1.5, q-value < 0.05) after the exposure to 1 mg/kg bw/day of E171 in a Tg mouse model. Bar graphs of enriched terms across the input gene list are colored by p-values. Significantly altered genetic pathways included rhythmic processes, signaling, the nuclear transcription pathway, retinol metabolism, negative regulation of defense responses, platelet-derived growth factor signaling, small cell lung cancer, skeletal muscle cell differentiation, O-linked glycosylation, and regulation of T-cell differentiation, confirming genetic alterations in pathways that were identified by the ORA and STEM analyses.
Figure 10. Network showing the interconnection of enriched terms. Clustered genes were typically close to each other and colored the same way when belonging to the same biological process, indicated by the legend in the bottom right corner. Edges link similar terms, where thicker edges indicate higher similarity. The functional network shows genes according to their function and interaction. All 5% FDR-corrected DEGs (absolute FC ≥ 1.5, q-value < 0.05) following the exposure to 1 mg/kg bw/day of E171 from all timepoints were used to construct the network via Metascape.
Number of differentially expressed genes (DEGs) after exposure to 1, 2, and 5 mg/kg bw/day of E171 in the colon of Tg mice. Bars correspond with an absolute FC ≥ 1.5. The legend indicates the days of exposure. Exposure to 1 mg/kg bw/day of E171 showed a time-dependent increase in DEGs from 2 to 21 days. The 2 mg/kg bw/day dose consistently showed the lowest response throughout all four timepoints, while 5 mg/kg bw/day of E171 resulted in a decreasing number of DEGs with increased exposure time. The exact number of DEGs per timepoint and dose can be found in Table
Table 3. Pathway over-representation analysis (ORA) of the 5% FDR-corrected DEGs following the exposure to 1 mg/kg bw/day of E171 (absolute FC ≥ 1.5, q-value < 0.05). Pathways were grouped by timepoint and biological function.
Table 4. Pathway over-representation analysis (ORA) resulting from STEM analysis of the gene expression changes from all genes following the exposure to 1, 2, and 5 mg/kg bw/day of E171 over time. Profile colors indicate gene expression profiles that were assigned to a similar biological process.
"year": 2022,
"sha1": "76eb660f3e37e318a4283ccfb1d0841b965c6b34",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/12/8/1256/pdf?version=1649659644",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b4ded408e144cf9995943a8edcc8a93ab972a897",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Novel G-quadruplex stabilizing agents: in-silico approach and dynamics
The stabilization of the overhanging G-rich repetitive DNA units at the 3′-end of telomeres, which are well known to form functionally important G-quadruplex structures, is a current goal in designing novel anticancer drugs. In the present study, we have undertaken an in silico approach by molecular docking using a small-molecule library to find potential G-quadruplex stabilizing agents. Two molecules, A, [N′1-imino(2-pyridyl)methyl-3,4,5-trimethoxybenzene-1-carbohydrazide], and B, [3-[4-({[3-({4-[(2-cyanoethyl)(methyl)amino]benzylidene}amino)propyl]imino}methyl)(methyl)anilino]propanenitrile], that had good docking scores have been investigated for interaction with G-quadruplexes in a Molecular Dynamics simulation study. Fluorescence spectroscopy of G-quadruplexes bound to the screened molecules A and B was used to experimentally validate the theoretical results. The binding of ligands A and B to G-quadruplexes resulted in blue shifts of 10-18 nm in the fluorescence emission spectra of the G-quadruplexes, demonstrating that both molecules bind to the G-face of the quadruplex. The same experiment was performed for the complexation of these small molecules with a G-rich DNA duplex, 5′-GCGCATGCTACGCG-3′ paired with 3′-CGCGTACGATGCGC-5′. Interestingly, no blue shift was observed in the fluorescence emission spectra of the DNA duplex in the presence of these small molecules. Thus, these findings indicated that these ligands bind very selectively to G-quadruplexes rather than to duplex DNA. In addition, a one-dimensional water ligand observed via gradient spectroscopy (WaterLOGSY) Nuclear Magnetic Resonance (NMR) experiment showed that both molecules bound to the 23-mer G-quadruplex DNA. The molecular properties of the ligand-quadruplex complex have been analyzed with the help of the Adaptive Poisson-Boltzmann Solver, revealing that electrostatics govern the binding of the small molecules to G-quadruplexes. Both molecules were investigated in detail using solvation free energy calculations and Absorption, Distribution, Metabolism, Elimination and Toxicity (ADMET) predictions, which provide insight into lead optimization for designing G-quadruplex stabilizing agents; therefore, these molecules have potential as new therapeutic agents.
Introduction
The 3′-end region of chromosomes consists of almost 200 nucleotides composed of tandem repeats of the hexanucleotide (TTAGGG)n and ends with an overhanging single strand. Telomeres are the main location in which these crucial functional secondary structures of DNA form (Cech, 2004). Telomeres have a vital role in life because of the various functions of telomere-associated proteins, their attachment to the nuclear matrix, and their higher-order chromatin structures (Campbell, 2012; Collado, Blasco, & Serrano, 2007). Conservation of telomeric length is an important biological condition for cell growth (Engelhardt & Finke, 2001; Lange, 2009). Telomerase is the enzyme that catalyzes the addition of hexanucleotide repeats of TTAGGG to the 3′-end of a telomere and maintains the telomere during normal cell division (mitosis) (Eisenstein, 2011; Flores et al., 2005). Unusual overexpression of telomerase causes massive extension of telomeric ends and anomalous cell proliferation, which causes cancer. Many studies report that 80-85% of cancerous cells overexpress telomerase (Cong, Wright, & Shay, 2002; Lin et al., 2008; Rodriguez-Brenes & Peskin, 2009). Researchers have hypothesized that the inhibition of telomerase can be an effective approach to return cancer cells to natural apoptosis (Finkel, Serrano, & Blasco, 2007; Sun & Hurley, 2001; Zaug, Podell, & Cech, 2005). The finding that cells switch to alternative mechanisms to maintain telomeric extension is valuable for adopting different strategies for the downregulation of genes (Hu et al., 2012; Ward & Autexier, 2005).
The overhanging G-rich repetitive DNA units at the 3′-end of telomeres can form various tertiary structures, including G-quadruplexes, in which the guanine bases stack over each other and are stabilized by the cyclic Hoogsteen type of hydrogen bonding (Burge, Parkinson, Hazel, Todd, & Neidle, 2006; Dai, Carver, Punchihewa, Jones, & Yang, 2007; Luu, Phan, Kuryavyi, Lacroix, & Patel, 2008). These unusual but functionally important DNA structures are also present at other genomic positions, such as oncogene promoter regions, including c-kit (Fernando et al., 2006), c-MYC (Jain, Grand, Bearss, & Hurley, 2002), bcl-2 (Dexheimer, Sun, & Hurley, 2006), VEGF (Sun, Guo, Rusche & Hurley, 2005), and others. There has been immense research on the biological and physical properties of these noncanonical DNA structures; this research indicates that G-quadruplexes might be a target for designing new anticancer therapeutics (Balasubramanian & Neidle, 2009; Neidle, 2009; Neidle & Parkinson, 2008). The most promising aspect of the G-quadruplex structure is its topology, which cannot be recognized by the single-stranded RNA component of the telomerase enzyme (Rodriguez et al., 2008). In contrast, the G-quadruplex itself can be a signal of DNA damage and can, therefore, induce apoptosis (Rodriguez et al., 2008). Currently, researchers are more focused on developing agents that can stabilize the secondary G-rich structures of DNA rather than on inhibiting telomerase activity directly (Dailey et al., 2009; Monchaud & Teulade-Fichou, 2008).
Determining which organic chemical class should be investigated experimentally is a crucial step in the initial stage of drug development. The synthesis, derivatization through modification of functional groups, and the addition or deletion of excluded volumes to achieve a molecule truly specific for the target are indeed tedious, requiring a huge investment of manpower and finances. Therefore, despite much investment in research, no drug has been developed as a potential stabilizing agent of G-quadruplexes. Alternatively, structure-based drug design is an efficient method to screen several potential chemical classes of compounds; after lead optimization, the resulting chemical moiety may prove worthy to the pharmaceutical community (Holt, Buscaglia, Trent, & Chaires, 2011; Neidle & Thurston, 2005). Because of the advancement of computing power, in silico drug design has proven advantageous in the rational design of novel therapeutic agents; this method is currently in high demand. In the present study, we have adopted the most consistent and efficient methodology of molecular docking to screen a drug library for potential G-quadruplex stabilizing agents. We have attempted to validate the methodology at regular intervals and to tighten the selection criteria for molecules to produce the most accurate positive result. The docking result was further strengthened theoretically with another promising method, Molecular Dynamics (MD) simulation. In addition, the results of the theoretical studies were validated using low-resolution and high-resolution spectroscopy. Fluorescence and Nuclear Magnetic Resonance (NMR) spectroscopies were employed to show that the hit molecules bind selectively to the G-quadruplex structure. The fundamental nature and various other characteristics of the hypothesized molecules, such as electrostatic properties, solvation free energy, and pharmacokinetics (Absorption, Distribution, Metabolism, Elimination and Toxicity [ADMET]), have been analyzed; this information could be used by various research communities to test and develop these G-quadruplex stabilizing agents.
We searched for a promising algorithm that could be adapted successfully to satisfy our goal of finding a G-quadruplex stabilizing agent via in silico docking/virtual screening. All of the available programs have cutting-edge results against protein targets, yet their efficacies against nucleic acids are still debated. Anthony and coworkers have performed docking simulation experiments in which small molecules are targeted to the minor groove of DNA (Anthony et al., 2005). The simulation showed excellent agreement with experimental results from X-ray crystallography, NMR, and gel-based footprinting. In another type of assessment, it was found that protein-based docking programs such as Glide and GOLD are efficient enough to reproduce the binding modes of the 60 tested RNA complexes (Li et al., 2010). A similar type of validation was also reported recently by a molecular modeling group; they showed that the performance of Glide and GOLD with the simulated structure is very good and produces accurate results in accordance with the structures elucidated by spectroscopic techniques (Srivastava, Chourasia, Kumar, & Sastry, 2011). Moreover, a comparative analysis of different DNA-binding drugs for disease targets such as Leishmania has also been reported, in which Glide was used for docking studies with nucleic acid targets (Chauhan, Vidyarthi, & Poddar, 2012). Considering these reports, we adopted the efficient Glide docking algorithm to screen a drug library for potential G-quadruplex stabilizing agents.
Chemicals and reagents
The DNA duplex and G-quadruplex sequences were purchased from Kric, India. The small molecules, A and B, were purchased from Fisher Scientific Inc (Maybridge, Cornwall, UK). Deuterium oxide was obtained from Cambridge Isotope Laboratories Inc.
G-quadruplex and molecular data-set
For this investigation, the NMR structure of the human telomeric G-quadruplex TAGGG(TTAGGG)3 (PDB accession code: 2ld8) (Heddi & Phan, 2011) was chosen. The potassium (K+) ion was described as an important factor contributing to the structural topology of G-quadruplexes (Lim et al., 2009; Renčiuk et al., 2009). Two K+ ions were manually placed between the three quartets using the edit option in Maestro. Potential binding sites on the G-quadruplex were identified using SiteMap v2.5; the calculations begin with an initial search stage that finds one or more regions, called sites, which are suitable for binding to ligand molecules. The molecular data-set containing the small molecules used for virtual screening was obtained from the Maybridge database (HitFinder) of 14,400 molecules. This database was chosen because its molecules were selected using a clustering algorithm that employs standard Daylight fingerprints with the Tanimoto (Butina, 1999) similarity index. Clusters built at a similarity of 0.71 can then be used in NMR screening against the target (Baurin et al., 2004). Moreover, the entire molecular data-set is in the permissible range of drug-likeness, i.e. the Lipinski rule of five (Lipinski, Lombardo, Dominy, & Feeney, 2001): ClogP ≤ 5, H-bond acceptors ≤ 10, H-bond donors ≤ 5, and mol. wt. ≤ 500.
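A drug-likeness pre-filter of this kind can be reproduced with open-source tooling. The sketch below applies the same rule-of-five cutoffs with RDKit; note that RDKit's Crippen logP is used here as a stand-in for ClogP, and the SMILES string is only an example.

from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def passes_rule_of_five(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (Descriptors.MolWt(mol) <= 500
            and Crippen.MolLogP(mol) <= 5          # stand-in for ClogP
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

print(passes_rule_of_five("Cn1cnc2c1c(=O)n(C)c(=O)n2C"))  # caffeine -> True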
Docking Studies
All of the molecules were docked using Glide v5.7. The conformational flexibility of the ligand molecule is determined by an extensive conformational search augmented by a heuristic screen that eliminates unsuitable conformations with long-range internal H-bonds. For each core conformation of a ligand, an exhaustive search for possible locations and orientations is performed over the surface of the G-quadruplex. The refined ligand undergoes energy minimization based on precomputed OPLS-AA (Jorgensen, Maxwell, & Tirado-Rives, 1996) van der Waals and electrostatic grids to reduce the large energy and gradient terms resulting from inter-atomic contacts that are too close. A grid with dimensions of 40.10 Å × 38.63 Å × 30.56 Å over the G-quadruplex was used to define the binding region and to allow ligands to dock at distances ≤ 20 Å. The defined grid covers the entire topology of the G-quadruplex so that the ligand position and orientation relative to the receptor can be sampled sufficiently; the conformation of the receptor was also fixed during the docking. The standard precision (SP) mode of the software was used for the initial screening of the molecules. Selected top-scoring molecules binding to the desired core of the G-quadruplex were taken forward to the extra precision (XP) mode of docking (Eldridge, Murray, Auton, Paolini, & Mee, 1997). To validate the docking results, we used the docking score of Quarfloxin (CX-3543) (Simonsson et al., 1998), a lead molecule designed and synthesized by Cylene Pharmaceuticals, as a positive control for the docking procedure. Quarfloxin, a pentacyclic system with amino side chains and high selectivity for G-quadruplexes, is already in a phase II clinical trial. Other classes of compounds with reported interactions with G-quadruplexes were also used in the control docking studies (Zhen, Xi, Chattopadhyaya, & Hecht, 2011). The Glide score is based on the ChemScore scoring function coupled with the ligand-receptor molecular mechanics interaction energy and ligand strain energy, thus providing a docked pose. Based on the binding site of these drugs and their docking scores, we then validated our docking study of small molecules from the Maybridge database. As part of the docking result analysis, we also used a 1000 ligand-like decoy set from Schrödinger (www.schrodinger.com/productpage/14/5/74/) to evaluate the results using a receiver operating characteristic (ROC) curve approach (Triballeau, Acher, Brabet, Pin, & Bertrand, 2005). The characteristic area under the curve obtained from the ROC can be helpful in predicting whether the obtained hits are true positives.
Molecular dynamics (MD) simulation
Dynamic simulation of the drug-bound and drug-free states of the G-quadruplex was performed using the Desmond Molecular Dynamics System v2.2 with OPLS-AA force fields (Jorgensen et al., 1996). The OPLS force field generates accurate charges and force field parameters for ligands. This force field was adopted with the objective of keeping the same set of protonation states and charge parameters for ligand pose refinement and conformational sampling during docking; thus, the same parameter set was also used in the simulation studies. The solute system containing the drug-bound and drug-free states of the G-quadruplex was solvated using the TIP3P water model (Jorgensen, Chandrasekhar, Madura, Impey, & Klein, 1983), with orthorhombic (α = β = γ = 90°) boundary conditions extending a distance of 10 Å from any solute atom in all directions. The charge of the total system was neutralized by adding the appropriate number of Na+ counter ions. The number of explicit water molecules ranged from 3891 to 4008. After the setup of the system, an initial minimization was performed with a convergence threshold of 1.0 kcal·mol−1·Å−1 to allow all of the atoms to adjust to the system environment. An MD simulation was performed for each system using an isothermal-isobaric ensemble (NPT) with a relaxation process. The relaxation process consisted of minimization with and without solute restraints and simulation in a canonical ensemble using a Berendsen thermostat with a temperature of 10 K and restraints on solute heavy atoms for 12 ps. Subsequently, simulation in the NPT ensemble was performed using a Berendsen thermostat and a Berendsen barostat with a temperature of 10 K and a pressure of 1 atm for 12 ps. In the next stage of relaxation, the system was simulated for 24 ps with Berendsen NPT at a temperature of 300 K and a pressure of 1 atm, with and without restraints on solute heavy atoms. The particle mesh Ewald method (Darden, York, & Pedersen, 1993) was used with a tolerance of 1e−09 as part of the Coulomb method. A cutoff of 14 Å was used for calculating the nonbonded solvent-solvent and solute-solvent interactions. The pressure and temperature of the system were maintained at 1.013 atm (isotropic) and 300 K, respectively. The SHAKE algorithm (Krutler, Gunsteren, & Hnenberger, 2001) was employed with an integration time step of 2 fs. Recording intervals of 1.2 and 4.8 ps were set for the energy and trajectory frames, respectively. The MD simulation for the entire system was continued for up to 5 ns, and the trajectory analysis was performed using the Simulation Event Analysis module of Desmond (Scheme 2).
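Desmond scripting is proprietary, but the same minimize-then-NPT protocol can be illustrated with OpenMM. The force field, file names, and step count below are placeholders (OpenMM does not ship OPLS-AA, so an Amber/TIP3P combination stands in), and the staged Berendsen relaxation is condensed into a single minimization plus NPT run; this is a sketch of the protocol, not a reproduction of it.

import openmm as mm
import openmm.app as app
from openmm import unit

pdb = app.PDBFile("quadruplex_solvated.pdb")            # hypothetical input
ff = app.ForceField("amber14-all.xml", "amber14/tip3p.xml")
system = ff.createSystem(pdb.topology, nonbondedMethod=app.PME,
                         nonbondedCutoff=1.4 * unit.nanometer,   # 14 A cutoff
                         constraints=app.HBonds)                 # SHAKE-like
system.addForce(mm.MonteCarloBarostat(1.013 * unit.atmosphere, 300 * unit.kelvin))
integrator = mm.LangevinMiddleIntegrator(300 * unit.kelvin,
                                         1.0 / unit.picosecond,
                                         0.002 * unit.picoseconds)  # 2 fs step
sim = app.Simulation(pdb.topology, system, integrator)
sim.context.setPositions(pdb.positions)
sim.minimizeEnergy()
sim.step(2_500_000)  # 5 ns of production at 2 fs per step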
Adaptive Poisson-Boltzmann Solver (APBS) iso-surface calculations
The Adaptive Poisson-Boltzmann Solver (APBS) was used to analyze the electrostatic characteristics of the macromolecular complex. The standard all-atom charges generated by the OPLS force field were used for this purpose. The Poisson-Boltzmann equation (PBE), the most widespread model for evaluating electrostatic properties, was adopted following Baker and coworkers (Baker, Sept, Joseph, Holst, & McCammon, 2001). The essence of the model is that it relates the electrostatic potential to the dielectric properties and the distribution of the atomic partial charges of the biomolecular system. The various computations required to evaluate the electrostatic properties were performed using a plugin option of PyMOL v1.2r3pre, and the resulting electrostatic contour visualizations were collected at the appropriate positive and negative iso-surface values, optimized for visualization.
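For reference, the dimensionless form of the nonlinear PBE solved by APBS (Baker et al., 2001) is

−∇·[ε(x)∇u(x)] + κ̄²(x) sinh u(x) = f(x), with u(x) = e φ(x)/kT,

where ε(x) is the position-dependent dielectric coefficient, κ̄²(x) is the modified Debye-Hückel screening factor describing the mobile ions, f(x) represents the fixed atomic partial charges (here, the OPLS charges), φ(x) is the electrostatic potential, and e is the elementary charge.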
Free energy perturbation (FEP) and the concept of free energy
The main objective of performing MD simulations on an ensemble system is to explore the macroscopic properties of that system based on its microscopic properties. The macroscopic properties include various thermodynamic terms, such as temperature, pressure, the number of particles in the system, and the free energy of binding of a particular compound; in contrast, the atomic positions (r) and the momenta (p) are the microscopic properties of a system, which are defined as the coordinates of the multi-dimensional space called the phase space. The concept of free energy calculations was developed long ago (Kirkwood, 1935; Zwanzig, 1954) but was not popular because of its complexity. The first successful treatment of free energy calculations was for the proton transfer in lysozyme (Warshel & Levitt, 1976). These calculations gained popularity with significant work by several scientists (Kollman, 1993; McCarrick & Kollman, 1999) who integrated thermodynamic parameters with the free energy calculations. The free energy calculation is mainly based on Zwanzig's equation, which gives the free energy difference between the two states A and B:
ΔG(A→B) = −(1/β) ln ⟨exp(βΔV)⟩_A

where β = 1/kT (k is the Boltzmann constant and T is the temperature), ⟨…⟩_A denotes an MD- or Monte Carlo-generated ensemble average over state A, and ΔV = V_A − V_B, with V_A and V_B the potential energies of states A and B. For absolute solvation free energy expressions, A can be defined as the solute molecule in the gas phase and B as the solute molecule in the solution phase. The free energy calculations were performed in Desmond. As shown in Figure 1, the solvation free energy for ligand molecules can be calculated by annihilating the reactant (ligand) in solvent and in vacuum.
Figure 1. Schematic representation of the free energy change in solute-solvent interactions in different phases. "Reactant" refers to the ligand molecule; since the formed product differs only in molecular conformation, no product species is indicated here.
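Numerically, the Zwanzig estimator is a one-line exponential average over sampled energy differences. A minimal sketch, using the sign convention above (ΔV = V_A − V_B) and synthetic ΔV samples for illustration only:

import numpy as np

KT = 0.5925  # kT in kcal/mol at 298 K

def zwanzig_dg(delta_v):
    # dG(A->B) = -kT * ln < exp(beta * dV) >_A, with dV = V_A - V_B
    return -KT * np.log(np.mean(np.exp(delta_v / KT)))

# Synthetic samples of dV drawn from an ensemble of state A (illustration only)
rng = np.random.default_rng(0)
print(zwanzig_dg(rng.normal(loc=-1.0, scale=0.5, size=10000)))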
The main advantage of Desmond is that it requires only one step to calculate the solvation free energy of the solute; the reactant in vacuum is recovered from the solvent phase by switching off the interactions of the reactant with all other atoms in the system (Shivakumar et al., 2010).
The solvation free energy calculation plays a major role in predicting the binding affinity of a compound and is a powerful tool in the lead-optimization step of drug discovery. Implementing FEP is a multi-step method, which involves the use of a group of intermediate potential energy functions defined along the path from state A to state B (Brandsdal et al., 2003), commonly of the linear-mixing form V(λm) = (1 − λm)·V_A + λm·V_B. In these functions, λ is the coupling factor whose value ranges from 0 to 1, and m = (1 to n) indexes the points; each point possesses an individual potential energy function corresponding to the particular λ value.
A new process of multiple-step free energy calculation was proposed, called "λ-dynamics." In this process, λ is treated as a dynamic variable, with one λi (i = 1 to n) for each of the n ligands considered (Kong & Brooks, 1996).
The main concept of this method is that different ligands compete for the same receptor in the body; this competition can be modeled by the simultaneous searching of the various ligands with a varying value of λ (Elber & Karplus, 1990). The use of various intermediate states increases the accuracy of the free energy calculations. A more accurate method for the calculation of free energies was proposed by Bennett (1976) and is referred to as the Bennett acceptance ratio. The free energy perturbation (FEP) calculations were performed in Desmond using this method, in which a number of simulations were run in parallel for a single FEP calculation.
For the solvation free energy calculation in Desmond, the total free energy from the FEP workflow was employed: the small molecules were annihilated separately in the solvent system with the TIP3P water model, the simulation was run for up to 2 ns, and 11 λ windows were used. A correction value for the energy, accounting for the long-range dispersion energy missed because of the cutoff of the van der Waals potential, was also obtained using the FEP calculations.
Fluorescence studies
Fluorescence emission spectroscopy was used to analyze the binding behavior of both molecules (A and B) with a G-quadruplex, 5′-TAGGG(TTAGGG)3-3′. A G-rich DNA duplex was used as a control in this study, with the sequence 5′-GCGCATGCTACGCG-3′ paired with 3′-CGCGTACGATGCGC-5′. Emission spectra were recorded at room temperature using a Hitachi spectrometer (F-700 FL spectrophotometer) with a 1 cm path-length quartz cuvette. Each emission spectrum was recorded from 300 to 500 nm with an excitation wavelength of 260 nm. The excitation and emission slits were set to 5 nm each. The emission spectra were recorded by titrating the small molecules at concentrations of up to 50 μM against 10 μM of the G-quadruplex or the duplex DNA. All fluorescence experiments were performed in 10 mM phosphate buffer, 10 mM EDTA, and 100 mM KCl (pH 7.0). Because of solubility problems with molecule A, a small amount of NaOH was added to dissolve it in the same buffer.
WaterLOGSY NMR experiments
All spectra were recorded at 298 K with a Bruker AVANCE III 500 MHz NMR spectrometer equipped with a 5 mm smart probe. For each sample, a reference spectrum and a 1D WaterLOGSY spectrum were recorded in 10 mM phosphate buffer, 10 mM EDTA, and 100 mM KCl (pH 7.0) containing 10% D2O. The first water-selective 180° pulse was 25 ms long. A weak rectangular PFG was applied for the entire mixing time (1.5 s). A short gradient recovery time of 1 ms was introduced at the end of the mixing time before the detection pulse. The two water-selective 180° square pulses of the double spin-echo sequence were 2.6 ms long. The gradient recovery time was 0.2 s. The data were collected with a sweep width of 10 ppm, an acquisition time of 1.5 s, and a relaxation delay of 2.0 s. Prior to Fourier transformation, the data were multiplied by an exponential function with a line broadening of 2.0 Hz. All chemical shifts were referenced using DSS (2,2-dimethyl-2-silapentane-5-sulfonate sodium salt) as an internal standard.
Results and discussion
Docking Studies
Two types of binding sites were found for the G-quadruplex from the SiteMap analysis (Figure 2). By structure, these two binding sites can be classified as (a) a loop region, which forms small cavities, and (b) a co-facial/end-stacking region at the top of the macromolecule (Scheme 1). The loop regions are highly rich in electron density and are favorable for binding with small molecular fragments. These loop regions provide good binding affinity for hydrogen bond acceptors. The other binding site, i.e. the co-facial end-stacking region containing the stacked guanine residues (G-face) forming the first quartet, is enriched with π-electron clouds from all four directions (Scheme 1). This site is important for the design of novel G-quadruplex stabilizing agents because it is the most suitable region for the binding of small molecules, peptides, or DNA-RNA aptamers (Murat, Singh, & Defrancq, 2011), which can add favorable entropy to the system, resulting in its stabilization. This region was identified as the most suitable region when we docked the previously reported stabilizing agents with the G-quadruplex (Figure 3); hence, we excluded loop binders from further investigation. This initial docking study not only enabled us to explore the favorable binding site for interaction but also provided a means to validate our docking studies (Scheme 1). We screened a set of 14,400 molecules available in the Maybridge database using the SP mode of Glide.
The docked poses obtained for the molecules from this study were superimposed with the docked poses obtained for the test set of molecules. The selection criteria for the molecules to be processed in the XP mode of docking were a Glide SP score comparable to that of the test set and an rmsd fit of up to 2 Å for their binding pose. A total of 42 molecules with comparable Glide scores were selected; these molecules had a docked pose in the co-facial region only and did not have any unusual binding poses in the loop region. Although the basic docking algorithm utilized by Glide is the same for both SP and XP modes, the alternative sampling of ligand conformations with ligand-receptor complementarity and the scoring function are more extensive in the XP mode. Hence, with the XP mode of docking, false positives can be excluded. The top Glide score obtained from the XP mode of docking was −5.65 for Quarfloxin (the docked pose is represented in Figure 3). Two molecules, A and B, showed satisfactory results in agreement with the score and conformation of Quarfloxin. The two molecules, N′1-imino(2-pyridyl)methyl-3,4,5-trimethoxybenzene-1-carbohydrazide (molecule A) and 3-[4-({[3-({4-[(2-cyanoethyl)(methyl)amino]benzylidene}amino)propyl]imino}methyl)(methyl)anilino]propanenitrile (molecule B), are shown in Figure 4. The competence of the docking result was also assessed with a set of ligand decoys containing 1000 molecules. These decoys are nonspecific binders and can bind to any part of the receptor macromolecule. This data-set is known to consist of inactive molecules; they will be treated as false positives even if they bind to the receptor site. Thus, the docking efficiency and accuracy of the 42 selected molecules can be compared with respect to their binding pose using a receiver operating characteristic (ROC) curve (Figure 5). The ROC value for the plot of sensitivity vs. 1 − specificity was 0.865; interestingly, the area under the fitted curve was found to be 0.850. This result demonstrates that the docking was accurate. The high ROC value (close to 1) indicates that the 42 selected molecules in our study selectively bind to the G-core of the quadruplexes (Table S1).
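An active-versus-decoy evaluation of this kind can be reproduced with scikit-learn. The score lists below are placeholders; Glide scores are more negative for better poses, so they are negated before computing the curve.

from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical Glide scores (more negative = better)
active_scores = [-5.6, -5.3, -5.1, -4.9, -4.7]
decoy_scores = [-3.8, -3.1, -2.9, -2.5, -2.0, -1.7]

labels = [1] * len(active_scores) + [0] * len(decoy_scores)
scores = [-s for s in active_scores + decoy_scores]  # higher = better

auc = roc_auc_score(labels, scores)
fpr, tpr, _ = roc_curve(labels, scores)  # x: 1 - specificity, y: sensitivity
print(f"AUC = {auc:.3f}")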
Structural stability of drug-bound G-quadruplexes
The high docking scores for the two molecules, A and B, suggested that they may have a high affinity for G-quadruplexes. To generate the additional theoretical information required to design a new therapeutic agent, a 5 ns MD simulation was performed with both ligand molecules bound to the G-quadruplex. The simulations yielded average conformations of the drug-like molecules bound to the G-quadruplex with very few fluctuations in rmsd (Table S2). A free-state G-quadruplex was also simulated on the same time scale and used as a control and reference for the simulation results of the drug-bound states. Figure 6(A)-(C) shows the rmsd plots for all of the models compared with the initial structure, considering the various atom sets under investigation. In the ligand-bound state, the fluctuations, based on the rmsd, of the backbone are higher than those of the nucleotide bases. The greater variation in the rmsd of the free-state G-quadruplex than in that of the ligand-bound states indicates the absence of steric strain imposed by the ligands. Ensemble structures of various MD frames are aligned and shown in Figure 7. The rmsd deviations of molecules A and B bound to G-quadruplexes appear to be almost comparable for the bases and for the backbone; this finding demonstrates that both ligands behave as stabilizing agents with prominent contacts and are potent in controlling the fluctuations of G-quadruplex structures (Table S2). An initial jump in the rmsd during the first few ps of simulation, compared with the starting frame, corresponds to the relaxation of the model. The stability of the trajectories of the drug-bound states can be observed from 1 to 5 ns, with minimal fluctuations ranging from 0.5 to 1.5 Å. The rmsd of both drug-bound states is limited to this deviation and indicates that the average structures of the bound states are approximately convergent; thus, the MD simulation was terminated after 5 ns. In contrast, the rmsd plot for the free-state G-quadruplex does not appear to have stabilized within 5 ns (Figure 6). Although achieving stability in the free-state model was not the subject of investigation, it was used as a control for the study of A and B in their bound states. A plot of the all-atom rmsd vs. time shows similar behavior (Figure 6(C)). The standard deviations of the all-atom rmsd of A and B were ~0.487 and ~0.427 Å, respectively, whereas that of the free-state G-quadruplex was ~0.697 Å.
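Backbone and all-atom rmsd traces of this kind can be computed from a trajectory with open tooling such as MDAnalysis; the file names below are placeholders, and the result attribute assumes MDAnalysis ≥ 2.0.

import MDAnalysis as mda
from MDAnalysis.analysis import rms

u = mda.Universe("quadruplex.pdb", "trajectory.dcd")   # hypothetical files
ref = mda.Universe("quadruplex.pdb")                   # frame 0 as reference

# rmsd of the DNA backbone after least-squares superposition
R = rms.RMSD(u, ref, select="nucleicbackbone")
R.run()

# columns: frame index, time (ps), rmsd (Angstrom)
rmsd_trace = R.results.rmsd
print(rmsd_trace[:5])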
Coulombic and van der Waal interaction energies
The van der Waal and Coulomb energies are two important contributors to the stabilization of any biomolecule. Table 1 describes the van der Waal and Coulomb interaction energies (kcal·mol−1) of the G-quadruplex with self-atoms, solvent atoms (water models), and solute atoms of the ligands and ions. The Coulomb energy is the major contributor to the stabilization of the drug-bound state of the G-quadruplex (Table 1). The Coulomb energy of the model favors binding to the ligand compared with the free state of the model. The strain of the ligand molecules makes the aromatic nuclei of the guanine bases stack over each other, which can be judged by the average torsion angle deviations (Table S2). Furthermore, the protonation by the ligand compensates for the electronegativity of O6 by restricting its fluctuations. When shared, the electron orbitals of O6 might become limited to a certain number of degrees of freedom, which ultimately limits the deviation in the geometry of the guanine core. This type of effect was shown by comparing the rmsd of O6 specifically over time in all three models, as shown in Figure 8. The central N atom of molecule A is able to donate hydrogen bonds to the O6 atoms of guanine residues (Figure S1 in the supporting information). The same difference can be observed in the corresponding energy contributions shown in Table 1. In molecule A, there are three methoxy groups, and one of these methyl groups interacts with the π cloud of the guanine ring. This type of σ-π contact is very favorable and is another reason for the stable binding of molecule A to the G-quadruplex. A diagrammatic explanation is given in the supplementary data (Figure S2 in the supporting information). The Coulomb and van der Waal contributions to the stacking energy are almost comparable for molecules A and B. In the free state, there is no ligand interaction with the G-quadruplex, so the contribution from the Coulomb interaction is almost zero. Therefore, the total energy comes from the van der Waal interactions between the two K+ ions and the G-core. The majority of the effects of the continuum system of the TIP3P water model are from the Coulomb interactions rather than the van der Waal interactions. This interaction is more pronounced in the absence of a ligand. A similar type of explanation is given using a Poisson equation describing the multi-pole representations of the electronic distribution in Figure 9.
Hydrogen bonding analysis
Hoogsteen-type H-bonding is the key factor behind the topology of the G-quadruplex. The average inter-atomic distances between N1H-O6 and N2H-N7 are shown in Table 2. The hydrophobic core of the first quartet is the most suitable region for the binding of stabilizing agents. Searching for structural deviations in the inter-atomic distances of polar atoms can provide valuable information regarding the interaction of DNA with small molecules. One terminal of molecule B was buried sideways between the first and second quartets, binding it to the model with a strong H-bond, and the other terminal was above the structure (Figure S3 in the supporting information). Hoogsteen-type H-bonding plays a vital role in the stabilization of guanine structures. The planarity and uniformity of the H-bonds involved in Hoogsteen G-pairing are well controlled by the position of the O6 atom. A plot of rmsd (O6) vs. time for the drug-bound states of the G-quadruplex showed that the displacement of the O6 atom from the plane of the Hoogsteen base pairing is negligible during the 5 ns of simulation (Figure 8). For the ligand-receptor complex, we found that the Hoogsteen H-bonding in the G-pairing and the H-bonding between the ligand and receptor play a key role in stabilizing the complex; π–π stacking interactions also enable the correct positioning of the ligands over the G-quadruplex for the entire time scale. As indicated in Figure 4, the ligand atoms emphasized with arrows form H-bonds with the macromolecule. The two hydrogen atoms on the central protonated N atom (H-bond donor) of molecule A make hydrogen bonds with the O6 of guanines G5 and G23 (Figure S1 in the supporting information). In contrast, one terminal N atom, an H-bond acceptor, from a cyanide (-CN) group of molecule B makes an H-bond with the amine group (-N(2)H) of a guanine residue (G4) in the second quartet, below the facial quartet (Figure S1 in the supporting information).
The average H-bond distances computed for molecules A and B were ~2.54 and ~2.45 Å, respectively, which emphasizes that H-bonds are the main contributor to the binding of both ligands to the G-quadruplex. The strength of the H-bonding for the H-bond acceptor in molecule B is much greater than that of the H-bond donor of molecule A. This finding can be seen in a histogram plot (Figure 10), which counts the frequency of H-bond distances for both molecules. The H-bonding pattern places a steric restriction on the rotational freedom of the C-C bond in the ligands. This restriction in the rotational degrees of freedom reinforces the stacking of the aromatic moiety of the ligand over the G-core. The H-bonding arrangement of molecule B is biased toward the side of the molecule in which the geometry is most affected by the conformational constraints of the ligand-DNA complex. This bias confers flexibility on its other half; the dynamics are shown in a plot of distance vs. time (Figure S3 in the supplementary data). Examining the overall structure indicates that a small molecule should be rationally designed with an N atom that could be utilized as a donor (molecule A) or acceptor (molecule B) to form a contact with the G-core; doing so can ensure strong binding in the system.
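A distance histogram of the kind shown in Figure 10 is straightforward to build once the donor-acceptor distances have been extracted per frame; the arrays below are synthetic stand-ins centered on the averages reported above, for illustration only.

import numpy as np

rng = np.random.default_rng(1)
# Synthetic per-frame donor-acceptor distances (Angstrom), for illustration
dist_a = rng.normal(loc=2.54, scale=0.15, size=1000)  # molecule A: N-H...O6
dist_b = rng.normal(loc=2.45, scale=0.12, size=1000)  # molecule B: C#N...H-N2

bins = np.arange(2.0, 3.5, 0.05)
hist_a, edges = np.histogram(dist_a, bins=bins)
hist_b, _ = np.histogram(dist_b, bins=bins)
print(edges[hist_a.argmax()], edges[hist_b.argmax()])  # modal distances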
Electrostatic Iso-surface by APBS
A standard practice in molecular biophysics is the evaluation of the electrostatic properties of a ligand-macromolecule complex. The APBS, developed by the group of Nathan Baker, is a routine using the PBE to illustrate the electrostatic attributes of a biomolecular complex; thus, the molecular electrostatic surface can be understood and used to develop a good stabilizing agent for G-quadruplexes. Figure 9(A) shows the electrostatic isosurface contours of the G-quadruplex model with positive and negative iso-surfaces at ± 4 kT/e. If we carefully analyze the G-quadruplex model, the macromolecule can be assumed to be a system with highly anionic characteristics. For the ease of comparison, a top view of the G-quadruplex model is shown with the phosphorous atoms in sphere representation (Figure 9(B)), indicating the relative position of the phosphate groups, sugars, and bases in the quadruplex topology. In the G-quadruplex structure, the electronegative charge is diffused in two ways: (1) the negative charges of the backbone phosphates surround the nucleobases and sugar moieties and (2) the lone pair electrons of guanine O6 atoms are directed toward the center, increasing the electron density in the G-core. Because of the dense cloud caused by negative charge diffusion, the iso-surface contours of the G-quadruplex form a large dome shape. The π-electrons of the purine rings also add to the overall electronegativity of the system. When we compared the electrostatic properties of the G-quadruplex system bound to the drug, as shown in Figure 9(C) and (D), we found a reduced overall charge in the system in the hydrophobic region. Because of this reduction, a G-quadruplex can only bind to (and thus recognize) a molecule if it has sufficient positive electrostatic fields. In molecule A, the methyl protons also interact with the aromatic system of the preceding guanine base, which tightens the binding of molecule A with its host. The electrostatic iso-surfaces of the G-quadruplex bound to both molecules A and B were obtained at ± 4 kT/e. The electrostatic iso-surfaces of both ligand systems are shown in Figure 11, where the iso-surfaces were achieved at ± 2 kT/e, to illustrate the broad range of electropositive values. The partial atomic charges of individual molecules were assigned according to the OPLS force-field and are shown in Figure 4.
Solvation free energy
The solvation energies of molecules A and B were computed in pure solvent. The annihilation of the compounds resulted in total solvent energies of −48.7 ± 0.3 kcal·mol−1 and −74.5 ± 0.6 kcal·mol−1 for molecules A and B, respectively. These values are the lowest terms of the 11 λ windows that were separately annihilated to find the various possible low-energy conformations in which the ligands were expected to bind G-quadruplexes. Because the ligands contained no intra-perturbed groups for the solvation free energy calculation, the interaction of both of the small molecules with the solvent must be favorable. In molecule A, the central charged amine group is bound to an aromatic system (pyridyl group) on one side, which hinders the solvation shell of the amino group. Similarly, for molecule B, the two charged central amino groups are affected by a benzylidene group that utilizes the lone pair availability of the N atom and thus increases the solvent accessibility. This effect can be understood in terms of the solvation free energy difference between molecules A and B. Graphical plots of the solvation free energy as a function of simulation time were generated for both ligands. To the best of our knowledge, there is no report of a study emphasizing the calculation of the standard molecular solvation free energy of G-quadruplex stabilizing agents. Without this experimental value, it is very difficult to validate our theoretical data. When analyzing the plots of the solvation free energy for both ligands over the same time scale, molecule B shows a relatively higher free energy. This finding indicates that molecule B is more solvated than molecule A. These data are based on the larger solvent accessible surface area (SASA) of B and attributed to the higher degree of freedom in the state bound to the G-quadruplex. The solvation free energy calculation is also accompanied by a correction factor, typically known as the long-range dispersion energy correction. The correction values were −3.0 and −4.2 for molecules A and B, respectively. The details of the solvation free energy calculations are given in Table 3. A few of the other relevant properties computed from QikProp can be correlated with the discussion of solvation free energy in Table 5. Both compounds are Central Nervous System (CNS)-inactive; therefore, it can be hypothesized that the molecules have no central nervous system activity. All of the other predicted properties described in Table 4 appear to be within the permissible range.
Figure 11. Iso-surface contours for molecules A and B calculated at ± 2 kT/e. Positive and negative iso-surfaces are represented by blue and red colors, respectively.
Binding of the G-quadruplex with small molecules measured using fluorescence spectroscopy

To validate the theoretical study, we used the intrinsic fluorescence of the G-quadruplex to monitor its binding to molecules A and B. A significant hypsochromic (blue) shift in the fluorescence emission provides strong evidence of whether a drug/ligand molecule binds to the G-quadruplex. The fluorescence experiment was performed by titrating molecules A and B (0-50 μM) against the G-quadruplex, 5′-TAGGG(TTAGGG)3 (10 μM). The characteristic emission spectra are shown in Figure 13. Hypsochromic shifts of 10-18 nm in the G-quadruplex emission spectra were observed when either molecule A or B was bound to the G-quadruplex (Figure 13(A) and (C)). The changes in the fluorescence emission maxima of the G-quadruplex as a function of the concentration of the ligands, molecules A and B, yield equilibrium dissociation constants (Kd) of 137 ± 1.1 and 31 ± 1.2 μM, respectively (Figure 13(B) and (D)). Molecule B is therefore a better binder to the G-quadruplex than molecule A. A control experiment was performed with a GC-rich DNA duplex, 5′-GCGCATGCTACGCG-3′/3′-CGCGTACGATGCGC-5′, in the presence of molecules A and B. Interestingly, no blue shifts of the emission spectra of the DNA duplex occurred upon addition of the ligands (Figure 13(E) and (F)), but a significant increase in the intensity of the fluorescence spectra upon addition of the ligands was observed. To determine the cause of this unexpected increase in fluorescence intensity, we measured the fluorescence spectra of the ligands A and B alone, without any interference from the host G-quadruplex (Figure S4 in the supporting information). We observed an increase in fluorescence intensity in the emission spectra at ~340 nm for both molecules when they were excited at 260 nm (Figure S4 in the supporting information). This finding explains why both the G-quadruplex and the DNA duplex showed increases in fluorescence intensity upon the addition of ligands A and B. However, the blue shift of the G-quadruplex emission upon addition of molecules A and B clearly shows that both molecules bind much more selectively to G-quadruplex structures than to the DNA duplex.
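The Kd extraction from such a titration amounts to fitting a single-site binding isotherm to the shift of the emission maximum. A minimal sketch is shown below; the titration points are synthetic placeholders, not the measured data, and the model assumes the ligand is in sufficient excess that its free concentration approximates its total concentration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a one-site binding isotherm to the blue shift of the emission
# maximum versus ligand concentration. Data points are placeholders.
def isotherm(L, shift_max, Kd):
    return shift_max * L / (Kd + L)

L = np.array([0., 5., 10., 15., 20., 30., 40., 50.])          # ligand, uM
shift = np.array([0.0, 2.4, 4.3, 5.9, 7.1, 8.9, 10.2, 11.1])  # nm

popt, pcov = curve_fit(isotherm, L, shift, p0=(12.0, 20.0))
shift_max, Kd = popt
print(f"Kd = {Kd:.0f} uM, maximal shift = {shift_max:.1f} nm")
```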
Water ligand observed via gradient spectroscopy (WaterLOGSY) NMR experiments

Water ligand observed via gradient spectroscopy (WaterLOGSY) NMR experiments allow the detection of the binding of small molecules to protein targets with dissociation constants in the low micromolar to millimolar range (Dalvit et al., 2000; Dalvit, Foqliatto, Stewart, Veronesi, & Stockman, 2001). Compared with a saturation transfer difference NMR experiment (Bhunia, Saravanan, Mohanram, Mangoni, & Bhattacharjya, 2011; Mayer & Meyer, 1999), which is also an important technique for measuring the binding of small molecules to receptor proteins, WaterLOGSY is more effective for studying the interactions of nucleic acids and small molecules (Bhunia, Bhattacharjya, & Chatterjee, 2012; Lepre, Moore, & Peng, 2004). The WaterLOGSY signal is produced by the selective transfer of bulk water magnetization to the ligand in solution via the macromolecule-ligand complex. In the resulting spectra, the signals of ligands that bind generally have a sign opposite to those of compounds that do not bind. Figure 14 shows the one-dimensional WaterLOGSY NMR spectra of the small molecules A and B in the presence of the G-quadruplex. Interestingly, both molecules bind to the G-quadruplex, as indicated by the positive signals. The aromatic ring protons and the methyl protons of molecule A show positive signals in the presence of the G-quadruplex. The simulation data suggest that the three methoxy groups attached to the benzene moiety are in close proximity to the G5 and G23 residues of the G-quadruplex. The aromatic ring protons of the benzene and pyridyl groups of molecule A severely overlap at 7.35 and 7.52 ppm (Figure 14(B)). However, the MD simulation data indicate that the benzene and pyridyl rings are stacked on G11, G5, and G17. The putative binding site and the MD simulation data correlate well with the WaterLOGSY data (Figure 14(C)). In contrast, the intensity of the peaks of molecule B is significantly stronger than that of molecule A when binding to the G-quadruplex. Interestingly, all of the protons, except for those in the two ethylene groups of molecule B, show positive WaterLOGSY signals. This result demonstrates that the ethylene groups (3.12 and 3.82 ppm) do not interact with the G-quadruplex, as indicated by the negative signals (Figure 14(E)). This finding is not surprising because the MD simulation data clearly show that one terminus of molecule B is quite flexible and that the ethylene groups point toward the solution (Figure 14(F)). Both N-Me groups (3.67 and 3.38 ppm), both sets of aromatic ring protons (6.90 and 7.83 ppm), and the flexible linkers in the center show strong WaterLOGSY signals. In the simulated model, G17 and G23 are in close proximity to one terminus of molecule B, whereas G5, G10, and G23 are very close to its other terminus (Figure 14(F)). The flexible linker, attached to both aromatic rings of molecule B, stacks on the guanine faces G11, G17, and G5 of the G-quadruplex. As a control experiment, we measured the WaterLOGSY signals of molecules A and B in the absence and presence of the GC-rich duplex DNA (Figure S5 in the supporting information). Neither molecule bound to the duplex DNA structure. The WaterLOGSY NMR experiments, in conjunction with the MD simulations, characterize the binding epitope of both molecules on the G-quadruplex at atomic resolution.
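The comparison between the WaterLOGSY epitope and the simulations reduces to asking which guanine residues each ligand proton approaches during the trajectory. A sketch of that bookkeeping with MDAnalysis is given below; the file names, the ligand residue name ("LIG"), and the 4 Å cutoff are assumptions for illustration, not details taken from this study.

```python
import MDAnalysis as mda
from MDAnalysis.lib.distances import distance_array

# For each ligand hydrogen, list the guanine residues that come within
# 4 A over the trajectory, for comparison with the WaterLOGSY signals.
u = mda.Universe("gquad_ligand.prmtop", "gquad_ligand.nc")  # placeholders
lig_h = u.select_atoms("resname LIG and name H*")
gua = u.select_atoms("resname DG DG3 DG5 and not name H*")

contacts = {h.name: set() for h in lig_h}
for ts in u.trajectory[::10]:                       # every 10th frame
    d = distance_array(lig_h.positions, gua.positions)
    for i, h in enumerate(lig_h):
        for j in (d[i] < 4.0).nonzero()[0]:
            contacts[h.name].add(gua[j].resid)

for name, resids in contacts.items():
    print(name, "near G residues:", sorted(resids))
```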
A final hypothesis: requirements for the rational design of G-quadruplex stabilizing agents

From the previous literature and this study, we elaborate some of the key features that can be implemented in the future rational design of a G-quadruplex stabilizing agent. In designing a suitable stabilizing agent for guanine-rich secondary structures, the primary concern is to understand the structural criteria needed to complement the G-quadruplex. Some of the criteria for the rational design of novel G-quadruplex stabilizing agents are as follows:

(1) A molecule with enhanced aromaticity is essential to interact effectively with G-quadruplexes. π-π stacking interactions are a key requirement for any molecule to attach to the facial guanine quartets.

(2) H-bond acceptor capability is the main parameter for a ligand to bridge with the loops surrounding the G-core. In molecule B, nitrogen can act as a Lewis base and donate its lone pair of electrons to form a hydrogen bond, which helps place molecule B above the quadruplex structure. This property is important because it establishes a criterion for developing stabilizing agents irrespective of the facial topology of the G-quadruplex.

(3) Conversely, the donation of an H-bond to the quartets by a ligand molecule is a prerequisite for facial G-quadruplex stabilization. Ligand molecules with this characteristic can utilize the lone pairs available from the O6 atoms of the guanines. This effect is more pronounced when the H-bond donor moiety is located in the central region of the ligand molecule, as in molecule A. In addition, protonation of this moiety provides a positive potential directed toward the central hydrophobic region.

(4) Methyl-π interactions also help to stabilize the ligand/G-quartet assembly. Ligands with a methyl moiety localized on top of the central part of the G-quadruplex stabilize the host-guest assembly through methyl-π dipole-induced-dipole interactions, as is the case for molecule A. This type of nonpolar interaction, arising from the mixing of the methyl-proton orbitals with the π electrons of the guanine purine ring, is of higher order. It can provide a very stable and strong binding affinity with a minimal margin of excluded volume for any organic core molecule.

(5) The radius of gyration of the molecule is another important criterion. The average distance between the guanine residues of a quartet, across all types of G-quadruplex structures, is within a range of 15-17 Å. If the radius of gyration of the molecule falls in this range, it can successfully find a conformational orientation over the hydrophobic core of the G-quadruplex. Additionally, the middle portion of the molecule should have rotational degrees of freedom about its C-C bonds, to be able to adjust to the planarity of the guanine face. The radius of gyration of a molecule is related to the dimensions of the central core, not to the functional groups attached to the core.

(6) Electrostatic interactions play a more significant role than any other physical descriptor. In our study, we found the G-quadruplex structure to be highly anionic, with abundant electron density surrounding the structure from all directions. The incoming ligand must therefore have sufficient cationic character; in other words, it should have adequate electron-withdrawing groups to remain bound over the plane of the quadruplex (a toy screening sketch for criteria 5 and 6 follows this list).
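As referenced in criterion (6), the sketch below shows how criteria (5) and (6) might be encoded as a crude computational screen. The 15-17 Å window is taken from the text, while the coordinate, mass, and charge inputs are placeholders for whatever structure files a real screen would use.

```python
import numpy as np

# Crude filter for candidate stabilizing agents: mass-weighted radius of
# gyration within the quartet dimension, plus a positive formal charge.
def radius_of_gyration(coords, masses):
    com = np.average(coords, axis=0, weights=masses)
    return np.sqrt(np.average(np.sum((coords - com) ** 2, axis=1),
                              weights=masses))

def passes_filter(coords, masses, formal_charge, rg_range=(15.0, 17.0)):
    rg = radius_of_gyration(np.asarray(coords, float),
                            np.asarray(masses, float))
    return rg_range[0] <= rg <= rg_range[1] and formal_charge > 0
```

Such a filter would run upstream of the docking and MD stages, discarding candidates whose core dimensions cannot span the quartet face or that lack the required cationic character.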
Conclusion
In this study, we showed that the two molecules, A and B, are pharmacologically potent in stabilizing G-quadruplex structures in DNA. The data from the MD simulations indicate that these screened analogs are capable of binding the quadruplex within a reasonable time. Fluorescence experiments were performed by titrating molecules A and B (0-50 μM) against the quartet DNA of sequence 5′-TAGGG(TTAGGG)3 (10 μM). A large blue shift was found in the emission spectra of the G-quadruplex (the fluorescence emission of the G-quartet is at 337 nm) with a stepwise increase in the ligand concentration over that of the G-quadruplex. Molecules A and B give emission blue shifts of 10 and 18 nm, respectively, demonstrating that both molecules bind to the G-face of the quadruplex. One-dimensional WaterLOGSY experiments further proved at atomic resolution that both molecules, A and B, bind to the G-quadruplex structure; the structural information on the bound complexes correlates well with the MD simulations. Further experimental investigation of these molecules is underway. The framework of these two molecules can serve as a platform for the design of novel anticancer therapeutics in the near future.
Supplementary material
The supplementary material for this paper is available online at http://dx.doi.org/10.1080/07391102.2012.742246. | 2018-04-03T05:26:13.646Z | 2013-10-24T00:00:00.000 | {
"year": 2013,
"sha1": "f8b68f25ea0e589dd54f9bcad2c0980af52e5216",
"oa_license": "CCBY",
"oa_url": "https://figshare.com/articles/journal_contribution/Novel_G_quadruplex_stabilizing_agents_in_silico_approach_and_dynamics/825923/2/files/1256718.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "253cc92a056df5be47b91e4d1d471726ea5f6fa9",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
213167746 | pes2o/s2orc | v3-fos-license | Biological evaluation of novel thiomaltol-based organometallic complexes as topoisomerase IIα inhibitors.
Abstract Topoisomerase IIα (topo2α) is an essential nuclear enzyme involved in DNA replication, transcription, recombination, and chromosome condensation, and it is highly expressed in many tumors. Thus, topo2α-targeting has become a very efficient and well-established anticancer strategy. Herein, we investigate the cytotoxic and DNA-damaging activity of thiomaltol-containing ruthenium-, osmium-, rhodium- and iridium-based organometallic complexes in human mammary carcinoma cell lines by means of several biological assays, including knockdown of topo2α expression levels by RNA interference. The results suggest that inhibition of topo2α is a key process in the cytotoxic mechanism of some of the compounds, whereas direct induction of DNA double-strand breaks or other DNA damage is mostly rather minor. In addition, molecular modeling studies performed for two of the compounds (with Ru(II) as the metal center) indicate that these complexes are able to access the DNA-binding pocket of the enzyme, where the hydrophilic environment favors the interaction with highly polar complexes. These findings substantiate the potential of these compounds for application as antitumor metallopharmaceuticals. Graphic abstract Electronic supplementary material The online version of this article (10.1007/s00775-020-01775-2) contains supplementary material, which is available to authorized users.
Introduction
Since the discovery of the cytotoxic properties of cisplatin by Rosenberg in 1965 [1], the field of metal-based chemotherapeutic agents has been rapidly growing and developing every year [2]. Cisplatin, carboplatin, and oxaliplatin are the only FDA-approved platinum drugs used worldwide for the treatment of various cancer types; however, there are many cases of tumor resistance to platinum drugs. Drug resistance is a well-known phenomenon that results when diseases become tolerant to pharmaceutical treatments [3]. Some mechanisms of drug resistance are disease-specific, while others, such as the drug efflux observed in microbes and in human drug-resistant cancers, are evolutionarily conserved. Although many types of cancer are initially susceptible to chemotherapy, over time they can develop resistance through different mechanisms, such as DNA mutations and metabolic changes that promote drug inhibition and degradation [3]. The processes by which cells develop resistance to antitumor platinum drugs have been the subject of intense research [4], as such resistance is a major obstacle for their clinical use. A large body of experimental evidence suggests that the antitumor activity of platinum complexes stems from their ability to form various types of covalent adducts with DNA [5,6]; as a result, research on DNA modifications by these drugs and their cellular processing has predominated. The resistance of tumor cells to platinum drugs has been attributed to various processes, such as reduced platinum accumulation, intracellular inactivation, blocking of the induction of apoptosis, increased repair of platinum-DNA adducts, or a combination thereof [4,5]. To overcome these obstacles, many different transition metal complexes have been synthesized and investigated [7]. In recent years, ruthenium-based molecules have emerged as promising antitumor agents. Certain ruthenium complexes possess unique biochemical features allowing them to accumulate preferentially in neoplastic tissues and/or to convert to their active state under specific pathophysiological conditions. In this respect, ruthenium (and osmium) compounds, with their different mechanisms of action, are promising candidates as anticancer drugs [8][9][10]. In the recent past, drug development has mainly relied on organic chemistry. This can be attributed to the lack of knowledge on the mechanisms and binding modes of metal-based drugs with biomolecular targets other than DNA. Traditionally, it was believed that the majority of metal-based drugs targeted DNA, but considerable evidence has accumulated that metal-based drugs are also able to bind to protein targets. Recent trends involve metal complexes that inhibit protein and lipid kinases, matrix metalloproteases, telomerases, topoisomerases, glutathione-S-transferases, and histone deacetylases [11][12][13]. In the present study, we investigate topoisomerases as targets for metal-based drugs. Topoisomerases are key nuclear enzymes that control the topological states of DNA by generating transient strand breaks. They are, therefore, involved in key processes such as DNA replication and transcription, as well as chromosome condensation and segregation (Fig. 1).
Mammalian cells encode two type II isozymes, topoisomerase IIα (topo2α) and β (topo2β), which have highly identical N-terminal ATPase and central core domains. However, they differ in their C-termini as well as in their expression patterns and cannot compensate for each other in vivo, i.e., if one of the forms is downregulated, the other cannot replace its functions equivalently. Moreover, the α isoform is produced primarily in the late S-phase, remains associated with chromosomes at mitosis and plays a dominant role in this context, which apparently makes it more sensitive to pharmaceutical agents [15]. Well-known topo1-targeting drugs include camptothecin and its derivatives, while topo2-targeting drugs include doxorubicin, etoposide, and mitoxantrone. There are different types of topo2-targeting drugs, namely inhibitors and poisons. Topo2 poisons inhibit the enzyme's activity by stabilizing the enzyme-DNA covalent complex, which leads to the accumulation of DNA double-strand breaks. In contrast, inhibitors of topo2 catalytic activity may stop the cycle by preventing binding of the enzyme to DNA, by blocking the ATP-binding site of the enzyme or by inhibiting the cleavage reaction. Topoisomerase-targeting drugs of this kind have become established in clinical practice; for example, many studies have shown the efficacy of the topo1-targeting agent irinotecan (CPT-11, a derivative of camptothecin), which is approved by the FDA for use in colorectal cancer [14,[16][17][18][19]. The topo2α enzyme has been shown to be a proliferation marker associated with tumor grade and cell proliferation index [20]. The prognostic effect of topo2α seems to differ among subtypes of breast cancer but, in general, patients with high topo2α expression showed a significantly higher rate of distant metastasis and shorter distant-metastasis-free survival compared with patients with low topo2α expression [20]. Cancer cell lines expressing different levels of the enzyme are therefore a powerful tool for investigating topo2 behavior and responsiveness to treatment. The understanding of the molecular biology of cancer cell progression, invasion, metastasis, and failure to undergo cell death has led to a new generation of systemic anticancer therapies, which target specific cellular defects in malignant cells. Unlike classic cytotoxic chemotherapy, this new generation of drugs tends to work in tumors with specific genetic defects, allowing personalized treatment [21]. In this new era of therapies, topo2 targeting remains an indispensable anticancer strategy.
Here we investigate the cytotoxicity of several thiomaltol-based metal complexes (Fig. 2) [22]. The chlorido complexes 1a-1d hydrolyze quickly to the corresponding aqua species, which is assumed to be the active form of these organometallics. As already reported, replacement of the labile chlorido ligand by N-methylimidazole remarkably increased the stability of 2a-2d. However, the N-donor can be cleaved under acidic conditions, again yielding the more reactive aqua complex. All complexes exhibit remarkable cytotoxicity. Based on our findings in the aforementioned study, we assumed that inhibition of topoisomerase IIα might play a crucial role in the mode of action of organometallic thiomaltol complexes [22]. Within this work, we substantiate that topo2 is involved in the anticancer mode of action of some of these complexes, and we unveil the interaction mechanism between the drugs and the enzyme by means of biological assays complemented with computational modeling. Given that topoisomerases are among the most well-established targets for anticancer therapy, this opens up new avenues for the development of this compound class.
Results and discussion
Recently published findings [22] on the anticancer properties of novel Ru(II), Os(II), Rh(III), and Ir(III) thiomaltol complexes showed that they act as inhibitors of topo2 catalytic activity and have a significantly higher enzyme inhibitory capacity than the free ligand. In cell cycle studies, the Ru(II) and Ir(III) methylimidazole-substituted complexes caused the highest S-phase accumulation (up to 43% and 45%, respectively), which was consistent with enzyme inhibition.

Fig. 1 Catalytic activity of topo1 and topo2 enzymes: topo1 cuts one strand of the DNA double helix, whereas topo2 cuts both strands, requiring ATP (adenosine triphosphate), and is able to religate strands at the end of each cycle (designed based on a publication from John Nitiss) [14]

Previously acquired data implied that the introduction of 1-methylimidazole as a leaving group substantially increased the stability in aqueous solution. It was suggested that this modification may allow accumulation of the intact complexes and controlled activation at the lower pH values within tumor tissues. The cytotoxicity of thiomaltol ligand L1 and the respective complexes 1a-d and 2a-d in A549, CH1/PA-1 and SW480 cells has already been reported [22]; however, new data for SK-BR-3, T47D, and MDA-MB468 (mammary carcinoma cells) were acquired within this work. In general, all complexes showed IC50 values in the low micromolar to high nanomolar range (Table 1). Remarkably, the Os(II) complexes are less active than the other complexes. The monodentate leaving group has only a marginal impact on the activity of the Os(II) complexes, as the chlorido complex 1b is only about 1.5 times more active than the 1-methylimidazole analog 2b in most of the tested cell lines. In contrast, Rh(III) complex 2c was more active than its chlorido counterpart by factors of up to 2.6, depending on the cell line. The P-gp expressing cell line SW480 is mostly as sensitive to all the complexes as the broadly chemosensitive CH1/PA-1 cell line, whereas the IC50 values in the mammary carcinoma cell lines (SK-BR-3, T47D, and MDA-MB468) expressing different levels of the topo2 enzyme are in a similar range, with a few exceptions: the Os(II) complexes turned out to be up to tenfold less active, and Ir(III) complex 1d is about twofold less active than in the former cell lines. In this respect, the Ru(II) analogs are in an intermediate position: they are more active than the Os(II) complexes but less active than the Ir(III)- or Rh(III)-based complexes, with some deviations. In addition, the two Ru(II) complexes differ from each other, with complex 2a being up to 4 times more active than 1a.
Taking into account that we consider topo2 to be a target of these drugs, we had to make sure that there is no direct interaction with DNA. The ability to alter the secondary structure of DNA in cell-free experiments was studied using an electrophoretic double-stranded DNA plasmid assay (Figure S1). In general, these metal-based complexes and the corresponding ligand L1 did not show any impact on DNA mobility or its secondary structure; however, we observed an unusual behavior of plasmid DNA in the presence of some complexes: either the intensity of the open circular (OC) band increased over time, or additional bands between the OC and supercoiled (SC) bands appeared. To make sure that this unusual pattern is not a result of DNA breakage, we performed an H2AX assay that allows monitoring and accurately measuring phospho-specific histone H2AX activation in a population of cells. Histone H2AX functions downstream of the DNA damage kinase signaling cascade, and phosphorylation of this histone at serine 139 is an important indicator of DNA damage. As the level of DNA damage increases, the level of phospho-histone H2AX (also known as γH2AX) increases, accumulating at the sites of DNA damage. This accumulation is often used to indicate the level of DNA damage present within the cell [23,24]. The evaluation of the formation of γ-H2AX in response to DNA double-strand breaks in SW480, SK-BR-3, T47D and MDA-MB468 cell lines after 48 h of exposure to the studied compounds showed no substantial increase in DNA damage (Figure S2). Only the Rh(III) complexes 1c and 2c were clearly harmful to DNA and generated double-strand breaks in up to 33% of MDA-MB468 and up to 52% of SW480 cells (the latter cell line was the second most chemosensitive towards the tested substances among all cancer cell lines used). Ru(II) complexes 1a and 2a generated DNA damage in at most 13% of T47D cells. Probably due to its topo2α expression level, the T47D cell line was relatively sensitive to all tested substances (9-30%) except ligand L1 (DNA damage in 4% of the cell population). For comparison, DNA double-strand breaks were present in the negative control in 1-2% and in the positive control (100 µM of etoposide) in 41-86% of cells. It should be mentioned that in a previous publication the potential of these complexes to raise cellular ROS levels had been investigated. One of the well-known damaging effects of ROS is DNA damage, including double-strand breaks [25].
The obtained data indicated a slight increase in ROS levels for Rh(III) complex 2c by a factor of 2.7, which may be a reason for the higher percentage of double-strand breaks in cells treated with this complex. To substantiate this finding, we investigated the ability of the Ru and Rh compounds to activate programmed cell death. Apoptosis/necrosis induction in SW480, SK-BR-3, T47D and MDA-MB468 cells after 48 h of exposure to the studied compounds and the ligand, measured by flow cytometry using annexin V-FITC/propidium iodide double staining, demonstrated the ability of the complexes to induce cell death (Figure S3). Ru(II) complexes 1a and 2a effectively induced apoptosis in SW480 and SK-BR-3 cells in 40-50% and 70-80% of the populations, respectively. Rh(III) complexes 1c and 2c induced programmed cell death predominantly in SK-BR-3 cells, with 70% and 88%, respectively. Surprisingly, ligand L1 was able to induce apoptosis in SK-BR-3 and MDA-MB468 cells, with 51% and 66%, respectively. For comparison, the positive control (160 µM of merbarone, a well-known inhibitor of topo2 catalytic activity) induced apoptosis in only up to 33% of cells. The percentage of necrosis measured by this assay was on average no more than 12%. The fact that replacement of the chlorido ligand with methylimidazole (complex 2a) increases both the stability of the complex in the presence of biomolecules and its solubility might explain why 2a is more active than its analog 1a. These two complexes also displayed different capacities for topo2α inhibition (as published previously): complex 2a was able to inhibit the enzyme activity at a concentration 4 times lower than that required for its analog 1a (2.5 vs 10 µM). Moreover, the S-phase accumulation in the cell cycle studies (up to 45%) was consistent with the enzyme inhibition experiments, and the capacity to induce apoptosis was 1.7-fold higher for 2a than for 1a [22]. Thus, we can conclude that the rhodium complexes induce ROS activation that leads to DNA damage and apoptosis, while the ruthenium complexes rather induce cell death via enzyme inhibition.
Furthermore, we established methods to characterize the anticancer properties of metal-based compounds with respect to the role of topo2α inhibition. One idea that attracted our attention was that topo2α expression levels affect the cellular response to drug treatment. Previous studies suggested that a different cell response to treatment may be expected upon up- or downregulation of topo2α expression levels. Some authors who have studied the effect of reducing the expression level have shown that this can lead either to drug resistance or to a higher susceptibility to drugs [25][26][27][28]. To investigate how different levels of topo2α expression may affect the cellular response to treatment with the compounds studied here, we not only compared cell lines with intrinsically different levels of topo2α expression but also induced a knockdown of the enzyme's expression by RNA interference (RNAi). RNAi is a biological process in which RNA molecules inhibit gene expression, typically by causing the destruction of specific mRNA molecules. This system is used to investigate the functions of different genes [29]. We used BLOCK-iT Expression Vector Kits that combine the advantages of traditional RNAi vectors (stable expression and the ability to use viral delivery) with capabilities for tissue-specific expression and multiple-target knockdown from the same transcript. The pcDNA vectors of this kit are designed to express artificial miRNAs which are engineered to have 100% homology to the target sequence and result in target cleavage. Since complete inhibition of topo2α production in the cell leads to division failure and cell death [30,31], we established several combinations of cell lines and miRNA sequences to reduce the production of the protein without inhibiting it completely: the topo2α gene sequence was divided into four short sequences to produce RNA sequences complementary to different parts of the gene (see Scheme 1). Thus, we avoided complete inhibition of topo2α protein expression, as well as cell death. Still, the topo2α knockdown variants of SK-BR-3 and MDA-MB468 cells stopped dividing or died in the following days. Therefore, only T47D-kn628 and T47D-kn630 cells were used for further experiments.
The effect of RNAi was verified at the protein level by Western blotting, and these experiments clearly showed the successful reduction of topo2α expression in one of the three variants, T47D-kn630 cells (Fig. 3).

Fig. 3 Topo2α protein levels in the SW480, SK-BR-3, T47D and MDA-MB468 standard cell lines as well as in the topo2α knockdown T47D cell lines (primers Hmi417628 and Hmi417630: T47D-kn628 and T47D-kn630); β-actin was used as a loading control

Then, the antiproliferative activity of the metal compounds was evaluated in the cell line T47D and its topo2α knockdown variant to investigate the possible role of the enzyme in the drug response (Table 2). The cell line with a reduced level of topo2α expression, T47D-kn630, turned out to be 2.3-fold more resistant to complex 2a and 2.4-fold more resistant to ligand L1. These changes in the response to treatment compared with the parental cell line T47D are more likely to be meaningful than the minor changes observed with the other compounds.
Previously, the sensitivity to different anticancer drugs (amsacrine, doxorubicin, mitoxantrone, etoposide) was studied in a panel of breast cancer cell lines by Houlbrook and co-workers [15]. When the cells were ranked according to their sensitivity to one of the drugs, it was found that the ranking for the other compounds did not follow the same pattern (with discrepancies of 2-6 times). Later experiments by Burgess and co-workers identified the topo2α expression level as the major determinant of the response to the topoisomerase II poison doxorubicin and showed that suppression of the enzyme produces resistance to doxorubicin in vitro and in vivo [32]. They also noted that the effects of topo2α knockdown were specific to topoisomerase II poisons, with shTop2A causing resistance to etoposide. A publication by Soubeyrand and co-workers reported that topoII siRNA ablation showed that etoposide cytotoxicity correlates with the inability of cells to correct topo2α-initiated DNA damage; these results linked the lethality of etoposide to the generation of persistent topo2α-dependent DNA defects within topologically open chromatin domains [33]. Logically, topo2α knockdown would be expected to confer resistance to a poison's cytotoxicity, as decreased topoisomerase expression would produce fewer of the lethal DNA double-strand breaks that result from cleavage complexes. In the case of inhibitors of topo2 catalytic activity (such as merbarone), it is not so easy to make any assumptions, owing to the limited amount of published data. However, based on the IC50 values of Table 2, we can confirm that the sensitivity of the cells to the different drugs depends on the level of enzyme expression. Therefore, it is very likely that the topo2α enzyme is the target biomolecule of some of the compounds investigated here. However, the lack of a clear trend suggests that the interaction of the metal complexes with topo2α is not the only cytotoxic mechanism and/or that the protein might play different roles in the mode of action of the drugs.
Finally, we investigated the interaction of topo2α with the metal-based complexes by performing theoretical simulations. The first step of our simulations was to identify the possible binding pockets of the DNA-binding domain of type II topoisomerase, where the thiomaltol ligand can be non-covalently bound, by means of classical MC simulations. The calculation of the binding free energy along the MC simulation identified favorable binding sites. Figure 4a shows the binding sites of thiomaltol (represented by the red surface) on the surface of the protein for which the (absolute) binding free energy is larger than 15 kcal/mol. As can be seen, two main binding sites are revealed, namely pocket 1 and pocket 2, with the former being the DNA-binding site. A better characterization of both binding pockets can be obtained by computing the binding free energy for each pocket. For that purpose, the snapshots for which thiomaltol is in each of the binding pockets need to be identified; therefore, the region of the protein where each binding pocket is located must be (arbitrarily) defined. We selected the residues PHE280 and THR247, shown in Fig. 4b by blue and magenta van der Waals spheres, respectively, as reference residues along the MC simulation. Then, if for an MC snapshot the center of mass of thiomaltol is separated from the center of mass of residue PHE280 (THR247) by less than 15 Å, the ligand is considered to bind to pocket 1 (pocket 2). Employing this definition of the binding pockets, we identified 199 and 205 binding events in pockets 1 and 2, respectively, with average binding free energies of −16.9 and −15.0 kcal/mol, as shown in Fig. 4b. This result indicates that the binding of thiomaltol to topoisomerase can occur in both pockets 1 and 2. However, binding to pocket 1, which is the DNA-binding site, is slightly more thermodynamically favored.
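The pocket bookkeeping described above is simple to express in code. In the sketch below, `snapshots` is a hypothetical list of per-step records (ligand and reference-residue centers of mass plus the binding free energy); the 15 Å cutoff is the one used in the text.

```python
import numpy as np

# Classify MC snapshots into pockets by the distance between the ligand
# center of mass and a reference residue (< 15 A), then average the
# binding free energy over each pocket's snapshots.
def assign_pockets(snapshots, cutoff=15.0):
    stats = {"pocket1": [], "pocket2": []}
    for s in snapshots:   # each s: dict with COM vectors and dG_bind
        if np.linalg.norm(s["lig_com"] - s["phe280_com"]) < cutoff:
            stats["pocket1"].append(s["dG_bind"])
        elif np.linalg.norm(s["lig_com"] - s["thr247_com"]) < cutoff:
            stats["pocket2"].append(s["dG_bind"])
    return {p: (len(v), float(np.mean(v)) if v else None)
            for p, v in stats.items()}
```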
In the next step, the two MC snapshots with the largest binding free energies for pockets 1 and 2 were selected. For these two snapshots, the thiomaltol ligand was replaced by the Ru complexes 1a and 2a in both binding pockets. Thus, it is assumed that the binding pocket of the metal complexes is the same as that of the thiomaltol ligand. Then, classical MD simulations followed by QM/MM MD simulations were run for each of the four complex/pocket combinations. The binding affinity between the complexes and the protein could be analyzed by computing the intermolecular potential energy provided by the force field along the classical MD simulation. However, as discussed above, the force field employed here is not accurate enough for that purpose. Alternatively, the interaction energy between the complex and the protein could be calculated from the energy calculations along the QM/MM MD simulation. However, the present QM/MM computations do not allow the decomposition of the total interaction energy into Ru-complex/solvent and Ru-complex/protein contributions. Therefore, we analyze the binding affinity in terms of intermolecular contacts between the Ru species and the protein along the QM/MM MD simulation. The numbers of contacts, defined here as the number of interatomic distances smaller than 3 Å between the complexes and the protein, are listed in Table 3 for each binding pocket. As can be seen, in both binding pockets the total number of contacts for complex 2a (61 for pocket 1 and 59 for pocket 2) is larger than that for complex 1a (56 for pocket 1 and 51 for pocket 2). This suggests that the intermolecular interactions between the Ru complexes and the protein are stronger for complex 2a than for complex 1a. In addition, the number of contacts per metal-complex atom, defined as the total number of contacts divided by the number of atoms of the metal complex, provides an estimate of the intermolecular interaction undergone by each atom of the complex. As Table 3 shows, the number of contacts per atom is ca. 1.2 for both complexes and for both binding pockets, indicating that the strength of each Ru-complex/protein interatomic interaction is very similar in both complexes and pockets. By comparing the number of contacts for a particular complex in the different pockets, it is apparent that the intermolecular interactions are stronger for pocket 1 (56 contacts for complex 1a and 62 for complex 2a) than for pocket 2 (51 contacts for complex 1a and 59 for complex 2a). Therefore, based solely on the analysis of intermolecular contacts, the binding of complex 2a into pocket 1 is the most favorable binding event.
Fig. 4 a Binding sites found along the classical MC simulation with free energies for the drug/protein binding process larger (in absolute value) than 15 kcal/mol, represented by the red surface. Pockets 1 and 2 are schematically highlighted by blue and pink circles. b Binding pockets 1 and 2 represented by the cyan and magenta surfaces, defined as the collection of binding sites located at a distance smaller than 15 Å from the residues PHE280 (blue van der Waals representation) and THR247 (magenta van der Waals representation), respectively. Binding free energies for each pocket in kcal/mol are also displayed. The protein is represented in silver color

The visualization of the QM/MM MD simulations reveals that the complexes embedded in the binding pockets are accompanied by a large solvation sphere, as can be seen in the representative snapshots displayed in Fig. 5 for complex 2a inside pockets 1 and 2. The solvation sphere stabilizes the positive charge of the complexes and interacts with the polar amino acids of the protein, favoring the binding process. For both Ru compounds, the solvation sphere contains a larger number of water molecules in pocket 1 than in pocket 2; specifically, considering a solvation shell of 3 Å, the complexes retain substantially more water molecules in pocket 1 (33 for complex 2a and 23 for complex 1a) than in pocket 2 (only 17-18). Therefore, the desolvation process of the complexes when going from the bulk solvent to the protein pocket, which contributes unfavorably to the binding process, is more important for pocket 2. The different degrees of solvation in the two pockets can be understood by considering the protein residues that surround the solvated complexes at the two binding sites. As Fig. 5a shows, pocket 1 presents a large number of polar amino acids (represented by the orange surfaces) at the binding site, which create a polar environment that interacts favorably with the solvated complexes. In contrast, pocket 2 has a more hydrophobic character at the binding site, allowing the penetration of fewer water molecules inside the pocket (Fig. 5b). This means that the energy penalty associated with the desolvation process is larger for pocket 2 than for pocket 1. Therefore, pocket 1 is the preferred binding site for the two Ru complexes. This means that complexes 1a and 2a could potentially prevent the binding of DNA to the DNA-binding site (pocket 1) of topo2α and inhibit the enzymatic activity of the protein.
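Both trajectory analyses used here, the sub-3 Å complex/protein contact count and the size of the 3 Å water shell, reduce to the same distance bookkeeping that CPPTRAJ performs. A rough MDAnalysis equivalent is sketched below, with file and residue names as placeholders rather than the actual simulation files.

```python
import MDAnalysis as mda
from MDAnalysis.lib.distances import capped_distance

# Per-frame counts of complex/protein contacts (< 3 A) and of distinct
# water molecules in the 3 A solvation shell of the metal complex.
u = mda.Universe("complex2a_pocket1.prmtop", "complex2a_pocket1.nc")
ru = u.select_atoms("resname RUC")           # placeholder residue name
prot = u.select_atoms("protein")
wat_o = u.select_atoms("resname WAT and name O")

n_contacts, n_shell = [], []
for ts in u.trajectory:
    pairs, _ = capped_distance(ru.positions, prot.positions, max_cutoff=3.0)
    n_contacts.append(len(pairs))
    pairs, _ = capped_distance(ru.positions, wat_o.positions, max_cutoff=3.0)
    n_shell.append(len({j for _, j in pairs}))   # distinct shell waters

print("mean contacts:", sum(n_contacts) / len(n_contacts))
print("mean shell waters:", sum(n_shell) / len(n_shell))
```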
Next, the different binding affinities between complexes 1a and 2a and the protein domain were examined by comparing the solvation spheres of both Ru compounds. In the preferred pocket 1, complex 2a is solvated by a larger solvation sphere (33 water molecules) than complex 1a (23 water molecules). This could seem an obvious conclusion, since complex 2a is larger than complex 1a and, thus, a larger number of water molecules can interact with it. However, if the solvation spheres are compared in terms of the number of water molecules per solute atom, to eliminate the system-size dependency, the same conclusion is drawn. The solvation sphere of complex 2a has 0.65 water molecules per solute atom, while that of complex 1a has only 0.54 water molecules per solute atom when the compounds are embedded in pocket 1 (Table 3). The only structural difference between the two complexes is the substitution of water (complex 1a) by 1-methylimidazole (complex 2a), as shown in Fig. 2 (note that the chlorido group of compounds 1a-d is replaced by water after hydrolysis). The dipole moments of both ligands were computed by B3LYP/6-31G* using geometries optimized at the same level of theory. 1-Methylimidazole presents a larger dipole moment (3.9 D) than water (1.9 D). Therefore, complex 2a should be better solvated, in agreement with what is observed in the MD simulations. Moreover, the better-solvated complex 2a interacts more strongly than complex 1a with the DNA-binding site of topoisomerase, which is more hydrophilic than pocket 2. The hydrophobic environment found in pocket 2 induces a desolvation effect in both Ru species, which present solvation spheres of only about 17 water molecules there. Thus, the intermolecular contacts between the metal complexes and the protein and, most importantly, the analysis of the solvation spheres of the two complexes indicate that pocket 1 is the most likely binding site of the protein domain. In addition, complex 2a interacts more strongly with topoisomerase than complex 1a and, thus, should be able to inhibit DNA binding more effectively. This behavior is in consonance with the generally higher cytotoxicity experimentally observed for the complexes with the 1-methylimidazole ligand.
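The water-versus-1-methylimidazole dipole comparison drawn above can be reproduced with any quantum chemistry package. The sketch below uses PySCF at the same B3LYP/6-31G* level; the hand-built 1-methylimidazole geometry is only a rough planar guess and would be optimized first in practice, as the authors did in Gaussian09.

```python
from pyscf import gto, dft

# Gas-phase dipole moment of 1-methylimidazole at B3LYP/6-31G*.
# Coordinates (Angstrom) are a crude, unoptimized planar sketch.
mol = gto.M(
    atom="""N   0.000  1.160  0.000
            C  -1.103  0.358  0.000
            N  -0.682 -0.938  0.000
            C   0.682 -0.938  0.000
            C   1.103  0.358  0.000
            C   0.000  2.620  0.000
            H  -2.130  0.691  0.000
            H   1.317 -1.811  0.000
            H   2.130  0.691  0.000
            H   1.020  2.980  0.000
            H  -0.510  2.980  0.880
            H  -0.510  2.980 -0.880""",
    basis="6-31g*", charge=0, spin=0)
mf = dft.RKS(mol)
mf.xc = "b3lyp"
mf.kernel()
print("dipole (Debye):", mf.dip_moment())
```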
Fig. 5 Polar and nonpolar amino acids, represented in orange and green, respectively, that are located at a distance smaller than 6 Å from metal complex 2a in a pocket 1 and b pocket 2. The drug is displayed in tube representation with C in cyan, S in yellow, O in red, Ru in pink, and H in white. The water molecules solvating the complex, located at a distance smaller than 3 Å from the complex, are represented by balls and tubes, where oxygen is red and hydrogen is white. The protein is represented in silver color
Conclusion
The cytotoxic mechanism of several thiomaltol-based Ru, Os, Rh and Ir complexes has been investigated by means of biological assays and (for the Ru complexes) theoretical simulations. Electrophoretic DNA plasmid and H2AX assays have demonstrated that some of the drugs do not induce a relevant degree of damage to the DNA structure and, thus, DNA is most likely not the main target of these drugs. On the other hand, the changes in IC50 values observed upon lowering topo2α levels by means of RNAi are consistent with the assumption that this enzyme is the main target for at least some of the compounds investigated here, primarily complex 2a (in accordance with its higher topo2α inhibitory capacity shown previously [22]). Monte Carlo and molecular dynamics simulations have revealed that the metal compounds are able to bind in the DNA-binding pocket of the enzyme. In addition, the drugs enter the binding pocket with their solvation shell to maximize the interactions with the hydrophilic environment of the pocket. This indicates that the interactions between the metal complexes and the protein alone are not strong enough to compensate for the energy penalty associated with the desolvation process of the drugs when they enter the binding pocket. Therefore, functionalization of the compounds with highly polar ligands favors their binding to the protein. This behavior is in agreement with the enhanced inhibition of cancer cell growth upon substitution of the labile chlorido ligand (which is readily exchanged for an aqua ligand in solution) by 1-methylimidazole. Our experiments also showed that inhibition of topo2α may not be the only cytotoxic mechanism of this type of complex. Thus, more research is required to validate the role of topo2α expression levels and to investigate additional cytotoxic mechanisms. Nevertheless, these compounds may serve as pharmacophores for the further rational design of inhibitors of topo2 catalytic activity. This adds a new aspect to the modes of action of experimental organometallics, which may allow some of the drawbacks of clinically applied metal-based anticancer agents to be overcome.
Cell lines and culture conditions
SK-BR-3, T47D and MDA-MB468 (human mammary carcinoma) cells were kindly provided by Evelyn Dittrich (Department of Medicine I, Medical University of Vienna, Austria) and authenticated via STR profiling by Multiplexion, Heidelberg, Germany. All cell culture media and supplements were purchased from Sigma-Aldrich and plastic ware from Starlab. SK-BR-3 and SW480 cells were grown in 75 cm² culture flasks in complete medium (i.e., Minimum Essential Medium supplemented with 10% heat-inactivated fetal bovine serum, 1 mM sodium pyruvate, 4 mM L-glutamine and 1% non-essential amino acids from 100× stock), while T47D and MDA-MB468 cells were grown in RPMI 1640 medium (supplemented with 10% heat-inactivated fetal bovine serum and 4 mM L-glutamine), all as adherent monolayer cultures. After successful transfection, the cell line T47D-kn was maintained in RPMI 1640 medium with 1.5% of the selection antibiotic (blasticidin) according to the manufacturer's protocol (Invitrogen by Life Technologies). Cultures were grown at 37 °C under a humidified atmosphere containing 5% CO₂ and 95% air.
Antiproliferative activity assay (MTT)
Antiproliferative activity in vitro was determined by the MTT assay (MTT = 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide). For this purpose, cells were harvested from culture flasks by use of trypsin and seeded in the appropriate medium (100 µL/well) into 96-well plates at densities of 5 × 10³ (SK-BR-3), 8 × 10³ (T47D), 2 × 10³ (MDA-MB468) and 24 × 10³ (T47D-kn) cells per well (growth kinetics tests were performed to estimate and confirm the required number of cells for further experiments). Cells were allowed to settle and resume proliferation for 24 h. Test compounds were dissolved in DMSO first, diluted in the appropriate medium and instantly added to the plates (100 µL/well), such that the DMSO content did not exceed 0.5%. After exposure for 96 h, the medium was removed and replaced with 100 μL/well of a 1:7 MTT/RPMI 1640 solution (MTT solution: 5 mg/mL of MTT reagent in phosphate-buffered saline; RPMI 1640 medium) and incubated for 4 h at 37 °C. Subsequently, the MTT/RPMI 1640 solution was removed, and the formazan product formed by viable cells was dissolved in DMSO (150 µL/well). Optical densities were measured with a microplate reader (BioTek ELx808) at 550 nm (with a reference wavelength of 690 nm) to yield relative quantities of viable cells as percentages of untreated controls, and 50% inhibitory concentrations (IC50) were calculated by interpolation. Evaluation is based on at least three independent experiments with triplicates for each concentration level.
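The final interpolation step of the MTT evaluation can be illustrated with a short sketch: the IC50 is read off by linear interpolation between the two concentrations bracketing 50% viability (done here on a linear concentration axis for simplicity; a log axis is also common). The dose-response values are placeholders, not measured data.

```python
import numpy as np

# IC50 by linear interpolation between the two concentrations that
# bracket 50% viability. Dose-response values are placeholders.
conc = np.array([0.1, 0.4, 1.6, 6.4, 25.6])        # uM, ascending
viability = np.array([95., 82., 61., 34., 12.])    # % of untreated control

above = np.where(viability >= 50)[0][-1]           # last point above 50%
c1, c2 = conc[above], conc[above + 1]
v1, v2 = viability[above], viability[above + 1]
ic50 = c1 + (v1 - 50.0) * (c2 - c1) / (v1 - v2)
print(f"IC50 ~ {ic50:.2f} uM")
```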
Flow cytometric detection of apoptotic/necrotic cells
Induction of cell death was analyzed by flow cytometry using FITC-conjugated annexin V (BioVision, USA) and propidium iodide (PI, Fluka) double staining. SK-BR-3, T47D, MDA-MB468 and SW480 cells were seeded into 12-well plates at a density of 5-13 × 10⁴ cells per well in complete medium and allowed to settle for 24 h. The cells were exposed to the test compounds at different concentrations for 48 h at 37 °C. Merbarone (Sigma-Aldrich) was used as a positive control at a concentration of 160 µM. After incubation, cells were gently trypsinized, washed with PBS, and resuspended with FITC-conjugated annexin V (0.25 μg/mL) and PI (1 μg/mL) in binding buffer.
DNA damage detection assay (γH2AX)
Evaluation of the formation of γ-H2AX in response to DNA double-strand breaks was performed with the FlowCellect™ Histone H2A.X Phosphorylation Assay Kit (Millipore). SK-BR-3, T47D, MDA-MB468 and SW480 cells were seeded into 6-well plates at a density of 15-25 × 10⁴ cells per well in complete medium and allowed to settle for 24 h. The cells were exposed to the test compounds at different concentrations for 48 h at 37 °C. Etoposide (Sigma-Aldrich) was used as a positive control at a concentration of 100 µM. All subsequent steps for DNA damage evaluation were performed according to the manufacturer's protocol. Numbers of cells and their viability state were determined with a Guava 8HT EasyCyte flow cytometer (Millipore) using ViaCount software. Results are based on three independent experiments.
Knockdown of topo2α expression level by RNAi
Single-stranded oligonucleotides specific for the topo2α gene were purchased from Invitrogen, and the knockdown of the topo2α expression level by RNAi was performed according to the manufacturer's protocol. Details of the single-stranded oligonucleotides specific for the topo2α gene: [Homo sapiens] topoisomerase (DNA) II alpha, 170 kDa (TOP2A); BLOCK-iT™ miR RNAi Select Hmi417628, Hmi417629, Hmi417630 and Hmi417631. Double-stranded oligonucleotides were generated and checked for integrity; then the ligation reaction and transformation of One Shot TOP10 E. coli were performed and the transformants analyzed. Target plasmids were extracted, rapid BP/LR recombination reactions performed, expression clones linearized, One Shot Stbl3 E. coli transformed and transformants analyzed. Plasmid DNA was isolated, the sequence confirmed and the transfection reaction in the target cells SK-BR-3, T47D and MDA-MB468 performed. The gene knockdown was performed as described in the manufacturer's protocols and illustrated in Scheme 1.
In the next few days, the topo2α knockdown versions of SK-BR-3 and MDA-MB468 cells stopped dividing or died. T47D cells with the sequences Hmi417628 (T47D-kn628) and Hmi417630 (T47D-kn630) were checked to verify the topo2α knockdown via Western blotting.
Western blotting
The visualization of the topo2α protein level in SW480, SK-BR-3, T47D and MDA-MB468 cells, as well as the verification of the topo2α knockdown in T47D-kn628/630 cells, was performed by Western blotting. Cells were seeded at densities of 1.5-2 × 10⁵ cells per well into 6-well plates (Starlab) and allowed to resume proliferation for 24 h. The cells were then washed with PBS and lysed by adding 100 µL per well of RIPA lysis buffer (150 mM sodium chloride, 1.0% Triton X-100, 0.5% sodium deoxycholate, 0.1% SDS (sodium dodecyl sulfate), 50 mM Tris, pH 8.0). Cells were carefully scraped off and sonicated for 10 s to shear DNA and reduce sample viscosity. The protein content of the lysates was measured with the Pierce Micro BCA Protein Assay (Thermo Scientific). The appropriate volume of cell lysate (20 µg protein content per gel pocket) was mixed with 6× loading buffer (12% w/v SDS, 30% 2-mercaptoethanol, 60% glycerol, 0.012% bromophenol blue, 0.375 M Tris-HCl, pH 6.8) and heated to 95 °C for 5 min. The proteins were separated by 8% SDS-polyacrylamide gel electrophoresis and subsequently transferred onto nitrocellulose membranes (Millipore) using a semi-dry blotting apparatus (Bio-Rad). The membranes were blocked with blocking buffer (1× TBS, 0.1% Tween-20 with 5% BSA) and immunoblotted with the relevant primary topo2α rabbit antibody diluted 1:1000 in TBS/T buffer (Cell Signaling Technology). Primary antibodies were detected using an anti-rabbit IgG HRP-linked antibody diluted 1:3000 in TBS/T buffer (Cell Signaling Technology) and visualized with the chemiluminescence detection system Fusion SL (Vilber Lourmat) using SuperSignal West Pico Chemiluminescent Substrate (Thermo Scientific). Afterwards, the membranes were stripped with stripping buffer (15 g glycine, 1 g SDS, 10 mL Tween 20, pH 2.2) and immunoblotted with a β-actin rabbit antibody diluted 1:3000 in TBS/T buffer (Cell Signaling Technology) to check the quality of the gel loading. Results are based on two independent experiments.
Computational details of binding of thiomaltol complexes to topo2α
Since all metal complexes investigated here are positively charged (the chlorido ones after hydrolysis), they may potentially bind to a highly polar pocket of topo2α, as is the case for the DNA-binding domain of the protein.
To explore this possibility, the binding affinity between the complexes 1a and 2a (see Fig. 2) and the DNA-binding domain of topo2α (PDB code 3L4K) [22] has been theoretically investigated by means of Monte Carlo (MC) and Molecular Dynamics (MD) simulations following the protocol explained below.
First, the DNA helix that is present in the crystal structure was manually removed. Then, the protein was protonated and solvated in a periodic truncated octahedral box of water molecules extended to a distance of 12 Å from any solute atom using the leap module of AmberTools15 [35]. This results in a solvated protein with one positive charge, which was neutralized with one chloride ion. The solvated protein was minimized for 20,000 steps, where the first 10,000 steps were driven by a steepest-descent algorithm and the last 10,000 steps by a conjugate-gradient algorithm. The system was then heated in the canonical (NVT) ensemble from 0 to 300 K by a classical MD simulation for 1 ns using a time step of 2 fs. During the heating process, the motion of the protein was restrained with a harmonic force constant of 10.0 kcal/(mol·Å²). Then, a classical MD simulation was run in the NVT ensemble at 300 K for 1 ns without applying any restraints to the motion of the protein. After heating, the density of the solvent and the structure of the protein were equilibrated in the isothermal-isobaric (NPT) ensemble during a 50 ns simulation with a time step of 2 fs. The Berendsen barostat with a pressure relaxation time of 2 ps was used to maintain a pressure of 1 bar. The Langevin thermostat with a collision frequency gamma of 1.0 ps⁻¹ was used in both the heating and equilibration steps to control the temperature. Moreover, the bond distances involving H atoms were restrained by the SHAKE algorithm [36]. Throughout the whole protocol, the Coulomb and van der Waals interactions were truncated at 10 Å. The Coulomb interactions were evaluated by the particle mesh Ewald method [37] using a grid spacing of 1 Å in each direction for the charge grid, in which the reciprocal sums are computed by fourth-order interpolation, and a direct sum tolerance of 10⁻⁵. The whole system, i.e., the protein, water molecules, and chloride ion, was described by a force field [38][39][40]. All these steps were run with the Amber14 package [35].
The last snapshot from the previous classical MD simulations, for which the solvated protein is well equilibrated, was selected, and the water molecules and the chloride ion were removed. Then, the different binding pockets of the equilibrated protein were explored by a classical Monte Carlo (MC) simulation of 4000 steps using the Protein Energy Landscape Exploration (PELE) algorithm [41]. Since accurate force fields for Ru-based complexes are not available in the literature, the binding pockets of the protein were explored with the thiomaltol ligand, which is present in all the metal complexes investigated here. It was therefore assumed that the binding mode of the metal complexes is the same as that of thiomaltol. Due to its small size and rigidity, the thiomaltol ligand was treated as a rigid body. The OPLS-AA force field [42] was used to describe topoisomerase and thiomaltol, while the aqueous solvent was modeled by the implicit surface-generalized Born continuum model [43]. The default values for the MC parameters implemented in the PELE algorithm [41] were used. The calculation of the binding free energy between thiomaltol and the protein for each of the 4000 MC steps allowed the identification of two favorable binding pockets (see below). These pockets are named here pocket 1 and pocket 2, with the first one being the DNA-binding pocket.
In the next step, the thiomaltol ligand bound to the two favorable binding pockets was replaced by the hydrolyzed Ru complex 1a and by the Ru complex 2a, such that the thiomaltol ligands of the complexes were aligned with the isolated thiomaltol ligand inside the pockets to keep the same binding pose. This resulted in the generation of four different systems: complex-1a/pocket-1, complex-1a/pocket-2, complex-2a/pocket-1, and complex-2a/pocket-2. The metal-complex/topoisomerase systems were neutralized by adding two chloride ions and solvated in a periodic truncated octahedral box of water molecules extended to a distance of 12 Å from any solute atom using the leap module of AmberTools15 [35]. Then, the system was classically minimized for 5000 steepest-descent and 5000 subsequent conjugate-gradient steps. After minimization, a classical MD simulation in the NVT ensemble was run for 20 ps with a time step of 1 fs to heat the system up to 300 K. Afterwards, a simulation in the NPT ensemble was run for 1 ns to equilibrate the density of the solvent and to accommodate the metal complexes inside the binding pockets of the protein. As mentioned above, accurate force fields are not available in the literature for Ru complexes, especially when metal/π interactions are important, as is the case here. Therefore, the internal coordinates of the complexes were frozen during the classical simulations. This means that intramolecular parameters describing the internal motion of the Ru species are not necessary. However, point charges and Lennard-Jones parameters are still needed to describe the non-bonded interactions between the Ru complexes and the solvated protein environment. The Lennard-Jones parameters for the Ru atom were taken from an MM3 force field developed for Ru(II)-polypyridyl complexes [44], while the Lennard-Jones parameters for the remaining metal-complex atoms were taken from the General Amber Force Field [45]. The charges were derived by first optimizing the geometries of complexes 1a and 2a quantum mechanically by density-functional theory (DFT) using the Gaussian09 program [46]. Specifically, the B3LYP functional [47][48][49][50] with the D3BJ dispersion correction [51] and the LanL2DZ effective-core potential [52] for Ru and the 6-31G* basis set [53] for the remaining atoms were employed. Then, the Mulliken charges were computed for the optimized geometries at the same level of theory.
In the last step of our protocol, the last snapshot of each of the four previous classical MD simulations (one per complex/pocket combination) was taken as the initial condition for a quantum mechanics/molecular mechanics (QM/MM) MD simulation in the NPT ensemble of 10 ps with a time step of 2 fs. The same thermostat and barostat as in the classical simulations were used. The metal complexes 1a and 2a were described quantum mechanically with the same density functional, basis set, and effective-core potential specified above, using TeraChem 1.9 [54,55] through the interface to external QM programs implemented in Amber14 [35] and run on GPUs. Dispersion corrections were included via the D3 parameterization of Grimme and coworkers [56]. The protein, water molecules, and chloride ions were treated with the force fields mentioned above. The interaction between the QM and MM layers was computed with electrostatic embedding [57]. The Coulomb and van der Waals interactions were truncated at 8 Å, and a real-space cutoff was used to compute the long-range QM-QM and QM-MM electrostatic interactions. These simulations allow the structure of the Ru complexes to relax at the QM level and thus provide a better description of the vibrational motion of the complexes and of their interaction with the binding pockets of topoisomerase and with the solvent. From the QM/MM MD simulations, 2000 snapshots were saved for analysis. The number of contacts between the Ru complexes and the protein and the number of water molecules in the solvation shells of the Ru complexes (see below) were obtained with the CPPTRAJ module [58] implemented in AmberTools15 [35]. The visualization of the MC and MD simulations and the graphical representations of the systems were carried out with the Visual Molecular Dynamics (VMD) program [59].
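Both analysis quantities reduce to per-frame distance counting. The sketch below shows the idea on raw coordinate arrays for a single frame; the actual analysis was done with CPPTRAJ, and the cutoffs used here (4 Å for a contact, 3.4 and 5.0 Å for the first and second water shells) are illustrative assumptions rather than the values from the paper.

```python
import numpy as np

def count_contacts(lig_xyz, prot_xyz, cutoff=4.0):
    """Number of ligand-protein atom pairs closer than the cutoff (Angstrom)."""
    d = np.linalg.norm(lig_xyz[:, None, :] - prot_xyz[None, :, :], axis=-1)
    return int((d < cutoff).sum())

def waters_in_shells(lig_xyz, wat_o_xyz, r1=3.4, r2=5.0):
    """Count water oxygens whose minimum distance to the complex falls in
    the first (< r1) or second (r1 to r2) solvation shell."""
    dmin = np.linalg.norm(
        wat_o_xyz[:, None, :] - lig_xyz[None, :, :], axis=-1).min(axis=1)
    return int((dmin < r1).sum()), int(((dmin >= r1) & (dmin < r2)).sum())

# Placeholder frame: these arrays would normally be read, frame by frame,
# from the 2000 saved QM/MM snapshots.
rng = np.random.default_rng(2)
lig  = rng.uniform(0.0, 50.0, (45, 3))        # complex atoms
prot = rng.uniform(0.0, 50.0, (500, 3))       # protein atoms
wat  = rng.uniform(0.0, 50.0, (3000, 3))      # water oxygens

print("contacts:", count_contacts(lig, prot))
print("first/second shell waters:", waters_in_shells(lig, wat))
```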
"year": 2020,
"sha1": "0d8c1172cc29919852217b32adddf32f019e85fa",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00775-020-01775-2.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b35b4c03f6c4b8af74f058ef08090c2322b18f2a",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Electoral violence and the legacy of authoritarian rule in Kenya and Zambia
Why do the first multiparty elections after authoritarian rule turn violent in some countries but not in others? This article places legacies from the authoritarian past at the core of an explanation of when democratic openings become associated with electoral violence in multi-ethnic states, and complements existing research focused on the immediate conditions surrounding the elections. We argue that authoritarian rule characterized by more exclusionary multi-ethnic coalitions creates legacies that amplify the risk of violent elections during the shift to multiparty politics. Through competitive and fragmented interethnic relations, exclusionary systems foreclose the forging of cross-ethnic elite coalitions and make hostile narratives a powerful tool for political mobilization. By contrast, regimes with a broad-based ethnic support base cultivate inclusive inter-elite bargaining, enable cross-ethnic coalitions, and reduce incentives for hostile ethnic mobilization, which lowers the risk of violent elections. We explore this argument by comparing founding elections in Zambia (1991), which were largely peaceful, and Kenya (1992), with large-scale state-instigated electoral violence along ethnic lines. The analysis suggests that the type of authoritarian rule created political legacies that underpinned political competition and mobilization during the first multiparty elections, and made violence a more viable electoral strategy in Kenya than in Zambia.
Introduction
The shift to multiparty politics in sub-Saharan Africa in the early 1990s raised hopes for democratization across a continent where most countries had limited experience of competitive elections. The transition gave rise to 'founding' elections, viewed as critical junctures on the path to democracy. In some of these elections, as in Malawi and Benin, there were only isolated incidents of violence, while electoral contests were far from peaceful in, for instance, Côte d'Ivoire and Nigeria. Why do the first multiparty elections after authoritarian rule turn violent in some countries but not in others?
We argue that an important step for understanding this underexplored variation lies in the varying strategies chosen by political leaders in how to consolidate political support during authoritarian rule. The institutional setup of the authoritarian regimes across much of Africa looked similar, for example, in the dominance of a single party. Yet, how leaders sought to retain the support of a ruling coalition within the authoritarian framework differed significantly. One salient aspect is to what extent ethno-regional interests were represented in the ruling coalition. A majority of states resorted to some form of ethnic inclusion to accommodate ethnic grievances and undermine regime challengers (Rothchild & Foley, 1988). However, regimes pursued inclusion to different extents, and we argue that the degree of inclusion matters. Specifically, in the context of multi-ethnic societies, leaders that rule by means of a narrower ethnic support base, what we refer to as exclusionary approaches, create interethnic relations characterized by competition and fragmentation. With the introduction of multiparty elections, such dynamics heighten the risk of electoral violence by foreclosing the forging of cross-ethnic elite coalitions and by contributing to more competitive intercommunal relations. In this context, leaders can gain politically by resorting to electoral tactics that rest on hostile group narratives. By contrast, strategies that rely on a more broad-based ethnic support base, here called inclusionary approaches, are more likely to produce interethnic relations that alleviate the risks of violence in founding elections by fostering inclusive inter-elite bargaining and by contributing to more amicable intercommunal relations, thereby lowering incentives for hostile out-group mobilization.
We explore this argument by comparing founding elections in Kenya (1992) and Zambia (1991), ending authoritarian single-party rule in both countries. This comparison poses a puzzle. In both countries, the leaders warned about the dangers of reintroducing multipartyism: Zambia's president Kenneth Kaunda claimed that electoral competition would propel 'Zambia into the "Stone Age politics" of ethnic violence' (Bratton, 1992: 85); while Kenya's president Daniel arap Moi declared that multiparty elections would 'usher in tribal conflict and destroy national unity' (Barkan, 1993: 90). However, while Kenya experienced electoral violence resulting in at least 1,500 deaths, Zambia's elections were largely free from violence. Our analysis suggests that the different strategies used for building and maintaining a multi-ethnic ruling coalition during the authoritarian era represent one vital component for explaining this variation.
The contribution of this article is threefold. First, the focus on legacies from an authoritarian past advances research on electoral violence by explicating how electoral violence can have deep historical underpinnings. To date, most literature approaches the causes of electoral violence from a short-term analytical vantage point, focusing on the immediate dynamics or institutional context of electoral contests. 1 Second, we tally with an emerging literature highlighting how institutional choices during authoritarian rule generate legacies that structure political choice when democratic openings occur. 2 While previous research recognizes ethnic conflict as a critical challenge to democracy in multi-ethnic settings and unpacks different measures to counter such conflicts, it is less recognized that strategies to address conflict in heterogeneous societies were also prevalent during authoritarian rule. Third, by uncovering dynamics where the electoral contest remained peaceful, we contribute to the existing case study literature, which focuses mainly on violent elections. 3
Motivation
An extensive literature acknowledges the challenges of introducing elections in multi-ethnic societies with limited democratic experience (cf. Mansfield & Snyder, 1995; Cederman, Gleditsch & Hug, 2013). Elections hold the potential for fueling violence in ethnically divided societies when leaders suddenly become reliant on securing popular support (Mann, 2005; Snyder, 2000), and the ballot box offers no institutional protection for the minority against majority rule. Electoral violence, levied to influence electoral results or to protest announced results, often erupts along ethnic lines as electoral dynamics tilt towards 'ethnic head counts' (Chandra, 2004; Kuhn, 2015). Many studies probe how institutional provisions may reduce the risk of violence in multi-ethnic states, such as formal power-sharing, minority guarantees, quotas, and electoral rules that secure broad political representation (Horowitz, 2000; Lijphart, 1977; Reilly, 2001; Sisk, 1996; Fjelde & Höglund, 2016). While this research recognizes the often deep historical roots of politicized ethnicity, it primarily explores institutional engineering within the framework of formal democratic institutions. 4 We know less about how the political dimensions of interethnic relations evolve from the period of authoritarianism to electoral democracy, and what consequences they have for violent or peaceful electoral conduct.
1 For example, type of electoral institutions (Fjelde & Höglund, 2016; Salehyan & Linebarger, 2014), institutional weakness (Salehyan & Linebarger, 2014; Hafner-Burton, Hyde & Jablonski, 2014), electoral uncertainty (Wilkinson, 2004), land insecurity (Klaus & Mitchell, 2015), or international observers (Daxecker, 2012; Smidt, 2016). 3 For example, Boone & Kriger (2012) and Klopp & Zuern (2007), or within-case variations (Klaus & Mitchell, 2015).
Scholarship focusing more generally on regime characteristics and political violence primarily highlights the capacity of authoritarian regimes to repress dissent, including ethnic challengers (e.g. Muller & Weede, 1990). This literature posits that the potential for ethnic violence surged as democratic institutions placed significant constraints on the ability of regimes to coerce multi-ethnic populations into compliance (Mousseau, 2001;Saideman et al, 2002). Yet, the large variation in ethnic violence following the introduction of multiparty rule suggests that 'removing the repressive lid' provides only part of the story of how intergroup relations evolve from authoritarian rule to multiparty electoral competition.
Across different authoritarian regimes, significant variation exists in how leaders manage threats and how they stay in power through maintaining a dominant coalition. In multi-ethnic societies, these choices will be structured along ethnic lines. We argue that the choice of ruling strategy leaves interethnic relations in markedly different places at the transition to multiparty politics, producing different baseline risks for founding elections to turn violent. Below we focus on one salient aspect of how autocratic leaders manage interethnic relations: the degree to which various ethnic groups are represented in the ruling coalition. Next, we draw the line from a more exclusionary system to a higher risk of violent multiparty elections.
Violent elections as legacies from authoritarian rule
Research on authoritarianism highlights the challenge of controlling the majority of the population excluded from power, but dictators cannot rule on their own. Instead, political leaders across the spectrum of regime types rely on the maintenance of ruling coalitions, whose support is critical to ensure regime survival (e.g. North, Wallis & Weingast, 2009;Svolik, 2012). In democratic contexts, elections and electoral rules predominantly shape the nature and composition of these ruling coalitions. In authoritarian states, ruling coalitions are largely determined at the discretion of the elite (Svolik, 2012).
Where ethnicity is politically salient, the task of coalition-building tends to focus on inclusion or exclusion of ethnic groups (Wimmer, Cederman & Min, 2009;Bormann, 2019). Across Africa's post-colonial authoritarian regimes, building dominant coalitions required balancing the need for ethnic inclusion to ensure cooperation, with the risk of facing violent challenges from excluded groups (Roessler, 2016). In these weakly institutionalized states, elites sought to solidify their rule through political incorporation of ethnic groups that could otherwise destabilize the regime. By co-opting ethnic leaders, it was 'possible to reduce the scale and intensity of their demands' (Rothchild & Foley, 1988: 233). The ethnic configuration of the state, in turn, reflected elite bargains among a subset of all ethnic groups, agreeing to cooperate to share executive power and 'rents that come from controlling the state' (Roessler, 2016: 12).
Strategies to sustain ruling coalitions included the establishment of formal institutions, such as a strong regime party that could maintain a multi-ethnic support base through elections (e.g. Horowitz, 2000: 429). Most importantly, however, regimes rested on informal structures of patronage politics. At the top, regimes accommodated ethnic elites deemed central to their stability through the distribution of public offices and state resources (Arriola, 2013). These ethnic leaders, in turn, acted as brokers of patronage that could mobilize their own ethnic communities (Bayart, 1993;Bratton & van de Walle, 1994). The inter-elite bargains, thus, translated into societal stability through the ethnic intermediaries' embeddedness in patron-client networks that extend through society as a whole (North, Wallis & Weingast, 2009;Roessler, 2016).
In heterogeneous societies without a majority group able to rule on its own, incumbents depended on the support of the leadership of other ethnic groups to ensure regime survival. Almost all ruling coalitions in Africa involved the inclusion of some ethno-regional intermediaries, but the degree of ethnic inclusion (both the size and ethnic plurality) varied significantly. 5 Authoritarian regimes can thus be placed along a continuum depending on the degree of intra-elite accommodation at the center, and incorporation of various ethno-political interests into the ruling coalition (Horowitz, 2000; Rothchild & Foley, 1988). These can be referred to as more or less exclusionary or inclusionary, depending on how broad the ethnic powerbase of the regime was (Horowitz, 2000; Huntington, 1970; Rothchild, 1997).
4 Exceptions include Bratton & van de Walle (1997), Horowitz (2000), and Rothchild (1997), but none focuses on electoral violence. 5 Strategies to forge ruling coalitions were shaped, for example, by the ethnic plurality of the state (Wimmer, Cederman & Min, 2009), whether social class followed ethnic membership (Horowitz, 2000: 23), and legacies from colonial rule (Levitsky & Way, 2012).
At one end of the continuum are regimes that Rothchild & Foley (1988) label 'elite consensual systems', characterized by broadly incorporative grand coalitions, where most major ethnic groups were allowed some participation in decisionmaking processes, typically through various cabinet or executive appointments to ethnic representatives (Arriola, 2009). Senegal under president Senghor's one-party regime is a case in point, where the government relied on inclusion of a majority of Senegal's ethno-regional and religious groups, and no single ethnic group faced discrimination.
At the other end of the spectrum are regimes that effectively monopolized state executive power at the hand of a single ethnic group (Wimmer, Cederman & Min, 2009). Examples at the extreme end include Burundi (1966-88), where the Tutsi minority formed the ruling coalition while the Hutu majority were excluded under the military regime, and the settler oligarchy of South Africa, where the minority Afrikaners governed while excluding the black majority. Beyond this extreme, we find regimes that survived through exclusionary approaches without monopolizing the state. These relied on support from constituencies within the ruling coalition, while simultaneously restricting political activity and resources for excluded groups (Huntington, 1970). Kenya, as the empirical analysis demonstrates, fits this category.
We argue that how leaders formed ruling coalitions during the authoritarian era influenced inter-elite dynamics, the relationship between the broader ethnic groups, and the relationship between ethnic leaders and their ethnic communities. The approaches adopted during authoritarian rule had long-lasting legacies by giving rise to different political incentives, for both the incumbent and opposition, when formal institutions shifted with transitions to multiparty politics.
At the elite level, more inclusionary regimes facilitated the growth of a relatively cohesive ruling class, united by a shared interest in accessing state resources on which their positions depended (Bayart, 1993;Arriola, 2009). By extension of patron-client relations, a larger number of groups perceived themselves as 'within the ruling coalition', which promoted intra-elite accommodation and integration. In such systems, clientelism 'provide[s] the cement by which ethnic identities are amalgamated within the boundaries of a more inclusive political system' (Lemarchand, 1972: 70). What actually trickled down to the ethnic constituencies was not necessarily more in regimes with inclusive ruling coalitions. However, rejecting the predominance of a single ethnic cleavage, and inter-elite accommodation, still nurtured accommodative policies to manage intercommunal relationships.
By contrast, regimes that monopolized state power at the hands of a single group, or relied on a strategy of narrow elite co-optation, produced elite dynamics characterized by fragmentation and competition. Exclusionary regimes engineered ethnicity through 'divide and rule' strategies that induce collective action problems among those excluded from power (Acemoglu, Verdier & Robinson, 2004). Restricting access to state resources and repression reinforced the divide and armed leaders from excluded groups with a discourse of historical injustices, political discrimination, and distrust (LeBas, 2006).
The degree of inclusion in coalition-building fostered not only different legacies for inter-elite dynamics, but also how ethnic communities are mobilized and, in turn, their relationship to each other. The elites' bargaining power within the regime coalition largely depended on the strength of the constituencies mobilized. Elites therefore had strong incentives to cultivate their clientelist base. Relying on narratives that invoked out-group resentment and a sense of group entitlement was one way of doing this (Straus, 2015: 58). As individuals perceived access to state resources to depend on having a coethnic in power, they had incentives to rally behind their in-group coalition and request access to patronage as members of particular ethnic groups (Bates, 1983; Posner, 2005; Wimmer, 1997). Thus, both elite-driven top-down and community-driven bottom-up processes reinforced the political salience of ethnicity. These processes were mirrored by similar countermobilization efforts from ethnic groups outside the ruling coalition (Wimmer, Cederman & Min, 2009). The competitiveness of the mobilization was higher with more exclusionary coalitions. The stakes increased with the number of groups excluded from power, fostering a perception of politics as a zero-sum game. More inclusionary coalition-building did not feed into competitive mobilization of ethnic groups in the same way. The shares of groups in the regime coalitions were larger, reducing ethnic elites' political premium from marshalling their own constituents to signal strength.
What were the implications of these characteristics for the transition to multiparty rule? We argue that the political legacy of more exclusionary systems, with fragmented and competitive interethnic relations, made it difficult to counter the risk of violence in the transition, since strategies of 'divide and rule' closed the door for broad-based elite bargaining. The exclusionary context also provided fertile ground for politicians to capitalize on ethnic cleavages to solidify their electoral agenda. The strategically induced, competitive mobilization of ethnic groups under more exclusionary regimes fueled a perception of politics as a zero-sum struggle that persisted through the transition. These parameters shaped the perception of both threats and opportunities in the transition to multiparty rule, and thereby the opportunity structures for both incumbent and the opposition in terms of how political influence could be pursued (Kirschke, 2000;McAdam, Tarrow, & Tilly, 2001: 46-47). The legacy from exclusionary regimes allowed hostile ethnic narratives to become powerful tools for electoral mobilization. Elites had large payoffs from resorting to exclusionary appeals and extremist rhetoric focusing on out-group threats, which fed into the violent mobilization of ethnic constituencies (Horowitz, 2000; Rabushka & Shepsle, 1972).
In a polarized electoral environment, violence can take several forms (e.g. Staniland, 2014). First, ethnic rhetoric can legitimize harsh security crackdowns and repressive policing against members of other groups, measures that can be used to harass voters, target opposition candidates, and violently coerce voters along ethnic lines (e.g. Taylor, Pevehouse & Straus, 2017). Second, it can entail the formation of ethnic militia groups resorting to violence to intimidate and demobilize opposition supporters to change the ethnic powerbalance on the ground (e.g. Raleigh, 2014). Third, it can manifest as intergroup violence if opportunistic politicians provoke ethnic riots to solidify their own electoral base (e.g. Wilkinson, 2004).
More inclusionary regimes faced transition to multiparty politics with less competitive, antagonistic, and divisive patterns of interethnic relations both at the elite and communal levels. They were thus less likely to experience electoral violence. The opportunity structure engendered by authoritarian rule created more options for elites to forge cross-ethnic coalitions, as well as different incentives for intragroup mobilization. Specifically, the absence of blatant discrimination of certain groups reduced zero-sum perceptions of politics and opened up for interethnic elite negotiation. It also made it easier for elites to close political deals during transitions (Bratton & van de Walle, 1997: 87). Such dynamics reduced leaders' incentives to resort to ethnic outbidding and exclusionary appeals and made it easier for followers to resist such rhetoric. In turn, the conditions for violent ethnic mobilization were less ripe.
Case selection and sources 6

We explore this argument by comparing founding elections in Kenya (1992) and Zambia (1991). These elections display considerable variation in the main outcome of interest: the presence of electoral violence during founding multiparty elections.
Zambia and Kenya share several characteristics that make the comparison relevant. In both countries, the regime introduced single-party rule against the backdrop of escalating ethnic tensions and the 'inability of multiethnic parties' to maintain multi-ethnic support (Horowitz, 2000: 429). The two countries had shifted between periods of single-party rule and multiparty elections, with similar electoral systems (single-member plurality), and both are ethnically diverse with Kenya commonly described as having 42 different ethnic communities and Zambia 73 (Dresang, 1974;Posner, 2007). Ethnic groups live relatively segregated geographically and dominate specific regions. How ethnicity nests with electoral politics is the subject of this study; here, it is sufficient to note that ethnic identification is important in both countries. The cases also differ on important accounts (colonial experience, population density, and the land question). We return to these factors in the analysis.
The analysis builds on secondary sources and interviews with academics, politicians, government officials, NGO representatives, traditional leaders, and local residents. We conducted 40 interviews in Kenya (2011) and further interviews in Zambia (2014).
6 The Online appendix includes additional information about case selection and interviews. 7 The Kenya analysis partly draws on Fjelde & Höglund (2018).

Kenya's authoritarian legacy and violent 1992 elections 7

In December 1992, Kenya held its first competitive multiparty elections after three decades of single-party rule. While the election created a democratic opening, large-scale violence (state-sponsored, but with clear ethnic connotations) killed approximately 1,500 people and displaced 300,000 (Africa Watch, 1993: 1). The ruling party, the Kenya African National Union (KANU), emerged as the sole winner, with Daniel arap Moi retaining the presidency.
Politics during single-party rule

Kenya's first president Jomo Kenyatta established de facto one-party rule in 1969, which continued when Moi came to power in 1978. During this era, political leaders used ethnic coalition-building as a strategy to maintain their powerbase, but access to state power and resources was reserved for only a few ethnic groups. Ethnicity was a key asset in political mobilization and shaped voting patterns (Hydén & Leys, 1972). Overall, Kenyans perceive national politics as a rivalry between Kikuyu, Kalenjin, Luo, Luhya, and Coastal groups (Posner, 2007). 8 However, ethnic coalitions are not fixed and shifting alliances characterized the single-party era, where Kenyatta and subsequently Moi sought to promote their own ethnic interests using 'similar political methods of factional manipulation to achieve these ends' (Throup, 1987: 34; see also Omolo, 2002).
Exclusionary ethnic coalition-building was not a prevalent trait in the immediate post-independence period. In 1963, KANU won the first free elections, and was initially able to unite all ethnic groups in the ruling coalition. The main opposition party, the Kenya African Democratic Union (KADU), was co-opted and eventually merged with KANU in 1964. While KANU and KADU had similar political agendas, their support bases were different: KANU drew support mainly from the Kikuyu community, based in Central Province and part of the Rift Valley, and some communities in the Nyanza and Eastern Provinces. KADU's prime ethnic base was the Luhya (Western Province), the Kalenjin and related groups (Rift Valley), and some communities from the Northeastern and Coast Provinces (Barkan, 1993). Thus, in the initial post-independence era, a broad multi-ethnic coalition formed the ruling coalition at the center (Throup, 1993).
Nevertheless, the Kikuyu community came to dominate the civil service, and state policies favored the Central Province economically (Barkan, 1993: 87). When Kenya became independent and Kenyatta (of Kikuyu origin) came to power in 1963, expectations were that the Kikuyu would gain a privileged position. Although the independence movement (the Mau Mau) was multi-ethnic, the Kikuyu dominated it and faced repression, regardless of whether they were part of the uprising or not (Throup, 1993). This underpinned a sense of entitlement among Kikuyu, seen in land claims in the Rift Valley made by Kikuyu migrant laborers (Kanyinga, 2009).
Land redistribution, particularly in the fertile Rift Valley region, soon became a major political issue. KADU, supported by white settlers, opposed land-buying programs which would accommodate Kikuyu land claims. In this context, the KANU-KADU merger was an interethnic elite 'pact', where 'in exchange for political power KADU dropped its opposition to settlement schemes in the Rift Valley' (Klopp, 2001: 477). Kenyatta's land-buying program led to the migration of many Kikuyu farmers to the region (Harbeson, 1973) and laid an additional foundation for land conflict.
After the KANU-KADU merger, the ethnic ruling coalition grew increasingly exclusionary. In the process of consolidating power, the regime ostracized another community, the Luo, and singled them out as second-rank citizens, invoking smear campaigns and marginalization of Luo leaders (van Stapele, 2010: 115). The ethnic divide widened as Kenyatta demoted Vice-President Oginga Odinga (a Luo). 9 While the conflict originated in ideological disagreement, it intertwined with ethnicity (Ogot, 2003: 33). In 1969, the government banned Odinga's party, the Kenya People's Union (KPU), making one-party rule a de facto reality. Originally, both Luo and Kikuyu MPs supported KPU, but it became branded as a Luo party and members associated with it faced punishment, including loss of jobs, social rejection, and difficulties in obtaining government loans or licenses (Mueller, 2014: 5).
After Kenyatta's death in 1978, Moi came to power and ruled in a paranoid and kleptocratic manner (Barkan, 1993: 87-88;Throup, 1993: 385). Moi inherited a state in the hands of patrons that had been loyal to Kenyatta, and, as he was Kalenjin, he lacked support from a dominant ethnic group. To ensure control, Moi raised the level of state patronage transferred through individuals and made access to patronage conditional on loyalty to the executive. These strategies served to increase ethnic salience (Cheeseman, 2006: 79). More specifically, the regime's powerbase became increasingly exclusionary geographically and ethnically. Moi began to favor formerly 'disadvantaged groups', including Kalenjin and allied tribes, while many Kikuyu had to leave the civil service (Kahl, 1998: 113). Public investments were reoriented from the Central Province to Moi's main support base, primarily the Rift Valley (Throup, 1987: 34). Thus, 'over time, through reorganizing national alliances and patronage networks to ensure patrimonial control, KANU alienated many within Kikuyu and Luo constituencies' (Klopp, 2001: 277). However, the Kikuyu remained economically influential, placing Moi in a vulnerable position (Holmquist & Ford, 1995: 177).
A faltering economy gradually undermined Moi's patronage-based regime and contributed to increased marginalization of certain groups (Mueller, 2014;Throup, 1993). A coup attempt in 1982 (allegedly led by Luo officers, but also involving Kikuyu), further engrained the regime's repressive approach to dissent (Barkan, 1993;Mueller, 2014). In the wake of the coup, Moi disproportionally promoted Kalenjin officers within the security forces and reinforced the paramilitary police (Roessler, 2005: 213). In this context, a perception of 'Kalenjinization' grew and the Kalenjin became associated with the regime's repressive measures (Lynch, 2011: 140).
The election
The transition to multiparty politics took place in a context of rising inflation and food shortages. In August 1991, a wide coalition of clergy, lawyers, and opposition groups formed the Forum for the Restoration of Democracy (FORD), challenging the political and economic state of the country. The government responded with force and arrested several FORD leaders. The internal pressure, combined with donors' suspension of foreign aid (Kirschke, 2000), led to legalization of multiparty politics in December 1991.
Moi won the presidential election with 36% of the vote, and KANU secured 57% of the seats in parliament because the opposition was severely divided (Mueller, 2014). 10 FORD was initially a cross-ethnic coalition, including Luo and Kikuyu. Yet within months, several rival alliances were formed along ethnic lines, where each faction fielded its own presidential candidate. FORD broke into two factions: FORD Asili, led by a Kikuyu (Kenneth Matiba), and FORD Kenya, led by a Luo (Oginga Odinga) (Oyugi, 1997: 47-48). Partly, the fragmentation of the opposition was a consequence of Moi's early strategy to channel patronage via powerful individuals serving as ethnic patrons to secure support. The approach had raised ethnic salience, fostered a perception of politics as zero-sum, and created strong center-locality relations where individual patrons had the capacity to mobilize mass support and deliver their ethnic constituencies (Cheeseman, 2006: 302-303). By being both wealthy and having an established ethnic support base, the main political leaders knew that 'even if they suffered electoral defeat, the strength of their personal support would ensure that they continued to be major players in Kenya's political landscape' (Cheeseman, 2006: 303).
Large-scale violence and coerced displacement accompanied the election. As a direct response to the democratic transition, violence began over a year prior to the elections. It intensified in the months preceding the election and continued after. While people were killed and displaced in several parts of Kenya, the violence centred on the Rift Valley Province (and along its border with Nyanza and Western Province), which saw the largest share of parliamentary seats and where mass support for Moi was crucial (Kahl, 1998: 111-112). Violence was levied strategically to establish KANU dominance, by playing on prevailing land conflicts between the 'indigenous' groups (Kalenjin, Maasai, Samburu, and Turkana) and more recent settlers in the region. The violence targeted mainly Kikuyu, Kisii, Luhya, and Luo - groups associated with the opposition. It served to appropriate the promised (land) resources for KANU-supporting communities, to secure the support of the Kalenjin community, and to punish opposition voters (Boone, 2011: 1328; Throup & Hornsby, 1998: 195-197).
Explaining electoral violence
How did authoritarian rule influence Kenya's 1992 election? The mode of coalition-building and diversion of opposition during single-party rule contributed to electoral violence, via legacies relating to the character of interethnic relations at the elite and communal level. These shaped political elites' options for political mobilization. Two processes were particularly important.
First, the legacies from authoritarian rule created fragmented and competitive elite relations. The opposition came primarily from outside of Moi's main circle of power and was initially united by having been excluded from power and state patronage (due to their ethnicity). But it quickly disintegrated along ethnic lines. Because 'ethnic brokerage' was central for how leaders mobilized political support, it was neither feasible nor desirable for the opposition to strive for cross-ethnic support (Cheeseman, 2006: 302;LeBas, 2011). In addition, no inclusive elite pact, which could have reduced the risks leaders faced, was on the table (Kirschke, 2000: 396). Instead, Moi continued a previous tradition of divide and rule to secure victory, and 'introduced new means of coercion to assert political control', where ethnic militias played a key role (Kirschke, 2000: 397). In this process, the security forces, where Kalenjin officers allegedly had been promoted disproportionately, remained loyal to Moi and did not halt the violence (Roessler, 2005: 214).
Second, legacies from the authoritarian era made electoral rhetoric exploiting ethnic cleavages a viable electoral tactic. 'Exclusionary ethnicities' had been entrenched by Moi's concentration of patronage to his support base and increased stigmatization of other groups (Lynch, 2011: 9; Mueller, 2014: 8). Following this logic, antagonistic intergroup competition underpinned the electoral rhetoric and tied in with deep-seated land grievances, exploited by politicians to garner electoral support. 11 KANU politicians, seeking re-election, were the prime instigators. The purpose was to punish pro-opposition supporters or prevent them from voting through intimidation or displacement. After land was cleared, it was used as patronage to reward militants and secure electoral support (Boone, 2011: 1328). In this process, the politicians resorted to ethnic outbidding: for instance, at political meetings politicians called for the eviction of 'aliens' and 'foreigners' (mostly Kikuyu, but also Luo) from the land, so that native ownership (primarily by Kalenjin and Maasai) could be restored (Africa Watch, 1993: 18; Klopp, 2001). These communities, in turn, feared expulsion if KANU lost the election (Kahl, 1998: 109). 12
Zambia's authoritarian legacy and peaceful 1991 elections
Zambia became a single-party state in 1972 when the government banned all parties except Kaunda's United National Independence Party (UNIP). In October 1991, the first multiparty elections in 30 years were held. Kaunda and UNIP were massively defeated, Kaunda stepped aside, and Zambia experienced a largely peaceful transition to multiparty politics (Bratton, 1992).
Politics during single-party rule

After a relatively peaceful path to sovereignty, Zambia gained independence in 1964 and Kaunda became the first president. 13 The independence movement, led by Kaunda and UNIP, started with passive resistance, but later reverted to active resistance including the burning of bridges and government buildings (Larmer, 2011: 40; Rasmussen, 1974). Although regional divisions existed, no particular ethnic group dominated and UNIP was 'committed to obtaining total support from the whole country' (Roberts, 1976: 221).
Ethnicity has been important in political mobilization in Zambia and has shaped voting patterns since independence (Osei-Hwedi, 1998; Posner, 2005). At the center, political competition is primarily between four broad ethno-linguistic groups - Bemba, Nyanja, Tonga, and Lozi - but these are all composed of different ethnic groups. Political parties do not fully align with a region or group, but have generally been considered to represent a particular region and the ethnic group dominating it. While Zambians generally perceived UNIP as a Bemba-leaning party at independence, it later became associated with the Nyanja (Posner, 2005: 107-108).
Post-independence, UNIP remained the political powerhouse. However, internal tensions of an ethnic nature ravaged UNIP and in 1966, Lozi speakers left and formed the Unity Party (UP). Factional fighting continued in 1967: the Bemba-Tonga alliance won over the Lozi-Nyanja alliance and Bemba took up many senior government positions. Despite allegations of Bemba hegemony, the government structures remained ethnically diverse and involved other groups (Dresang, 1974: 1609-1611).
Two years after its formation, UP was banned due to agitation for increased autonomy of Barotseland in the west. The ban signalled the government's increasing sensitivity to competition. To reduce the risk of ethnic contestation, the regime championed the motto of 'One Zambia, One Nation', intended to foster national, not ethnic, identity (Burnell, 2005). Politics came to evolve around ethnic balancing: a range of policies, from land distribution to ministerial appointments, depended on balancing ethnic representation rather than on merit (Dresang, 1974: 1615). Single-party rule was a reaction to challenges arising from within UNIP (Gertzel, Baylies & Szeftel, 1984: 7). In 1971, the United Progressive Party (UPP) emerged as a UNIP-splinter group. UPP was a Bemba-dominated party, led by former vice-president Simon Kapwepwe. UPP threatened Kaunda, as it represented long-term UNIP representatives and drew considerable support from UNIP strongholds like the Bemba-dominated Copperbelt and the north (Burnell, 2001; Posner, 2005). This was in sharp contrast to the African National Congress (ANC), the main opposition party from independence until the introduction of single-party rule, which had its prime support base in the Tonga-dominated south. An increasing number of interparty violence incidents served as a pretext for UNIP to act on single-party rule. In February 1972, the government banned UPP and detained Kapwepwe together with over 100 leading party members. The same year, the regime co-opted the ANC leadership and a constitutional amendment passed in December 1972 made Zambia a de jure single-party state (Gertzel, Baylies, & Szeftel, 1984: 16-17).
During the first years of single-party rule, Zambia was riding high on revenues from copper. Kaunda began to promote national unity based on socialism and Christian values (Burdette, 1988: 77-78). For Kaunda, it was a political necessity to prevent ethnicity from becoming the only political identification, since he was partly an outsider to the ethnic game: he was born and raised in a Bemba-speaking area in northern Zambia, but his parents were from Malawi. 14 Under Kaunda's rule, the political elite was structured according to ethnic balancing. 15 Kaunda deliberately balanced ethnic and regional interests, including political compromises and power-sharing in political, economic, and military realms (Bratton, 1992: 83; Lindemann, 2011). 16 While his policy of 'appointing members of all the country's major ethno-regional groupings to the national structures [ . . . ] reinforced growing tensions within the party' (Larmer, 2011: 56), he intentionally deployed it to reduce the risk of any particular group feeling excluded and the political salience of ethnicity (Dresang, 1974: 1609). Furthermore, the regime introduced a rotational system, where public servants would never work in or represent their original constituency, but moved around to different parts of the country. Both these factors contributed to interactions within the political elite based on a culture of cooperation and coexistence, rather than rivalry founded on ethno-regional differences. 17 In contrast to Kenya, regional favoring was not evident during Zambian single-party rule. 18 Kaunda combined formal policies (like public spending on education) and informal neo-patrimonial interventions to uphold loyalties across ethno-political elites. These elites became part of a system where cooperation was sufficiently inclusive to foster integration (Burnell, 2005: 119). While economic recession in the mid-1970s increased the population's disillusionment with the regime, it did not skew regional distribution of state resources, which could have created ethnic grievances among disadvantaged groups. Instead, the different regions all took a reasonable part of the losses (Burdette, 1988: 124).
During single-party rule, UNIP remained a nationalist party and no major destabilizing ethno-regional political movements existed (Burdette, 1988: 160-161). Still, UNIP faced political dissent and one issue of discontent was alleged underrepresentation of some regions in the Central Committee (Gertzel, Baylies, & Szeftel, 1984: 101-102). Kaunda responded by reshuffling the Central Committee leadership. Although such efforts aimed to reduce imbalance, accusations of bias were common. Kaunda's early cabinets were perceived as Bemba-dominated, but later considered to favor Nyanja-speakers (Posner, 2005: 98).
In the second half of the 1980s, UNIP further restricted political liberties, for example, by outlawing strikes (LeBas, 2011: 98). After food riots in the 1980s, Bemba were at a higher risk of clampdown by the security forces, but this was mainly due to protests starting in Bemba-dominated towns, not ethnic bias (Lindemann, 2011: 1859; Scarritt, 1993: 269).
The election
The transition originated in the 1980s, when discontent with the regime grew widespread. The trade unions became the main champions of the reintroduction of multiparty politics. Faced with widespread rallies in favor of change, Kaunda agreed to introduce multiparty elections in 1990. The Movement for Multiparty Democracy (MMD) registered as a political party, led by Frederick Chiluba, a Bemba-speaking Lunda from Luapula Province, who gained a prominent role due to his position as trade union leader (Cheeseman, 2006: 306). MMD drew support from many different groups in society, including labor and business, and several top-level politicians left UNIP for MMD (Bratton, 1992: 92; Phiri, 2006: 167). The opposition was able to unite political leaders from all major ethnic groups, and the leaders refrained from appealing to ethnic and regional loyalties (Osei-Hwedi, 1998: 232).
While internal divisions existed within MMD, it remained united. One reason was that 'senior figures within the MMD had little to gain and everything to lose from splitting with the party' (Cheeseman, 2006: 306). UNIP's power had largely rested on indirect support from urban workers, which gave the unions a prominent position in Zambian politics. When the labor unions turned against UNIP in the late 1980s, individual political leaders held neither the networks nor economic power for mass mobilization (Cheeseman, 2006: 78).
Although the threat of violence was looming during the transition, the elections remained relatively peaceful. There were allegations from both parties about threats and intimidation, but only a few instances of violence occurred, such as youth groups harassing traders who did not have UNIP member cards. The regime deployed paramilitary forces to townships, but there was no real attempt to use the security forces for UNIP's purposes (LeBas, 2011: 217-218).
Much of the tension ahead of the election concerned disagreement over electoral rules and monitoring. Furthermore, the opposition accused the government of misusing state resources in the campaign (Bjornlund, Bratton & Gibson, 1992;Bratton, 1992: 90). While thousands of voters were excluded from the voters' registry, these flaws were caused by mismanagement, rather than deliberate manipulation (Rakner & Svåsand, 2005: 91).
The 1991 election resulted in a landslide victory for MMD. Chiluba gained almost 76% of the presidential vote and MMD 125 of 150 parliamentary seats (Burnell, 2001: 240). There were fears that Kaunda, who controlled the security forces, would use the army to remain in power, but he handed over power peacefully to MMD. 19

Explaining the absence of electoral violence

The political legacy from the period of single-party rule shaped political mobilization ahead of the 1991 elections and had implications for how the transition unfolded. Kaunda's deliberate policy to prevent ethnic dominance - politically, economically, and socially - was essential in countering ethnically based electoral violence at the re-introduction of multiparty politics.
First, ethnic balancing during authoritarian rule fostered a culture of cooperation within the Zambian elite, rather than rivalry based on ethno-regional differences. Not only was elite bargaining common, it was also inclusive by involving elites from a broad set of ethnic groups (Burnell, 2005;Lindemann, 2011). The legacy of elite cooperation had been particularly strong during political crises. In 1991, Chiluba and Kaunda met in a series of church-led meetings (Bartlett, 2000) and followed a longstanding tradition of elite negotiations during turbulent times. These meetings settled many contentious issues. Most importantly, the incumbent and the opposition leader agreed to concede to electoral defeat (Bjornlund, Bratton & Gibson, 1992: 413).
Second, inclusionary governance during single-party rule made interethnic relations less fragmented. This contributed to the opposition's ability to form a broad-based coalition across ethnic divides, with leaders united in their interest in removing Kaunda from power, not an interest in placing their ethnic group in power (Osei-Hwedi, 1998: 232). In fact, 20 MMD candidates were 'former or sitting UNIP MPs and 12 had been cabinet ministers or central committee members' (Baylies & Szeftel, 1992: 83). Post-independence state policies that had created well-organized and centralized labor movements composed of different ethnic identities also contributed to the broad-based cross-ethnic opposition (LeBas, 2011: 246).
Third, the interethnic relations in place lowered incentives for violent ethnic mobilization. The political rhetoric did not revolve around exclusionary ethnic language, but around widespread discontent with Kaunda and a need for change. Although the rhetoric was hardline, neither Kaunda nor MMD appealed to regional or ethnic loyalties, and both sides drew cross-ethnic support (Lindemann, 2011: 1860; Osei-Hwedie, 1998: 232). Kaunda's history of ethnic balancing, combined with his own fluid ethnic identity, meant that it was difficult for him to draw on ethnicity to consolidate electoral support. In fact, '[b]oth UNIP before the 1990s and [ . . . ] MMD [ . . . ] in the 1990s were successful in polling support in every province and from across the entire ethnic spectrum' (Burnell, 2005: 115). Thus, violent ethnic mobilization would have been a complete break with history, and it would have been difficult to win the elections with more narrow ethnic appeals. 20
20 Interview Z1 (academic, Lusaka, 19 November 2014).
Divergent outcomes of electoral violence
Why did Kenya's founding multiparty election turn violent, while Zambia's did not? Whereas the transitions in Kenya and Zambia diverged on many dimensions, Zambia also had a potential for violence. The prelude to the reintroduction of electoral politics faced several crises, and Kaunda (like Moi) warned that political liberalization would have violent consequences. To understand the different outcomes, our analysis highlights how the regime in each country used varying strategies to forge ruling coalitions during authoritarian rule. While Kenya's single-party rule contained a relatively exclusive approach, based on a narrow support-base and active suppression of those excluded from power, Zambia's single-party rule was more inclusive, based on a broader ethnic support base and with deliberate efforts to counter ethnic divisions. While Kenya was not at the extreme end of the exclusionary spectrum, it was significantly more exclusionary than Zambia. Importantly, the mode of coalition-building was also underpinned by different logics: '[w]hereas in Kenya political life has been characterized by open deployment of ethnic self-interest, in Zambia it has been more covert and intermingled with other forms of allegiance -for example, on material grounds' (Larmer, 2011: 274). Both approaches were strategies for regime survival and help to explain why electoral violence occurred in Kenya, but not in Zambia. In particular, the political legacies from authoritarian rule work through two main pathways.
First, legacies structure options for cross-ethnic coalition-building and cooperation, making electoral violence more or less likely. By emphasizing ethnicity over other political cleavages, exclusionary coalition-building engenders interethnic relations characterized by fragmentation and competition. Thus, while Zambia's opposition drew support from all ethnic groups, Kenya's opposition was fragmented and polarized along ethnic lines. Furthermore, in Zambia the legacy of cooperative interethnic elite relations reduced the perceived risks associated with the transition and enabled inter-elite bargaining that solved several contentious issues.
Second, political legacies place constraints on electoral mobilization. Specifically, more exclusionary ruling coalitions create ethno-political dynamics where narratives exploiting ethnic cleavages become a powerful mode of political mobilization for elites in the pursuit of electoral gains. LeBas (2006: 422) notes how 'political incentives must align with existing identities, rather than encourage the formation of identities that transcend them'. Polarization and group boundaries are upheld and reproduced by political elites and create situations where space for bargaining shrinks (McAdam, Tarrow & Tilly, 2001). In Kenya, the electoral rhetoric played on historical injustices and ethnic divisions, and the violence served both to solidify the incumbent's support base and to punish opposition voters. In Zambia, the use of an ethnically hostile rhetoric would have signalled a break from the rhetoric used during authoritarian rule.
A key mechanism for explaining electoral violence, thus, is the propensity of exclusionary regimes to cultivate perceptions of politics as a zero-sum game. When such perceptions outlast the authoritarian era and transfer into the electoral arena, an elevated sense of threat fuels electoral violence.
Developments prior to single-party rule clearly influenced politics in Kenya and Zambia. Without victimization of one ethnic group in Zambia's independence struggle, the policy of ethnic balancing was buttressed, whereas the Kikuyu victimization during Kenya's liberation laid the foundation for exclusive ethnic politics. Relatedly, appeals to land grievances rooted in Kenya's colonial and post-colonial policies, less prominent in Zambia, played a significant role in the 1992 violent elections and have contributed to a territorialization of ethnicity (Jenkins, 2012; Klaus, 2020). Such historical developments explain why the Rift Valley was the epicenter of the violence (Boone, 2011; Kahl, 1998).
Conclusion
In this article, we show how political legacies from distinct authoritarian regimes contribute to explaining violent versus peaceful multiparty elections in multi-ethnic states. We argue that policies enacted during authoritarian rule to uphold the regime's powerbase shape interethnic relations at the elite and communal levels, and their interaction. Different ethno-political dynamics following from more exclusionary versus inclusionary approaches influence the mobilization strategies leaders adopt when competitive, multiparty elections are (re)introduced. Our conclusions endorse research underscoring the significance of historical grievances in violent electoral mobilization (e.g. Klaus, 2020;Klaus & Mitchell, 2015), and complement studies that emphasize the importance of perceptions of politics as zero-sum (e.g. Fjelde & Höglund, 2016), leaders' threat perceptions (e.g. Hafner-Burton, Hyde & Jablonski, 2014), and political elites' instrumental use of ethnic cleavages (e.g. Berenschot, 2020;Wilkinson, 2004). While these studies focus on immediate conditions surrounding the elections, we highlight how historical legacies shape political mobilization in the era of multiparty politics by creating different ethno-political incentive structures during the transition.
For future research, key topics are the mechanisms explaining self-reinforcing tendencies of conflict, and how previous experiences with violence shape future electoral politics. In this regard, it is pertinent to analyze how legacies from founding elections influence subsequent elections. It is noteworthy that electoral violence has been a pervasive feature in most Kenyan elections after 1992, while Zambian elections were largely free from violence after 1991 and until 2015. Zambian politics has become more volatile, with instances of electoral violence in 2016 (Goldring & Wahman, 2016). Yet, violence remains at a low level with few fatalities.
This analysis is rooted in the historical and contemporary realities of sub-Saharan Africa, marked by the colonial and post-colonial experience, and characterized by a heterogeneous ethnic landscape, pervasive patronage politics, and weak formal institutions. The findings carry particular relevance for countries in the region. However, in other parts of the world electoral regimes face comparable challenges due to high social fragmentation and past authoritarian rule. Thus, the political dynamics created by prior governing strategies are also likely to be of consequence for electoral violence in unconsolidated regimes in the Middle East and Asia.
Replication data
The Online appendix, which includes additional information regarding the case selection and interviews, is available at http://www.prio.org/jpr/datasets.
"year": 2020,
"sha1": "0863e5b0b32e9cbf696615e10d7096d761450a2c",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0022343319884983",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "30f7bae8baa4605c41db48e79301859732770ef0",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
Tendencies in issuing decisions on the need for individual teaching in the Malopolska voivodship
Abstract The article is devoted to individual teaching as a form of education provided to pupils whose health condition makes it impossible or very difficult to attend school. The decision is issued by public psychological and pedagogical counselling centres at every stage of education, from pre-school to upper secondary school. Owing to the period over which the research was conducted and the data were collected, the theoretical part refers not only to the acts and regulations of 2017 but also to regulations no longer in force. The purpose of the article is to answer the following questions: What is individual teaching? On what basis can this form of education be obtained? What are the possible consequences of individual teaching? In the research part, the author analyses data obtained from the resources of selected psychological and pedagogical counselling centres in the Małopolskie Voivodship. The analysis is combined with data from the Statistical Office in Krakow concerning the school year 2014/2015 and then discussed.
The basic determinant of education, as stressed by J. Bałachowicz 4 , is the realization of teaching focused on the child, leading to the maximization of his or her personality potential and development of his or her subjectivity. Such optimisation of development is possible only if appropriate educational conditions, based on the principle of individualisation of education, are ensured. This individualisation manifests itself, first of all, in adjusting the teaching and educational strategies used in everyday work to the needs and possibilities of the student. Individual teaching offers the possibility of selecting methods of achieving goals, adjusting the pace of classes, methods and forms of acquiring knowledge and skills to the varied possibilities and preferences of the student.
Individual teaching is a form of special education that requires a specific organisation of learning and working methods. Its aim is to provide children with developmental disorders with the opportunity to pursue compulsory schooling. M. Pilch 5 points out that a student pursuing compulsory schooling cannot be required to make the effort of attending school if his or her disability or illness constitutes a real obstacle to this end, while overcoming this obstacle would impose a significant burden on the student. According to the Act on the Education System of 7 September 1991 6, "individual compulsory one-year pre-school preparation or individual teaching shall cover children and young people whose state of health prevents or significantly impedes them from attending kindergarten or school". In the new Act of 14 December 2016 (Education Law 7), with effect from 1 September 2017, the types of establishments in which children and young people may be subject to individual teaching have been extended to include other forms of pre-school education, as well as pre-school establishments in primary schools.
In order for a child to be covered by individual teaching, it is necessary to obtain: a decision on the need for individual one-year pre-school preparation, for children attending kindergarten or other forms of pre-school education, or, in the case of pupils, a decision on the need for individual teaching of children and youth whose health condition makes school attendance impossible or significantly more difficult. The decision is issued by a public psychological and pedagogical counselling centre or a public specialist counselling centre. In accordance with the Regulation of the Minister of National Education of 18 September 2008 on decisions and opinions issued by adjudicating panels operating in public psychological and pedagogical counselling centres 8, the decision may be issued only at the request of the parent or legal guardian of the child. The application should be accompanied by a certificate on the child's state of health, in which the doctor specifies:
• the period, not shorter than 30 days, during which the child's state of health makes attending kindergarten or school impossible or significantly impedes attendance;
• the diagnosis of the disease or other reason why the child's state of health makes attending kindergarten or school impossible or significantly hinders it;
• the extent to which a child whose state of health significantly hinders attending a kindergarten may participate in classes in which the core curriculum of pre-school education is implemented, organised with a group or individually in a separate room in the kindergarten;
• the extent to which a pupil whose state of health makes it significantly more difficult to attend school can participate in compulsory educational activities organised with a class at school or individually in a separate room at school 9.
Currently, the new Regulation of the Minister of National Education of 7 September 2017 on decisions and opinions issued by adjudicating panels operating in public psychological and pedagogical counselling centres 10 has changed the terminology: the decision on the need for individual compulsory one-year pre-school preparation and the decision on the need for individual teaching. A certificate on the child's or pupil's state of health attached by the applicant may be issued by a specialist doctor or a general practitioner on the basis of medical documentation of specialist treatment. In the certificate, the doctor shall specify the following:
• the anticipated period, not shorter than 30 days, during which the state of health of the child or pupil prevents or significantly impedes attending kindergarten or school;
• the diagnosis of the disease or other health problem with an alphanumeric indication in accordance with the current International Statistical Classification of Diseases and Health Problems (ICD);
• restrictions on the child's or pupil's functioning resulting from the disease or other health problem, which prevent or significantly impede attending kindergarten or school 11.
The decision on the need for individual compulsory one-year pre-school preparation and individual teaching shall be issued for a period not longer than one school year. In such decisions, the panel shall specify the following:
• limitations in the functioning of the child or pupil resulting from the course of the disease or the therapeutic process;
• the period during which there is a need for individual compulsory one-year pre-school preparation and individual teaching;
• the recommended conditions and forms of support to meet the individual developmental and educational needs and the psychophysical abilities of the child or pupil, including conditions for the development of his or her potential and strengths;
• actions recommended to promote the child's integration in the pre-school and school environment and to facilitate the child's return to kindergarten and the pupil's return to school;
• the recommended developmental and therapeutic goals, depending on the needs, to be implemented during the individual compulsory one-year pre-school preparation and individual teaching within the framework of psychological and pedagogical assistance provided to the child or pupil and, depending on the needs, to his or her parents by the kindergarten, school and counselling centre, together with an indication of recommended forms of psychological and pedagogical assistance;
• in the case of a student of a school providing vocational education, also the possibility of further education in the profession, including the conditions for practical vocational training 12.
Until September 2017 the manner and mode of organizing individual teaching of children and youth was specified in the Regulation of the Minister of National Education of 28 August 2014 on individual compulsory one-year pre-school preparation of children and individual teaching of children and youth 13. According to the cited document, teaching was organized for a definite period of time, in a way ensuring the implementation of the recommendations specified in the decision. Classes were conducted by one or several teachers who had individual and direct contact with the student. They took place in the child's place of residence, usually in the family home. They could also be organised in a kindergarten, in another form of pre-school education or in a school, if the decision indicated such a possibility, as well as in an institution, if it had a separate room in which the classes could be conducted.
Children and pupils who had been granted the decision on individual compulsory one-year pre-school preparation or individual teaching before 7 September 2017, when the Regulation on decisions and opinions issued by adjudicating panels operating at public psychological and pedagogical counselling centres entered into force 14, may continue to benefit from this form of schooling. Following this date, decisions are issued on the basis of the new Regulation of the Minister of National Education of 9 August 2017 on individual compulsory one-year pre-school preparation of children and individual teaching of children and youth 15. Pursuant to this act, individual pre-school preparation and individual teaching classes are conducted in the place of residence of the child or pupil, in particular in the family home, and in special institutions: youth education centres, youth sociotherapy centres, special education and training centres, special education centres for children and young people requiring a special organisation of schooling, methods of work and education, as well as remedial centres enabling children and young people with profound intellectual disabilities, and children and young people with multiple disabilities one of which is an intellectual disability, to fulfil compulsory schooling and compulsory education, respectively 16. Classes may also take place at a foster family, in a family orphanage, in a care and educational institution or in a regional care and therapy centre, as referred to in the Act of 9 June 2011 on Family Support and the System of Alternate Care 17.
Current legislation does not allow for individual pre-school preparation or individual teaching on the premises of a kindergarten or school. Compulsory educational classes conducted as part of individual teaching follow the framework curriculum for a given type of school and are adapted to the developmental, educational and psychophysical needs of the student, as specified in the decision on the need for individual compulsory one-year pre-school preparation or teaching 18.
Controversies around individual teaching
The school should not fulfil the objectives of an anonymous society, but should meet the aims, desires and aspirations of specific participants of the educational process. It should be an institution that creates opportunities for self-fulfilment and unrestricted development of personality, as well as enabling the achievement of individual life goals. In a school defined in this way, as emphasized by T. Lewowicki 19, a uniform definition of objectives and tasks ceases to be in force, and the scope of freedom in creating the educational model and the individual's own participation in education are determined by general social norms and the principle: do not act to the detriment of others.
Individual teaching conducted at home or at school allows the ways of learning and activating the student to be adapted to his or her abilities and predispositions. A student covered by this type of education, as emphasised by K. Rzedzicka 21, is in a seemingly comfortable position, because he or she usually learns in the home environment and has his or her own teacher, whose attention and time are focused entirely on one pupil, unlike in a typical school classroom. However, the biggest disadvantage of this type of schooling is that it contributes to the social isolation of the pupils. It also deprives them of the possibility of establishing social relations typical of school age with people from outside the family, especially with their peers.
[Diagram 1 lists the negative consequences of individual teaching: loneliness; behavioural disorders (lack of cooperation, exploitation of third parties for one's own benefit); dependence on the environment (lack of self-confidence, lack of independence, lower level of independent functioning); and worse education (reduced activity, lack of self-fulfilment).]
B. Jachimczak 22 listed the negative consequences for children in individual teaching, taking into account the child's functioning in the closest environment: family, school and out-of-school (Diagram 1).
The consequences of individual teaching, as indicated in Diagram 1, affect the child not only in terms of functioning at school. Long-lasting, prolonged individual teaching can have a negative impact on the child's future life, both personal and professional. Limitation of social contacts, as well as difficulties in establishing them, may result in incorrect social development and a lack of proper interpersonal relations. Furthermore, the reduction of the educational requirements of individual teaching leads to a lower level of education and acquired skills. Methodologically limited classes reduce the possibility of comprehensive psychomotor development. J. Wyczesany 23 stresses that individual teaching is one of the most difficult forms of education for both children and young people. It lacks many valuable features of teaching and upbringing present in school institutions, whether generally accessible, inclusive or special. Contact with peers and other teachers, which undoubtedly enhances students' knowledge and experience, should be particularly stressed.
Referring individual teaching to children with various disabilities, J. Wyczesany 24 distinguished three models of the organisation of individual education, as well as the resulting heterogeneous consequences for the child's development (Diagram 2). Under the current Regulation on individual compulsory one-year pre-school preparation of children and individual teaching of children and young people 29, children and pupils do not have the opportunity of individual teaching at school. Therefore, only two of the models mentioned above can be implemented: the separative and the mixed model, with teaching organised at home. The legislator provides for other solutions for the integrative model 30, which, however, do not apply to a child with a decision on individual compulsory one-year pre-school preparation or a pupil with individual teaching, and as such are not the subject of this article. The integrative model, in which the child can be with a school group and attend individual classes, is the most beneficial for the child and has the fewest negative consequences. On the other hand, the least beneficial model for the child's emotional, social and cognitive development is the separative model, in which the student, while staying at home, is deprived of contact with a peer group and loses the opportunity to broaden his or her knowledge and interests. The mixed model works well if the student is not able to participate in classroom activities due to his or her health condition.
Assumptions of own research
The main aim of the research was to determine the trends concerning decisions on individual teaching, taking into account: the number of decisions issued for all students from a given district, the number of children covered by individual teaching at particular stages of education, and the most frequent reasons for issuing such decisions. The data was collected by means of a diagnostic survey, using a document analysis technique, and supplemented by interviews with the directors of the institutions.
The research was conducted in five selected public psychological and pedagogical counselling centres, in three districts of the Małopolskie Voivodship:
• wadowicki: the Psychological and Pedagogical Counselling Centres in Wadowice and Andrychów,
• myślenicki: the Psychological and Pedagogical Counselling Centres in Myślenice and Dobczyce,
• krakowski (the city of Krakow): Psychological and Pedagogical Counselling Centre No 2 in Krakow.
The sampling was purposeful in view of the number of pupils covered by the centres in a given region. In the districts of Wadowice and Myślenice, data was collected in all public counselling centres, while in the case of the city of Krakow as a district, only one of the four counselling centres authorised to issue decisions on individual teaching was selected. The data for the school year 2015/2016 made available by the counselling centres, used to illustrate the number of decisions on individual teaching issued in relation to the number of pupils at a given educational stage, was compared with the data of the Statistical Office in Krakow concerning education for the year 2014/2015 31.
31 http://krakow.stat.gov.pl/statystyczne-vademecum-samorzadowca
Tendencies in the frequency and indications of decisions on the need for individual teaching -results of own research
The first aspect of individual teaching that has been analysed is the frequency of granting the decisions on the need for such form of education against the background of the total population of students attending educational institutions in the analysed districts (Table 1).
Comparing the quantitative data obtained in the analysed psychological and pedagogical centres with the data of the Statistical Office in Krakow, it can be stated that in the districts of Wadowice (0.94%) and Myślenice (0.81%), nearly 1% of children among all pupils are covered by individual teaching. In the city of Krakow, the analysed percentage of children is 0.45%, however, it should be noted that data from only one psychological and pedagogical counselling centre was analysed from among four centres authorised to issue such decisions.
With regard to the number of decisions issued in individual centres, and when comparing them with the stage of education, it can be noted that both in the district of Wadowice (1.6%) and Myślenice (1.07%) the decision on individual teaching is most often granted to students of lower secondary schools. In the city of Krakow, on the other hand, such a decision is most often granted to students of upper secondary schools (1.18%).
The next part of the study focused on the determination of the number of decisions on the need for individual teaching, taking into account the stage of education (Table 2).
When analysing the data taking into account the stage of education, it was observed that the majority of decisions on the need for individual teaching were granted to lower secondary school students (36.1%). Taking into account the three years of teaching at this level of education, it can be assumed that on average there were about 58 students in individual teaching per year of schooling. Primary school pupils also constituted a relatively high percentage (34.3%) of pupils covered by this form of education; however, taking into account the six-year learning system, there were on average 27 such pupils per year of schooling. The number of young people covered by this type of education in upper secondary schools (25.6%) was slightly smaller compared to the education stages described earlier, but taking into account the duration of this stage of education (3-4 years), there were on average 39 students per year of schooling. The smallest group covered by individual education were children granted the decision on individual compulsory one-year pre-school preparation, who constituted 4.1%, i.e. 20 children per year. Taking into account the number of decisions issued at different stages of education, the reasons and indications according to which doctors direct children to individual teaching seem quite interesting (Table 3).
When analysing the data presented in Table 3, it can be seen that almost every second decision on the need for individual teaching regards students who have been diagnosed with various types of mental disorders by a doctor (48.7%). On the other hand, almost every fourth indication refers to a chronic disease which makes it impossible or significantly hinders attending kindergarten or school (24%). Only every tenth decision is related to surgery or an accident (9.7%), and every twentieth to cancer (4.9%) or pregnancy (3.1%).
Summary
Individual teaching is a form of special education that allows children whose health makes it impossible or very difficult to attend kindergarten or school to fulfil compulsory education. Due to its specificity, it makes it possible to fully adjust education to the psychophysical capabilities of the student and his or her preferences in acquiring knowledge and skills. Most often, individual teaching takes place in the student's home, significantly limiting his or her contact with peers and thus disrupting the individual's development. In justified cases it may be a positive measure: it allows for a flexible adaptation of the individual teaching model to a given student and his or her current state of health and mental condition at a specific moment of life, and it enables better functioning. Unfortunately, it often becomes a "convenient" solution for schools struggling with "difficult students". Such social isolation has a negative impact on the individual and social development of young people, the basic condition of which is, as emphasized by W. Dykcik 32, enabling participation in social life in different scopes, situations and contexts.
Analysing the collected data from selected psychological and pedagogical counselling centres in the Małopolskie Voivodship, it can be stated that the decision on the need for individual teaching is granted to nearly 1% of all students, most often at the lower secondary school level. The most common reason and indication for granting such a decision are mental disorders and chronic diseases. It may be concluded that the decisions, granted in accordance with the current regulation, will be issued for the longest possible period, i.e. for one school year.
When analysing the relation between the specific educational stage and the reason for granting the decision, it may be assumed that the large number of decisions granted to lower secondary school students may be caused by late diagnosis, symptoms appearing only in the period of adolescence, or the type of deficits which make learning difficult only at a later stage of school education.
With regard to the empirical data collected, it is worth asking about the legitimacy of issuing such a large number of decisions on individual teaching and about the quality of this form of education. The new regulations and the proposed solutions are intended to limit this form of education to a minimum. Will this succeed? It must be stressed that individual teaching should be one of the last-resort means of fulfilling compulsory schooling. Overuse of this form of education is highly unfavourable from the point of view of the development of young people. It is worth considering why so many children use this form of education. Decisions on the need for individual teaching should be verified so that they do not become a way of eliminating the problems and difficulties that arise from working with students with special educational needs.
How to Sample From The Limiting Distribution of a Continuous-Time Quantum Walk
We introduce $\varepsilon$-projectors, using which we can sample from limiting distributions of continuous-time quantum walks. The standard algorithm for sampling from a distribution that is close to the limiting distribution of a given quantum walk is to run the quantum walk for a time chosen uniformly at random from a large interval, and measure the resulting quantum state. This approach usually results in an exponential running time. We show that, using $\varepsilon$-projectors, we can sample exactly from the limiting distribution. In the black-box setting, where we only have query access to the adjacency matrix of the graph, our sampling algorithm runs in time proportional to $\Delta^{-1}$, where $\Delta$ is the minimum spacing between the distinct eigenvalues of the graph. In the non-black-box setting, we give examples of graphs for which our algorithm runs exponentially faster than the standard sampling algorithm.
Introduction
Continuous-time quantum walks, first considered by Farhi and Gutmann [21], are quantum analogues of continuous-time classical random walks. The dynamics of a continuous-time classical walk on an undirected graph Γ is described by the differential equation
$$\frac{d}{dt}\,p(t) = -L\,p(t), \qquad (1)$$
where p(t) is the state of the walk at time t and L is the Laplacian of Γ. The entries of p(t) are indexed by the set of vertices of Γ. In the quantum walk on Γ, equation (1) is replaced by the Schrödinger equation
$$i\,\frac{d}{dt}\,|\psi_t\rangle = H\,|\psi_t\rangle, \qquad (2)$$
where the Hamiltonian H = L, and |ψ_t⟩ is a quantum state whose amplitudes encode a probability distribution. Quantum walks have found many applications in quantum computing and quantum information. It was shown by Childs [10] that universal quantum computation can be implemented using quantum walks on low degree graphs. There are many algorithms based on quantum walks that achieve a polynomial speedup over classical algorithms, e.g., [13,20,3,4]. There are also black-box problems for which quantum walks achieve an exponential speedup over classical algorithms [12,14].
A classical continuous-time random walk has a unique stationary distribution, assuming its underlying Markov chain is irreducible [34]. Regardless of the initial state, the walk converges to this stationary distribution as t → ∞, and therefore the stationary distribution is the same as the limiting distribution. This, however, does not hold for quantum walks, since quantum evolutions are unitary and preserve distance. Nevertheless, one can define a time-averaged probability distribution of a quantum walk by choosing a time t ∈ [0, T] uniformly at random, running the walk for a total time t and measuring the resulting state. When T → ∞, this distribution converges to a limiting distribution, which is what we consider in this paper. This limiting distribution generally depends on the initial state of the walk.
If the graph Γ is connected and simple, i.e., has no self loops and multiple edges, then the limiting distribution of the classical walk on Γ is always the uniform distribution. In contrast, the limiting distribution of the quantum walk on Γ depends on the initial state and is often not uniform. For example, the quantum walk on the hypercube [40] or the Symmetric group [24], or more generally G-circulant graphs [1] does not converge to the uniform distribution.
The mixing time M_δ of a quantum (or classical) walk is the minimum time after which the distribution of the walk is within distance δ of the limiting distribution. The mixing time of a classical walk depends inversely on the spectral gap of the transition matrix of the walk, while for a quantum walk, the mixing time depends inversely on the minimum gap between all pairs of distinct eigenvalues of the Hamiltonian H. The mixing time of quantum walks has been studied for specific graphs such as hypercubes, cycles and lattices [40,22,41], and for Erdős-Rényi random graphs [9,8]. The bounds on the quantum mixing time for some graphs imply a quadratic speedup over classical walks, while for some other graphs these bounds are larger than their classical counterparts.
The problem of sampling from the limiting distribution of a classical walk is an important problem and has been the focus of much research. The underlying randomness strategy in many algorithms of practical interest reduces to sampling from a limiting distribution. Much like in the classical case, sampling from the limiting distribution of a quantum walk is of both practical and theoretical interest. For example, Richter [41] proposed a "double-loop" quantum walk algorithm for sampling from a distribution that is close to the uniform distribution over a given graph Γ. The inner loop in Richter's algorithm samples from a distribution that is close to the limiting distribution Π of the quantum walk. Therefore, an efficient algorithm for sampling from Π results in an efficient algorithm for sampling uniformly from the vertex set of Γ. The black-box graph problem proposed by Childs et al. [12], which is interesting from a theoretical perspective, is based on sampling from the limiting distribution of a quantum walk. They prove that no classical algorithm can efficiently solve the proposed problem.
Given the importance of sampling from the limiting distribution of a quantum walk, it is natural to ask whether there are efficient algorithms for sampling from such distributions, at least for specific graphs. There has not been much research explicitly addressing this question. The standard way of sampling from the limiting distribution of a quantum walk is by mixing: set a large value for T , run the walk for a uniformly random time t ∈ [0, T ], and measure. Chakraborty et al. [8] considered sampling over the Erdős-Rényi graphs by mixing. They obtained an upper bound on the mixing time of these graphs through analyzing their spectrum. Their bound implies an exponential time sampling algorithm.
Sampling without mixing
In this work, we propose an algorithm for sampling from the limiting distribution of a given continuous-time quantum walk that is not based on mixing. The idea behind our algorithm is to uniquely "tag" the eigenspaces of the adjacency matrix using polynomially long binary strings. More precisely, given a graph Γ with N vertices and adjacency matrix A, let |φ_j⟩ be an eigenstate of A that belongs to an eigenspace X_j of A. Then, we will see that sampling from the limiting distribution Π on Γ reduces to performing the transform
$$|\phi_j\rangle \longmapsto |t_j\rangle\,|\phi_j\rangle, \qquad (3)$$
where t_j is a string of length poly(log N) that uniquely identifies X_j. To perform the transform (3), we introduce the general idea of ε-projectors. Informally, an ε-projector for the adjacency matrix A is a set of hermitian matrices that have the same eigenspaces as A and can be efficiently simulated as Hamiltonians. Moreover, the matrices in an ε-projector have to satisfy a separation condition with respect to their eigenvalues. A set of matrices satisfying such a separation condition is called an ε-separated set. A specific ε-projector was first implicitly used by Kane, Sharif and Silverberg [29] for constructing a quantum money scheme based on quaternion algebras. Their ε-projector is a set of sparse Brandt matrices, which they use to verify an alleged bill.
Technique. Given an ε-projector A for A, we use phase estimation to store, in a separate register, an estimate of the eigenvalues of each operator in A. More precisely, we perform the transform
$$|0\rangle|\phi_j\rangle \longmapsto |\tilde\lambda_{1,j}\rangle|\tilde\lambda_{2,j}\rangle\cdots|\tilde\lambda_{r,j}\rangle|\phi_j\rangle,$$
where the λ̃_{k,j} are approximate eigenvalues corresponding to the eigenstate |φ_j⟩. It follows from the ε-separatedness of A that the vectors λ̃_j = (λ̃_{1,j}, ..., λ̃_{r,j}) uniquely identify the eigenspaces of A. This means the binary representations of the λ̃_j can be used as the binary strings t_j in (3). Therefore, efficient sampling from the limiting distribution Π reduces to finding a good ε-projector for A. In general, a good ε-projector for A is one for which ε^{-1} and r are both at most poly(log N). In specific cases where the operators in the ε-projector can be simulated efficiently even for large powers, it is not necessary for ε^{-1} to be bounded by poly(log N).
Black-box vs non-black-box. When the graph Γ is given as a black box, we can only see the structure of Γ locally. In other words, we can only access the nonzero entries of each row of the adjacency matrix A of Γ through queries to an oracle. When we are restricted to query access to A, the global structure of Γ is not known, and we do not have any other information on A as an operator. In this case, we are essentially left with one choice of ε-projector for A: the singleton {A} itself. Consequently, we always have ε ≤ ∆, where ∆ is the minimum distance between any two distinct eigenvalues of A. The complexity of our sampling algorithm is then bounded below by a multiple of ∆^{-1}.
In the non-black-box setting, we often have some knowledge of the global structure of Γ that enables us to find nontrivial ε-projectors for A. In this paper, we give two examples of graphs for which we can find good ε-projectors, Winnie Li graphs and Supersingular Isogeny graphs.
Winnie Li graphs [35] are special cases of quasi-abelian graphs which are a subclass of Cayley graphs. We give a general strategy for finding ε-projectors for quasi-abelian graphs, and apply it to Winnie Li graphs. Without the use of an ε-projector, one could sample from the limiting distribution of quantum walks on Winnie Li graphs using two different methods. The first method is by mixing, which takes exponential time because the eigenvalues of the (normalized) adjacency matrix are very close. The second method is to use the quantum Fourier transform. For that, we need to be able to approximate the eigenvalues of A efficiently. When the dimension of the underlying space is odd, these eigenvalues are multiples of some exponential sums called Kloosterman sums. There is no known efficient classical or quantum algorithm for approximating these sums. We will see that using a specific ε-projector, we can efficiently sample from the limiting distributions on these graphs.
A supersingular isogeny graph is a regular graph in which the set of vertices is the set of supersingular elliptic curves and the edges are isogenies between these curves. Isogeny graphs have found many applications in cryptography [36,37]. The adjacency matrices of these graphs are called Hecke operators. The minimum distance between the eigenvalues of a Hecke operator is exponentially small, so the quantum mixing time for these graphs is exponentially large. It is known that the set of Hecke operators forms a commutative algebra over C. Using this fact, and assuming some standard heuristics, we will see that a small set of these operators forms an ε-projector with high probability. Using this ε-projector, we can efficiently sample from the limiting distribution on these graphs. As an application, the sampling algorithm can be used to generate honest hard curves. There is no known classical algorithm for efficiently generating such curves.
Continuous-time quantum walk
Let Γ = (V, E) be an undirected graph with N = |V| vertices, and let X = C^V be the complex Euclidean space with basis V. We will refer to this basis as the vertex basis and denote its elements by |v⟩, v ∈ V. Let A be the adjacency matrix of Γ. The continuous-time quantum walk on Γ is described by the differential equation (2), where the Hamiltonian is the Laplacian of Γ. Another common choice (which we also use in this paper) for the Hamiltonian of the walk is the adjacency matrix A. Then, the continuous-time quantum walk on Γ at time t is defined by the operator W(t) = e^{iAt} on X. For an initial quantum state |ψ_0⟩ and a real number T > 0, define the following probability distribution on V: choose t ∈ [0, T] uniformly at random, evolve the state |ψ_0⟩ under W(t), i.e., compute W(t)|ψ_0⟩, and measure the resulting state in the vertex basis. The probability of measuring a vertex v ∈ V is
$$P_T(v \mid \psi_0) = \frac{1}{T}\int_0^T \big|\langle v|W(t)|\psi_0\rangle\big|^2\,dt. \qquad (4)$$
Let {|φ_j⟩}_{1≤j≤N} be a set of eigenstates of A that form an orthonormal basis for X, and let {λ_j}_{1≤j≤N} be the set of corresponding eigenvalues. Let {X_j}_{1≤j≤M}, where M ≤ N, be the set of eigenspaces of A, and define I_j = {k : |φ_k⟩ ∈ X_j}. Therefore, I_j is the set of indices k for which the eigenstates |φ_k⟩ correspond to the same eigenvalue. A straightforward calculation shows that
$$P_T(v \mid \psi_0) = \sum_{j=1}^{M}\Big|\sum_{k\in I_j}\langle v|\phi_k\rangle\langle\phi_k|\psi_0\rangle\Big|^2 + \sum_{\lambda_j\ne\lambda_k}\langle v|\phi_j\rangle\langle\phi_j|\psi_0\rangle\overline{\langle v|\phi_k\rangle\langle\phi_k|\psi_0\rangle}\;\frac{e^{i(\lambda_j-\lambda_k)T}-1}{i(\lambda_j-\lambda_k)T}. \qquad (5)$$
Letting T → ∞, the second term in the above expansion vanishes, and we get the distribution
$$P_\infty(v \mid \psi_0) = \sum_{j=1}^{M}\Big|\sum_{k\in I_j}\langle v|\phi_k\rangle\langle\phi_k|\psi_0\rangle\Big|^2. \qquad (6)$$
This is called the limiting distribution of the quantum walk W(t). Given a real number δ ≥ 0, the mixing time M_δ of the walk W(t), with respect to the initial state |ψ_0⟩, is defined as
$$M_\delta = \min\{T : \|P_{T'}(\cdot \mid \psi_0) - P_\infty(\cdot \mid \psi_0)\|_1 \le \delta \text{ for all } T' \ge T\}, \qquad (7)$$
where P_T(·|ψ_0) and P_∞(·|ψ_0) are the probability vectors defined by (4) and (6), respectively. Denote by ∆ the minimum distance between all pairs of distinct eigenvalues of A, i.e.,
$$\Delta = \min_{\lambda_j \ne \lambda_k}\,|\lambda_j - \lambda_k|. \qquad (8)$$
Using the same analysis as in [2], it can be shown that
$$\|P_T(\cdot \mid \psi_0) - P_\infty(\cdot \mid \psi_0)\|_1 \le \frac{2\ln M + 2}{T\Delta}. \qquad (9)$$
A proof of this bound is given in Appendix A for completeness. From the definition of M_δ and the bound (9), we see that
$$M_\delta \le \frac{2\ln M + 2}{\delta\Delta}. \qquad (10)$$
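The distributions (4) and (6) can be checked against each other numerically on a toy instance. The following sketch (a classical simulation written for illustration, not part of the algorithm of this paper) computes the limiting distribution of the walk on a 4-vertex path graph from the spectrum, and compares it with the time-averaged distribution estimated by sampling t uniformly from [0, T]:

```python
import numpy as np

# Path graph on 4 vertices; initial state concentrated on vertex 0.
A = np.array([[0,1,0,0],
              [1,0,1,0],
              [0,1,0,1],
              [0,0,1,0]], float)
N = A.shape[0]
psi0 = np.zeros(N); psi0[0] = 1.0

evals, V = np.linalg.eigh(A)                 # columns of V are the eigenstates |phi_j>

# Limiting distribution (6): group eigenstates by (numerically) equal eigenvalues.
P_inf = np.zeros(N)
for lam in np.unique(np.round(evals, 9)):
    idx = np.abs(evals - lam) < 1e-8
    amp = V[:, idx] @ (V[:, idx].T @ psi0)   # sum_{k in I_j} <v|phi_k><phi_k|psi_0>
    P_inf += np.abs(amp)**2

# Time-averaged distribution (4), estimated by sampling t uniformly from [0, T].
rng, T, samples = np.random.default_rng(0), 500.0, 4000
P_T = np.zeros(N)
for t in rng.uniform(0, T, samples):
    psi_t = V @ (np.exp(1j*evals*t)*(V.T @ psi0))   # W(t)|psi_0> via the spectrum
    P_T += np.abs(psi_t)**2
P_T /= samples

print(np.round(P_inf, 3))   # limiting distribution
print(np.round(P_T, 3))     # agrees up to O(1/(T*Delta)) error plus sampling noise
```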
Representation theory
For an introduction to representation theory see [15,45]. Let V be a C-vector space of finite dimension, and let GL(V) be the group of automorphisms of V. Let G be a finite group. A linear representation of G in V is a homomorphism of groups ρ : G → GL(V). The degree of ρ, denoted by d_ρ, is the dimension of V as a C-vector space. The character of ρ is the function χ_ρ : G → C defined by χ_ρ(a) = Tr ρ(a). A morphism of representations ρ_1 : G → GL(V_1) and ρ_2 : G → GL(V_2) is a linear map φ : V_1 → V_2 such that φ ∘ ρ_1(a) = ρ_2(a) ∘ φ for all a ∈ G, i.e., such that the obvious diagram commutes. The representations ρ_1 and ρ_2 are said to be isomorphic if φ is an isomorphism. A subrepresentation of ρ is a representation ρ_W : G → GL(W), where W ⊆ V is a subspace invariant under ρ. A representation that has no subrepresentations except for W = 0, V is called an irreducible representation. We denote by Ĝ the set of isomorphism classes of irreducible representations of G. Any representation ρ of G can be decomposed as a direct sum of irreducible representations: if ̺_1, ..., ̺_k is a complete set of irreducible representations of G, then ρ = n_1̺_1 ⊕ ··· ⊕ n_k̺_k for some integers n_j ≥ 0. Here, n_j̺_j means a direct sum of n_j copies of ̺_j.
Given any representation ρ of G, there always exists an inner product on V with respect to which ρ is unitary, i.e., ρ(a) is a unitary matrix for all a ∈ G. Therefore, in this paper, we assume that all representations are unitary. In particular, any unitary representation can be decomposed as a sum of unitary irreducible representations.
The Fourier transform of a function f : G → C at a representation ̺ ∈ Ĝ is defined by
$$\hat f(\varrho) = \sqrt{\frac{d_\varrho}{|G|}}\sum_{x\in G} f(x)\,\varrho(x). \qquad (11)$$
The Fourier transform of f is given by ⊕_̺ f̂(̺). The quantum Fourier transform of a state |ψ⟩ = Σ_{x∈G} α_x|x⟩ is given by
$$F_G|\psi\rangle = \sum_{\varrho\in\widehat G}\ \sum_{1\le j,k\le d_\varrho}\hat\alpha(\varrho)_{j,k}\,|\varrho, j, k\rangle, \qquad (12)$$
where α : G → C is defined by α(x) = α_x, and α̂(̺)_{j,k} is the (j, k) entry of the matrix α̂(̺).
Sampling Using ε-Projectors
Let Γ = (V, E) be a graph with N vertices and let A be the adjacency matrix of Γ. Assume the same notation as in Section 2.1. A closer look at the sum in (6) suggests the following simple approach to sampling from the limiting distribution P_∞. Since {|φ_j⟩} is an orthonormal basis for X, given any initial state |ψ_0⟩, we can always write
$$|\psi_0\rangle = \sum_{j=1}^{N}\langle\phi_j|\psi_0\rangle\,|\phi_j\rangle.$$
Suppose we have a quantum algorithm Q that can uniquely "tag" the eigenspaces of A in the above superposition, using an extra register. More precisely, Q performs the following operation
$$\sum_{j=1}^{N}\langle\phi_j|\psi_0\rangle\,|0\rangle|\phi_j\rangle \longmapsto \sum_{j=1}^{N}\langle\phi_j|\psi_0\rangle\,|t_j\rangle|\phi_j\rangle, \qquad (13)$$
where the strings t_j are unique with respect to the eigenspaces of A. If we measure the second register in the vertex basis, we obtain a vertex v ∈ V with the probability given by (6). A naive choice for the tags t_j is the eigenvalues of A. These eigenvalues can be approximated using phase estimation on the walk operator W(t). However, to be able to uniquely identify the eigenspaces of A, one might need to compute the eigenvalues with exponential accuracy. Any such computation generally takes exponential time unless W(t) can be applied efficiently for exponentially large t. In particular, if we treat A as a black box, the complexity of performing (13) is exponential in t. Therefore, any successful attempt at efficiently performing (13) will require some extra information or assumptions on A.
In the following we present the main idea of the paper, an algorithm for performing (13) that uses a specific set of operators that commute with A. We call such a set of operators an ε-projector. For many classes of graphs, we can find ε-projectors that enable us to efficiently perform (13). We need to adapt the definition of an ε-separated set from [29] to a set of operators.
Definition 3.1. For an integer r > 0, let A = {A_j}_{1≤j≤r} be a set of hermitian operators, acting on X, that have the same eigenspaces. For an eigenstate |φ_j⟩, let λ_{1,j}, λ_{2,j}, ..., λ_{r,j} be the eigenvalues of the operators A_1, A_2, ..., A_r associated with |φ_j⟩, respectively. Define the vector λ_j = (λ_{1,j}, λ_{2,j}, ..., λ_{r,j}) for each j = 1, ..., N. For a real number ε > 0, the set of operators A is said to be ε-separated if ‖λ_j − λ_k‖ ≥ ε whenever λ_j ≠ λ_k.
Definition 3.2. A set of hermitian operators A = {A_j}_{1≤j≤r} acting on X is called an ε-projector for A if, for each j,
• A is ε-separated,
• the walk e^{iA_j t} can be performed in O(F(t) poly(log N)) operations for some function F(t) ∈ O(t), and
• A_j has the same eigenspaces as A.
The function F (t) in Definition 3.2 determines how efficient the walks e iA j t can be performed for different values of t. For an ε-projector we require that F (t) be bounded above by a linear function in t. Also, here operations refer to elementary quantum gate operations. We now give an algorithm for sampling from the limiting distribution of the quantum walk on Γ. The algorithm takes as input an ε-projector for the adjacency matrix A.
Algorithm 1 (Sampling).
Input: The adjacency matrix A of a graph Γ = (V, E), an ε-projector A = {A_j}_{1≤j≤r} for A, and an initial state |ψ_0⟩ ∈ X.
Output: A sample from the limiting distribution of the walk W(t) = e^{iAt} on Γ.
1. Perform phase estimation on the unitaries e^{iA_1}, ..., e^{iA_r} and the input state |ψ_0⟩ with accuracy ε/2√r, and store the approximate phases in extra registers. Denote by λ̃_{k,j} the approximation of the eigenvalue λ_{k,j} of A_k corresponding to the eigenstate |φ_j⟩. The resulting state of this step is
$$\sum_{j=1}^{N}\langle\phi_j|\psi_0\rangle\,|\tilde\lambda_{1,j}\rangle|\tilde\lambda_{2,j}\rangle\cdots|\tilde\lambda_{r,j}\rangle|\phi_j\rangle, \qquad (14)$$
where |λ_{i,j} − λ̃_{i,j}| < ε/2√r for all i = 1, ..., r. If we group the content of the first r registers as a vector λ̃_j, then the state (14) can be written as
$$\sum_{j=1}^{N}\langle\phi_j|\psi_0\rangle\,|\tilde\lambda_j\rangle|\phi_j\rangle. \qquad (15)$$
2. Measure the last register in the vertex basis.
3. Return the measured vertex.
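The tagging step of Algorithm 1 can be emulated classically on a small example. In the sketch below, phase estimation is replaced by directly rounding each eigenvalue to accuracy ε/2√r; grouping eigenstates that share a rounded tag vector then reproduces the limiting distribution (6), exactly as measuring the tag registers would. The graph, the trivial ε-projector {A}, and all parameter values are illustrative choices, not prescribed by the paper:

```python
import numpy as np

A = np.array([[0,1,1,0],
              [1,0,0,1],
              [1,0,0,1],
              [0,1,1,0]], float)          # a 4-cycle; spectrum {2, 0, 0, -2}, Delta = 2
projector = [A]                           # the trivial epsilon-projector {A}
r, eps = len(projector), 2.0              # eps must satisfy eps <= Delta
N = A.shape[0]
psi0 = np.zeros(N); psi0[0] = 1.0

evals, V = np.linalg.eigh(A)              # columns of V are the eigenstates |phi_j>
acc = eps/(2*np.sqrt(r))                  # phase-estimation accuracy eps/(2*sqrt(r))

def tag(j):
    """Rounded eigenvalue vector of |phi_j> under the projector: the classical 'tag'."""
    return tuple(int(round(float(V[:, j] @ (Aj @ V[:, j]))/acc)) for Aj in projector)

# Group eigenstates by tag vector and accumulate the limiting distribution (6).
P = np.zeros(N)
for t in {tag(j) for j in range(N)}:
    amp = sum((V[:, j] @ psi0)*V[:, j] for j in range(N) if tag(j) == t)
    P += np.abs(amp)**2
print(P)                                  # [0.375 0.125 0.125 0.375]
```

Note that the two degenerate eigenstates with eigenvalue 0 receive the same tag and are therefore summed coherently, which is what distinguishes the limiting distribution from a naive per-eigenvector measurement.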
Algorithm 1 proposes to use the binary representations of the phase vectors λ̃_j as the tags t_j in (13). The correctness of the algorithm follows from the next lemma.
Lemma 3.3. The vectors λ̃_j uniquely determine the eigenspaces of A. More precisely, λ̃_j = λ̃_k if and only if j and k both belong to I_ℓ for some ℓ.
Proof. Let λ_j = (λ_{1,j}, λ_{2,j}, ..., λ_{r,j}) be the vector of the exact eigenvalues of A_1, ..., A_r corresponding to the eigenstate |φ_j⟩. Then for all j = 1, ..., N we have
$$\|\lambda_j - \tilde\lambda_j\| = \Big(\sum_{i=1}^{r}|\lambda_{i,j} - \tilde\lambda_{i,j}|^2\Big)^{1/2} < \Big(r\cdot\frac{\varepsilon^2}{4r}\Big)^{1/2} = \frac{\varepsilon}{2},$$
where the last inequality follows from the bound |λ_{i,j} − λ̃_{i,j}| < ε/2√r. Now suppose λ̃_j = λ̃_k, where k ∈ I_h and j ∈ I_ℓ with h ≠ ℓ. Then, by the above and the triangle inequality,
$$\|\lambda_j - \lambda_k\| \le \|\lambda_j - \tilde\lambda_j\| + \|\tilde\lambda_k - \lambda_k\| < \varepsilon, \qquad (16)$$
which contradicts the ε-separatedness of A. □
The following theorem records the main result of this section.
Theorem 3.4. Let A = {A_j}_{1≤j≤r} be an ε-projector for the adjacency matrix A. Then, given an initial state |ψ_0⟩, Algorithm 1 samples exactly from the limiting distribution P_∞(·|ψ_0) of the walk W(t) = e^{iAt} using O(r F(2√r ε^{−1}) poly(log N)) operations.
Proof. The time-consuming part of the algorithm is the phase estimation for the operators W_j = e^{iA_j} for j = 1, ..., r. Each of these phase estimations is done with accuracy ε/2√r, and therefore requires O(F(2√r ε^{−1}) poly(log N)) operations [31, Chapter 7]. Since there are r phase estimations, the claimed running time follows. □
From Theorem 3.4 we see that the complexity of Algorithm 1 is mostly determined by the "quality" of the given ε-projector A. If the number r of the operators in A and the separation parameter ε are poly(log N ) and 1/ poly(log N ), respectively, then the algorithm is efficient, i.e., runs in poly(log N ) operations. Otherwise, there is a natural trade-off between the sizes of the two parameters. Also note that, by definition, we always have F (t) ∈ O(t) for any ε-projector, so the running time in Theorem 3.4 is always upper bounded by O(r 3/2 ε −1 poly(log N )).
An immediate special case of Theorem 3.4 is when the ε-projector is a singleton set A = {B}, that is, when r = 1. If we only have black-box access to B, we can only perform the walk e^{iBt} with a running time that scales linearly in t. In this case, the running time of Algorithm 1 scales linearly in ε^{-1}. On the other hand, if we can perform the walk e^{iBt} with a running time that scales polynomially in log t, then the running time of Algorithm 1 scales polynomially in log(ε^{-1}). A lower bound for the running time of Algorithm 1 can be obtained using the fact that for any such ε-projector A we must have ε ≤ ∆, where ∆, defined in (8), is the minimum spacing between the distinct eigenvalues of A. Let us record these observations for the sake of referencing.
Corollary 3.5. Let A = {B} be an ε-projector for A. Then, given an initial state |ψ_0⟩,
(a) if the walk e^{iBt} can be performed in O(t poly(log N)) operations, then Algorithm 1 samples from P_∞(·|ψ_0) in O(ε^{-1} poly(log N)) operations;
(b) if the walk e^{iBt} can be performed in O(poly(log t, log N)) operations, then Algorithm 1 samples from P_∞(·|ψ_0) in poly(log(ε^{-1}), log N) operations.
In the black-box setting, we are usually given access to the adjacency matrix A of Γ such that we can apply the unitary e^{iA} in time poly(log N). In this setting, we can just take the ε-projector A = {A} for a small enough ε. This makes the complexity of sampling from the limiting distribution of the walk W(t) fundamentally dependent on ∆. If we use the naive approach of running W(t) for a large random t and measuring the resulting state, the bound (10) suggests that we should take T proportional to (2 ln M + 2)/δ∆ to be within distance δ of the limiting distribution. In comparison, Corollary 3.5 says we only need to run W(t) for t ≈ 1/∆ (and perform some other negligible operations) to sample exactly from the limiting distribution.
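To get a feel for the gap between the two approaches, the following back-of-the-envelope computation plugs illustrative numbers (M, ∆ and δ here are hypothetical, not taken from any particular graph) into the bound (10) and into the t ≈ 1/∆ cost of Algorithm 1:

```python
import math

# Hypothetical parameters: M distinct eigenvalues, minimum gap Delta, and target
# total-variation accuracy delta for the mixing-based sampler.
M, Delta, delta = 2**20, 1e-6, 0.01

T_mixing = (2*math.log(M) + 2)/(delta*Delta)   # evolution time suggested by (10)
T_exact  = 1/Delta                             # evolution time ~ 1/Delta for Algorithm 1
print(f"{T_mixing:.3g} vs {T_exact:.3g}, ratio {T_mixing/T_exact:.0f}")  # ratio ~2970
```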
In the non-black-box setting, we have the opportunity to exploit some extra information on A to find ε-projectors that allow us to efficiently sample from the limiting distribution of W (t). In the following sections, we give examples of graphs for which we can find such ε-projectors.
Quasi-Abelian Graphs
As a first application of our sampling algorithm we consider a class of graphs, called quasi-abelian graphs, in this section. Quasi-abelian graphs, as we will see, are a potential source of concrete examples for which Algorithm 1 runs in polynomial time. In the following, we first review some general properties of quasi-abelian graphs, and then look more closely at a specific example called Winnie Li graphs.
Let G be a finite group of size N, and let the subset S ⊆ G be such that 1 ∉ S. The Cayley digraph Γ = Γ(G, S) of the pair (G, S) is a directed graph in which the vertex set is the set of elements of G and the edge set is {(a, as) : a ∈ G, s ∈ S}. If S is symmetric, i.e., s ∈ S if and only if s^{-1} ∈ S, then Γ is an undirected graph called the Cayley graph. The Cayley graph Γ(G, S) is called quasi-abelian if S is closed under conjugation, i.e., S is a union of conjugacy classes of G. In this paper, we always assume that S generates the entire group G, which means that Γ is connected. Denote by f_S : G → {0, 1} the characteristic function of S, defined by f_S(a) = 1 if a ∈ S and f_S(a) = 0 if a ∉ S. We will also denote by A(Γ) the adjacency matrix of Γ. A class function is a function f : G → C that is constant on the conjugacy classes of G. It is not hard to show that the Fourier transform of a class function f is a diagonal matrix, e.g., see [17, Chapter 2]. From the definition of quasi-abelian graphs we see that f_S is always a class function. For any irreducible representation ̺ ∈ Ĝ we obtain
$$\hat f_S(\varrho) = \sqrt{\frac{d_\varrho}{|G|}}\;\frac{1}{d_\varrho}\sum_{s\in S}\chi_\varrho(s)\; I_{d_\varrho}, \qquad (17)$$
where d_̺ is the dimension of ̺ and χ_̺ is the character of ̺. The following proposition is partially proved in [17] and [42] with different notations. Here, we give a short proof consistent with our notations.
Theorem 4.2. Let Γ(G, S) be a quasi-abelian graph on a finite group G. Denote by F_G the Fourier transform over G. Then
(a) the adjacency matrix A(Γ) is diagonalized by F_G,
(b) the eigenvectors of A(Γ) are given by F*_G|̺, j, k⟩ for ̺ ∈ Ĝ and 1 ≤ j, k ≤ d_̺,
(c) the eigenvalue corresponding to the eigenvector F*_G|̺, j, k⟩ is given by
$$\lambda_\varrho = \frac{1}{d_\varrho}\sum_{s\in S}\chi_\varrho(s). \qquad (18)$$
From part (c) of the theorem we see that the eigenvalues of A(Γ) are determined only by the irreducible representations of G and the set S. Each irreducible representation ̺ corresponds to d²_̺ eigenvectors. This means an eigenvalue λ_̺ has multiplicity at least d²_̺. If G is abelian, we always have d_̺ = 1, but that does not mean the eigenvalues λ_̺ are distinct for different ̺ ∈ Ĝ.
Proof of Theorem 4.2. Let ρ_reg be the regular representation of G. Then we can write
$$A(\Gamma) = \sum_{x\in G} f_S(x)\,\rho_{\mathrm{reg}}(x) = \sum_{s\in S}\rho_{\mathrm{reg}}(s).$$
The Fourier transform F_G decomposes ρ_reg as
$$F_G\,\rho_{\mathrm{reg}}(x)\,F_G^{*} = \bigoplus_{\varrho\in\widehat G}\big(I_{d_\varrho}\otimes\varrho(x)\big).$$
It follows from this decomposition that
$$F_G\,A(\Gamma)\,F_G^{*} = \bigoplus_{\varrho\in\widehat G}\Big(I_{d_\varrho}\otimes\sum_{s\in S}\varrho(s)\Big) = \bigoplus_{\varrho\in\widehat G}\lambda_\varrho\, I_{d_\varrho^2}, \qquad (19)$$
where the last equality follows from the fact that f_S is a class function. This proves (a). Parts (b) and (c) follow from the definition of the quantum Fourier transform (12) and the identity (17) for the characteristic function f_S. □
The expansion (19) suggests that we can perform the walk e^{iAt} using the following three steps:
1. apply the quantum Fourier transform F_G,
2. apply the phase operator U_̺ : |̺, j, k⟩ → e^{iλ_̺ t}|̺, j, k⟩,
3. apply the inverse quantum Fourier transform F*_G.
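For an abelian group such as G = Z_N, the Fourier transform F_G is the ordinary discrete Fourier transform, and the three steps above amount to conjugating a phase operator by an FFT. The following sketch carries this out classically for a small circulant Cayley graph and checks it against direct diagonalization; the connection set S and the walk time t are arbitrary illustrative choices:

```python
import numpy as np

N, t = 8, 1.7                                # illustrative size and walk time
S = [1, N - 1, 2, N - 2]                     # a symmetric connection set for Gamma(Z_N, S)
f_S = np.zeros(N); f_S[S] = 1.0

lam = np.fft.fft(f_S).real                   # eigenvalues: the DFT of f_S (real, S symmetric)
psi0 = np.zeros(N, complex); psi0[0] = 1.0

# Steps 1-3: Fourier transform, phase by exp(i*lambda*t), inverse Fourier transform.
psi_t = np.fft.ifft(np.exp(1j*lam*t)*np.fft.fft(psi0))

# Cross-check against direct diagonalization of the circulant adjacency matrix.
A = np.array([[f_S[(j - k) % N] for k in range(N)] for j in range(N)])
evals, V = np.linalg.eigh(A)
print(np.allclose(psi_t, V @ (np.exp(1j*evals*t)*(V.T @ psi0))))   # True
```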
Of course, this is only efficient if both the Fourier transform F_G and the phase operator U_̺ can be applied efficiently. Suppose we can apply F_G efficiently for a given group G. Then a general strategy for constructing ε-projectors for A is as follows. For any function g : G → C, the operator
$$A_g = \sum_{x\in G} g(x)\,\rho_{\mathrm{reg}}(x)$$
commutes with A. When g is a class function, A_g acts on the eigenspace of A corresponding to ̺ ∈ Ĝ as the scalar ĝ_̺ = (1/d_̺)Σ_{x∈G} g(x)χ_̺(x). Suppose that g satisfies the following condition:
$$\hat g_\varrho = \hat g_{\varrho'} \text{ if and only if } \lambda_\varrho = \lambda_{\varrho'}, \quad \text{for all } \varrho, \varrho' \in \widehat G. \qquad (20)$$
Then A_g has the same eigenspaces as A. If we can efficiently approximate g with exponential accuracy, then A = {A_g} is an ε-projector that satisfies the conditions of Corollary 3.5 (b). If we can only approximate g with polynomial accuracy, then we might need many more of these functions g that satisfy (20). Ideally, we need to find g_1, g_2, ..., g_r : G → C, with r = poly(log N), such that A = {A_{g_j}}_{1≤j≤r} is an ε-projector for A for some ε = 1/poly(log N). In this case, A satisfies the conditions of Theorem 3.4, and we can efficiently sample from the limiting distribution of the walk W(t) = e^{iAt} using A.
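The following sketch illustrates the construction in the abelian case G = Z_N. The function g used here is an artificial integer relabeling of the distinct eigenvalues, chosen only to demonstrate condition (20); in actual applications, such as the Winnie Li graphs below, the point is to find a g of this kind that is also efficiently computable:

```python
import numpy as np

N = 12
f_S = np.zeros(N); f_S[[1, N - 1]] = 1.0        # the cycle graph Gamma(Z_N, {1, -1})
lam = np.fft.fft(f_S).real                       # lambda_a = 2cos(2*pi*a/N)

# Relabel each distinct eigenvalue of A by an integer; such a g satisfies (20).
_, labels = np.unique(np.round(lam, 9), return_inverse=True)
g = labels.astype(float)

F = np.fft.fft(np.eye(N))/np.sqrt(N)             # the unitary DFT matrix
A   = (F.conj().T @ np.diag(lam) @ F).real       # adjacency matrix, rebuilt from its spectrum
A_g = (F.conj().T @ np.diag(g) @ F).real         # same eigenspaces, relabeled eigenvalues

print(np.allclose(A @ A_g, A_g @ A))             # True: A_g commutes with A
print(np.min(np.diff(np.unique(g))),             # eigenvalue spacing of A_g: 1.0
      np.min(np.diff(np.unique(np.round(lam, 9)))))   # spacing of A itself: ~0.27
```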
Winnie Li graphs
Let F p be a finite field of characteristic p ≥ 3. For an extension F/F p of degree n, the norm map N F/Fp : F → F p is defined by N F/Fp (a) = a (p n −1)/(p−1) , which is a homomorphism between the multiplicative groups F × and F × p . Let S = ker(N F/Fp ), i.e., the set of elements of F of norm 1. The Winnie Li graph of F over F p is the Cayley digraph Γ(F, S) where the vertex set is F and the edge set is {(a, a + s) : a ∈ F, s ∈ S}.
Before we get into the specifics of the structure of these graphs, recall the quantum Fourier transform over F. The set of additive characters of F is given by {ψ_a : a ∈ F}, where
$$\psi_a(x) = e^{2\pi i \operatorname{Tr}_{F/\mathbb F_p}(ax)/p},$$
and Tr_{F/F_p}(x) = x + x^p + ··· + x^{p^{n−1}} is the trace map from F to F_p. The quantum Fourier transform of the basis element |a⟩, where a ∈ F, is given by
$$|\hat a\rangle = \frac{1}{\sqrt{p^n}}\sum_{x\in F}\psi_a(x)\,|x\rangle.$$
For even n, the graph Γ is undirected, since N_{F/F_p}(−a) = N_{F/F_p}(a). Let us look at the simple case where n = 2. The extension F/F_p is a quadratic extension, and if we assume that −1 is a quadratic nonresidue in F_p, we can take F = F_p(i) where i² = −1. The elements of F_p(i) can be written in the form x + iy for x, y ∈ F_p, so the norm map takes the simple form
$$N_{F/\mathbb F_p}(x + iy) = (x + iy)(x + iy)^p = x^2 + y^2.$$
Therefore, the set S of elements of norm 1 is the set of F_p-points of the circle x² + y² = 1. The Winnie Li graph Γ(F_p(i), S) is a (p + 1)-regular graph on p² vertices. It follows from (18) that the adjacency matrix of Γ can be written as
$$A(\Gamma) = \sum_{a\in F}\lambda_a\,|\hat a\rangle\langle\hat a|.$$
For an element a = u + iv ∈ F_p(i) with a ≠ 0, Theorem 4.2 (c) gives
$$\lambda_a = \sum_{s\in S}\psi_a(s) = -\sum_{x\in\mathbb F_p^{\times}} e^{2\pi i\,(x + (u^2+v^2)x^{-1})/p} = -K(1,\,u^2+v^2). \qquad (21)$$
Here, K(a, b) is the Kloosterman sum with parameters a, b, which we will briefly talk about next.
Kloosterman sums. For a, b ∈ F_p, the exponential sum
$$K(a, b) = \sum_{x\in\mathbb F_p^{\times}} e^{2\pi i\,(ax + bx^{-1})/p} \qquad (22)$$
is called the Kloosterman sum with respect to a, b. The last equality in (21) follows from the definition (22). Since \overline{K(a, b)} = K(a, b), these sums are real numbers. A well known result of Weil [50] gives the bound |K(a, b)| ≤ 2√p. When p is large, estimating K(a, b) is an open problem; there are no known classical or quantum algorithms that can efficiently estimate K(a, b). For a multiplicative character χ of F_p^×, the χ-twisted Kloosterman sum is defined by
$$K_\chi(a, b) = \sum_{x\in\mathbb F_p^{\times}}\chi(x)\, e^{2\pi i\,(ax + bx^{-1})/p}. \qquad (23)$$
Interestingly, when χ is the quadratic character of F_p^×, the sum (23) has a closed form, and is easy to estimate [43,7].
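Both the Weil bound and the spectral description of the Winnie Li graph are easy to check numerically for a small prime. The sketch below builds the graph for p = 7 and compares its spectrum with the predicted Kloosterman values; the prediction −K(1, u² + v²) follows the reconstruction of (21) above, so this also serves as a sanity check of that identity:

```python
import numpy as np

p = 7  # small prime with p % 4 == 3, so -1 is a nonresidue and F_p(i) = F_{p^2}

# The norm-one circle S = {(c, d) : c^2 + d^2 = 1 (mod p)}; it has p + 1 points.
S = [(c, d) for c in range(p) for d in range(p) if (c*c + d*d) % p == 1]
assert len(S) == p + 1

# Adjacency matrix of Gamma(F_p(i), S): the vertex x + iy is adjacent to (x+c) + i(y+d).
A = np.zeros((p*p, p*p))
for x in range(p):
    for y in range(p):
        for c, d in S:
            A[x*p + y, ((x + c) % p)*p + ((y + d) % p)] = 1

def K(a, b):
    """Kloosterman sum (22), computed naively in O(p) time; always a real number."""
    s = sum(np.exp(2j*np.pi*((a*x + b*pow(x, -1, p)) % p)/p) for x in range(1, p))
    return s.real

assert all(abs(K(1, b)) <= 2*np.sqrt(p) for b in range(1, p))   # Weil's bound

# Predicted spectrum: p + 1 (for a = 0) and -K(1, u^2 + v^2) for a = u + iv != 0.
predicted = [p + 1] + [-K(1, (u*u + v*v) % p)
                       for u in range(p) for v in range(p) if (u, v) != (0, 0)]
print(np.allclose(np.sort(np.linalg.eigvalsh(A)), np.sort(predicted)))  # True
```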
Euclidean graphs. The construction of the Winnie Li graph Γ(F_p(i), S) can be directly generalized to obtain the so-called Euclidean graphs [39]. Let n > 0 be an integer and b ∈ F_p^×. Define the quadratic form Q(x) = x₁² + x₂² + ··· + xₙ² over F_p. The Euclidean graph for n and b is the Cayley graph Γ(F_p^n, S) with S = {x ∈ F_p^n : Q(x) = b}. By Theorem 4.2, the eigenvectors of its adjacency matrix are the states |â⟩, a ∈ F_p^n, where
$$|\hat a\rangle = \frac{1}{\sqrt{p^n}}\sum_{x\in\mathbb F_p^n} e^{2\pi i\langle a, x\rangle/p}\,|x\rangle$$
is the quantum Fourier transform of |a⟩, and ⟨a, x⟩ = a₁x₁ + ··· + aₙxₙ for a = (a₁, ..., aₙ) and x = (x₁, ..., xₙ). The eigenvalues λ_a are computed in [39]: up to explicit nonzero factors depending only on n, p and b,
$$\lambda_a \,\propto\, K_{\chi^n}\big(1,\,Q(a)\big), \qquad a \ne 0, \qquad (24)$$
where χ is the quadratic character of F_p^×; thus χ^n is the quadratic character when n is odd and the trivial character when n is even. When n is odd, it is easy to approximate the eigenvalues λ_a with exponential accuracy, since it is easy to approximate (23) when χ is the quadratic character. So, it is easy to perform the walk e^{iA(Γ)t} = Σ_{a∈F_p^n} e^{iλ_a t}|â⟩⟨â| for exponentially large t. Therefore, we can easily sample from the limiting distribution of W(t) by running the walk for random values of t ∈ [0, T] for a large T. However, when n is even, we do not know how to perform W(t) for large t. In fact, there is no known way to even perform W(1) = e^{iA(Γ)} efficiently. We now use the general strategy introduced at the beginning of Section 4 to efficiently sample from the limiting distribution of W(t) when n is even. Note that since G = F_p^n is an abelian group, we have Ĝ ≅ G. Therefore, we need to find functions g : G → C that satisfy the condition in (20). Since, by (24), the eigenvalue λ_a depends only on the value Q(a), and the Kloosterman sums K(1, b), b ∈ F_p, are distinct [23], two eigenvalues λ_a and λ_{a'} are equal exactly when Q(a) = Q(a'). A function g built from (a rescaling of) the quadratic form Q therefore satisfies (20); its values are 1/p-separated, and it can be computed exactly, in particular with exponential accuracy. It follows that A_g is a 1/p-projector for A(Γ), and hence, by Corollary 3.5 (b), there is a quantum algorithm that can sample from P_∞(·|ψ_0) in poly(n log p) operations.
Supersingular Isogeny Graphs
As another application of Algorithm 1, we consider a class of graphs called isogeny graphs in this section. Isogeny graphs have attracted much attention in the last two decades, mainly because of their applications in cryptography [36,37]. We will see in the following how the beautiful theory of Hecke algebras gives us natural ε-projectors that make it possible to efficiently sample from the limiting distributions of quantum walks on these graphs.
The ε-projectors considered in this section were first implicitly used by Kane, Sharif and Silverberg [29]. They considered the same set of operators we use here, but in the context of quaternion algebras. Here, we consider the space of supersingular elliptic curves over the finite fields F_{p²}, whereas in [29] they considered the space of ideals in an ideal class of the quaternion algebra B_{p,∞}. These two spaces are, in a precise sense, mathematically essentially the same, but they are vastly different from a cryptographic perspective. In particular, the isogeny problem is easy in the latter space but believed to be hard in the former [32]. Working with the ε-projectors in the space of elliptic curves yields a potential solution to the problem of generating an honest curve, as we will explain in Section 5.3.
Let F_q = F_{p²} be a finite field of characteristic p ≥ 5. An elliptic curve E over F_q is a projective smooth curve of genus one. For the definition of these terms and an extensive introduction to elliptic curves, see [28]. The affine version of E is usually written as the cubic y² = x³ + ax + b, a, b ∈ F_q, known as the Weierstrass equation of E. The set of points on E in any extension of F_p forms an abelian group. An elliptic curve E over F_q is called supersingular if it has no nontrivial points of order p. It can be shown that any supersingular elliptic curve over the algebraic closure of F_p can be defined over F_q, or, more precisely, is isomorphic to a curve over F_q. Therefore, we always assume that any supersingular elliptic curve E has its coefficients a, b in F_q.
An isogeny φ : E 1 → E 2 between two elliptic curves E 1 and E 2 is a rational function that is also a homomorphism of groups of points on E 1 and E 2 . An isogeny φ induces an embedding of function fields φ * : K(E 2 ) → K(E 1 ) defined by φ * (f ) = f • φ. The degree of φ is the degree of the extension K(E 1 )/φ * K(E 2 ). An isogeny of degree ℓ is called an ℓ-isogeny. For any isogeny φ there is a unique isogeny φ : E 2 → E 1 called the dual of φ. Let ℓ be a prime different from the characteristic p. Define a graph G ℓ with vertices the set of all F p -isomorphism classes of supersingular elliptic curves, and edges the set of ℓ-isogenies between the curves. G ℓ is called the supersingular ℓ-isogeny graph. Since the dual of an ℓ-isogeny is again an ℓ-isogeny but in the opposite direction, we usually consider G ℓ as an undirected graph. For simplicity, assume that p ≡ 1 mod 12. Then G ℓ is an (ℓ + 1)-regular graph with N = ⌊p/12⌋ vertices and no self loops. The adjacency matrix of G ℓ is a symmetric matrix denoted by T ℓ and is called the Hecke operator.
Simulating the Hecke operators
Let S be the set of vertices of G_ℓ, i.e., the set of isomorphism classes of supersingular elliptic curves in characteristic p. The Hecke operator T_ℓ acts on the formal abelian group M = ⊕_{E∈S} ZE by sending each curve E to the sum of its neighbours in G_ℓ. In the quantum setting, we consider the action of T_ℓ on the complex Euclidean space X = M ⊗_Z C with the basis {|E⟩}_{E∈S}. The operators T_ℓ, for different values of ℓ, are closely related to the Hecke operators acting on the space of modular forms [19,33,5], so the terminology we use here is mostly adopted from the theory of modular forms. For example, the trivial eigenvector of T_ℓ is called the Eisenstein eigenform and corresponds to the eigenvalue λ_E = ℓ + 1. Deligne's proof of the Riemann hypothesis for function fields [16,30] implies that the nontrivial eigenvalues of T_ℓ are contained in the interval [−2√ℓ, 2√ℓ]. We make the heuristic assumption that the eigenvalues of T_ℓ are distinct. This assumption is in fact a consequence of (the well-known) Maeda's conjecture [26], which states that the characteristic polynomial of T_ℓ is irreducible over Q. We refer the reader to [38,46,25], and the references therein, for results on the computational verification of Maeda's conjecture.
An ℓ-isogeny can be computed in O(ℓ) operations over F_q using Vélu's formulas [47]. When ℓ is small, i.e., ℓ = poly(log N), it is easy to compute the list of all neighbours of a given curve E in G_ℓ. In particular, we can efficiently implement the isometry that maps |E⟩ to a superposition over |E_0⟩, …, |E_ℓ⟩, where E_0, …, E_ℓ are the ℓ + 1 neighbours of E. Therefore, we can use existing Hamiltonian-simulation techniques [11, 49] to efficiently approximate the unitary e^{iT_ℓ}.
Proposition 5.1. For any prime ℓ = poly(log N), the walk W(t) = e^{iT_ℓ t} on the ℓ-isogeny graph G_ℓ can be performed in O(t · poly(log N)) operations.
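As a classical stand-in for Proposition 5.1, the following sketch emulates W(t) = e^{iT_ℓ t} by dense matrix exponentiation (SciPy's `expm`), reusing the toy operator T from the previous sketch (an assumption of this illustration); an actual quantum implementation would instead use the Hamiltonian-simulation techniques cited above.

```python
# Classical emulation of the continuous-time quantum walk W(t) = exp(i*T*t),
# reusing the toy operator T from the previous sketch.
import numpy as np
from scipy.linalg import expm

def walk_operator(T: np.ndarray, t: float) -> np.ndarray:
    """W(t) = e^{iTt}; dense exponentiation is feasible only for toy sizes."""
    return expm(1j * t * T)

def walk_distribution(T: np.ndarray, t: float, psi0: np.ndarray) -> np.ndarray:
    """Probability of observing each vertex after evolving |psi0> for time t."""
    psi_t = walk_operator(T, t) @ psi0
    return np.abs(psi_t) ** 2

n = T.shape[0]                     # toy T assumed defined as in the sketch above
psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0                      # start the walk on a single curve |E_0>
probs = walk_distribution(T, t=5.0, psi0=psi0)
assert np.isclose(probs.sum(), 1.0)  # W(t) is unitary since T is symmetric
print(np.round(probs, 3))
```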
Distribution of eigenvalues
For different primes ℓ we have different isogeny graphs G_ℓ over F_q; the set of vertices is always the same, but the set of edges changes with ℓ. Therefore, we get different Hecke operators T_ℓ acting on the same space X. More generally, Hecke operators can be defined for any integer n > 0. The operator T_n represents the adjacency matrix of the n-isogeny graph G_n, although for non-prime n one needs to be more careful about some definitions. The algebra T_Z = Z[{T_n}_{n∈Z}], generated by all Hecke operators acting on X, is called the Hecke algebra. Let T = T_Z ⊗_Z C be the Hecke algebra over C. It can be shown that T is a commutative ring [48, Chapter 41]. In particular, for any m, n, the operators T_n and T_m commute. This means all Hecke operators are simultaneously diagonalizable. It was proved by Serre [44] that for large p, the eigenvalues of the normalized Hecke operator T_ℓ/√ℓ are equidistributed in [−2, 2] with respect to an explicit measure µ_ℓ. Let |φ_1⟩, |φ_2⟩, …, |φ_N⟩ ∈ X be a simultaneous eigenbasis for all Hecke operators, and let ℓ_1, ℓ_2, …, ℓ_r be a set of distinct primes, each bounded by poly(log N). For all 1 ≤ k ≤ r and 1 ≤ j ≤ N we have (1/√ℓ_k) T_{ℓ_k} |φ_j⟩ = λ_{j,k} |φ_j⟩ for some λ_{j,k} ∈ [−2, 2]. Define λ_j = (λ_{j,1}, λ_{j,2}, …, λ_{j,r}). It was also proved in [44] that for large p, the vectors λ_j are equidistributed in [−2, 2]^r with respect to the product measure µ = ∏_{k=1}^{r} µ_{ℓ_k}.
This means that asymptotically we can treat the λ_j as independent samples from the distribution given by the measure µ. For large ℓ, µ_ℓ approaches the measure µ_∞ = (1/2π)·√(4 − x²) dx, the semicircle distribution on [−2, 2]. So, when the characteristic p is large enough, it is natural to assume that the vectors λ_j are independent random samples from µ_∞^r. We now prove that for r ∈ O(log N) and ε = 1/√(log N), the set of Hecke operators T = {T_{ℓ_k}/√ℓ_k}_{1≤k≤r} is an ε-projector for any Hecke operator T_ℓ with ℓ a prime number; the ε-separatedness of T is established by Lemma 5.2. The following sketch illustrates the separation heuristic numerically.
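```python
# Heuristic sketch: draw lambda_j = (lambda_{j,1}, ..., lambda_{j,r}) as
# independent semicircle samples (the large-ell limit of mu_ell) and measure
# their pairwise sup-norm separation. N and r below are illustrative values.
import numpy as np

rng = np.random.default_rng(0)

def semicircle_samples(n: int) -> np.ndarray:
    """Rejection-sample n points from (1/2pi)*sqrt(4 - x^2) dx on [-2, 2]."""
    out = []
    while len(out) < n:
        x = rng.uniform(-2.0, 2.0)
        u = rng.uniform(0.0, 1.0 / np.pi)       # the density peaks at 1/pi
        if u <= np.sqrt(4.0 - x * x) / (2.0 * np.pi):
            out.append(x)
    return np.array(out)

num_vecs, r = 200, 8                                  # r ~ log N distinct primes
lam = semicircle_samples(num_vecs * r).reshape(num_vecs, r)  # row j: lambda_j

diffs = np.abs(lam[:, None, :] - lam[None, :, :]).max(axis=2)
np.fill_diagonal(diffs, np.inf)
print("min pairwise sup-norm separation:", diffs.min())
```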
Lemma 5.2 and Theorem 3.4 now give the following.
Theorem 5.3. Let ℓ ≠ p be prime, and let G_ℓ be the ℓ-isogeny graph whose vertices are the supersingular elliptic curves over F_{p^2}. For a given initial state |ψ_0⟩ we have: (a) the limiting distribution of the walk W(t) = e^{iT_ℓ t} on G_ℓ is given by P_∞(E|ψ_0) = Σ_{j=1}^{N} |⟨φ_j|ψ_0⟩|² · |⟨E|φ_j⟩|²; and (b) there is a quantum algorithm that can sample from P_∞(·|ψ_0) in poly(log p) operations.
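A minimal sketch of part (a): assuming a simple spectrum, the limiting distribution can be computed classically from the eigendecomposition. It reuses the toy T and psi0 from the earlier sketches (an assumption); note that the toy graph actually has degenerate eigenvalues, whereas the real T_ℓ is assumed not to, so the output here is illustrative only.

```python
# Sketch of part (a): with a simple spectrum, the time-averaged ("limiting")
# distribution of W(t) = e^{iTt} started in |psi0> is
#     P_inf(E|psi0) = sum_j |<phi_j|psi0>|^2 * |<E|phi_j>|^2.
import numpy as np

def limiting_distribution(T: np.ndarray, psi0: np.ndarray) -> np.ndarray:
    eigvals, eigvecs = np.linalg.eigh(T)              # columns are |phi_j>
    weights = np.abs(eigvecs.conj().T @ psi0) ** 2    # |<phi_j|psi0>|^2
    return (np.abs(eigvecs) ** 2) @ weights           # P_inf over the vertices

p_inf = limiting_distribution(T, psi0)
assert np.isclose(p_inf.sum(), 1.0)                   # a probability vector
print(np.round(p_inf, 3))
```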
Honest hard curves
A hard problem in isogeny-based cryptography is to compute the endomorphism ring End(E) of a given supersingular elliptic curve E. The majority of other computational assumptions can be reduced to the endomorphism ring problem [51]. Informally, a hard curve is a random curve on G_ℓ with an unknown endomorphism ring. To classically generate a random curve on G_ℓ, one can do the following:
1. Start with a known curve E_0 on G_ℓ.
2. Take a random walk of length at least 2 log p.
3. Return the final curve.
For small ℓ, taking classical random walks on G_ℓ can be done very efficiently [18]. Also, it is well known that G_ℓ has a small diameter, and taking a walk of length ≈ 2 log p is enough to get close to the uniform distribution on G_ℓ. At first glance, it seems that the above classical random walk on G_ℓ can be used to efficiently generate a hard curve. However, there is an issue with this approach that seems to be unavoidable: the random walk explicitly generates a path on G_ℓ. This path can be used to compute the endomorphism ring of the returned curve. More precisely, if the endomorphism ring of the initial curve E_0 is known and we are given a path φ : E_0 → E (which is an isogeny), then we can use φ to compute End(E). Any such path φ is called a backdoor for E. A hard curve without a backdoor is called an honest hard curve. A minimal sketch of this procedure, and of why the backdoor is unavoidable classically, is given below.
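In the sketch, `neighbours` is a hypothetical oracle returning the ℓ + 1 neighbours of a curve (computable via Vélu's formulas for small ℓ); it is not implemented here. The recorded path is exactly the backdoor.

```python
# Sketch of the classical generation procedure; `neighbours` is a hypothetical
# oracle and the curve objects are opaque. The returned `path` is the backdoor.
import math
import random
from typing import Callable, List, Tuple

def classical_hard_curve(E0, neighbours: Callable, p: int) -> Tuple[object, List]:
    steps = math.ceil(2 * math.log(p))         # walk length >= 2 log p
    path, E = [], E0
    for _ in range(steps):
        E_next = random.choice(neighbours(E))  # pick one of the ell+1 neighbours
        path.append((E, E_next))               # the walk cannot avoid this record
        E = E_next
    return E, path                             # path encodes an isogeny E0 -> E
```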
There is no known efficient classical solution for generating an honest hard curve. A potential solution using quantum walks was first discussed in [6]. The idea is to sample from the limiting distribution of the walk W (t) = e iT ℓ t on G ℓ . According to Theorem 5.3, this can be done efficiently, and in contrast to the classical walk, the quantum walk does not generate any path on G ℓ . Two important questions about the quantum walk solution in [6] remained unresolved. We briefly address those questions here.
The first question is whether the distribution (26) is close to uniform. Intuitively, there is no reason to believe that the eigenvectors |φ_j⟩ are localized. In particular, the action of the Hecke operator T_ℓ on X implies that each entry of |φ_j⟩ is an average of a set of ℓ + 1 other entries. Since |φ_j⟩ is an eigenvector for all Hecke operators, these averages involve all the other entries for large enough ℓ. Therefore, one would expect that the entries of |φ_j⟩ are neither too small nor too large, and that the distribution P_∞(·|ψ_0) is not concentrated on a small subset of vertices. In fact, it is not hard to show, using general techniques from the theory of Markov chains, that P_∞(E_1|E_2) ≥ N^{−2} for every two curves E_1, E_2 [41]. Nevertheless, a rigorous proof that P_∞(·|ψ_0) is close to the uniform distribution does not seem straightforward. There may be ways to analyze the amplitudes of the vectors |φ_j⟩ through their close connection to complex modular forms, but we are not aware of any work in the literature in this direction. Instead, this question can be approached algorithmically using the double-loop technique of [41]. The double-loop algorithm for our case, sketched below, is:
1. Set E to a known curve E_0 on G_ℓ.
2. Repeat k times: (a) run Algorithm 1 with initial state |E⟩ to obtain a curve E_1; (b) set E ← E_1.
3. Return E.
Assuming that the maximum column distance α of P_∞ is bounded by a constant smaller than 1, we only need to set k = ⌈log_{1/α} δ^{−1}⌉ for the distribution of the final curve E to be within distance δ of the uniform distribution. Again, the assumption that α is always bounded by a constant c < 1 is not rigorously proved, but it is more plausible than the assumption that P_∞(·|ψ_0) is close to uniform.
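```python
# Sketch of the double-loop technique; `quantum_walk_sample` stands in for
# Algorithm 1 of the text (a black box here), alpha is the assumed bound on
# the maximum column distance of P_inf, and delta the target distance to
# the uniform distribution.
import math
from typing import Callable

def double_loop(E0, quantum_walk_sample: Callable, alpha: float, delta: float):
    # k = ceil(log_{1/alpha}(1/delta)) iterations suffice under the assumption.
    k = math.ceil(math.log(1.0 / delta) / math.log(1.0 / alpha))
    E = E0
    for _ in range(k):
        E = quantum_walk_sample(E)   # run Algorithm 1 with initial state |E>
    return E
```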
The second question is specific to the sampling algorithm proposed in [6]. In that algorithm, after preparing the state (15), the first register is measured. The measurement outcome is a random vector λ̃_j, and the post-measurement state is the corresponding eigenstate |φ_j⟩. The state |φ_j⟩ is then measured in the vertex basis to obtain a curve E. The question is whether the vector λ̃_j reveals any information about the endomorphism ring End(E) of the curve E. This situation is entirely avoided in Algorithm 1: we never measure the first register. The post-measurement state of Algorithm 1 is a superposition of all the vectors λ̃_j, j = 1, …, N, which does not seem to provide any useful information about End(E).
Remark 1. In a cryptographic scenario, if Alice presents Bob with an alleged hard curve E, there is no way for Bob to know whether E is an honest hard curve or whether Alice is in possession of a backdoor for E. In other words, Bob is unable to determine which algorithm Alice used to generate E. This kind of trust issue is normally solved by a higher-level cryptographic construction. | 2022-09-28T06:44:41.591Z | 2022-09-26T00:00:00.000 | {
"year": 2022,
"sha1": "17f391bee6ceebd5d76c3a524c62123905a58e76",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "17f391bee6ceebd5d76c3a524c62123905a58e76",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
239630306 | pes2o/s2orc | v3-fos-license | CAB DIRECT IS THE FOCUS OF A SCIENTOMETRIC ANALYSIS FROM 2011 TO 2013: BEANS SCIENTIFIC RESEARCH ARTICLES
Bean research publications from 2011 to 2013 were collected from the CAB Direct Online database for scientometric analysis. A total of 36 papers were published between 2011 and 2013, the largest share of which (21 papers) appeared in 2011. The most common source among scientists interested in bean research was Biology and breeding of food legumes, with 13 papers (36.1%), followed by Nutrient deficiencies of field crops: guide to diagnosis and management, with 5 papers (13.8%). Combating micronutrient deficiencies: food-based approaches; Crop plant anatomy; Natural products in plant pest management; and African vegetable production and marketing: socioeconomic research each accounted for 5.55% of the papers. Indian scholars have written more papers on bean studies than authors from other countries. Gujarat, Jharkhand, Tamil Nadu, Karnataka, Uttar Pradesh, Andhra Pradesh, West Bengal, Bihar, Madhya Pradesh, and Chhattisgarh are the major bean-growing states in India. Bean trade outnumbers all other crops combined in India, and global demand for Indian beans is increasing.
INTRODUCTION
Beans are the seeds of various flowering plants in the family Fabaceae that are eaten as vegetables by both humans and animals. They can be prepared in a number of ways, including boiling, frying, and baking, and are used in many common dishes around the world. Beans are one of the world's oldest crops. Broad beans, also known as fava beans, were gathered in their wild state, when they were the size of a small fingernail, in Afghanistan and the Himalayan foothills. By the early seventh millennium BCE, predating ceramics, they were grown in Thailand in a form improved over naturally occurring ones. They were buried with the dead in ancient Egypt. Cultivated, large-seeded broad beans did not appear in the Aegean, Iberia, or transalpine Europe until the second millennium BCE. The Iliad (8th century BCE) mentions in passing beans and chickpeas cast on the threshing floor. Beans have been an important source of protein in both the Old and New Worlds for centuries, and they remain so today. The earliest known domesticated beans in the Americas were discovered in Peru's Guitarrero Cave and date to the second millennium BCE. Genetic studies of the common bean Phaseolus, however, show that it originated in Mesoamerica and spread south together with companion crops such as maize and squash.
The majority of the widely eaten fresh or dried forms, those of the genus Phaseolus, originated in the Americas and were first seen by a European when Christopher Columbus found them growing in fields while exploring what may have been the Bahamas. Five kinds of Phaseolus beans were domesticated by pre-Columbian cultures: common beans (P. vulgaris), grown from Chile to the northern part of what is now the United States; lima and sieva beans (P. lunatus); and the less widely distributed teparies (P. acutifolius), scarlet runner beans (P. coccineus), and polyanthus beans (P. polyanthus). One of the best-known pre-Columbian uses of beans is the "Three Sisters" method of companion planting, which was used by pre-Columbian peoples as far north as the Atlantic coast. Many tribes in the New World grew beans alongside maize (corn) and squash. Instead of being planted in rows as in European agriculture, corn was planted in a checkerboard/hex pattern across a field, in separate patches of one to six stalks each. Beans were planted around the base of the rising stalks and vined their way up as the stalks grew.
At the time, all American beans were vining plants; "bush beans" were developed only much later. The cornstalks served as a trellis for the beans, and the beans provided much-needed nitrogen for the corn. Squash was planted in the gaps between the corn patches. Its coarse, hairy vines and broad, stiff leaves are difficult for animals such as deer and raccoons to walk through and for crows to land on, deterring many animals from attacking the corn and beans, while the corn and beans provided some sun protection, shaded the soil, and reduced evaporation. Both Old World (fava) and New World (kidney, black, cranberry, pinto, navy/haricot) large bean varieties are used as dry beans. Beans are heliotropic plants: during the day their leaves tilt to face the sun, and at night they fold into a "sleep" position.
OBJECTIVES OF THE STUDY
The primary goal of this study is to examine the output of bean research as reflected in publications indexed in the CAB Direct Online database from 2011 to 2013. In detail, the study pursues the following objectives:
1) To examine the overall publication output on bean research covered by the CAB Direct Online database for the period 2011-2013.
2) To study the top 10 journals publishing the most research papers on beans.
3) To identify the top 10 authors in the field of bean research.
4) To identify the highest-ranked countries in bean research.
5) To identify the language distribution of bean research publications.
METHODOLOGY
The data for the three years (2011-2013) were retrieved from the CAB Direct Online database by searching for the keyword "beans" in the title field. The search returned 36 records in total.
RESULTS AND ANALYSIS
The data on bean research retrieved from the CAB Direct Online database were analyzed and presented using descriptive statistical methods, including tables.
GROWTH RATE AND DOUBLING TIME IN BEANS RESEARCH OUTPUT
Analysis of the growth rate of bean research output is important for assessing research and development in the field. Table 1 presents the relative growth rate and doubling time of bean research output for these years (2011-2013). The relative growth rate of publications fluctuated over the period, while the doubling time [Dt(c)] increased from 2.31 to 2.77, with a mean doubling time of 2.49 across the three years.
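The relative growth rate and doubling time reported in Table 1 are presumably the standard scientometric quantities; the paper does not reproduce its raw cumulative counts, so the sketch below uses illustrative placeholder numbers only.

```python
# Standard scientometric formulas assumed to underlie Table 1.
import math

def relative_growth_rate(w1: float, w2: float, t1: float, t2: float) -> float:
    """R = (ln W2 - ln W1) / (T2 - T1), per year."""
    return (math.log(w2) - math.log(w1)) / (t2 - t1)

def doubling_time(rgr: float) -> float:
    """Dt = ln(2) / R."""
    return math.log(2) / rgr

cumulative = {2011: 21, 2012: 30, 2013: 36}   # illustrative cumulative counts
r = relative_growth_rate(cumulative[2011], cumulative[2012], 2011, 2012)
print(f"RGR = {r:.2f} per year, Dt = {doubling_time(r):.2f} years")
```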
PREFERRED KINDS OF PUBLICATIONS
According to the findings, the most common form of publication covered by the CAB Direct Online database for bean research is the book chapter, with 34 papers (94.4%), followed by the book category, with one document (2.77%). Table 2 displays the top two categories of publications.
MOST POPULAR JOURNALS
The most common source among scientists interested in bean research was Biology and breeding of food legumes, with 13 papers (36.1%), followed by Nutrient deficiencies of field crops: guide to diagnosis and management, with 5 papers (13.8%). Combating micronutrient deficiencies: food-based approaches; Crop plant anatomy; Natural products in plant pest management; and African vegetable production and marketing: socioeconomic research each accounted for 5.55% of the papers. Among the top five most prominent sources of bean research were Agrobiodiversity conservation: securing the diversity of crop wild relatives and landraces; Arthropod pests of horticultural crops in tropical Asia; Banana systems in Sub-Saharan Africa's humid highlands: increasing resilience and productivity; and Production, physiology, and genetics of biofuel crops (2.77% each). The top ten most influential sources, together with the number of papers published in each, are listed in Table 3.
RANK-WISE COUNTRIES DISTRIBUTION OF PUBLICATIONS
According to the analysis, South Africa is the leading country in bean research, with three papers accounting for nearly 13.88% of the bean research output, followed by Developing Countries, Kenya, Asia, Australia, Brazil, Canada, Congo Democratic Republic, Nicaragua, and Pernambuco (as categorized in the database, out of ten entries). Table 5 summarizes the top ten countries/regions by number of publications.
PREDOMINANT LANGUAGES
As shown in Table 6, all of the bean research papers were written in English, with 36 papers (100%).
CONCLUSION
According to this scientometric analysis of bean research in the CAB Direct Online database, South Africa is the leading source of scientific output, with three publications accounting for around 13.88% of the total production from the top ten countries. Another notable finding is that Pratap, A. is the most prolific author in bean studies, with 15 papers (41.6%), followed by Kumar, J. with 14 papers (38.8%) and Kumar, P. with 13 papers (36.1%). Sharma, M.K. contributed 4 papers (11.1%), while Maiti, R., Rajkumar, D., Ramaswamy, A., and Satya, P. contributed 2 papers (5.55%) each. The most common source among scientists interested in bean research was Biology and breeding of food legumes, with 13 papers (36.1%), followed by Nutrient deficiencies of field crops: guide to diagnosis and management, with 5 papers (13.8%). Combating micronutrient deficiencies: food-based approaches; Crop plant anatomy; Natural products in plant pest management; and African vegetable production and marketing: socioeconomic research each accounted for 5.55% of the papers. Among the top five most prominent sources of bean research were Agrobiodiversity conservation: securing the diversity of crop wild relatives and landraces; Arthropod pests of horticultural crops in tropical Asia; Banana systems in Sub-Saharan Africa's humid highlands: increasing resilience and productivity; and Production, physiology, and genetics of biofuel crops (2.77% each). | 2021-10-25T20:55:43.645Z | 2021-08-31T00:00:00.000 | {
"year": 2021,
"sha1": "8d52d2f01f03192da68fca364eba7b1bf613d256",
"oa_license": "CCBY",
"oa_url": "https://www.granthaalayahpublication.org/journals/index.php/granthaalayah/article/download/4135/4241",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "fff2ac25f8791ab0ab02a25a2bf6bdf147fa3390",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
10049257 | pes2o/s2orc | v3-fos-license | Treatment of obstructive sleep apnea with mandibular advancement appliance over prostheses: A case report
Treatment with a mandibular advancement device (MAD) is recommended for mild obstructive sleep apnea (OSA) and primary snoring, and as a secondary option to Continuous Positive Airway Pressure, because it offers better adherence and acceptance. However, edentulous patients lack the dental support needed to retain a MAD. This study aimed to present a possible OSA treatment with a MAD constructed over a complete upper denture and a partial lower denture. The patient, a 38-year-old female with mild OSA, was treated with a MAD. Respiratory parameters, including the apnea-hypopnea index, arousal index, and oxyhemoglobin saturation, improved after treatment.
Introduction
Obstructive Sleep Apnea (OSA) is a respiratory sleep disorder characterized by partial and/or complete obstruction of the upper airway [1]. OSA is highly prevalent, affecting up to 32.8% of the adult population in the city of São Paulo [2]. The most common treatments for OSA include Continuous Positive Airway Pressure (CPAP) and oral appliances (OA). CPAP is used as the first option in severe OSA because it normalizes respiratory parameters [3]. However, oral appliances are recommended for milder cases, such as primary snoring and mild OSA, as a secondary option to CPAP, or as an alternative treatment when CPAP fails [4].
OAs can be divided into two groups: mandibular advancement devices (MADs), which are attached to the teeth and move the mandible anteriorly, and tongue retaining devices, which use suction to position the tongue anteriorly inside a bulb [5]. MADs have higher success and compliance rates than tongue retaining devices [5] and have more scientific support for their use [6].
MADs are anchored to the teeth; therefore, their efficacy is directly related to the retention of the device on the dentition [7]. Contraindications for treatment with these appliances include oral conditions with fewer than 10 teeth per arch [4], because such conditions lead to lower retention and, consequently, treatment failure.
Importantly, Brazil has a large number of people with tooth loss and edentulism [8,9], and this population is contraindicated for MAD use due to total and/or partial tooth loss. This case report presents an alternative treatment using a MAD constructed on a complete upper denture and a partial lower prosthesis in a patient with mild OSA.
Case report
Patient NPDSD, a 38-year-old female, was referred to the AFIP Dental Sleep Clinic with a diagnosis of mild OSA (AHI = 12.5).
During anamnesis, the patient reported gagging and interrupted breathing during sleep. She complained of agitated sleep, tiredness after waking, and excessive daytime sleepiness; her Epworth Sleepiness Scale (ESS) score was 8. Her main complaint was loud snoring. Full-night polysomnography was performed using a digital system (EMBLA® S7000, Embla Systems, Inc., Broomfield, CO, USA) according to the American Academy of Sleep Medicine manual (2007) [10]. Hypopnea events were scored according to the alternative rules [10].
On clinical examination, the patient had a body mass index (BMI) of 29.8 kg/m² and a neck circumference of 38 cm. The oral examination showed a complete upper denture and a partial lower prosthesis fabricated eight months earlier. The seven remaining teeth were in good periodontal condition and free of caries. The alveolar ridge had good bone support and a healthy appearance. The anchorage and stability of the prostheses were evaluated and found satisfactory for anchoring and stabilizing a MAD.
The patient was informed that the treatment would be a therapeutic trial and that there was a possibility of failure. Because the amount of force that the device would apply to the prostheses was unknown, we could not be sure whether the MAD would displace them. Fig. 1 shows the oral condition of the patient with and without the prostheses in place.
We planned to fabricate a PM Positioner appliance, a custom-made titratable MAD that does not allow lateral movement, which could produce less rocking force on the prostheses and thus better stability.
Her prior dental history was requested, including photographs, panoramic radiographs, and lateral teleradiography. Next, impressions were taken with the prostheses in the mouth, and the protrusion was registered using a George Gauge. The maximum protrusion (from maximum posterior contact to the maximum tolerable protrusion) was 7 mm. The appliance was set to 50% of this range at installation and advanced by 1 mm per week until the snoring complaint resolved. The advancement finished at 100% of the maximum tolerable protrusion, i.e., 7 mm. Polysomnography was repeated after 5 months with the appliance at the maximum tolerable protrusion. Fig. 2 shows the patient with the MAD installed.
The treatment resulted in improvements in subjective sleepiness (ESS score of 4), reported fatigue, and subjective and objective sleep quality. Regarding the polysomnography parameters, the treatment decreased sleep latency, REM latency, the arousal index, and the AHI, and increased the percentages of REM and N3 sleep (Table 1).
Discussion
The present study reports a successful treatment using a MAD constructed on a complete upper denture and a partial lower prosthesis. This report is important because it broadens the potential indications for MAD appliances. Treatment of edentulous patients with mild OSA is normally limited to CPAP or tongue retainers, which have low compliance [5,11]. The prevalence of OSA increases with age [2], and the prevalence of edentulism is also high in the elderly population [8]. Using a MAD on a prosthesis is an alternative, as it is a low-cost treatment and does not require a power source [4]. Even though a tongue retainer does not require dental support and is normally recommended for edentulous individuals, it has low compliance and produces several side effects, including irritation of the soft tissues and excess drooling [5]. In a randomized crossover study by Deane et al. [5], 91% of the patients preferred a MAD over a tongue retaining device and were more satisfied with MAD use. In that study, the tongue retainer had a compliance rate of 27.3% for use of more than six hours per night, whereas the MAD had a compliance rate of 81.8% [5].
There are few studies in the literature that address the treatment of edentulous individuals using a MAD, and those that exist are limited to case reports. The majority of these studies support the device on the alveolar ridge [12-14] or on a tongue retainer flange [15]. Others build the device over a complete upper prosthesis [16] or over complete upper and lower prostheses [17]. Unlike previous studies, the present treatment was performed on a complete upper denture and a partial lower prosthesis.
The ideal MAD treatment for patients with few or no teeth would be to construct the device on previously placed implants, thus eliminating side effects related to occlusal changes and discomfort on the alveolar ridge. However, for the majority of the population this treatment is impractical due to its high cost and because it requires a surgical procedure.
A MAD installed on a prosthesis requires frequent follow-up appointments to check for resorption of the alveolar ridge and consequent loss of prosthesis stability. Periodic evaluation is necessary, because the force exerted by the MAD may accelerate bone resorption, causing the OSA treatment to fail. When few teeth remain, dental side effects should be monitored closely, because the applied force may increase them. As few articles address MAD use over prostheses, and none discuss long-term results, further research is necessary.
Conclusion
Treatment with a mandibular advancement device over prostheses may be possible. However, a broad oral assessment is necessary, including the mucosa, teeth, and stability of the prostheses. Monitoring of possible side effects is necessary for long-term treatment. | 2016-05-04T20:20:58.661Z | 2015-04-01T00:00:00.000 | {
"year": 2015,
"sha1": "13060686408a92b715431d9bd8edba5f9748b23e",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.slsci.2015.05.002",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "13060686408a92b715431d9bd8edba5f9748b23e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257665543 | pes2o/s2orc | v3-fos-license | Importance of Dietary Uptake for in Situ Bioaccumulation of Systemic Fungicides Using Gammarus pulex as a Model Organism
Bioaccumulation of organic contaminants from contaminated food sources might pose an underestimated risk toward shredding invertebrates. This assumption is substantiated by monitoring studies observing discrepancies between predicted tissue concentrations determined from laboratory-based experiments and measured concentrations of systemic pesticides in gammarids. To elucidate the role of dietary uptake in bioaccumulation, gammarids were exposed to leaf material from trees treated with a systemic fungicide mixture (azoxystrobin, cyprodinil, fluopyram, and tebuconazole), simulating leaves entering surface waters in autumn. Leaf concentrations, spatial distribution, and leaching behavior of the fungicides were characterized using liquid chromatography coupled with high-resolution tandem mass spectrometry (LC-HRMS/MS) and matrix-assisted laser desorption ionization mass spectrometric imaging. The contributions of leached fungicides and of fungicides taken up through feeding were assessed by comparing caged (no access) and uncaged (access to leaves) gammarids. The fungicide dynamics in the test system were analyzed using LC-HRMS/MS and toxicokinetic modeling. In addition, a summer scenario was simulated in which water was the initial source of contamination and leaves were contaminated by sorption. The uptake, translocation, and biotransformation of systemic fungicides by trees were compound-dependent. Internal fungicide concentrations of gammarids with access to leaves were much higher than in caged gammarids in the autumn scenario, but the difference was minimal in the summer scenario. In food choice and dissection experiments, gammarids did not avoid contaminated leaves and efficiently assimilated contaminants from leaves, indicating the relevance of this exposure pathway in the field. The present study demonstrates the potential impact of dietary uptake on in situ bioaccumulation for shredders in autumn, outside the main application period. The toxicokinetic parameters obtained facilitate modeling of environmental exposure scenarios. The uncovered significance of dietary uptake for detritivores warrants further consideration from scientific as well as regulatory perspectives. Environ Toxicol Chem 2023;42:1993-2006. © 2023 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals LLC on behalf of SETAC.
INTRODUCTION
Monitoring studies have observed discrepancies between the body burdens of aquatic invertebrates (i.e., gammarids) predicted from water concentrations combined with toxicokinetic laboratory experiments and the concentrations measured in the field (Lauper et al., 2021; Munz et al., 2018). This observation was predominantly made for systemic pesticides, including fungicides. Such pesticides are mobile in the environment because of their relatively high polarity (i.e., being water-soluble), facilitating their distribution to different matrices. In addition, their systemic properties allow them to be taken up by the plant root system and translocated into plant compartments aboveground (Erwin, 1973). Thus, off-field plants at the field margins may also take up retained systemic pesticides, as simulated by Englert, Bakanov, et al. (2017). Plant material, such as leaf litter, can enter nearby water bodies after abscission by lateral transport (wind) or vertical fall in autumn (Abelho, 2001). This pathway can cause a time-delayed aqueous exposure of aquatic biota to systemic pesticides leaching from the leaves, as well as a secondary poisoning scenario through consumption of contaminated leaf litter (Englert, Bakanov, et al., 2017; Englert, Zubrod, Pietz, et al., 2017; Kreutzweiser et al., 2007). The implications of contaminated leaf litter consumption are currently understudied, because most monitoring studies of pesticides focus on the pesticide application season via water sample collection and analysis (Chow et al., 2020).
Current research gaps and challenges
While research on the potential direct and indirect toxic effects of systemic pesticides on decomposer-detritivore systems has received more attention in the last decade (Zubrod et al., 2019), many knowledge gaps remain. For example, (1) a large proportion of studies rely on dietary exposure scenarios based on sorption (partitioning) of the tested pesticides to leaf material. In the following, this is called the summer scenario, because water concentrations and the resulting sorption processes are expected to be highest around the time of pesticide application (summer season). Because of the high associated effort, leaf material from systemically exposed plants has been tested in only a few studies (Englert, Zubrod, Link, et al., 2017; Englert, Zubrod, Pietz, et al., 2017; Kreutzweiser et al., 2007, 2008, 2009; Newton et al., 2018), of which only Newton et al. (2018) tested systemic fungicides. This scenario describes contamination of aquatic systems originating from previously contaminated plant material and is thus called the autumn scenario.
(2) In addition to this research gap, the vast majority of studies (regardless of systemic or sorption-driven exposure) used leaves that were preserved by freezing or drying after pesticide application. Such procedures may be necessary for sample conservation but damage the leaf structure. As a consequence, the leaching kinetics of both contaminants and leaf constituents increase compared with natural scenarios (as summarized by Consolandi et al., 2021). To the best of our knowledge, studies with undamaged, systemically exposed leaf litter were only performed by Kreutzweiser et al. (2007, 2008, 2009) and focused on neonicotinoid insecticides. (3) Lastly, the existing literature rarely includes determination of internal concentrations (i.e., bioaccumulation potential) of pesticides in exposed shredders (Englert, Zubrod, Pietz, et al., 2017). Therefore, implementing toxicokinetic approaches could help improve the understanding of the mechanisms behind bioaccumulation processes and adverse effects.
Research scope
To address the described research gaps, the present study investigated the in situ relevance of dietary uptake for bioaccumulation of systemic fungicides in the amphipod species Gammarus pulex (Linnaeus, 1758) via the evaluation of behavioral, physiological, and chemical endpoints. For this purpose, dietary exposure experiments were designed to simulate an autumn exposure scenario by using pristine (here defined as undamaged by drying or freezing) leaf material from horse chestnut (Aesculus hippocastanum) trees treated with a systemic fungicide mixture (azoxystrobin, cyprodinil, fluopyram, and tebuconazole). In addition, for comparative purposes, a summer exposure scenario was created, with the main contamination source being the aqueous phase, causing fungicides to be sorbed to uncontaminated leaves.
We hypothesized that systemic fungicides could be effectively assimilated into the tissue of G. pulex through dietary exposure. Furthermore, we expected that the dietary uptake of systemic fungicides from contaminated leaves might be of more importance in autumn than in a summer scenario with aqueous exposure during the application period. Consequently, dietary uptake of systemic fungicides from contaminated leaf litter by aquatic shredders, such as gammarids, could help explain the observed discrepancies between observed and predicted body burdens in the field, thus supporting bioaccumulation assessments.
Creation of the test material
To obtain leaf material from trees exposed to systemic fungicides, specimens of A. hippocastanum L. (horse chestnut) were reared under field conditions (iES Landau, Siebeldingen, Germany) following Newton et al. (2018). Chestnut trees are common in European riparian vegetation (Ravazzi & Caudullo, 2016), and their leaves are commonly used to feed gammarids (Consolandi et al., 2021). Trees were treated twice (mid-May and end of June), either at the recommended field application rates with a mixture of four systemic fungicides (Table 1) or with tap water as a control, thereby simulating a direct overspray of off-crop areas such as vegetated buffer zones as a worst-case scenario. The fungicide selection was based on environmental relevance (i.e., high concentrations measured in water and gammarid tissue in monitoring studies; Lauper et al., 2021) and potential co-occurrence (i.e., co-application in the same culture, e.g., cereals). Further details on the tree treatment are provided in Supporting Information, A1.
Leaves were then collected from senescent trees in autumn (October 2020), stored in airtight bags in the dark at 4°C, and used within the following month. Leaves used for mass spectrometry (MS) imaging experiments were snap-frozen as a whole in liquid nitrogen within 12 h after collection and stored at −80°C until further analysis. For use in gammarid biotests and sorption and leaching experiments, as well as chemical analysis, leaf discs were cut using a cork borer of 20 mm in diameter.
Characterization of the test material
In addition to the determination of total fungicide concentrations (described below), the spatial distribution of fungicides within the leaf tissue was assessed to evaluate their bioavailability (presence of fungicides in the lamina tissue consumed by shredders). MS-imaging facilitated by matrix-assisted laser desorption ionization (MALDI) was performed on cross sections obtained from control leaves, leaves from trees exposed to the systemic fungicide mixture, and leaves contaminated through sorption from a spiked test medium (Supporting Information, A2). Further details on the analyzed leaves are provided in Supporting Information, A3.
The preparation of leaf cryosections was based on Lorensen et al. (2023). Sections were created by cutting 16-µm-thick slices of embedded (2.5% carboxymethyl-cellulose) leaves on a cryomicrotome (−16°C; Leica CM3050S; Leica Microsystems). The sectioning was assisted by adhesive tape (Kawamoto Cryotape 2C[9], SECTION-LAB; Kawamoto & Kawamoto, 2021) to improve sample integrity and reproducibility. The sections were then attached to a standard microscope slide using a double-sided adhesive carbon tape (SPI Supplies) and dried in a vacuum desiccator prior to matrix application. A matrix solution of 7 mg mL −1 α-cyano-4-hydroxyconnamic acid (CHCA) in 50:50 acetonitrile:H 2 O (v/v) containing 1% trifluoroacetic acid was applied using an in-house-built spray apparatus (University of Copenhagen). A quantity of 300 μL of matrix solution was sprayed at a flow rate of 30 μL min −1 (nebulizer gas pressure was 2 bar) from a distance of 100 mm while the sample was rotating at 600 rpm. Sample integrity and quality (i.e., homogenous thickness) as well as matrix crystals were evaluated under a light microscope at ×400 magnification using reflected light.
The MALDI-MS-imaging experiments were performed under ambient conditions using an AP-SMALDI5 ion source (TransMIT) coupled with a QExactive Orbitrap mass spectrometer (Thermo Fisher Scientific). Scans were performed in positive ion mode with a resolving power of R = 140,000 at a mass-to-charge ratio (m/z) of 200, a mass range of m/z 140-980, and a scan speed of 1 pixel s^-1 in two-dimensional line mode. Matrix peaks at m/z 190.04987 (CHCA [M + H]+) and m/z 401.07440 (CHCA [2M + Na]+) were used as lock masses for internal mass calibration, ensuring a mass accuracy of 2 ppm or better. The x-y raster width was set to 30 μm. At least two replicates were analyzed for each of the three treatments (control, systemic uptake, and sorption). MS-imaging data analysis was performed as described by Lorensen et al. (2023).
[Table 1 notes (cf. Wei et al., 2016): Biotransformation pathways are illustrated in Figure 2. MW = molecular weight; log D_OW = octanol-water partitioning coefficient (log K_OW) of the neutral species at pH 7, obtained from PubChem; AI = active ingredient; FR = field rate recommended by the product companies.]
Leaching characteristics at different leaf conditions
Leaching experiments using leaf discs of different conditions were conducted to understand the influence of leaf condition and integrity on leaching kinetics. Leaching behavior was studied for (I) pristine leaf discs from chestnut trees exposed to the systemic fungicide mixture, (II) pristine leaf discs subjected to gammarid feeding, (III) leaf discs from treated chestnut trees that were frozen before being deployed for leaching, and (IV) pristine control leaves contaminated by sorption from water. The leaching potential was evaluated by sampling leaf discs at different time points, analyzing fungicide residues, and calculating leaching half-lives using a one-phase decay model (GraphPad Prism, Ver 9). Leaching from previously frozen leaves was estimated by comparing the water concentrations in the test vessels of the leaching experiments with pristine and frozen leaves. Further descriptions of these experiments are provided in Supporting Information, A12-A14.
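A minimal sketch of such a one-phase decay fit is shown below. The study itself used GraphPad Prism; the data points in the sketch are synthetic placeholders, not measured residues.

```python
# One-phase decay fit for a leaching half-life: C(t) = C0 * exp(-k*t),
# with t_1/2 = ln(2)/k. Data below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def one_phase_decay(t, c0, k):
    return c0 * np.exp(-k * t)

t_days = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
residues = np.array([100.0, 72.0, 55.0, 29.0, 9.0])   # synthetic leaf residues

(c0, k), _ = curve_fit(one_phase_decay, t_days, residues, p0=(residues[0], 1.0))
print(f"t1/2 = {np.log(2) / k:.2f} days")
```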
Test specimens
Specimens of G. pulex were collected during autumn 2020 (water temperature of 9-11°C) from a pristine stream in a natural conservation area close to Zürich, Switzerland (Mönchaltdorfer Aa, 47.2749°N, 8.7892°E). This population of G. pulex belongs to a clade distributed north of the Alps in eastern France, Switzerland, and Regensburg in Germany (National Center for Biotechnology Information sequences MF458710 and JF965940), as specified by Raths et al. (2023). Specimens were acclimated in the lab by gradually increasing the temperature to 16°C and replacing stream water with artificial pond water (APW; Naylor et al., 1989). Gammarids were passively separated into different size classes using a stack of sieves, exploiting their negative phototactic response (Franke, 1977). Only male gammarids (separated based on the presence of large secondary gnathopods) with a size of 12-16 mm were used, to reduce the variance in feeding rates caused by size and behavior (i.e., mate guarding). Gammarids with visible parasitism (i.e., acanthocephalans; Fielding et al., 2003) were excluded. A gammarid lipid content of 0.8 ± 0.1% (n = 6) was determined gravimetrically (Raths et al., 2023) on a wet weight basis (see Supporting Information, A4). All experiments were performed at 16 ± 1°C in the dark, preventing alterations of feeding behavior due to light responses. Gammarids were fed ad libitum in all experiments. Leaf material used in biotests was soaked in the test vessels for 12 h before gammarids were inserted. Mortality during the experiments was monitored and did not exceed 15% in any experiment.
Exposure with leaves contaminated from systemic uptake (autumn scenario)
The main experiment simulated an exposure scenario in which the systemic fungicides enter the aquatic system via leaves from previously contaminated trees in the riparian area. This exposure pathway may be especially relevant outside the main pesticide application period, in autumn; thus, it is referred to as the autumn scenario (Figure 1A). In this case, the aqueous phase was contaminated only through leaching from the leaves.
To model bioconcentration (aqueous uptake) and biomagnification (dietary uptake) kinetics, gammarids were exposed in a glass tank (Figure 1A) filled with 6 L of APW and 1.7 g (wet wt) of contaminated leaf discs (n = 40). First, the test system was left for 12 h for the leaves to soak before gammarids (43/L) were introduced. One group (caged) of gammarids was inserted into cages built from sawed-off 50-mL Falcon tubes enclosed on both sides with a nylon net (mesh size = 1 mm) and thus had no access to the leaf discs. This group was exposed only via leaching of the systemic fungicides from the leaf discs into the medium (bioconcentration). The second group of gammarids was allowed to move freely (uncaged) within the test system, which included access to the contaminated leaf material. The uncaged group was therefore exposed from both the medium (bioconcentration) and the diet (biomagnification). Gammarids were exposed to the contaminated leaves and medium for 1 day, which is generally sufficient to reach equilibrium conditions for accumulated polar organic contaminants such as the tested fungicides (Raths et al., 2023). Afterward, they were transferred for another day into an uncontaminated basin, with control leaf discs fed to both groups. During the experiments, medium and gammarids were sampled at regular intervals to allow for toxicokinetic modeling. Each gammarid sample consisted of duplicates of four gammarids for every sampling point. Leaf samples were taken at the beginning and end of the exposure phase in triplicates of four leaf discs each. A mass loss control in a separate basin was used to correct for non-feeding-related weight loss of the leaf discs. The feeding rate was determined according to Equation 1, using the total gammarid weight and the corresponding exposure time until euthanasia.
[FIGURE 1: Illustration of the experimental setup of the autumn and summer scenarios (A) and the food choice assay (B). Referring to test guideline 305 (Organisation for Economic Co-operation and Development, 2012), we define bioconcentration as accumulation of fungicides following uptake from the water and biomagnification as accumulation following dietary uptake.]
Aqueous exposure of leaves and gammarids (summer scenario)
This subsequent experiment simulated an exposure scenario in which systemic fungicides enter the water body shortly after application (i.e., through runoff or spray drift) and leaves are contaminated only by sorption processes. Because most pesticide applications and subsequent contaminations occur in late spring or summer, this is referred to as the summer scenario (Figure 1A).
This experiment was conducted analogously to the autumn scenario, with the difference that pre-exposed control leaf discs (5 days at 1 µg L^-1 of each parent fungicide, corresponding to 2.5 nM azoxystrobin, 4.4 nM cyprodinil, 2.5 nM fluopyram, and 3.2 nM tebuconazole) were used as food. At the start of the 1-day exposure phase, leaf discs and gammarids (one caged and one uncaged group) were placed into a basin contaminated with the parent fungicide mixture. Gammarid, leaf, and medium samples were taken only at the beginning and after 1 day of exposure.
Food choice assay
This experiment was designed to investigate whether gammarids would feed on contaminated leaves if alternative food sources were available. In this way, it was possible to evaluate the in situ relevance of the present exposure pathways. To investigate the feeding preferences and selectivity of gammarids, a food choice assay was conducted with modifications from a previously described setup (Zubrod, Englert, Wolfram, et al., 2015).
Preweighed pristine leaf discs from a chestnut tree exposed to systemic fungicides and control leaf discs (2 cm diameter each) were mounted in a 9-cm-diameter crystallization dish (feeding arena; Figure 1B) containing 80 mL APW. Leaf discs were left soaking for 12 h before the medium was exchanged and gammarids were inserted into the test system. Gammarids (n = 49) were starved for 3 days in the dark, before being introduced into an individual arena. The starvation phase, chosen from previous studies (Consolandi et al., 2021), allowed for gut clearance and ensured feeding activity. Feeding arenas were set up in a randomized orientation, and specimens were allowed to feed for 24 h in the dark (16°C). At the end of the experiment, gammarids were dry-blotted, and their mass was determined as wet weight. Leaf discs were dried at 60°C overnight before the dry weight was determined. A mass loss control (n = 8) in arenas without gammarids was used to correct for non-feeding-related mass loss of the leaf discs by leaching or degradation. Eight replicates with no detectable leaf consumption were excluded from further analysis.
The consumed leaf mass was calculated by subtracting the dry weight of the leaf disc at the end of the experiment, L_end (kg_dw), from the initial leaf disc wet weight, L_start (kg_ww), corrected by the mass ratio of the mass loss control, c (kg_dw kg_ww^-1). The feeding rate, k_feed (kg_dw kg_ww^-1 d^-1), was determined using the consumed leaf amount; the mass of the gammarid, G (kg_ww); and the feeding duration, t (days):

k_feed = (L_start · c − L_end) / (G · t)    (1)

The proportions of feeding on the two leaf discs were compared using the Wilcoxon signed-rank test and a significance level of p = 0.05.
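A direct transcription of Equation 1 as reconstructed above, with placeholder input values only:

```python
# Equation 1 as reconstructed above; all input values are placeholders.
# L_start: initial wet weight; L_end: final dry weight; c: dry/wet mass ratio
# of the mass loss control; G: gammarid wet weight; t: feeding duration.
def feeding_rate(L_start_ww: float, L_end_dw: float, c: float,
                 G_ww: float, t_days: float) -> float:
    """k_feed = (L_start*c - L_end) / (G*t), in kg_dw kg_ww^-1 d^-1."""
    return (L_start_ww * c - L_end_dw) / (G_ww * t_days)

# e.g., a 20 mg (ww) leaf disc, 4 mg (dw) remaining, c = 0.35, 50 mg gammarid:
print(feeding_rate(2.0e-5, 4.0e-6, 0.35, 5.0e-5, 1.0))  # ~0.06 kg_dw/kg_ww/d
```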
Dissecting
To evaluate the assimilation efficiency of the systemic pesticides, an exposure basin similar to that of the autumn scenario (Figure 1) was set up. However, gammarids were sampled only after 1 day of exposure and were dissected into three compartments (midgut, hindgut, and carcass; see Figure 6A). Concentrations in the three compartments were determined separately.
Determination of fungicide concentrations
Liquid extraction was performed on both leaf and gammarid tissue, after adding 300 mg of 1-mm-diameter zirconia/silica beads (BioSpec Products), 500 µL of methanol, and 100 µL of isotope-labeled internal standard (250 µg L −1 deuterated reference standards; Supporting Information, A6). Samples were homogenized using a FastPrep bead beater (two cycles of 15 s at 6 m s −1 ; MP Biomedicals) and centrifuged (6 min, 10 000 g, 4°C). The supernatant was collected using syringes and filtered through 0.45-µm regenerated cellulose filters. Subsequently, syringes and filters were washed with another 400 µL of methanol and combined with the supernatant.
Liquid extraction from leaf disc samples was performed as described above with slight deviation of the homogenization method. Because leaf disc homogenization required dry samples, leaf discs were sampled into preweighed centrifuge vials (1.5 mL) already containing the 300 mg of silica beads, weighed (fresh wt, only for fresh, nonsoaked leaves), and freeze-dried. The dry weight was then determined by subtracting the preweight of the silica bead-containing vials. Leaf material was homogenized to dry powder using a cooled tissue lyser (2 × 10 s, 6 m s −1 , 4°C; Bead Ruptor Elite, OMNI International). Medium samples were taken as 500 µL of medium combined with 400 µL of methanol and 100 µL of isotope-labeled internal standard. All samples were stored at −20°C until further analysis.
Chemical analysis was performed using an automated online solid-phase extraction system coupled with reversed-phase liquid chromatography and a high-resolution tandem mass spectrometer (LC-HRMS/MS; Q Exactive; Thermo Fisher Scientific). Ionization was achieved using an electrospray interface. Full-scan acquisition was performed at a resolution of 70,000 (at m/z 200) in polarity-switching mode, followed by data-dependent MS/MS scans (five scans in positive mode and two in negative mode) at a resolution of 17,500 (at m/z 200) with an isolation window of 1 m/z. Further details on instrumentation, quality control parameters, and quantification are provided in Supporting Information, A6. A suspect screening for biotransformation products (BTPs) was performed based on BTPs reported in the literature for plants or aquatic invertebrates. The suspect list and corresponding literature are provided in Supporting Information, A7.
Toxicokinetic modeling
Toxicokinetic parameters for both bioconcentration and biomagnification were determined by applying two one-compartment first-order toxicokinetic models to the data of the autumn scenario. The models were implemented in the Matlab (R2019b)-based scripts of the Acute Calanus package, Ver 1.1 (Jager et al., 2017), of the Build Your Own Model platform. For bioconcentration, the tissue concentration, C_T (nmol kg_ww^-1), in the caged gammarids over time was described by the following ordinary differential equation:

dC_T/dt = k_u · C_W − k_e · C_T    (2)

In Equation 2, C_W is the medium concentration (nM); the uptake rate, k_u (L kg_ww^-1 d^-1), describes dermal and respiratory uptake; and the elimination rate, k_e (d^-1), integrates elimination of the parent compound by active and passive excretion as well as biotransformation. The biomagnification model uses the concentration in the leaves, C_L (nmol kg_dw^-1); the experimentally determined feeding rate, k_feed (kg_dw kg_ww^-1 d^-1); and the modeled assimilation factor, α. To account for the simultaneous bioconcentration, the average tissue concentrations of caged gammarids at a given time point were subtracted from the tissue concentrations of uncaged gammarids before these were used as model input:

dC_T/dt = α · k_feed · C_L − k_e · C_T    (3)

The kinetic bioconcentration and biomagnification factors (BCF_kin in L kg_ww^-1 and BMF_kin in kg_dw kg_ww^-1) were determined as the ratios of the respective uptake and elimination rates:

BCF_kin = k_u / k_e    (4)
BMF_kin = (α · k_feed) / k_e    (5)

During the uptake phase, continuous medium concentrations were estimated from measured concentrations using a linear fit. Continuous leaf concentrations were estimated using a one-phase decay model. Corresponding model parameters are provided in Supporting Information, A11. Concentrations of both compartments were set to zero during the elimination phase, which was confirmed by the chemical analysis. All model parameters were fitted simultaneously to the internal concentrations using the analytical solution (Ashauer & Jager, 2018). Best-fit parameters and 95% confidence intervals, obtained using profile likelihoods, were used for further data processing.
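For illustration, the following sketch integrates a combined form of Equations 2 and 3 numerically, so that time-varying C_W(t) and C_L(t) such as those estimated above (linear fit and one-phase decay) can be plugged in; all parameter values are placeholders, not the fitted values of the study.

```python
# Combined one-compartment first-order toxicokinetic model (Eqs. 2 and 3).
import numpy as np
from scipy.integrate import odeint

def tk_model(C_T, t, k_u, k_e, k_feed, alpha, C_W, C_L):
    # dC_T/dt = k_u*C_W(t) + alpha*k_feed*C_L(t) - k_e*C_T
    return k_u * C_W(t) + alpha * k_feed * C_L(t) - k_e * C_T

k_u, k_e = 10.0, 2.0          # L kg_ww^-1 d^-1 and d^-1 (placeholders)
k_feed, alpha = 0.3, 0.5      # kg_dw kg_ww^-1 d^-1 and unitless (placeholders)
C_W = lambda t: 2.0 if t < 1.0 else 0.0                        # nM
C_L = lambda t: 5000.0 * np.exp(-0.5 * t) if t < 1.0 else 0.0  # nmol kg_dw^-1

t = np.linspace(0.0, 2.0, 201)          # 1-day uptake, then 1-day elimination
C_T = odeint(tk_model, 0.0, t, args=(k_u, k_e, k_feed, alpha, C_W, C_L))

# Kinetic factors as in Equations 4 and 5 (reconstructed above):
print(f"BCF_kin = {k_u / k_e:.1f} L/kg_ww, "
      f"BMF_kin = {alpha * k_feed / k_e:.3f} kg_dw/kg_ww")
```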
Information on the determination of elimination half-lives, t_1/2, and the time to reach 95% of steady state, t_ss (equilibrium condition), is provided in Supporting Information, A8. An earlier study demonstrated the bioconcentration of the present fungicides to be independent of lipid content (Raths et al., 2023); thus, BCF_kin and BMF_kin were not lipid-normalized.
Data from the summer scenario were used to validate the previously determined toxicokinetic model parameters. Wet weight to dry weight conversion factors were obtained over the course of the experiments: 5.4 ± 0.3 (wet-to-dry ratio, n = 3) for gammarids and 2.8 ± 0.2 (wet-to-dry ratio, n = 16) for leaf discs; these factors can be used to convert the reported data.
Translocation and transformation of systemic pesticides in chestnut leaves
Structures of the parent fungicides and the corresponding BTPs quantified by LC-HRMS/MS are presented in Figure 2A. The residue concentrations in leaves from trees treated with systemic fungicides (Figure 2B) are presented in Figure 2C. The parent fungicide concentrations ranged over two orders of magnitude, from 260 ± 90 nmol kg^-1 (tebuconazole) and 870 ± 280 nmol kg^-1 (azoxystrobin) up to 22,700 ± 1,900 nmol kg^-1 (fluopyram), despite similar application rates of the four fungicides (380-2,000 nmol tree^-1; Table 1). Cyprodinil concentrations were below the limit of quantification (LOQ; 9 nmol kg^-1), but its main BTP, CGA 249287, was the compound with the second highest residue concentration (18,200 ± 3,700 nmol kg^-1). The BTPs of azoxystrobin (AZ_M390a) and fluopyram (2-trifluoromethyl benzamide) were detected at concentrations one and two orders of magnitude lower than their corresponding parent compounds, respectively. Many further BTPs with lower intensities were tentatively identified but not quantified (Supporting Information, A7). Fungicide residues in the control leaves were below the LOQ (Supporting Information, Table S9), except for fluopyram, which was found at concentrations slightly above the LOQ but three orders of magnitude lower than in the treatment. No fluopyram was detectable in gammarids fed with control leaves; thus, fluopyram contamination in the control was considered negligible.
The high differences in leaf fungicide concentrations could indicate differences in the translaminar properties (i.e., caused by different physicochemical properties) or different biotransformation capabilities. Biotransformation may have occurred in both soil and plant tissue. However, differences in soil-leaf transfer capabilities of the fungicides persisted even if soil degradation was considered by estimating soil concentrations using half-life times in soil (Supporting Information, A9). Thus, it appears likely that the observed differences in leaf concentrations were caused by toxicokinetics within the trees, rather than by soil degradation. The main BTPs were hydrolysis or dealkylation products and still contained the active moiety of the parent compound. Both transformations are common Phase I detoxification processes in plants (Bártíková et al., 2015). Further, strong biotransformation of the tested systemic fungicides in plants has been observed before (Gautam et al., 2018;Lv et al., 2017;Matadha et al., 2019;Robatscher et al., 2019;Sapp et al., 2004;Wei et al., 2016) and thus can be an important mechanism for detoxification and controlling leaf residues.
The MS-imaging of chestnut leaves exposed to systemic fungicides revealed a uniform distribution of the BTP CGA 249287 throughout the whole leaf cross section, similar to the membrane lipid phosphatidylcholine PC(32:0) which served as an orientation within the MS-image (Figure 3). However, fluopyram and tebuconazole were detected only in the laminar tissue and not in the vascular tissue of the veins. To an extent, both compounds were also affected by slight delocalization (detection outside of the sample area), potentially caused by leaching from leaf tissue into the embedding matrix. The other fungicides and BTPs could not be detected even when using leaves from trees exposed to 10 times the field rate. This was due to lower concentrations in the leaf tissue but also lower response factors of azoxystrobin, fluopyram, and trifluoromethyl (TFM)-benzamide compared with the other compounds. In the leaves exposed to highly contaminated water, all fungicides (parents and BTPs), except for TFMbenzamide, could be detected because of the much higher leaf concentrations (Supporting Information, Figures S3 and S4). In leaves contaminated by sorption, all compounds showed the same uniform distribution, similar to PC(32:0). Distinct distributions between lamina and vascular tissue of other xenobiotics in arboreal leaf cross sections have been observed before (Villette et al., 2019), but underlying mechanisms remain unexplained. The comparison of systemically and sorptionexposed leaf cross sections indicates that plant physiology, such as biotransformation, transport, and deposition mechanisms in the trees, is driving the spatial distribution.
Regarding gammarid exposure, MS-imaging of the leaf material confirmed that the incorporated fungicides were accessible. Because shredders are known to feed on the lamina tissue of leaf litter but avoid more highly lignified structures such as the vascular tissue of the veins (Arsuffi & Suberkropp, 1985), gammarids may even have been exposed to slightly higher local fluopyram and tebuconazole concentrations than estimated from whole-leaf extracts.
Impact of leaf conditions on leaching behavior
The determined leaching half-life times for leaf discs of different conditions are shown in Table 2. All half-life times were in the range of one to several days, indicating considerable fungicide losses within the time frame of gammarid exposure. Half-life times of systemic fungicides were highest in leaf discs from pristine leaves (I). They were much lower for leaf discs with reduced tissue integrity (II and III) or that were contaminated through sorption (IV). For instance, the leaching half-life of fluopyram decreased by approximately 70% through gammarid feeding or in leaf discs that had previously been frozen.
This observation may be explained by damage to the leaf structure from feeding activity, by consumption of more highly contaminated leaf compartments by gammarids, or by damage to the leaf structure caused by freezing and thawing (as discussed by Consolandi et al., 2021). The half-life of fluopyram in leaves contaminated through sorption of fungicides from the aqueous phase (IV) was less than half that of fluopyram incorporated into leaves by systemic uptake (I). The difference may be caused by incorporation of fungicides into leaf compartments such as the vacuole or the cell wall (Bártíková et al., 2015).
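To make the kinetics behind Table 2 concrete, the following minimal sketch (not the authors' code) fits a first-order leaching model C(t) = C0·exp(−k·t) to leaf-disc concentration data and derives the half-life as t1/2 = ln(2)/k. All data values, and the use of SciPy, are illustrative assumptions.

```python
# Minimal sketch: estimate a leaching half-life from leaf-disc time series.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c0, k):
    """First-order leaching model: concentration remaining at time t (days)."""
    return c0 * np.exp(-k * t)

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 7.0])          # days (hypothetical)
c = np.array([22700, 17000, 13500, 8200, 3100, 700])  # nmol kg-1 (hypothetical)

(c0_fit, k_fit), cov = curve_fit(first_order, t, c, p0=(c[0], 0.5))
half_life = np.log(2) / k_fit
print(f"k = {k_fit:.2f} d-1, t1/2 = {half_life:.2f} d")
```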
It was demonstrated that the leaf contamination pathway, as well as leaf condition, can influence fungicide dynamics in gammarid test systems and should be considered when designing and evaluating feeding experiments. The choice of leaf condition most likely shaped the outcome of the presented experiments. With the use of leaf discs from pristine leaves, we created a realistic worst-case scenario for dietary exposure.
Sorption and leaching parameters of 26 common organic contaminants (including systemic fungicides and insecticides as well as pharmaceuticals) are provided in Supporting Information, A14, and may help decision-making in future experimental designs.
Note (Table 2): t1/2 = half-life time; n.a. = not available, because the biotransformation product was not identified or tested at the stage of the corresponding experiments. The half-life times from condition III were extrapolated from the difference in medium concentration between pristine and frozen leaf discs and the pristine-leaf leaching model. The 95% confidence intervals are provided in parentheses. An expanded version of this table is presented in Supporting Information, A13.
Dietary exposure drives fungicide bioaccumulation in an autumn scenario
Medium concentrations in the autumn scenario (Figure 1) were driven by leaching from the leaf material (Figure 4A). The leaf concentrations of fluopyram and CGA 249287 decreased during the exposure phase, with half-life times of 1.3 and 0.8 days, respectively (Table 2). Consequently, the medium concentration of both compounds increased from 0.9 to 2.2 nM. The concentrations in the medium and leaf material during the elimination phase remained below the LOQ. Model fits for the determination of medium and leaf concentrations are provided in Supporting Information, A11.
The time-resolved tissue analysis in the autumn scenario revealed much higher internal concentrations in uncaged gammarids than in caged gammarids (Figure 4B). By the end of the uptake phase of the autumn scenario, the uncaged gammarids (bioconcentration and biomagnification) had three to nine times (fluopyram) and seven to eight times (CGA 249287) higher tissue concentrations than the caged gammarids (bioconcentration only). The increased tissue concentrations in uncaged gammarids may be caused by a combination of fungicides assimilated from the diet and contaminated leaf material in the intestine (Figure 6B). The toxicokinetics in caged gammarids were described very well by the applied bioconcentration model. However, the internal concentrations in gammarids with access to the leaf material showed a very high variance, with duplicates differing by up to a factor of 2.5. Consequently, many measured values fell outside the confidence intervals of the toxicokinetic model that included biomagnification. Because the provided leaf material had a rather low variance in leaf concentrations (SD ~10%; all leaf discs originated from the same tree), the high variance was most likely caused by variation in individual feeding rates, as observed in the food choice assay. Furthermore, feeding rates of amphipods are known to vary not only between individuals but also over time. In both cases, feeding rates are also affected by abiotic parameters (e.g., light and temperature), leaf condition, physiological state (e.g., starvation), and interactions with other organisms or contaminants (Consolandi et al., 2021; Götz et al., 2021; Maltby et al., 2002). Thus, the assessment of feeding rates may be challenging when modeling biomagnification processes with amphipods because feeding cannot be controlled as tightly as in more standardized guidelines with other organisms (e.g., test guideline 305 using fish; Organisation for Economic Co-operation and Development, 2012). In this context, the variability in feeding rates could be addressed by averaging over a longer period or by pooling a larger number of animals for tissue analysis. In the present study, the toxicokinetic rates and kinetic bioconcentration factors (BCFkin) for fluopyram were slightly higher than reported previously (Raths et al., 2023). No literature data were available for the bioconcentration of CGA 249287 or the biomagnification of the tested compounds. The parameters of the bioconcentration and biomagnification model calibration are provided in Table 3 (95% confidence intervals in parentheses; ku = uptake rate; ke = elimination rate; kfeed = feeding rate; BCFkin = kinetic bioconcentration factor; BMFkin = kinetic biomagnification factor; ww = wet weight; dw = dry weight).
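The toxicokinetic structure described above can be written as a one-compartment model with aqueous uptake (bioconcentration) and dietary uptake (biomagnification): dCg/dt = ku·Cw + α·kfeed·Cleaf − ke·Cg, with BCFkin = ku/ke and BMFkin = α·kfeed/ke. The sketch below illustrates this with placeholder parameter values (not the calibrated values of Table 3); α denotes the assimilation factor.

```python
# Minimal sketch of a one-compartment toxicokinetic model for gammarids.
import numpy as np
from scipy.integrate import solve_ivp

k_u, k_e = 100.0, 2.0         # L kg-1 d-1, d-1 (hypothetical)
k_feed, alpha = 0.08, 0.8     # kg dw kg ww-1 d-1, assimilation factor (hypothetical)
C_w, C_leaf = 2.2e-9, 1.5e-5  # mol L-1 and mol kg-1 dw (hypothetical)

def dCg_dt(t, Cg, dietary):
    """Uptake from water (and optionally diet) minus first-order elimination."""
    uptake = k_u * C_w + (alpha * k_feed * C_leaf if dietary else 0.0)
    return uptake - k_e * Cg

for label, dietary in [("caged (bioconcentration only)", False),
                       ("uncaged (plus biomagnification)", True)]:
    sol = solve_ivp(dCg_dt, (0.0, 7.0), [0.0], args=(dietary,))
    print(f"{label}: C_g(7 d) = {sol.y[0, -1]:.3e} mol kg-1 ww")

print("BCF_kin =", k_u / k_e, "L kg-1; BMF_kin =", alpha * k_feed / k_e)
```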
Despite the different uptake processes of bioconcentration (filtration and diffusion) and biomagnification (feeding and assimilation), similar elimination rates were observed. Studies investigating the mathematical relationship between BCFs and BMFs in fish generally find that BCFs are three to four orders of magnitude higher than the corresponding BMFs (Grisoni et al., 2018; Inoue et al., 2012). In our study, the BCFs of fluopyram and CGA 249287 were 2900 and 800 times higher than the BMFs, respectively, and fall into the lower range of the reported relationships.
Aqueous uptake drives fungicide bioaccumulation in a summer scenario
Medium concentrations in the summer scenario remained stable, with <10% deviation from the nominal concentration. The measured medium concentration of fluopyram was 2.2 nM and thus similar to the medium concentration at the end of the autumn scenario. The measured tissue concentrations of fluopyram under equilibrium conditions were 20.1 ± 1.7 nmol kg−1 in the caged and 22.5 ± 2.8 nmol kg−1 in the uncaged gammarids (Figure 5). Because of the similar medium concentrations, the internal fluopyram concentrations of caged gammarids were similar to those of caged gammarids in the autumn scenario. However, the internal concentration in uncaged gammarids of the summer scenario was approximately three to seven times lower than the measured tissue concentrations in gammarids of the autumn scenario. The toxicokinetic models calibrated on the autumn scenario predicted the internal concentrations in both groups of the summer scenario properly, which validated the model parameters. Dietary uptake accounted for only 10% of the total tissue concentration in uncaged gammarids in the summer scenario, whereas it accounted for >60% in the autumn scenario. This occurred despite the feeding rate being 1.8 times higher than in the autumn scenario (0.15 kg dw kg ww−1 d−1). The reported contribution of dietary uptake to the total tissue concentration of gammarids in equilibrated systems ranges from 10% (azoxystrobin, fluopyram; present study) and 30% (cyprodinil, tebuconazole; present study) over 30%-40% (lead and brominated diphenyl ether 47; Hadji et al., 2016; Lebrun et al., 2014) up to 60% (4-nonylphenol; Gross-Sorokin et al., 2003) and increases with sorption-driven partitioning (log KOW) toward the leaves. With regard to in situ bioaccumulation of systemic fungicides in gammarids, the present results demonstrate that the concentration ratio of the two compartments, diet and water, determines the importance of their contribution to the whole-body burden. For polar compounds, this ratio is usually in favor of bioconcentration. However, as observed in the autumn scenario, this is not necessarily the case when pesticides incorporated into the diet (i.e., leaves from a riparian buffer strip) are the initial contamination source of a system.
Behavioral bioavailability of contaminated leaves
Unexpectedly, G. pulex displayed a significant preference (Supporting Information, Figure S5; Wilcoxon's test, p < 0.001) for leaf discs from exposed chestnut trees in the food choice assay. The median relative food consumption was 0.37 for the control versus 0.63 for the contaminated leaf discs. The average absolute feeding rate was 0.21 ± 0.09 kg dw kg ww−1 d−1, but individual feeding rates showed a large variance, ranging from 0.05 to 0.46 kg dw kg ww−1 d−1 (Supporting Information, Figure S6). A high variance of individual feeding rates is common in feeding experiments because they strongly depend on individual physiological state and behavior (Consolandi et al., 2021; Götz et al., 2021; Maltby et al., 2009). In addition, this variance may have been increased by uncertainties in the leaf weight correction by mass loss and dry weight controls. Furthermore, the soaking time of 12 h was rather short and may have resulted in lower feeding rates compared with longer soaking periods, which increase leaf palatability (Consolandi et al., 2021). However, a short presoaking period was chosen to minimize contaminant loss by leaching. The observed inability of gammarids to discriminate contaminated from uncontaminated leaf discs has been reported for systemically exposed material before (Englert, Zubrod, Link, et al., 2017; Kreutzweiser et al., 2009; Newton et al., 2018). Consequently, it is likely that gammarids are also unable to avoid contaminated leaves in situ, despite uncontaminated food sources being available.
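The paired food-choice comparison above can be reproduced with a Wilcoxon signed-rank test on per-individual relative consumption; the sketch below is illustrative only, and the consumption values are hypothetical placeholders.

```python
# Minimal sketch: paired food-choice analysis (contaminated vs. control discs).
import numpy as np
from scipy.stats import wilcoxon

# Relative consumption per gammarid (fraction eaten from contaminated discs).
contaminated = np.array([0.62, 0.70, 0.55, 0.66, 0.58, 0.71, 0.64, 0.60])
control = 1.0 - contaminated

stat, p = wilcoxon(contaminated, control)
print(f"median contaminated = {np.median(contaminated):.2f}, "
      f"median control = {np.median(control):.2f}, Wilcoxon p = {p:.4f}")
```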
Physiological bioavailability (assimilation) of fungicides from contaminated leaves
Absolute fungicide concentrations in the gammarid carcass were two (CGA 249287) to four (fluopyram) times higher in uncaged than in caged gammarids. The highest absolute fungicide concentrations were found in the intestine of gammarids, with up to seven times and 21 times higher concentrations compared with the carcass in caged and uncaged gammarids, respectively.
The determined tissue contributions to the total recovered body burden (Figure 6B) confirmed the high contribution of the intestinal compartments to the total tissue concentrations. In caged gammarids, the intestinal compartment contributed 20%-34% of the total recovered fungicide amount. In gammarids with access to contaminated leaves, the relative contributions of the intestinal compartments to the total tissue concentrations were approximately 7%-10% for the midgut and 23%-45% for the hindgut. A higher proportion of CGA 249287 than of fluopyram was associated with the intestine.
In conclusion, dissection of caged and uncaged gammarids revealed that CGA 249287 and fluopyram were bioavailable and efficiently assimilated from contaminated leaf material in the intestine into the surrounding tissue, as indicated by the higher carcass concentrations in uncaged gammarids. These findings are further supported by the high assimilation factors (0.6 and 1) obtained from the biomagnification models presented above (Table 3). Overall, the higher concentrations in uncaged gammarids with access to leaves would be of toxicological relevance. In addition, it was demonstrated that intestinal tissue plays an important role in the accumulation of waterborne organic contaminants in gammarids, as previously observed by Nyman et al. (2014).
Implications for risk assessment
At the end of the autumn scenario study, the medium concentrations were 0.9 and 0.3 µg L−1 (2.3 and 2.2 nM) for fluopyram and CGA 249287, respectively. The water concentration of CGA 249287 remobilized from leaf material exceeded the chronic environmental quality standard (EQS) of the parent compound cyprodinil (0.2 µg L−1; Moschet et al., 2014). CGA 249287 still contains the active moiety of cyprodinil and may thus exert similar toxicological effects. For fluopyram, data for EQS determination for surface waters are scarce because it was introduced on the European market only in 2013 (European Food Safety Authority, 2013). Li et al. (2020) indicated potential chronic EQS values for fluopyram in the 100 µg L−1 range.
Because EQS values determined for surface waters do not consider other exposure routes such as contaminated diet, a conversion of the gammarid tissue concentration to the equivalent water concentration may be applied to evaluate the corresponding risk (Inostroza et al., 2016). In this case, the internal concentrations of gammarids with access to leaves from trees exposed to systemic fungicides (autumn scenario) would be equivalent to water concentrations of 7.1 and 7.0 µg L−1 for fluopyram and CGA 249287, respectively. These hypothetical water concentrations are higher than most concentrations measured for fluopyram and cyprodinil during a summer monitoring study in a low-order agricultural stream (Lauper et al., 2021).
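The conversion applied above is simply C_water_eq = C_tissue / BCF. The sketch below illustrates the arithmetic; the function name and both input values are hypothetical placeholders, not the study's calibrated parameters.

```python
# Minimal sketch: tissue concentration -> equivalent water concentration.
def equivalent_water_conc(c_tissue_ug_per_kg: float, bcf_L_per_kg: float) -> float:
    """Equivalent water concentration (ug/L) from tissue residue and BCF."""
    return c_tissue_ug_per_kg / bcf_L_per_kg

c_tissue = 355.0  # ug kg-1 ww (hypothetical)
bcf = 50.0        # L kg-1 (hypothetical)
print(f"C_water_eq = {equivalent_water_conc(c_tissue, bcf):.1f} ug L-1")
```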
The ratio of leaves to water in the autumn scenario was approximately 5% of a typical amount reported for a first-order stream in central Germany (Benfield, 1997). This ratio was applied in a leaching study by Englert, Bakanov, et al. (2017; 600 g m−2), who demonstrated a strong effect of systemic pesticide remobilization from leaves of exposed trees. Thus, remobilization of fungicides from leaf material in the field could be even higher than observed in the present study. However, it is important to note that we tested a static system. Running water in streams may dilute the leached contaminants but also transport them to other, less contaminated sites. Thus, leaching from foliage may be an overlooked water contamination pathway because most monitoring studies focus on late spring and summer (Chow et al., 2020; Phillips & Bode, 2004).
FIGURE 6: Dissected gammarid (A) and corresponding recovered fungicide proportions (B) in caged (no access to leaves) and uncaged (access to leaves) gammarids. Both intestine compartments were pooled for caged gammarids. Circle sizes are relative to the total tissue concentrations. Hatched blue indicates that midgut and hindgut were not separated for caged gammarids. Underlying data are provided in Supporting Information, B3.
Consequently, the present study indicates a potential seasonal extension of aquatic invertebrate exposure to systemic fungicides toward autumn. The exposure scenarios from the present study illustrate a pathway by which fungicides retained in riparian buffer strips may bypass them, entering the water body incorporated into leaves in autumn. Acute toxicity of systemic fungicides from dietary exposure appears unlikely, but chronic effects from secondary poisoning could be expected. Despite the common assumption that dietary uptake is only relevant for less polar compounds, because they show stronger sorption toward organic matter such as leaves, the present study revealed that systemic fungicides may be important dietary contaminants because of their translaminar properties. Even though the present scenarios covered a range of uptake mechanisms that are specific to the field (e.g., systemic uptake of the fungicides by trees, food selectivity), no monitoring studies have focused on such questions. Studies of this nature would be important to evaluate the risk arising from the elucidated mechanisms of seasonal exposure shift and the bypassing of riparian buffer strips.
CONCLUSION
In the present study, we provided insight into different contamination pathways of allochthonous food sources (leaf litter) and evaluated the relevance of the dietary uptake pathway for bioaccumulation across seasons. We conclude that the dietary uptake of systemic fungicides is generally of relatively low relevance unless previously contaminated plant material enters a stream. Our study adds an important perspective to environmental risk assessment by illustrating a potential mechanism by which systemic fungicides can bypass riparian buffer strips. However, further research, such as monitoring studies, is needed to understand the consequences of the interconnectivity of terrestrial and aquatic ecosystems for systemic pesticide fluxes and exposure risk.
Supporting Information—The Supporting Information is available on the Wiley Online Library at https://doi.org/10.1002/etc.5615.
"year": 2023,
"sha1": "5b720858264afe98dd6b28550edf84b64da71a61",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/etc.5615",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "cef2a9257ad39e39a471298b20321bec103749d5",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Enzyme inhibition of dopamine metabolism alters 6-[18F]FDOPA uptake in orthotopic pancreatic adenocarcinoma
Background An unknown location hampers removal of pancreatic tumours. We studied the effects of enzyme inhibitors on the uptake of 6-[18F]fluoro-l-3,4-dihydroxyphenylalanine ([18F]FDOPA) in the pancreas, aiming at improved imaging of pancreatic adenocarcinoma. Methods Mice bearing orthotopic BxPC3 pancreatic adenocarcinoma were injected with 2-deoxy-2-[18F]fluoro-d-glucose ([18F]FDG) and scanned with positron emission tomography/computed tomography (PET/CT). For [18F]FDOPA studies, tumour-bearing mice and sham-operated controls were pretreated with enzyme inhibitors of aromatic amino acid decarboxylase (AADC), catechol-O-methyl transferase (COMT), monoamine oxidase A (MAO-A) or a combination of COMT and MAO-A. Mice were injected with [18F]FDOPA and scanned with PET/CT. The absolute [18F]FDOPA uptake was determined from selected tissues using a gamma counter. The intratumoural biodistribution of [18F]FDOPA was recorded by autoradiography. The main [18F]FDOPA metabolites present in the pancreata were determined with radio-high-performance liquid chromatography. Results [18F]FDG uptake was high in pancreatic tumours, while [18F]FDOPA uptake was highest in the healthy pancreas and significantly lower in tumours. [18F]FDOPA uptake in the pancreas was lowest with vehicle pretreatment and highest with pretreatment with the inhibitor of AADC. When mice received COMT + MAO-A inhibitors, the uptake was high in the healthy pancreas but low in the tumour-bearing pancreas. Conclusions Combined use of [18F]FDG and [18F]FDOPA is suitable for imaging pancreatic tumours. Unequal pancreatic uptake after the employed enzyme inhibitors is due to the blockade of metabolism and therefore increased availability of [18F]FDOPA metabolites, in which uptake differs from that of [18F]FDOPA. Pretreatment with COMT + MAO-A inhibitors improved the differentiation of pancreas from the surrounding tissue and healthy pancreas from tumour. Similar advantage was not achieved using AADC enzyme inhibitor, carbidopa.
Background
Due to late diagnosis and lack of effective treatment, pancreatic adenocarcinoma has the worst prognosis of all of the gastrointestinal cancers [1]. Surgery is a possible curative approach if the cancer is detected early, but the exact location of the tumour is often difficult to determine. Current anatomy-based imaging procedures detect only indirect signs of invasive tumour growth such as pancreatic mass or ductal abnormalities. Therefore, there is a need for better functional imaging tools for the detection and localisation of pancreatic cancer.
The most frequently used positron emission tomography (PET) tracer for tumour imaging is the glucose analogue 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG). [18F]FDG is taken up into cells by glucose transporters, where it subsequently undergoes phosphorylation by hexokinase-1 into [18F]FDG-6-phosphate. This tracer is efficiently taken up by a variety of tumour cells and reflects increased glucose metabolism [2,3]. For pancreatic cancer, [18F]FDG has been useful in the evaluation of indeterminate pancreatic masses, staging of pancreatic carcinoma, detection of metastatic disease and differentiation of viable tumours from post-therapeutic processes like necrosis or scar tissue [4,5]. However, the diagnostic value of [18F]FDG in pancreatic cancer is limited since inflammatory processes such as pancreatitis and abscesses take up [18F]FDG as well. Chronic pancreatitis is recognised as the most common reason for false-positive [18F]FDG-PET tumour findings in the pancreas [2,6].
Investigation of the functional activity of the dopaminergic system is increasingly used in the evaluation of tumours of islet cell origin [7][8][9][10][11][12]. 6-[18F]fluoro-L-3,4-dihydroxyphenylalanine ([18F]FDOPA) has been used for this purpose [7,8]. At present, the imaging of the pancreas using [18F]FDOPA is focused on the neuroendocrine nature of pancreatic cells [13]; however, the exocrine pancreas also contains dopamine [14]. According to previous immunohistochemical studies, exocrine cells can take up amine precursors such as L-DOPA, transport them across the cell membrane, convert them into dopamine by aromatic L-amino acid decarboxylase (AADC) and store them in vesicles [15][16][17].
In the periphery, [18F]FDOPA is metabolized by AADC, catechol-O-methyl transferase (COMT) and monoamine oxidase (MAO) (Figure 1) [20]. Carbidopa is a potent inhibitor of AADC. Based on the metabolism of FDOPA, carbidopa and a COMT inhibitor are routinely used in the clinic prior to [18F]FDOPA injection into Parkinson's disease patients in order to minimise peripheral metabolism and to increase [18F]FDOPA concentrations in the brain [21]. Carbidopa also improves imaging of neuroendocrine tumours of the pancreas [22,23]. However, recent studies have shown that the use of enzyme inhibitors for other cancers of the pancreas, such as islet cell tumours, β-cell hyperplasias and insulinomas, may hamper pancreatic uptake of [18F]FDOPA in addition to hindering its uptake by tumour cells [24].
The aim of this study was to improve the detection of pancreatic adenocarcinoma using PET. We used [18F]FDG and [18F]FDOPA PET/CT in mice bearing orthotopic pancreatic adenocarcinoma, combining [18F]FDOPA imaging with pretreatments by inhibitors of AADC, COMT and MAO-A.
Cell culture
Human ductal pancreatic adenocarcinoma cells (BxPC3 cells) were cultured in RPMI-1640 medium supplemented with 10% heat-inactivated foetal calf serum and 2 mM of L-glutamine (all from Sigma-Aldrich Chemicals, Steinheim, Germany). The cells were maintained at 37°C in a humid atmosphere with 5% CO2. For orthotopic inoculation, the cells were trypsinised and suspended in Matrigel (BD Biosciences, San Jose, CA, USA) at a concentration of 10^6 cells/mL and stored on ice. The viability of the cells was confirmed by trypan blue staining before and after inoculation.
Orthotopic tumours
Five-week-old male immunodeficient nude mice (Athymic nude/nu Foxn1 mice, Harlan, The Netherlands) were used in this study. Animals were treated with analgesics (Temgesic® 0.1 mg/kg, mouse body weight 23.2 ± 1.9 g) and anaesthetised by isoflurane inhalation (3%, 200 mL/min). After laparotomy, 30 μL of cell suspension (3 × 10^4 cells) was inoculated into the pancreatic body of the mice (n = 26). Mice were weighed twice per week, and their welfare was evaluated daily. Sham-operated nude mice (n = 22), which were inoculated only with Matrigel, were used as controls. Animals were sacrificed 35 or 42 days after cancer cell inoculation. All animal studies were approved by the Finnish Animal Ethics Committee. The institutional policy on animal experimentation fully meets the requirements defined in the National Institutes of Health's Guide for the Care and Use of Laboratory Animals (NIH Publication no. 85-23, revised 1985).
Small-animal PET/CT and image analysis
Six weeks after orthotopic inoculation of the BxPC3 cells, the uptake of [18F]FDG into pancreatic tumours (n = 14) was evaluated using small animal PET/CT. [18F]FDG (dose 9.0 ± 2.3 MBq) was injected into the tail veins of the animals under isoflurane anaesthesia. CT was used for attenuation correction of the PET data and anatomical reference following a 20-min static scan, which was acquired in list mode at 60 min after injection. The next day, selected mice (n = 4) were subjected to CT and a 20-min dynamic PET scan using the [18F]FDOPA tracer. The injected dose was 6.8 ± 0.1 MBq, and the injected mass was 16 ± 10 ng/g, as calculated from the known specific radioactivity at the time of injection. Images were reconstructed using a 2D filtered backprojection with a 0.5-mm ramp filter. Data were collected for 20 min after injection of [18F]FDOPA, and the corresponding time-activity curves were calculated. Images were analysed using the Inveon Research Workplace software (v. 3.0). Volumes of interest were drawn manually on the pancreas, tumours and the left cardiac ventricle (blood). The [18F]FDOPA uptake in tumours was expressed as time-activity curves of the pancreas and tumours normalised to blood radioactivity.
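The blood normalisation described above amounts to dividing the tissue volume-of-interest (VOI) activity by the blood VOI activity per time frame. The sketch below is a minimal, illustrative reconstruction; frame times and activity values are hypothetical placeholders.

```python
# Minimal sketch: blood-normalised time-activity curves (TACs) from VOI data.
import numpy as np

frame_mid_times = np.array([0.5, 1.5, 3.0, 5.0, 10.0, 15.0, 20.0])   # min
pancreas_voi = np.array([1.2, 2.8, 3.9, 4.4, 4.5, 4.4, 4.3])  # kBq/mL (hypothetical)
blood_voi = np.array([5.0, 3.1, 2.2, 1.8, 1.5, 1.4, 1.3])     # kBq/mL (hypothetical)

tac_ratio = pancreas_voi / blood_voi  # pancreas-to-blood ratio per frame
for t, r in zip(frame_mid_times, tac_ratio):
    print(f"t = {t:5.1f} min: pancreas/blood = {r:.2f}")
```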
Uptake and intratumoural distribution of [18F]FDOPA
The whole pancreata were exposed. Tumours were not dissected from the pancreata. Tumour volumes were calculated according to the formula π/6 × (d1 × d2 × d3), where d1 to d3 are perpendicular tumour diameters (mm) [30]. The absolute 18F radioactivity uptake was determined from blood and selected abdominal tissues (Table 1) using a gamma counter (3 in × 3 in NaI(Tl) crystal, Bicron 3 MW3/3P; Bicron Inc., Newbury, OH, USA) at 10 min after the injection of [18F]FDOPA. Tissues were weighed, counted for radioactivity and corrected for background radioactivity and radioactive decay. The quantity of radioactivity was expressed as the percentage of injected dose per gram of tissue (%ID/g). In order to determine the intratumoural biodistribution pattern of 18F radioactivity, tumours were rapidly frozen in dry ice/isopentane and cut with a cryomicrotome into 20-μm sections. Tissue sections were exposed to an imaging plate (Fuji BAS TR2025, Fuji Photo Film Co., Tokyo, Japan) for 4 ± 0.5 h. The spatial distribution of radioactivity was recorded with a phosphoimaging device (Fuji BAS 5000, with a resolution of 25 μm). Images were analysed for count density (photostimulated luminescence per unit area) with the Aida image analysis software (v. 4.22, Raytest Isotopenmessgeräte GmbH, Straubenhardt, Germany). The same tissue sections were stained with haematoxylin/eosin for histological analysis. Whole-tumour images were produced using a Zeiss AxioVert 200 M microscope with an AxioCam MRc camera and the MosaiX option of AxioVision. Previously published data [31,32] were used to identify retention times (Rt) in a similar chromatographic system.
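Two calculations in this paragraph lend themselves to a worked example: the ellipsoid tumour-volume formula V = π/6 × d1 × d2 × d3 and the decay-corrected %ID/g. The sketch below is illustrative; all numeric inputs are hypothetical placeholders (the fluorine-18 half-life of 109.77 min is a physical constant).

```python
# Minimal sketch: tumour volume and decay-corrected %ID/g.
import math

F18_HALF_LIFE_MIN = 109.77  # physical half-life of fluorine-18

def tumour_volume_mm3(d1: float, d2: float, d3: float) -> float:
    """Ellipsoid volume from three perpendicular diameters (mm)."""
    return math.pi / 6.0 * d1 * d2 * d3

def pct_id_per_g(counts_bq: float, minutes_since_injection: float,
                 injected_dose_bq: float, tissue_mass_g: float) -> float:
    """Decay-correct tissue counts back to injection time, then express as
    percentage of injected dose per gram of tissue."""
    corrected = counts_bq * 2 ** (minutes_since_injection / F18_HALF_LIFE_MIN)
    return corrected / injected_dose_bq / tissue_mass_g * 100.0

print(f"V = {tumour_volume_mm3(5.0, 4.0, 4.5):.1f} mm^3")
print(f"%ID/g = {pct_id_per_g(35_000, 10, 6.8e6, 0.25):.2f}")
```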
Statistical analysis
The mean weight of the sham-operated pancreas and pancreas with a tumour was compared with a two-sample t test. Radioactivity (ratios to non-target tissues and autoradiography) measurements were analysed using a two-way analysis of variance. Models included the main effects of pretreatment (vehicle, carbidopa and COMT + MAO-A) and group (sham-operated and tumour-bearing animals).
In further pairwise comparisons between pretreatments, the Tukey adjustment method was used. Interaction between pretreatment and group was also tested. Due to positively skewed distributions, log-transformed values were used in statistical analyses. SAS System for Windows was used in statistical computations (v. 9.2, SAS Institute Inc., Cary, NC, USA); p values less than 0.05 were considered as statistically significant.
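The authors performed these computations in SAS; purely as an illustrative analogue, the sketch below reproduces the same model structure (two-way ANOVA with interaction on log-transformed values, plus Tukey-adjusted pairwise comparisons) in Python with statsmodels. The DataFrame is a hypothetical stand-in for the study data.

```python
# Minimal sketch: two-way ANOVA with interaction on log-transformed uptake.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "uptake": rng.lognormal(mean=1.0, sigma=0.4, size=36),  # %ID/g (hypothetical)
    "pretreatment": np.tile(["vehicle", "carbidopa", "comt_maoa"], 12),
    "group": np.repeat(["sham", "tumour"], 18),
})
df["log_uptake"] = np.log(df["uptake"])  # positively skewed -> log-transform

model = smf.ols("log_uptake ~ C(pretreatment) * C(group)", data=df).fit()
print(anova_lm(model, typ=2))                               # main + interaction effects
print(pairwise_tukeyhsd(df["log_uptake"], df["pretreatment"]))  # Tukey adjustment
```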
Results and discussion
Tumour characterisation and ex vivo biodistribution of [18F]FDOPA
No differences were detected between the weights of sham-operated and tumour-bearing mice (data not shown). No signs of cachexia were detected, which indicates that the tumours were not very large (data not shown). The tumour occurrence was 100%, and the mean tumour volume was 50 ± 40 mm^3 at 35 days after tumour cell inoculation and 760 ± 1,300 mm^3 at 42 days after tumour cell inoculation. The mean weight of the sham-operated pancreas was 0.18 ± 0.03 g, while the weight of the pancreas with a tumour at 35 and 42 days after inoculation was 0.27 ± 0.08 g (p < 0.001 vs. sham-operated) and 0.73 ± 0.97 g (p < 0.001 vs. sham-operated), respectively. As expected, the weight of the pancreata was increased in tumour-bearing mice compared with the sham-operated mice.
The ex vivo distribution of [18F]FDOPA-derived activity was expressed as percentage of injected dose per gram of tissue (Table 1). The highest amounts of radioactivity were found in the pancreas (including BxPC3 tumours, as applicable), liver, kidneys and small intestine when vehicle alone was used as pretreatment (Table 1). Interestingly, the uptake was twofold higher in sham-operated pancreata than in tumour-bearing pancreata (Table 1). Carbidopa pretreatment increased the uptake in the pancreas of sham-operated and tumour-bearing animals three- and fourfold, respectively, compared with vehicle pretreatment. However, no differences were detected between carbidopa-pretreated sham-operated and tumour-bearing animals (Table 1). Combined administration of COMT and MAO-A inhibitors doubled the uptake of 18F radioactivity by the pancreata of sham-operated animals compared with vehicle-pretreated animals. The 18F radioactivity uptake in the pancreas was threefold lower in tumour-bearing animals that received COMT + MAO-A inhibitors in comparison with sham-operated animals with the same pretreatment. No major between-mice differences in other studied organs were detected. Administration of the MAO-A or COMT inhibitors alone had no effect on 18F radioactivity uptake (Table 1), and therefore, carbidopa and COMT + MAO-A inhibitors were selected for further study.
Figure 2: Ratios to non-target tissues (pancreas-to-muscle and pancreas-to-liver) from ex vivo radioactivity measurements (%ID/g), measured after [18F]FDOPA injection. Significant differences were detected between sham-operated and BxPC3 tumour-bearing pancreata (pretreatment-adjusted p < 0.001 for pancreas-to-muscle and p < 0.05 for pancreas-to-liver ratios), but pretreatment had no effect on these differences (pretreatment × group interaction effect, p = 0.245 for pancreas-to-muscle and p = 0.612 for pancreas-to-liver ratios). Carbidopa pretreatment increased uptake in the pancreata of both sham-operated and tumour-bearing mice compared with vehicle-treated mice (group-adjusted p < 0.05 for pancreas-to-muscle and p < 0.001 for pancreas-to-liver ratios). The numbers of mice exposed to vehicle, carbidopa and COMT + MAO-A inhibitors were 6, 5 and 4 for sham-operated animals, respectively, and 5, 7 and 4 for tumour-bearing animals, respectively. Values are shown as mean and standard error.
Target-to-non-target ratios (pancreas-to-muscle and pancreas-to-liver) were calculated based on the measured radioactivities (%ID/g) and are presented in Figure 2. Our data revealed significant differences between sham-operated and BxPC3 tumour-bearing pancreata (pretreatment-adjusted p < 0.0001 for pancreas-to-muscle and p = 0.026 for pancreas-to-liver ratios). However, these differences were not dependent on the pretreatment used (pretreatment × group interaction effect, p = 0.245 for pancreas-to-muscle and p = 0.612 for pancreas-to-liver ratios). Carbidopa pretreatment increased the uptake in the pancreata of both sham-operated and tumour-bearing mice compared with vehicle treatment (group-adjusted p = 0.037 for pancreas-to-muscle and p < 0.0001 for pancreas-to-liver ratios), but the difference between the sham-operated and tumour-bearing mice was not large enough to separate the healthy pancreas from the tumour-bearing pancreas (Figure 2). Pretreatment with COMT + MAO-A inhibitors increased the ratios further, especially the pancreas-to-muscle ratio, but due to the small number of observations and the larger variance in measured radioactivity in the samples, the differences did not reach statistical significance (p = 0.133 for pancreas-to-muscle and p = 0.386 for pancreas-to-liver ratios, Figure 2).
[18F]FDOPA PET imaging combined with [18F]FDG reveals xenograft tumours in mouse pancreas
Mice were imaged with [18F]FDG PET/CT 6 weeks after tumour cell inoculation. Orthotopic pancreatic tumours exhibited increased glucose uptake (Figure 3a). As expected, high uptake of [18F]FDG was also observed in the heart, bladder, brain, kidneys and brown adipose tissue (Figure 3a). Next, the mice were imaged with [18F]FDOPA PET/CT. The uptake of [18F]FDOPA was very low in orthotopic BxPC3 tumours pretreated with vehicle (Figure 3b). However, the uptake in the healthy part of the pancreas increased after pretreatment with COMT + MAO-A inhibitors (Figure 3c,d). Based on 20 min of dynamic scanning, the peak and plateau radioactivities in the pancreas were reached within 5 min (Figure 3d). The time-activity curves of the pancreata showed similar dynamics regardless of pretreatment (vehicle vs. COMT + MAO-A inhibitors, Figure 3d) or tumour status (sham-operated vs. tumour-bearing mice, Figure 3d).
Autoradiography identifies low [18F]FDOPA uptake in the tumour part of the pancreas
Intratumoural distribution of 18F radioactivity was determined by digital autoradiography. Histological images of the haematoxylin/eosin-stained slices were combined with autoradiography, and the tumour outlines were drawn. In an intrapancreatic comparison between the healthy pancreas and the tumour, radioactivity uptake was on average tenfold lower in tumour areas than in the healthy pancreas following pretreatment with vehicle or carbidopa. This uptake was fivefold lower in tumours of COMT + MAO-A-pretreated pancreata (Figure 4a,b,c,d). In healthy areas of the pancreas, uptake was dependent on pretreatment. The uptake was uniform in the pancreata of vehicle- or carbidopa-treated mice (Figure 4a,b), while pretreatment with COMT + MAO-A inhibitors led to a 2.4-fold higher uptake in the head of the pancreas compared with the tail (Figure 4c). A representative histological image indicates tumour growth in the body of the pancreas of a mouse pretreated with COMT + MAO-A inhibitors (Figure 4d). After normalising the uptake to the amount of injected radioactivity for each pancreas, pretreatment had an effect on the difference between sham-operated and tumour-bearing animals (pretreatment × group interaction effect, p = 0.039; Figure 4e). Lower uptake was detected in tumour-bearing pancreata compared with sham-operated pancreata in vehicle- (p = 0.016) and COMT + MAO-A inhibitor-treated (p = 0.002) animals. In carbidopa-treated mice, no significant difference was detected between sham-operated and tumour-bearing animals (p = 0.733). These observations were in accordance with the ex vivo biodistribution data (Table 1 and Figure 2) as well as the PET/CT data (Figure 3).
Pretreatment affects [18F]FDOPA metabolism in pancreas
Metabolites were analysed in pancreatic samples of sham-operated and tumour-bearing mice using HPLC. HPLC radiochromatograms from the radiometabolite study are shown in Figure 5.
Figure 4 (e): Radioactivity measurements were corrected for the dose of injected radioactivity. Pretreatment had an effect on the difference between sham-operated and tumour-bearing animals (pretreatment × group interaction effect, p < 0.05). Significant differences were detected between sham-operated and tumour-bearing animals in vehicle (asterisk, p < 0.05) and COMT + MAO-A (double asterisks, p < 0.01) pretreated pancreata. Values are shown as mean and standard error.
At present, the treatment of pancreatic adenocarcinoma is difficult because the location of the tumour lesion is usually unknown. In clinical practice, pancreatic cancer is imaged using [18F]FDG, which is the most widely used radiopharmaceutical for PET [33]. However, several factors may hamper imaging of the pancreas. Uptake of [18F]FDG in the duodenum may cause false-positive results, and imaging of the upper abdomen in general is influenced by respiratory and gastrointestinal tract motion [34]. Inflammatory cells are usually present in malignant lesions, further contributing to [18F]FDG uptake and leading to false-positive tumour-related findings [3]. [18F]FDOPA is a commonly used PET tracer for investigating the activity of the dopaminergic system in neurological disorders. [18F]FDOPA PET has also been successfully used to visualise neuroendocrine, carcinoid and glomus tumours as well as pheochromocytomas and medullary thyroid cancers [7,[35][36][37]. The objective of this study was to investigate the usefulness of [18F]FDOPA also in the imaging of pancreatic adenocarcinoma and to assess the possible benefits of clinically available, widely used inhibitors of the enzymes that metabolize catecholamine neurotransmitters.
PET/CT was used to visualise whole-body anatomical and functional information from the tumour-bearing mice. Initially, orthotopic pancreatic cancer was imaged using [18F]FDG. [18F]FDG revealed increased glucose metabolism in the upper left abdomen of the tumour-bearing mice, and tumour lesions were identified based on their anatomical location (Figure 3a). Given the limitations of [18F]FDG, the same animals were imaged the next day using [18F]FDOPA in order to detect pancreatic tissue. As expected, [18F]FDOPA uptake occurred in the pancreata of these mice, but the uptake in pancreatic BxPC3 tumours was modest. According to our observations, combining [18F]FDG with [18F]FDOPA facilitates the detection of pancreatic tumours. Enzyme inhibitors are routinely used in the clinic to modify the peripheral metabolism of [18F]FDOPA [21,38]. In the present study, outlining of the pancreas improved when the animals were treated with carbidopa or COMT and MAO-A inhibitors before [18F]FDOPA injection. All of these enzyme inhibitors prevent the breakdown of catecholamine neurotransmitters (Figure 1). Inhibition of AADC with carbidopa leads to the formation of [18F]3-OMFD (Figure 5), which is easily transported into both tumour cells and pancreatic cells [39]. However, when COMT + MAO-A were inhibited, the main metabolite was [18F]FDA (Figure 5), which is not taken up by BxPC3 tumour cells. Therefore, the uptake of 18F radioactivity differs between the healthy pancreas and the tumour after pretreatment with different enzyme inhibitors.
Ex vivo distribution studies demonstrated major (two- to fourfold) differences in 18F uptake ratios between sham-operated and tumour-bearing mice that were pretreated with vehicle or COMT and MAO-A inhibitors prior to [18F]FDOPA injection. This observation can be explained by a change in [18F]FDOPA metabolism compared with that following carbidopa pretreatment, which was confirmed by tumour autoradiography. Although uptake was highest in carbidopa-treated pancreata, the difference between sham-operated and tumour-bearing pancreata was largest in animals pretreated with COMT + MAO-A inhibitors.
Conclusions
Our study indicates that pretreatment of mice with carbidopa increases [18F]FDOPA uptake in the pancreas and therefore facilitates localisation of the pancreas, but it also impairs the observer's ability to separate the healthy pancreas from the tumour because, in addition to exocrine pancreatic cells, tumour cells are also able to take up [18F]3-OMFD, the main metabolite after carbidopa administration. COMT + MAO-A inhibitors increased the 18F radioactivity uptake in pancreatic tissue, while the uptake in tumours remained poor owing to the formation of [18F]FDA as the main metabolite. Therefore, COMT + MAO-A inhibition improved the separation of the healthy pancreas from the tumour. However, the difference between the healthy pancreas and the tumour was not clear in all cases after COMT + MAO-A administration, and the benefit over vehicle pretreatment remained modest. According to our observations, [18F]FDOPA combined with [18F]FDG imaging is a useful tool for detecting pancreatic adenocarcinoma, either alone or with COMT + MAO-A pretreatment, but not with carbidopa pretreatment. Since deducing the exact location of the tumour lesion is essential for successful treatment, our data may have important clinical value.
"year": 2013,
"sha1": "db6bdad7eec60d0d85345e493c393aa45b0a71e5",
"oa_license": "CCBY",
"oa_url": "https://ejnmmires.springeropen.com/track/pdf/10.1186/2191-219X-3-18",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f97b8e4f2339c9085db18ab61a004642913c11b1",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Altered β1,6-GlcNAc branched N-glycans impair TGF-β-mediated Epithelial-to-Mesenchymal Transition through Smad signalling pathway in human lung cancer
The change of oligosaccharide structure has been revealed to be crucial for glycoproteins' biological functions and cell biological characteristics. N-acetylglucosaminyltransferase V (GnT-V), a key enzyme catalysing the addition of β1,6-N-acetylglucosamine (GlcNAc) to asparagine-linked oligosaccharides of cell proteins, has been implicated as a metastasis-promoting oncoprotein in some carcinomas. However, this correlation does not hold for all types of cancers; for example, in non-small cell lung cancers, low GnT-V expression is associated with relatively short survival time and poor prognosis. To explain the role of GnT-V in lung cancer progression, we studied the association of GnT-V expression with lung cancer EMT behaviour. We found that GnT-V expression was correlated positively with epithelial markers and negatively with mesenchymal markers. GnT-V levels, as well as β1,6-GlcNAc branched N-glycans, were strongly reduced during TGF-β1-induced EMT of human lung adenocarcinoma A549 cells. Further studies showed that suppression of β1,6-GlcNAc branched N-glycans by an inhibitor or by GnT-V silencing in A549 cells could promote TGF-β1-induced EMT-like changes, cell migration and invasion. Meanwhile, overexpression of GnT-V impaired TGF-β1-induced EMT, migration and invasion. This suggests that GnT-V suppresses the EMT process of lung cancer cells by inhibiting TGF-β/Smad signalling and its downstream transcription factors in a GnT-V catalytic activity-dependent manner. Taken together, the present study reveals a novel mechanism of GnT-V as a suppressor of both EMT and invasion in human lung cancer cells, which may be useful for fully understanding the biological roles of N-glycans in lung cancer progression.
Introduction
The change of oligosaccharide structure has been revealed to be crucial for the biological functions of glycoproteins. These oligosaccharides on glycoproteins often play important roles in the regulation of the biological characteristics of tumours, especially in the regulation of tumour invasion and metastasis [1]. Each oligosaccharide of glycoproteins is synthesized via catalysis by a specific glycosyltransferase, the expression of which affects specific functions of glycoproteins through glycosylation. Most of the cancer-associated changes in the oligosaccharide structure of glycoproteins are due to the abnormal expression of glycosyltransferase genes, such as N-acetylglucosaminyltransferase V (GnT-V, also known as Mgat5) [2][3][4].
GnT-V catalyses the linkage of a GlcNAc to a core mannose of N-glycan to produce the GlcNAc-β1,6-Man branch, forming tri- or tetra-antennary N-linked oligosaccharide chains [5]. In the Golgi compartment, GnT-V transfers a GlcNAc residue onto growing N-glycans of integrins, cadherins, growth factor receptors (GFRs) and cytokine receptors, so that subsequent glycosylation results in the formation of 'multi-antennary' chains [6].
Some studies demonstrate that GnT-V expression and its products, β1,6-GlcNAc branched N-glycans, are commonly increased in malignancies, for example, in human mammary, colon and glial tumours. This increase in N-glycan branching is considered to be positively associated with malignant transformation of tumours [2,[7][8][9][10]. However, these correlations do not apply to all types of tumours. In the case of hepatocellular carcinoma, it was reported that a low level or absence of GnT-V expression is related to a high recurrence rate [10]. Moreover, GnT-V and its product β1,6-GlcNAc branched N-glycans are closely related to low malignant potential and good prognosis of patients in bladder, neuroblastoma, gastric and lung cancers [11][12][13][14]. These reports suggest that the clinical implication of GnT-V expression may differ in each kind of cancer, and the role of GnT-V in cancer progression could be tissue-specific. The functions of GnT-V and its products in human lung cancer progression, especially metastatic dissemination, remain to be investigated.
The epithelial-to-mesenchymal transition (EMT), which is characterized by the loss of epithelial adhesion and the gain of mesenchymal features, is a fundamental biological process of embryonic development and of cancer invasion and metastasis [15,16]. The invasive nature of each tumour subtype depends on epithelial cells increasing their capacity for migration through EMT. EMT can be induced or regulated by various growth and differentiation factors. Transforming growth factor-β (TGF-β), besides playing important roles in the control of cell proliferation, differentiation, apoptosis and ageing, is one of the established EMT inducers [17][18][19].
Transforming growth factor-β functions by binding to type II TGF-β receptors (TβRII), transphosphorylation of type I TGF-β receptors (TβRI), and subsequent phosphorylation of Smad2 and Smad3. Phosphorylated Smad2/3 forms a trimer with Smad4 that then translocates to the nucleus and interacts with transcription factors, co-activators and co-repressors to suppress epithelial genes and promote the expression of mesenchymal proteins [20]. In addition, non-Smad signalling through activation of the mitogen-activated protein (MAP) kinases (e.g. p38, ERK1/2 and JNK), the small GTP-binding proteins (e.g. Ras, Rho and Rac1) and the cell survival mediators (e.g. PI3K, AKT and mTOR) has also been implicated in TGF-β-induced EMT [21]. Furthermore, TGF-β signalling has extensive crosstalk with the integrin/FAK signalling pathway [22].
There is evidence suggesting that the process of TGF-β-induced EMT can be regulated by various glycosyltransferases [23]. It is still unclear whether GnT-V and N-glycan branching regulate lung cancer progression and EMT behaviour. In this study, we demonstrated the relationship between GnT-V and EMT behaviour in human lung cancer cells. The results suggest that GnT-V is a suppressive regulator of TGF-β in controlling EMT and cell migration, mainly through modulation of TGF-β/Smad signalling. These results reveal a distinct inhibitory role of β1,6-GlcNAc branched N-glycans in the invasive/metastatic potential of lung cancer.
Cell lines and cell culture
The human non-small cell lung cancer (NSCLC) cell line A549 was purchased from the Cell Bank of Type Culture Collection of the Chinese Academy of Sciences (Shanghai, China). The other NSCLC cell lines H1299, H1975 and H460, and human embryonic kidney 293T cells, were kindly provided by Prof. Xinyuan Liu (Chinese Academy of Sciences, Shanghai, China). The cells were cultured in RPMI-1640 medium or DMEM supplemented with 10% foetal bovine serum (FBS) and 1% L-glutamine. All cells were cultured at 37°C in a 5% CO2 humid atmosphere.
Kaplan-Meier Plotter online survival analysis
Kaplan-Meier Plotter online survival analysis was performed in a large combined lung cancer data set (www.kmplot.com/lung), which contains survival information for 1715 lung cancer patients and gene expression data obtained using three different versions of Affymetrix gene microarrays [24,25]. We used the online-available meta-analysis tool to test possible correlations between the expression of GnT-V (Affymetrix ID: 212098-at) and overall survival. Lung cancer patients were grouped by all stages (n = 1406) and stage I (n = 440), and tumour samples were equally divided into low and high GnT-V expression groups based on mRNA levels. Significant differences in overall survival time were assessed with the Cox proportional hazard (log-rank) test.
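As an illustration of this median-split survival analysis, the sketch below (assuming the Python `lifelines` package, which the authors did not use; they used the online tool) dichotomises patients by GnT-V expression and compares the groups with a log-rank test. All data are hypothetical placeholders.

```python
# Minimal sketch: Kaplan-Meier curves and log-rank test for a median split.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "os_months": rng.exponential(60, n),     # overall survival (hypothetical)
    "event": rng.integers(0, 2, n),          # 1 = death observed
    "gnt5_expr": rng.normal(8.0, 1.5, n),    # log2 microarray intensity
})
high = df["gnt5_expr"] >= df["gnt5_expr"].median()

kmf = KaplanMeierFitter()
for mask, label in [(high, "GnT-V high"), (~high, "GnT-V low")]:
    kmf.fit(df.loc[mask, "os_months"], df.loc[mask, "event"], label=label)
    print(label, "median OS:", kmf.median_survival_time_)

res = logrank_test(df.loc[high, "os_months"], df.loc[~high, "os_months"],
                   event_observed_A=df.loc[high, "event"],
                   event_observed_B=df.loc[~high, "event"])
print("log-rank p =", res.p_value)
```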
Oncomine database analysis
Publicly available data sets of gene expression profiles were chosen [26]. mRNA expression profiles for GnT-V, E-cadherin, N-cadherin and Slug were evaluated using the probes Mgat5/31313-at, CDH1/2082-s-at, CDH2/2053-at and Snail2/38288-at, respectively. mRNA expression levels were displayed using log2 median-centred ratio boxplots for lung cancer versus normal tissue. Standardized normalization techniques and statistical calculations are provided on the Oncomine website and have been published [27].
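The log2 median-centred presentation used here, and the gene-gene correlations reported in the Results, can be reproduced as shown below. This is an illustrative sketch on hypothetical data, not the Oncomine pipeline itself.

```python
# Minimal sketch: log2 median-centring per gene plus expression correlation.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
gnt5 = rng.lognormal(5, 0.5, 100)          # raw intensities (hypothetical)
cdh2 = rng.lognormal(5, 0.5, 100) / gnt5   # constructed to anti-correlate

def log2_median_centred(x):
    """Oncomine-style transform: log2, then subtract the gene's median."""
    lx = np.log2(x)
    return lx - np.median(lx)

r, p = pearsonr(log2_median_centred(gnt5), log2_median_centred(cdh2))
print(f"GnT-V vs. N-cadherin: r = {r:.2f}, p = {p:.3g}")
```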
RNA isolation and real-time PCR
RNA isolation, reverse transcription and real-time PCR (qRT-PCR) analysis were performed as previously described [28]. Primers used in the qRT-PCR analysis were as follows: human GnT-V (NM-002410.
Western blot and lectin blot
Cells were lysed in ice-cold lysis buffer containing SDS; protein assays and blots were carried out as described previously [29]. After protein-blotted polyvinylidene difluoride (PVDF) membranes (Merck Millipore) were prepared and blocked with 5% BSA in TBS, the membrane was incubated with primary antibodies or biotinylated lectins in TBS buffer containing 0.1% Tween-20 (TBST) overnight at 4°C. The membrane was washed with TBS and probed with HRP-conjugated anti-rabbit or anti-mouse IgG (1:3000) or R.T.U. Horseradish Peroxidase Streptavidin for 1 hr at room temperature. After the membrane was washed with TBST, immunoreactive bands were visualized using ECL reagents from Pierce (Thermo Fisher Scientific, Rockford, IL, USA).
Flow cytometry analysis for cell surface N-glycans
Cultured cells were detached with 0.25% trypsin and collected by centrifugation at 300 × g for 5 min. The cells were blocked with 1 mg/ml BSA in PBS and successively incubated for 1 hr at 4°C with biotinylated lectins, which bind to the N-glycans of glycoproteins on the cell plasma membrane. All biotinylated lectins were diluted 1:250 with 0.5% BSA in PBS. The cells were then incubated with fluorescein-conjugated Avidin D at a dilution of 1:250 at 4°C for 45 min. in the dark. After washing with PBS, the cells were suspended in 300 µl PBS and analysed by flow cytometry (Beckman Coulter, Brea, CA, USA). A suspension of 1 × 10^4 cells was analysed for each sample, and each experiment was repeated at least three times.
Plasmid construction, virus production and stable infection
Lentiviral recombinant vectors pCDH-puro-wtGnT-V and pCDH-puro-ΔcGnT-V were generated by PCR cloning of the full-length human GnT-V (wtGnT-V) or a C-terminal deletion mutant of GnT-V (ΔcGnT-V) from pcDNA3.1-GnT-V into pCDH-Puro, and were fully sequenced. The target sequence for GnT-V RNAi is 5′-CCTGGAAGCTATCGCAAAT-3′ (1508), and the scramble (control) sequence is 5′-GAATTACTCCTAGAACCGC-3′. These sequences were synthesized as 58-bp stem-loop structures and inserted into the pLKO.1-puro vector at the AgeI and XbaI sites, respectively. Lentiviruses were generated by cotransfection of subconfluent 293T cells with one of the above recombinant plasmids and packaging plasmids (pΔ8.2 and pVSVG) by calcium phosphate transfection. To collect infectious lentiviruses, supernatants were centrifuged to remove cell debris and filtered through 0.45-µm filters (Merck Millipore). A549 cells were transduced with the lentiviruses expressing wtGnT-V, ΔcGnT-V, shRNA against human GnT-V or scramble non-target shRNA. H1975 cells were transduced with lentiviruses containing GnT-V. To select stable cells, puromycin (2 µg/ml) was added after virus infection. For control transfection, the same protocol was performed with only the empty lentivirus expression vector.
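A stem-loop shRNA insert of this kind is assembled as sense strand + loop + antisense (reverse complement) strand + pol III terminator. The sketch below illustrates the assembly; the loop and terminator sequences are common defaults and are assumptions of this example, not necessarily those used in this study.

```python
# Minimal sketch: assemble an shRNA hairpin from a 19-nt target sequence.
LOOP = "CTCGAG"        # assumed loop (common in pLKO.1-style hairpins)
TERMINATOR = "TTTTTG"  # assumed pol III terminator

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def shrna_stem_loop(target: str) -> str:
    """Sense-loop-antisense-terminator hairpin for a 19-nt target.
    Restriction-site-compatible overhangs for AgeI/XbaI cloning (not added
    here) would account for the remaining bases of the 58-bp insert."""
    return target + LOOP + revcomp(target) + TERMINATOR

target = "CCTGGAAGCTATCGCAAAT"  # GnT-V target sequence from the text
hairpin = shrna_stem_loop(target)
print(hairpin, len(hairpin))
```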
Immunofluorescence staining and confocal microscopy
Cells were grown on glass coverslips in a 24-well plate, washed with cold PBS, fixed with 4% paraformaldehyde for 30 min. on ice and permeabilized with 0.1% Triton X-100 in PBS for 10 min. on ice. After blocking with 3% BSA in PBS, coverslips were incubated with the respective primary antibodies at a 1:100 dilution in 3% BSA in PBS overnight at 4°C. After washing three times with 3% BSA in PBS, coverslips were incubated with fluorescein-conjugated secondary antibodies or the F-actin-specific dye phalloidin at a 1:500 dilution in 3% BSA in PBS for 1 hr. Cells were then washed three times with PBS, mounted with medium containing 4′,6-diamidino-2-phenylindole (DAPI; Vector Laboratories), and analysed using a fluorescence microscope (Olympus, Tokyo, Japan) or a laser confocal microscope (Leica LAS AF Lite, Wetzlar, Germany).
Cell migration and invasion assays
Cells were pre-treated with or without TGF-β1 for 24 hrs (GnT-V knockdown) or 48 hrs (GnT-V overexpression), harvested with 0.25% trypsin, washed twice with RPMI 1640 medium, and resuspended in serum-free medium containing TGF-β1 or BSA at a density of 5 × 10^5 (A549) or 1 × 10^5 (H1975) cells/ml. For the transwell migration assay, the transwell was coated only on the bottom side with 10 µg/ml fibronectin (FN; Sigma-Aldrich, St. Louis, MO, USA) at room temperature for 1 hr; 200 µl of the cell suspension was plated in the upper chamber of a non-coated transwell insert. In the lower chamber, 500 µl of medium with 5% FBS was used as a chemoattractant to encourage cell migration.
For the transwell Matrigel invasion assay, the bottom side of the transwell was coated with 10 µg/ml FN at room temperature for 1 hr, the upper chamber of the transwell insert was coated with 80 µl of 2.0 µg/ml Matrigel, and 200 µl of the cell suspension was plated in the upper chamber of the Matrigel-coated transwell insert. The lower chamber was filled with 500 µl of medium with 5% FBS.
Cells in both assays were incubated at 37°C in 5% CO2 for 12 hrs. After incubation, cells that had not migrated or invaded were removed by scraping with a cotton swab. The membrane in the transwell was fixed with 4% paraformaldehyde for 30 min. and stained with 2.5% crystal violet for 15 min., and the cells that had migrated to the lower side were counted under a light microscope. We selected five random fields per filter to count the cells, and each independent experiment was repeated at least three times.
Transfection and dual-luciferase assay
Cell transfection and the dual-luciferase assay were performed as previously described [30]. Cells cultured in 48-well plates were transfected with Smad-driven transcriptional luciferase and Renilla reporter constructs. The following day, cells were serum-starved and exposed to TGF-β1 for 24 hrs; thereafter, cells were lysed and luciferase activity was measured using the dual-luciferase reporter assay system from Promega (Madison, WI, USA).
Statistical analysis
Quantitative data from at least three experiments are expressed as means ± SD. Statistical significance was determined by Student's t-test. Differences were considered statistically significant at P < 0.05. The P-values of the compared groups are marked in the figures with asterisks and indicated in the figure legends as *: P < 0.05, **: P < 0.01, ***: P < 0.001.
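The significance convention described above maps p-values from a two-sample Student's t-test to asterisk labels, as illustrated in this minimal sketch on hypothetical data.

```python
# Minimal sketch: two-sample t-test with asterisk significance labels.
import numpy as np
from scipy.stats import ttest_ind

def stars(p: float) -> str:
    """Map a p-value to the asterisk convention used in the figures."""
    return "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"

rng = np.random.default_rng(3)
control = rng.normal(1.0, 0.15, 6)  # normalized values (hypothetical)
treated = rng.normal(1.6, 0.15, 6)

t, p = ttest_ind(treated, control)
print(f"mean ± SD: {treated.mean():.2f} ± {treated.std(ddof=1):.2f} vs. "
      f"{control.mean():.2f} ± {control.std(ddof=1):.2f}; p = {p:.4f} ({stars(p)})")
```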
Results
GnT-V expression is correlated with epithelial identities positively and mesenchymal identities negatively in human lung cancer
To study the relationship between GnT-V expression and the clinicopathological features of patients with lung cancer, a Kaplan-Meier survival plot was generated and significance was computed. The tool can be accessed online at www.kmplot.com [25]. We used this integrative data analysis tool to validate the prognostic power of GnT-V (Affymetrix ID: 212098-at) in lung cancer using overall survival. We found that lung cancer patients with high GnT-V expression had significantly longer survival times than those with low GnT-V expression among all-stage lung cancer patients (n = 1406; Fig. 1A left). In addition, we ran the analysis for predicting overall survival in stage I lung cancer patients (n = 440) alone (Fig. 1A right), with a similar result. These findings are consistent with the report of Dosaka-Akita and colleagues [13], suggesting that GnT-V might be considered a prognostic factor for patients with lung cancer. Furthermore, to investigate the relevance of GnT-V expression to EMT, the mRNA levels of GnT-V and EMT markers were analysed in the Bhattacharjee lung database, including 186 lung tumour specimens and 17 normal lung specimens, from the Oncomine gene expression microarray data sets (www.oncomine.org) [26]. The levels of GnT-V mRNA were lower in lung adenocarcinomas, lung carcinoids and small cell lung carcinomas, but not squamous cell lung carcinomas, than in normal counterparts, whereas the mRNA levels of N-cadherin and Slug were relatively higher in lung cancers compared with normal counterparts (Fig. 1B). The relationship between GnT-V and EMT markers was analysed at the mRNA level. GnT-V mRNA expression was negatively correlated with the mesenchymal marker N-cadherin and with Slug, a repressor of E-cadherin expression, and tended to be positively related to epithelial marker E-cadherin mRNA expression (Fig. 1C).
Furthermore, one normal lung cell line and four human lung cancer cell lines were subjected to analysis of the mRNA and protein levels of GnT-V and EMT markers by qRT-PCR and western blot. As shown in Figure 1D, in HBE, a normal lung bronchial epithelial cell line, and A549, a lung adenocarcinoma cell line, GnT-V protein levels were relatively high and correlated positively with E-cadherin expression and negatively with N-cadherin/Vimentin expression. We also examined the lung adenocarcinoma H1975 and H1299 and large cell lung cancer H460 cell lines. The results showed that low expression of GnT-V was closely associated with relatively lower E-cadherin and higher N-cadherin/Vimentin protein levels. Consistently, the protein levels of Snail and Slug in H1975 and H460 were significantly higher than those in A549 and HBE (Fig. 1D). In addition, similar results showed the close correlation between GnT-V and EMT markers at the mRNA level, as determined in the lung cancer cell lines.

Fig. 1 High GnT-V expression is correlated with good prognosis and negatively correlated with EMT in human lung cancer. (A) Correlation between higher GnT-V (Affymetrix ID: 212098_at) expression and good overall survival (OS) in all stage (n = 1406; left) or stage I (n = 440; right) lung cancer patients. Plots were generated online by using a Kaplan-Meier Plotter based on signal intensity in microarray gene expression data from patients for whom OS data are available (www.kmplot.com/lung). The upper curve, red, indicates higher than median expression, and the lower curve, black, lower than median expression. (B) Oncomine box plot RNA expression data for GnT-V and EMT markers (E-cadherin, N-cadherin and Slug) in normal tissue compared with cancer, shown within the Bhattacharjee lung database of publicly available Oncomine microarray data sets (www.oncomine.org). The log2 median-centred ratios for gene expression level are depicted in box-and-whisker plots. Dots represent maximum and minimum outliers from the main data set. For each plot, the following pathological subtypes were evaluated separately.

Alterations of β1,6-GlcNAc branched N-glycans and GnT-V during TGF-β1-induced EMT in human lung cancer

To explore GnT-V expression and its biological functions during EMT, TGF-β1-induced EMT, a widely accepted EMT model, was employed. After TGF-β1 treatment, the islands of A549 cells turned from a cobblestone-like epithelial morphology into a diffuse spindle-like mesenchymal phenotype in a dose-dependent manner, which became very pronounced after 10 ng/ml TGF-β1 treatment (Fig. 2A top). TGF-β1 treatment also significantly reduced E-cadherin protein levels while increasing N-cadherin (Fig. 2A bottom).
Next we sought to identify the effect of TGF-β1 treatment on changes in N-glycan structure and branching status in A549 cells using lectin-FACS methods. The results showed that hybrid and bi-antennary complex types of N-glycans, which are specifically bound by ConA lectin [31], were significantly attenuated during the TGF-β1-induced EMT process in A549 cells. In contrast, the high-mannose type of N-glycans, detected by GNA binding [32], appeared unchanged. The L-PHA, DSA and WGA lectins strongly recognize tri- and tetra-antennary complex types of N-glycans; in particular, L-PHA preferentially recognizes β1,6-GlcNAc branched N-glycans, by which cell surface β1,6-GlcNAc branching can be measured [33]. Our results showed that N-glycan branching, especially GnT-V's products of β1,6-GlcNAc branched N-glycans, was dramatically suppressed by TGF-β1 treatment (Fig. 2B). Similar changes in each type of N-glycan were detected by lectin blot in A549 cells during EMT (Fig. 2C).
To explore whether the reduction of β1,6-GlcNAc branched N-glycans was due to down-regulation of the glycosyltransferase, the expression of GnT-V was further studied. GnT-V mRNA levels were decreased by about 45% in TGF-β1-treated A549 cells as compared with control cells by qRT-PCR (Fig. 2D left), and a significant decrease in GnT-V protein levels was also observed by western blot (Fig. 2D right). These results demonstrate that the expression of GnT-V, as well as the formation of β1,6-GlcNAc branched N-glycans, is down-regulated during TGF-β1-induced EMT.

Inhibition of β1,6-GlcNAc branched N-glycans' formation in lung cancer cells enhances TGF-β1-induced EMT, cell migration and invasion

The relationship of GnT-V with EMT markers and the alteration of GnT-V during TGF-β1-induced EMT suggest that GnT-V and its products, β1,6-GlcNAc branched N-glycans, play an important role in the control of EMT in human lung cancer. As there is no previous report showing a role for GnT-V in EMT in human lung cancer, it is important to elucidate whether GnT-V functions in the EMT process. First, the effect of the N-glycosylation produced by GnT-V on TGF-β1-induced alterations of EMT was explored in A549 cells. After A549 cells were treated with swainsonine, which inhibits Golgi α-mannosidase II and ultimately causes the inhibition of β1,6-GlcNAc branched N-glycans' formation, a lectin blot was performed. The β1,6-GlcNAc branched N-glycans detected by L-PHA lectin blot showed a significant decrease (Fig. 3A), indicating the expected dramatic suppression of β1,6-GlcNAc branched N-glycans in A549 cells. Surprisingly, the down-regulation of β1,6-GlcNAc branched N-glycans by swainsonine significantly enhanced the change of cell morphology in TGF-β1-induced EMT and altered the levels of E-cadherin and N-cadherin (Fig. 3B). This suggests that β1,6-GlcNAc branched N-glycans function as an EMT suppressor.
Given that interference with the synthesis of β1,6-GlcNAc branched N-glycans by swainsonine could enhance TGF-β1-induced EMT, we further explored whether direct inhibition of β1,6-GlcNAc branched N-glycans' formation, achieved through suppression of GnT-V expression, could also enhance TGF-β1-induced EMT. Stable GnT-V knockdown A549 cells were generated by puromycin selection using a lentiviral shRNA system. As shown in Figure 3C, GnT-V mRNA and protein levels were significantly decreased in shGnT-V A549 cells as compared with scramble A549 cells. Consistent with the decreased GnT-V mRNA and protein levels, GnT-V knockdown cells showed dramatically reduced binding of L-PHA by flow cytometry (Fig. 3D). Taken together, these results demonstrate that both GnT-V and its products, β1,6-GlcNAc branched N-glycans, are significantly suppressed by shGnT-V expression, and that the reduction of β1,6-GlcNAc branched N-glycans results from the down-regulation of the glycosyltransferase.
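Relative mRNA levels such as the ~45% reduction reported above (Fig. 2D) and the knockdown validation in Fig. 3C are commonly computed with the 2^-ΔΔCt method; the method and the Ct values below are assumptions for illustration, as the text does not specify the calculation.

```python
# Hedged sketch of the 2^-ddCt relative-expression calculation.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_treated = ct_target - ct_ref            # target normalized to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_treated - d_ct_control)

# Hypothetical GnT-V vs GAPDH Ct values, treated vs control cells
level = relative_expression(24.9, 18.0, 24.0, 18.0)
print(f"GnT-V mRNA relative to control: {level:.2f}")  # ~0.54, i.e. ~45% lower
```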
Furthermore, we observed the changes in cell morphology and motility after GnT-V knockdown. GnT-V down-regulation by shRNA induced a scattered distribution of cells and significantly enhanced TGF-β1-induced morphological alterations of EMT, supporting the conclusion that GnT-V functions as an EMT suppressor (Fig. 3E) [34]. The change in phenotype was associated with clear alterations of TGF-β1-induced EMT markers: E-cadherin was lost to a much greater extent and N-cadherin was increased much more in GnT-V knockdown cells as compared with scramble cells (Fig. 3F). These results suggest that GnT-V is critical in maintaining the epithelial phenotype and suppressing the EMT of A549 cells. Moreover, TGF-β1-induced EMT is known to be closely related to enhanced cell motility. To examine whether GnT-V expression affected TGF-β1-induced cell motility, transwell and transwell Matrigel assays were employed to detect cell migration and invasion. In the transwell assay, more cells migrated to the lower surface of the membrane in the GnT-V knockdown cells after TGF-β1 treatment (Fig. 3G). Consistent with this result, in the transwell Matrigel assay, the invasive ability of GnT-V knockdown cells was also greatly enhanced as compared with scramble cells after treatment with TGF-β1 (Fig. 3H).
Taken together, these results indicate that TGF-β1-induced EMT, cell migration and invasion in lung cancer cells are enhanced by interference with β1,6-GlcNAc branched N-glycans' formation or by suppression of GnT-V expression. This suggests that GnT-V and its products, β1,6-GlcNAc branched N-glycans, are involved in an EMT-like switch facilitating cell migration and invasion in vitro.
Overexpression of GnT-V in lung cancer cells reduces TGF-β1-induced EMT, cell migration and invasion
As overexpression of GnT-V is an effective tool to alter cell surface β1,6-GlcNAc branched N-glycans, we next used GnT-V-overexpressing A549 cells to investigate in detail the relationship of GnT-V's catalytic activity with TGF-β1-induced EMT. To explore whether the effect of GnT-V on A549 cells' EMT behaviour was dependent on its catalytic activity, either wild-type GnT-V (wtGnT-V) or inactive mutant GnT-V (△cGnT-V) was stably transfected into A549 cells. The inactive mutant GnT-V was constructed by shortening the GnT-V gene to remove the coding sequence for the six amino acids at the extreme C-terminal end, because deletion of 4-8 amino acids at the carboxyl-terminus destroys its catalytic activity [35,36]. The stable cells were selected with puromycin and identified as shown in Figure 4A. GnT-V mRNA and protein levels were significantly increased in both wtGnT-V-A549 and △cGnT-V-A549 cells, and a significant enhancement of L-PHA binding was observed in wtGnT-V-A549 cells, whereas little change in L-PHA binding was observed in △cGnT-V-A549 cells as compared with vehicle cells, indicating that △cGnT-V-A549 cells mainly expressed the inactive form of the GnT-V protein (Fig. 4B). Therefore, these results suggest that β1,6-GlcNAc branched N-glycans are prevalent under the increased catalytic activity of GnT-V, and that glycosyltransferase activity is deficient in △cGnT-V-A549 cells as compared with wtGnT-V-A549 and vehicle cells.
Overexpression of wtGnT-V abolished the gain of a strikingly mesenchymal phenotype and significantly reduced TGF-β1-induced EMT in A549 cells, as shown by the inhibition of the cell morphological changes (Fig. 4C top), of the increase in N-cadherin and decrease in E-cadherin (Fig. 4C bottom), and of the actin remodelling (Fig. 4D), supporting the conclusion that GnT-V functions as an EMT suppressor. Consistent with the above results, wtGnT-V overexpression also weakened TGF-β1-induced cell migration and invasion in A549 cells (Fig. 4E and F). In contrast, △cGnT-V overexpression failed to reduce TGF-β1-induced EMT, migration and invasion as compared with vehicle cells (Fig. 4C, E and F), indicating that without its catalytic activity GnT-V was unable to inhibit TGF-β1-induced EMT and cell motility in A549 cells. These results suggest that GnT-V's suppression of lung cancer EMT behaviour is glycosyltransferase activity-dependent.
We also verified whether GnT-V functions as an EMT suppressor in H1975, another human lung adenocarcinoma cell line. Parental H1975 cells, which originally show a low level of GnT-V expression, exhibit an EMT phenotype, i.e., fibroblastoid morphology, high N-cadherin and vimentin, and low E-cadherin; we therefore generated H1975 cells with stable wtGnT-V overexpression. As shown in Figure 5A and B, wtGnT-V mRNA and protein levels, and L-PHA binding, were significantly enhanced in wtGnT-V-H1975 cells as compared with vehicle cells. Strikingly, overexpression of wtGnT-V in H1975 cells partly reversed their mesenchymal phenotype into an epithelial-like one and also reduced their capacity for cell migration and invasion (Fig. 5C, E and F). After TGF-β1 treatment, overexpression of wtGnT-V in these cells also inhibited TGF-β1-induced EMT, as demonstrated by inhibition of the cell morphological changes, up-regulation of E-cadherin and down-regulation of N-cadherin (Fig. 5C and D), and by suppression of TGF-β1-induced cell migration and invasion (Fig. 5E and F).
Overall, these results suggest that GnT-V plays a role in maintaining the epithelial phenotype and in suppressing TGF-β1-induced EMT, cell migration and invasion via its product, β1,6-GlcNAc branched N-glycans. The target glycoproteins of GnT-V and the underlying mechanisms need further investigation.
Inhibition of β1,6-GlcNAc branched N-glycans' formation enhances the activation of the TGF-β/Smads signalling pathway

It is known that most switches from an epithelial to a mesenchymal-like phenotype are regulated by TGF-β signalling [20]. Because both the interference with β1,6-GlcNAc branched N-glycans' formation and the knockdown of GnT-V enhance TGF-β1-induced EMT and cell motility in lung cancer A549 cells, we speculated that both may regulate key signalling molecules in the TGF-β pathway. We found that either swainsonine treatment or GnT-V knockdown in A549 cells increased Smad2 and Smad3 phosphorylation in response to TGF-β1 as compared with control cells (Fig. 6A and B). The results of immunofluorescence staining (Fig. 6C left) and nuclear protein immunoblotting (Fig. 6C right) showed that shGnT-V-A549 cells exposed to TGF-β1 had more nuclear translocation of pSmad2 and pSmad3 than scramble cells. In addition to Smad signalling, we also investigated the effect of GnT-V on some TGF-β non-Smad signalling pathways, detecting the phosphorylation of AKT, ERK1/2, P38, JNK and FAK by western blot (Fig. 6D). GnT-V knockdown had little effect on TGF-β non-Smad signalling, except for increased FAK signalling. These results show that GnT-V knockdown and the inhibition of β1,6-GlcNAc branched N-glycans' formation enhance TGF-β signalling through increased Smad phosphorylation and nuclear translocation.
Furthermore, we examined the effect of GnT-V on TGF-β1-induced transcriptional activity. As shown in Figure 6E, knockdown of GnT-V in A549 cells resulted in significantly enhanced TGF-β1-induced transcriptional responses from a Smad2/4-dependent 3TP-luciferase reporter [37] and a Smad3/4-dependent (SBE)4-luciferase reporter [38], indicating that GnT-V is involved in the regulation of the TGF-β/Smad2/3/4-dependent transcriptional response. This suggests that GnT-V is involved in the TGF-β1-induced EMT switch through the TGF-β/Smads pathway. To further confirm the effect of GnT-V on Smads-mediated transcriptional activity, we examined changes in the protein and mRNA levels of Snail and Slug, which are strong repressors of E-cadherin expression. Snail and Slug are typical TGF-β downstream target genes, which contain a Smad3-binding G/C-rich sequence and are transactivated by Smad3 following TGF-β1 treatment [39]. Knockdown of GnT-V enhanced the TGF-β1-induced mRNA levels of Snail and Slug according to qRT-PCR (Fig. 6F left), which was also confirmed by western blot (Fig. 6F middle), and increased nuclear translocation of Snail as shown by immunofluorescence staining (Fig. 6F right). All these results demonstrate that knockdown of GnT-V enhanced TGF-β1-induced up-regulation of Smads-mediated transactivation. This suggests that TβRs, among GnT-V's substrates, may be involved in this process.
GnT-V impairs the activation of the TGF-β/Smads signalling pathway in a catalytic activity-dependent manner

Next, we considered whether the effect of GnT-V overexpression on TGF-β/Smad signalling is associated with its catalytic activity. Indeed, overexpression of wtGnT-V in A549 cells decreased Smad2 and Smad3 phosphorylation and the nuclear translocation of pSmad2/3 in response to TGF-β1 as compared with △cGnT-V and vehicle cells (Fig. 7A and B), and similar results were obtained in H1975 cells stably overexpressing wtGnT-V (Fig. 7A). In contrast, overexpression of △cGnT-V, the inactive form, in A549 cells showed no influence on the phosphorylation and nuclear translocation of Smad2 and Smad3 in response to TGF-β1 as compared with vehicle cells (Fig. 7A and B). Moreover, overexpression of wtGnT-V in A549 cells had no significant influence on the phosphorylation levels of AKT, ERK1/2, P38 and JNK, except for decreased phosphorylation of FAK, as compared with △cGnT-V or vehicle cells after TGF-β1 treatment (Fig. 7C). These results indicate that GnT-V's suppression of TGF-β signalling at the Smad activation stage, consistent with its effect on EMT behaviour, is catalytic activity-dependent.
Furthermore, overexpression of wtGnT-V in A549 and H1975 cells reduced both Smad2/4-driven 3TP and Smad3/4-driven (SBE)4 luciferase transcriptional reporter activity (Fig. 7D) and blocked the up-regulation of Snail and Slug protein after TGF-β1 treatment as compared with vehicle cells. Moreover, the increased nuclear translocation of Snail disappeared in wtGnT-V-A549 cells with TGF-β1 treatment as compared with vehicle cells, explaining the reduction of TGF-β1-induced EMT (Fig. 7E). Meanwhile, consistent with the phosphorylation of Smads, there was little change in TGF-β1-induced Smad3/Smad4 transcriptional reporter activity or Snail/Slug expression in △cGnT-V-A549 cells as compared with vehicle cells (Fig. 7D and E). These results indicate that GnT-V's suppression of TGF-β1-induced up-regulation of Smads-mediated transactivation is catalytic activity-dependent.
Overall, our data strongly support the importance of GnT-V in inhibiting the morphological switch from a resting epithelium-like to a migratory phenotype through regulation of the TGF-β/Smads pathway, in a catalytic activity-dependent manner. We speculate that GnT-V may act through β1,6-GlcNAc branched N-glycan modification of TβRs to affect TGF-β1-induced EMT in lung cancer cells.
Discussion
In the present study, we investigated the effect of GnT-V expression on EMT and its biological functions, such as cell migration and invasion. We found that GnT-V acts as a favourable prognostic factor, with lower expression in human lung cancer than in normal lung, and that GnT-V expression is negatively correlated with mesenchymal EMT markers. We also found that GnT-V expression and its product, β1,6-GlcNAc modification of the cell surface, were altered during TGF-β1-induced EMT in lung cancer cells. These results indicate that GnT-V is an important event in the control of EMT. Hence, inhibition of β1,6-GlcNAc branched N-glycans' formation, shRNA-mediated knockdown of GnT-V, and overexpression of wild-type or inactivated GnT-V were employed to specifically examine the role of GnT-V in TGF-β1-induced EMT, cell migration and invasion in lung cancer cells. We found that GnT-V suppressed TGF-β1-induced EMT, cell migration and invasion, and that these functions depended on its glycosyltransferase activity. The data further confirm that GnT-V could be a suppressor of TGF-β1-induced EMT through β1,6-GlcNAc branched N-glycan glycosylation of its target glycoproteins. To the best of our knowledge, this is the first report to demonstrate that GnT-V expression is inversely related to EMT behaviour in lung cancer cells, and that GnT-V is involved in lung cancer cells' EMT by regulating TGF-β/Smads and its downstream transcription factors in a catalytic activity-dependent manner.
Fig. 6 Inhibition of β1,6-GlcNAc branched N-glycans' formation through swainsonine treatment or GnT-V knockdown in lung cancer cells enhances the activation of TGF-β/Smads signalling. (A) Swainsonine treatment enhances TGF-β1-mediated Smad2 and Smad3 phosphorylation in A549 cells, as determined by western blot of the Smad2 and Smad3 proteins and their phosphorylation levels (pSmad2, pSmad3). A549 cells were pre-treated with or without swainsonine (1 µg/ml) for 24 hrs before exposure to TGF-β1 (5 ng/ml) for the indicated time periods. (B) Knockdown of GnT-V enhances TGF-β1-induced Smad2 and Smad3 phosphorylation in A549 cells. The phosphorylation and total protein levels of Smad2/Smad3 were determined by western blot. Scramble and shGnT-V A549 cells were exposed to TGF-β1 (5 ng/ml) for 1 or 24 hrs. (C) Knockdown of GnT-V enhances TGF-β1-induced nuclear translocation of pSmad2 and pSmad3 in A549 cells, as determined by immunofluorescence staining (left) and nuclear protein immunoblotting (right). Scramble and shGnT-V A549 cells were exposed to TGF-β1 (5 ng/ml) for 1 hr. (D) Knockdown of GnT-V does not alter TGF-β non-Smad signalling in A549 cells. Scramble and shGnT-V A549 cells were exposed to TGF-β1 (5 ng/ml) for 1 hr, and the phosphorylation and total protein levels of signalling molecules (FAK, AKT, ERK, p38 and JNK) in the TGF-β non-Smad pathway were determined by western blot. GAPDH was used as a loading control. (E) Knockdown of GnT-V increases TGF-β1-induced Smad2/Smad3 transcriptional reporter activity in A549 cells, as determined by a dual-luciferase assay. Scramble and shGnT-V A549 cells were transiently transfected with Smad2/4-driven 3TP-luciferase and Smad3/4-driven (SBE)4-luciferase. After transfection, cells were treated with or without TGF-β1 (5 ng/ml) for another 24 hrs, and then the dual-luciferase assay was performed. (F) Knockdown of GnT-V increases TGF-β1-induced Snail/Slug expression and nuclear translocation in A549 cells, as determined by qRT-PCR (left), western blot (middle) and immunofluorescence staining (right) of A549 cells exposed to TGF-β1 (5 ng/ml) for 24 hrs.

Some reports have shown that GnT-V and its products, β1,6-GlcNAc branched N-glycans, correlate positively with cancer malignancy and are strongly linked to tumour metastasis [40,41]. However, this is not always the case: Dosaka-Akita et al. reported that lower expression of GnT-V is associated with shorter survival and a poor prognosis in stage I NSCLCs [13]. The present study also found that GnT-V expression in lung cancer was lower than in normal lung, was associated with a favourable prognosis, and was inversely related to EMT behaviours. Moreover, GnT-V expression, as well as β1,6-GlcNAc branched N-glycans, was down-regulated during TGF-β1-induced EMT. All these results indicate that the role of GnT-V in cancer progression is tissue-specific, and that GnT-V may be involved in lung cancer cells' EMT process. Therefore, there remains a distinct possibility that TβRs, target glycoproteins of GnT-V, may play a role in TGF-β1-induced EMT, cell migration and invasion in human lung cancer cells.
It is well known that EMT facilitates tumour metastasis. During the EMT process, epithelial cells are converted into cells with greater migratory and invasive ability through up-regulation of mesenchymal markers, loss of epithelial markers, reduction of cell-cell adhesion and increase of cell-matrix adhesion. Previous studies have suggested the involvement of GnT-V in regulating cell adhesion, migration and invasion in various cancers, simply by affecting the N-glycosylation of cell surface protein receptors, including GFRs, integrins and cadherins [41-43]. Notably, most GnT-V-triggered β1,6-GlcNAc modifications of glycoproteins that serve as EMT markers or EMT regulators contribute to EMT behaviour [44]. Much evidence indicates that EMT is a multi-step process induced predominantly by cell surface glycoproteins, although the underlying mechanisms remain to be clarified. Moreover, the effect of GnT-V on EMT in lung cancer and its biological function had not yet been elucidated. Our results demonstrated that the reduction of β1,6-GlcNAc branched N-glycans by either swainsonine or GnT-V knockdown enhanced TGF-β1-induced EMT and, in addition, increased the capacities for cell migration and invasion in lung cancer A549 cells. Swainsonine treatment and GnT-V knockdown have been reported to significantly increase cell migration and invasion through down-regulation of β1,6-GlcNAc branched N-glycans of α5β1 integrin in human choriocarcinoma cells [45]. To further prove directly that GnT-V is a suppressor of EMT and that this is associated with its catalytic activity, we established stable cells overexpressing wild-type GnT-V, with inactive △cGnT-V as a negative control, and confirmed that GnT-V negatively regulates EMT behaviour, cell migration and invasion in a catalytic activity-dependent manner. These results are consistent with the effect of GnT-V activity on fibrosarcoma-matrix adhesion, monocyte adhesion and trans-endothelial migration, and extravillous trophoblast invasion, although they differ from its effect on breast cancer motility [46,47]. These diverse results may reflect differences in the malignancy of the cells. Furthermore, GnT-V expression is regulated in a tissue-specific manner, and the biological functions of GnT-V expression differ among tissues and cells, depending on the biological function of the target substrate glycoproteins, which can vary in different tissues and cells [48]. Up-regulation of GnT-V may contribute to altered biological properties of lung cancer cells by increased synthesis of β1,6-GlcNAc branched N-glycans on certain target glycoproteins, resulting in inhibition of TGF-β1-induced EMT and cell motility. All these results indicate that the role of GnT-V in cancer progression is tissue-specific. Therefore, there remains a distinct possibility that other target glycoproteins may play a role in EMT, migration and invasion of human lung cancer cells, and the target glycoproteins of GnT-V remain to be determined.
To further investigate the molecular mechanisms by which GnT-V affects the EMT behaviours of lung cancer cells, we investigated the effect of GnT-V activity on TGF-β signalling. GnT-V's catalytic activity had little effect on TGF-β non-Smad signalling except for the FAK pathway, but it was associated with decreased Smad phosphorylation, nuclear translocation, Smad-driven transcriptional reporter activity, and expression of the Smad downstream transcription factors Snail/Slug, thereby down-regulating TGF-β/Smads signalling. Aberrant modification of β1,6-GlcNAc branched N-glycans by GnT-V affects the functions of cell surface receptors and the downstream signalling pathways mediated by these receptors. Previous studies have demonstrated that GnT-V+/+ mouse cells sensitized to multiple cytokines show increased Galectin-3 cross-linking of GnT-V-modified N-glycans on EGFR at the cell surface, delaying constitutive endocytosis [3,49]. Furthermore, this effect of GnT-V is associated with the number of N-glycans (n) on a glycoprotein. The number of N-glycans is a distinct feature of each glycoprotein and cooperates with the avidity of glycoproteins for the galectin lattice to regulate surface glycoprotein residency levels [50]. The effect of β1,6-GlcNAc branched N-glycans on the surface retention of TβRI and TβRII, which have few N-glycan sites (NXS/T) (n = 1 and n = 2), differs from that on EGFR, which has many glycosylation sites (n = 8) [51,52]. In GnT-V+/+ tumour cells, lattice avidity is strengthened, promoting endocytosis of low-n receptors such as TβRs relative to high-n receptors, whose high avidity for the galectin lattice stabilizes them at the cell surface. Moreover, GnT-V−/− tumour cells display reduced galectin-3 binding to complex N-glycans on high-n receptors and increased mobility of high-n receptors on the cell membrane, and thus movement into both caveolae and coated pits, but limited low-n receptor internalization, maintaining downstream signalling sensitivity [53,54]. Therefore, we speculate that the inhibitory effect of GnT-V on the TGF-β signalling pathway possibly reflects increased endocytosis of TβRs through modified N-glycosylation, resulting in reduced cell surface retention, diminished TGF-β signalling, decreased expression of the target genes Snail and Slug, and blocked EMT behaviours. The exact mechanism by which GnT-V influences TGF-β signalling requires further investigation.
In summary, this is the first evidence that GnT-V suppresses TGF-β1-induced EMT, cell migration and invasion through its catalytic activity in human lung cancer, which further suggests that GnT-V and its products, β1,6-GlcNAc branched N-glycans, are responsible for modulating EMT and cancer invasion in lung cancer. These findings support the hypothesis that GnT-V and β1,6-GlcNAc branched N-glycans may function as suppressors of EMT and invasion in lung cancer, provide a basis for new strategies to regulate EMT as a potential therapeutic target, and suggest that GnT-V can be used as a prognostic marker for lung cancer progression. | 2016-05-04T20:20:58.661Z | 2014-06-09T00:00:00.000 | {
"year": 2014,
"sha1": "8440b5d6a4c9ba8efc42cd9d17fba776e6ab7971",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1111/jcmm.12331",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8440b5d6a4c9ba8efc42cd9d17fba776e6ab7971",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
208830538 | pes2o/s2orc | v3-fos-license | Analysis of seaweeds from South West
Seaweeds contain many varied and commercially valuable components, from individual pigments and metabolites through to whole biomass, and yet they remain an under-cultivated and underutilised commodity. Currently, commercial exploitation of seaweeds is predominantly limited to whole biomass consumption or single-product extracts for the food industry. The development of a seaweed biorefinery, based around multiple products and services, could provide an important opportunity to exploit new and currently underexplored markets. Here, we assessed the native and invasive seaweeds on the South West coast of the UK to determine their characteristics and potential for exploitation through a biorefinery pipeline, looking at multiple components including pigments, carbohydrates, lipids, proteins and other metabolites.
Introduction
Annually, over 25 million tons of macroalgal (seaweed) biomass is harvested globally, with the vast majority (95%) produced in Asia. The total market is currently worth $5.6 bn, with food products for human consumption making up approximately $5 bn of this [1] and the rest coming predominantly from animal feeds, fertilizer, pharmaceuticals and cosmetics. Recently, seaweed consumption by cattle has even been suggested as a useful way to decrease greenhouse gas emissions and combat climate change: when Asparagopsis taxiformis was used at 2% concentration in cattle feed, a significant reduction in methane production was observed [2]. With limited space for agricultural expansion in the terrestrial environment, the exploitation of seaweeds is gaining increasing attention in regions not traditionally associated with its mass cultivation and consumption. Whilst direct human consumption arguably remains the most easily accessible market in these "new" regions, a lack of cultural and social awareness and/or acceptance of the benefits will most likely hinder uptake. However, the many seaweed species found in these regions have a range of components that can make them valuable for the production of nutraceuticals, fuels, fertilisers and fine chemicals.
Here, we explored some of the components that can make individual seaweeds (commonly found in the South West) valuable as feedstocks in a biorefinery-based process, such as lipids and high-value omega oils, pigments with antioxidant and antibacterial properties, plant hormones and macronutrients, minerals, total carbohydrate and proteins. We avoided established food additives derived from seaweed sugars, such as agar and carrageenan, as they have been investigated previously and already offer established markets [1,10]. In addition, we also assessed heavy metal levels which could potentially hinder commercialisation opportunities.
Preparation of Seaweed
All samples were rinsed in filtered seawater to remove sand and other large particulates such as microplastics, frozen at −80 °C and then freeze-dried at −55 °C. Freeze-dried samples were ground to a fine powder that was stored in sealed containers at −80 °C to prevent sample degradation.
Pigment Extraction
To 50 mg dried seaweed, 2 mL of acetone was added along with 100 mg glass beads. Samples were then disrupted by rapid agitation in a bead beater for 3 min, after which cell debris was settled by centrifugation. The supernatant (containing the extracted pigments) was analysed by high-performance liquid chromatography (HPLC) using an Accela system (Thermo Scientific, Waltham, MA, USA) fitted with a Waters Symmetry C8 column (150 × 2.1 mm, 3.5 µm particle size, thermostatically maintained at 25 °C) according to the method of Zapata et al. [12]. Pigments were detected at 440 nm and 660 nm and identified by retention time and online diode array spectra. Monovinyl chlorophyll-a standard was obtained from Sigma-Aldrich Ltd., and other pigment standards were purchased from the DHI Institute for Water and Environment, Denmark. Quality assurance protocols followed Reference [13].
CHN/Protein
A Thermoquest Flash EA 1112 Elemental Analyzer (Thermo Fisher Scientific, Inc.) with high-temperature dry combustion was used to measure the percentage of carbon and nitrogen in each sample. For each seaweed, three technical replicate samples consisting of 2 mg of finely ground freeze-dried seaweed were analysed. The percentage of protein per seaweed was estimated by multiplying the percentage of nitrogen determined by CHN analysis by the N-Prot conversion factor of 5.0, as described in Reference [14], rather than the traditional N-Prot value of 6.25 [15], which is generally recognised as overestimating protein content in seaweeds.
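The protein estimate is simple arithmetic; the sketch below applies the seaweed-specific N-Prot factor of 5.0 used above to a hypothetical nitrogen measurement.

```python
def protein_percent(nitrogen_percent, n_prot_factor=5.0):
    # %N from CHN analysis x N-Prot factor (5.0 for seaweeds, vs 6.25 traditionally)
    return nitrogen_percent * n_prot_factor

print(f"estimated protein: {protein_percent(2.5):.1f}% of dry biomass")  # 12.5%
```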
Ash
Ten to 50 mg of freeze-dried powdered seaweed was ashed in a muffle furnace for 2 h at 650 °C, then cooled to room temperature for 30 min in a desiccating cabinet. The ash percentage was calculated as (final weight/initial weight) × 100 for each of the samples.
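For completeness, the ash calculation defined above, with hypothetical weights:

```python
def ash_percent(initial_mg, final_mg):
    # residue mass after 650 °C combustion over initial freeze-dried mass
    return final_mg / initial_mg * 100

print(f"ash content: {ash_percent(50.0, 14.2):.1f}% of dry biomass")
```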
Metal Analysis
Lithium tetraborate beads (10.2167 g) and 1.0000 g of seaweed were weighed and placed in a platinum crucible and heated in a Claisse electric Ox fusion furnace for a 1 h thermal cycle at 1050 °C. The fused borate beads were analysed by X-ray emission spectroscopy using the Panalytical Axios XRF Spectrophotometer.
Lipids
The fatty acid concentrations and profiles of the seaweed samples were determined following conversion to fatty acid methyl esters (FAMEs), using GC-MS. To 7-11 mg of freeze-dried, finely ground seaweed, tridecanoic acid (C13:0) was added as an internal standard, and cellular fatty acids were converted directly to FAMEs by adding 1 mL of transesterification mix (1:1 v/v methanolic HCl : chloroform/methanol (2:1)) followed by incubation at 70 °C for 1 h. After cooling, FAMEs were recovered by addition of n-hexane (1 mL) followed by vortexing. The upper hexane layer was injected directly onto the GC-MS. The FAMEs were identified and quantified using retention times and qualifier ion response. All parameters were derived from calibration curves generated from a FAME standard mix.
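Internal-standard quantification of this kind usually scales each analyte peak by the C13:0 peak and a calibration-derived response factor. The sketch below shows one plausible form of that calculation; the peak areas, response factor and sample mass are hypothetical, and the exact formula used in the study is not stated.

```python
def fame_mg_per_g(peak_area, is_area, is_amount_ug, response_factor, sample_mg):
    # analyte mass from the analyte/internal-standard peak-area ratio
    analyte_ug = (peak_area / is_area) * is_amount_ug / response_factor
    return analyte_ug / sample_mg  # ug/mg is numerically equal to mg/g

epa = fame_mg_per_g(peak_area=3.2e6, is_area=1.1e6, is_amount_ug=10.0,
                    response_factor=0.95, sample_mg=9.0)
print(f"EPA: {epa:.1f} mg/g dry biomass")
```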
Phytohormones
Detection of four different bioactives (abscisic acid (ABA); trans-zeatin riboside (TZR); cis-zeatin riboside (CZR); and N6-(2-isopentenyl)adenosine (2iP)) was achieved using Agrisera's plant hormone ELISA kits (Agrisera AB). The ELISAs were performed according to the manufacturer's instructions. Extractions were performed as follows: dry biomass (1 g) was resuspended in 100 mM acetate buffer (12 mL, pH 4.0) and vortexed for 2 min. Following sonication with a probe sonicator (6 × 30 s on-pulse and 30 s off-pulse, amplitude 10 microns) in an ice bath, samples were centrifuged at 10,000× g at 4 °C for 15 min. The pH was rechecked and returned to pH 4.0, if necessary, with acetic acid prior to solid phase extraction purification. The SPE Supra-Clean® C18-S columns (50 mg/1 mL, PerkinElmer) were conditioned with 1 mL of methanol and equilibrated with 1 mL of 100 mM acetate buffer (pH 4.0) prior to the addition of 4 mL of extract. Following a wash with 1 mL of 5% methanol, the column was dried and 200 µL of 100% methanol was added to elute the fractions.
Results and Discussion
Seaweeds potentially make excellent lignin-free feedstocks for biorefineries due to the large range of products that can be extracted and isolated from them, such as oils, proteins, carbohydrates and pigments. In addition, once high-value primary products have been refined, micronutrients can be recovered from the residual waste biomass.
In this study, 17 abundant and easily harvestable species of seaweed from the South West (see Table 1) were collected, identified and categorised by taxon: Chlorophyta (greens), Rhodophyta (reds) and Phaeophyta (browns). Seaweeds were assessed for total lipid content; production of the valuable omega-3 and omega-6 fatty acids eicosapentaenoic acid and arachidonic acid; plant hormones relevant to agricultural fertilisers; as well as general protein and carbohydrate content, pigmentation (antioxidants) and ash/mineral content.
Lipid and High-Value Omega-3 Oil Content
The FAME analysis was used to assess the total lipid content of each seaweed species (Table 2). The total concentration of lipids in the seaweeds is small in comparison to most land-based plant species [16] and varies with seasonality, generally peaking in late summer, decreasing over the autumn and winter and increasing again in spring [17]. Our data reflect this, with lipid content ranging from 0.8% of dry biomass in Colpomenia peregrina to 2.9% in Fucus serratus. Whilst the overall lipid content was low, the proportion of commercially important long-chain polyunsaturated fatty acids (PUFAs) such as eicosapentaenoic acid (EPA) and arachidonic acid (ARA) was high, suggesting some seaweeds may prove viable sources of high-value nutraceuticals in a global market worth in the region of $400 billion annually [18]. All of the seaweeds tested, with the exception of Ulva lactuca, showed high concentrations of ARA (C20:4) and/or EPA (C20:5).

Table 2. Fatty acid (FA) profiles of South West seaweed samples. Data, expressed as % of total FA content and total lipid, are given as mg/g and % of dry biomass.

Fish-derived oils (themselves a bioaccumulation of algae-synthesised PUFAs) are currently the major source of omega-3 and omega-6 PUFAs [16,19]. Seaweed could provide a viable alternative source: for example, in Palmaria palmata, despite a lipid content of just 2.3% at the time of sampling, 50% (11.6 mg/g biomass) of this lipid was in the form of EPA (20:5). Given that the EPA content of commercially available fish oil supplements ranges between ~40 and 500 mg/g [20], it is clear that seaweeds could make a big impact as a substitute feedstock for the production of high-value omega-3 animal feed supplements, nutraceuticals, pharmaceuticals and cosmetics. Indeed, the recommended dietary ratio of ω-6:ω-3 PUFA is less than 10 [16], and all of the seaweeds analysed here have a ratio between 0.15 and 2, offering a significant advantage to the food and feed industries.
Ulva lactuca (a common bloom forming species in the South West and globally) was the only seaweed tested which did not contain any EPA or ARA, although it did, however, contain a relatively high abundance of the essential fatty acid linoleic acid (18:2). The majority of the fatty acid content was palmitic acid (C16:0), which is used at high concentrations in products such as soaps and industrial release agents. Given the lower economic value of these products and the low overall lipid content of Ulva lactuca, it is unlikely that this seaweed would be an economical alternative source, unless integrated into a biorefinery model with at least one higher value co-product.
Pigments
In 2016, the global pigment and dyes market was estimated to be valued at $30.42 billion [21] and is expected to see significant growth in demand in the coming years. Whilst global omega-3 and omega-6 PUFA supplies are intrinsically linked with marine algae production, the pigment and dyes market is not as reliant on marine sources, and establishing seaweed-derived materials is more challenging. Nevertheless, seaweed pigments have found uses due to their antimicrobial properties [22], as well as uses in dyes, additives, health supplements, antibiotics, bioelectronics and antioxidants. The pigment content of each seaweed species was identified using HPLC; the primary pigments are given in Table 3. Levels of pigmentation varied both within taxa and among species. The lowest overall pigment content was seen in Ahnfeltiopsis devoniensis, with just 37 µg pigment/g dry biomass, whilst Punctaria latifolia had the highest content at 1319 µg/g.
Fucoxanthin, the xanthophyll responsible for the brown colouring of Phaeophyta, has shown promising therapeutic uses in cancer and obesity treatments [23,24] as well as antibacterial properties [22], and could be beneficial in fertilisers, as it can help to reduce crop diseases. Unsurprisingly, whilst low levels of this pigment were observed in all the seaweeds analysed, only the browns contained it at concentrations that could be useful for industrial extraction, ranging from 340.4 µg/g in Punctaria latifolia down to 131.8 µg/g in Fucus serratus.
Product colour can have a huge impact on consumer perception, but concerns over the safety of synthetic colour additives and the increase in industrial safety requirements are leading the food and drink industries to increasingly seek natural colour alternatives [25]. Seaweed pigments range in colour from blue-greens (chlorophylls) to yellows (xanthophylls) and orange-reds (carotenes), and have a similar level of stability relative to their synthetic counterparts, making them ideal as food colourings [26]. Chlorophyll (chl) is already approved and registered as a food additive (E140) and is used to colour various foods and beverages green [27]; it has also shown potential uses within cosmetics, such as deodorants and dentifrices, due to its odour-reducing properties [28].
As expected, both the levels and types of chlorophyll varied across the seaweed species tested. Both the green seaweeds contained chl-b and chl-a at approximate ratios of 1:1.5 (Spongomorpha aeruginosa) and 1:1.4 (Ulva lactuca). The red seaweeds generally contained predominantly chl-a, with very low levels (less than 10% of the chlorophyll pigment pool) of chl-c and/or chl-b; the exception was Palmaria palmata, which contained chl-a only. The brown seaweeds all generally contained chl-c at levels of 12-22% of that of chl-a. The exceptions to this were Fucus serratus, which had an approximately 50:50 ratio of chl-c:chl-a, and Gastroclonium ovatum, whose chl-c accounted for just 0.5% of its total chlorophyll pigmentation.
The carotenoids α- and β-carotene are converted to vitamin A in the human body [29] and, along with the xanthophylls lutein, zeaxanthin and violaxanthin, play a crucial role in maintaining eye health as well as reacting with free radicals to reduce oxidative stress and lipid peroxidation [24,30]. Punctaria latifolia contained the highest level of carotenoids at 46.8 µg/g, Palmaria palmata had the highest level of lutein at 56.4 µg/g, and Gastroclonium ovatum the highest level of zeaxanthin at 11.8 µg/g. Violaxanthin was abundant in all the brown seaweeds, with Punctaria latifolia containing 106.3 µg/g of this orange pigment.

Table 3. Primary pigments of South West seaweed samples (µg/g dry biomass). Chlorophyll (chl)-c is given as the total of C-1, 2 and 3 subtypes.

Antheraxanthin, a keto-carotenoid and yellow colorant, has a range of benefits to human and animal health. Due to its high demand in the pharmaceutical, nutraceutical, food and feed industries, there are major efforts to improve antheraxanthin production from biological sources instead of synthetic ones [31]. None of the green and only one of the red seaweeds (Rhodomela confervoides) contained antheraxanthin, and this at a very low level of just 2.5 µg/g. All of the brown seaweeds analysed contained antheraxanthin, but the content varied from 1.3 µg/g in Gastroclonium ovatum to 54.1 µg/g in Punctaria latifolia.
Overall, from a commercial perspective, Punctaria latifolia has good levels of all the important pigments (with the exception of lutein), making it an ideal candidate for multi-pigment isolation. Lutein production would be best achieved with Palmaria palmata, which would also generate chlorophyll-a as a secondary pigment product.
Macronutrients: Carbohydrate and Protein
Many types of useful carbohydrates such as laminarin, cellulose, starch, alginate, fucoidan, agar and carrageenan are found in seaweed. Agar, alginate and carrageenan are all used within the food industry as thickening agents within food [10] and are currently the most commercially significant products from seaweed after direct consumption as food [5]. Crucially, they are all produced in single product-based industrial processes, rather than in the biorefinery approach explored herein. Beyond the food ingredients industry, seaweed biomass with high carbohydrate content can be used for production of biofuels by anaerobic digestion and fermentation converting the sugars to ethanol or butanol. Seaweeds were initially assessed for the total carbohydrate content using a traditional phenol-sulphuric acid method in which polysaccharides are broken down to monosaccharides, dehydrated to furfurals and reacted with phenol to produce a measurable colour.
Although the method detects almost all carbohydrates, the sensitivity varies depending on the type of sugars present [6] since the absorptivity of the different carbohydrates varies somewhat.
Thus, unless a sample is known to contain only one carbohydrate, the results must be expressed arbitrarily in terms of one carbohydrate (in this instance we used glucose). We found this method to be very unreliable due to the variations in carbohydrate structures found in seaweeds and, as such, we instead estimated the total carbohydrate content based on the protein, lipid and ash values (Table 4). Nine of the seaweeds tested had an estimated carbohydrate content of over 60%, with the highest observed in Saccharina latissima (73.0%).
Colpomenia peregrina had the highest ash content and was particularly mineral-rich, resulting in a much lower estimated carbohydrate content (12.2%).
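The estimate by difference reduces to one subtraction, shown below with a hypothetical composition chosen to reproduce the 73.0% reported for Saccharina latissima:

```python
def carbohydrate_percent(protein_pct, lipid_pct, ash_pct):
    # dry mass not accounted for by protein, lipid or ash is taken as carbohydrate
    return 100.0 - protein_pct - lipid_pct - ash_pct

print(f"carbohydrate: {carbohydrate_percent(7.5, 1.3, 18.2):.1f}% of dry biomass")
```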
The CHN analysis was used to calculate the protein content of the seaweeds. As well as protein being an important part of a healthy diet, nitrogen is an important component needed for plant growth; seaweeds with a high nitrogen content work well as sustainable sources of nitrogen for fertilisers [32]. With the exception of Fucus serratus, the highest concentrations of nitrogen/protein were observed in the Rhodophyta, ranging from 9.37% to 17.65% of the total dry biomass. All of the seaweeds, however, have a nitrogen content that can make them suitable as a feedstock for the sustainable production of agricultural fertilisers.
Minerals
The ash content of the seaweeds ranged between 18.2% and 85.3% of dry biomass weight (Table 4). A high ash content is indicative of high levels of minerals and trace elements [32], which are beneficial in fertilisers as a sustainable source of essential nutrients required for plant/crop growth and development. However, metals can be an issue when seaweed is used as a food source or fertiliser if they are present at unacceptably high levels. Indeed, some species of seaweed can perform a bioremediation service within metal-polluted waters, and seaweed harvested from such environments may be unsuitable for use in food or fertilizer without costly processing to remove heavy metals [33]. Conversely, however, the bioremediation opportunities for seaweeds within aquatic systems may offer a significant and valuable upstream "service product" within a biorefinery process, subsidising the production of lower value downstream products.
X-ray fluorescence spectroscopy was used to assess whole dry biomass for the presence of a range of minerals (Table 5). Demand for phosphorus is expected to outstrip supply as soon as 2035 [34]. An estimated 80-90% of phosphorus use is in the form of fertiliser production, with much of this then being lost through leaching from the soils [35]. Seaweeds can make an excellent alternative to current commercial fertilisers because of trace nutrients such as phosphorus and nitrogen, as well as the presence of hormones that can encourage plant growth. Seaweeds can, however, also rapidly accumulate pollutants, such as dissolved metals, from their environment [36], or become loosely or transiently associated with metal particulates. As such, composition analysis of this nature must be taken with a pinch of (metallic) salt, as even the smallest metallic fragment derived from, for example, litter, fishing or structural material could give a potentially misleading result.

Table 5. Mineral composition of seaweeds from South West England (ppm). One ppm is equivalent to 0.0001% of total dry biomass.

The phosphorus content in the seaweeds analysed ranged from 0.1 to 0.3% of dry biomass, ideal for fertilizer products. However, aluminium was observed in all but three seaweeds, and whilst this is not unusual [37], if used as a fertiliser, over time soils could become contaminated with elevated levels of aluminium, which in acidified soils can lead to plant toxicity [38]. The transition metal molybdenum and the heavy metals arsenic and lead were not observed in any of the samples, which is testament to the water quality on the South West coast where the sampling took place.
Colpomenia peregrina had a consistently high mineral content across the board and was one of just two of the seaweeds assessed to contain copper at detectable levels. This was not unexpected since this seaweed grows best when attached by rhizoidal filaments to rock bed or molluscs such as oysters from which minerals are leached.
Manganese was detected in most of the seaweeds and was particularly high in Rhodomela confervoides at 0.142% of the dry biomass. Whilst this mineral is important in many industrial processes, such as metal alloy manufacture, it is also widely used in nutritional supplements.
All of the seaweeds analysed contained detectable zinc and all but three of the seaweeds had iron at levels that could be used easily in nutritional supplements (the total daily recommended intake of iron is just 8.7-14.8 mg/day). It should be noted that the highest levels of this metal were observed in the physically smaller seaweed species such as Lomentaria articulata (4 mg/g) and Colpomenia peregrina (9.3 mg/g).
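The unit arithmetic behind these comparisons is worth making explicit: 1 ppm of dry biomass is 0.0001%, and 1 mg/g is 1000 ppm. The snippet below works through the iron example, comparing the Colpomenia peregrina level against the upper UK recommended daily intake quoted above.

```python
iron_mg_per_g = 9.3    # Colpomenia peregrina (equivalent to 9300 ppm)
rdi_mg = 14.8          # upper end of the 8.7-14.8 mg/day intake quoted above

grams_needed = rdi_mg / iron_mg_per_g
print(f"{iron_mg_per_g} mg/g iron -> {grams_needed:.1f} g dry seaweed per day")
```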
Phytohormones
In the agricultural industry, phytohormones have several commercial uses that are related to plant growth, flowering, ripening and alleviation of abiotic stresses such as drought, heat, cold, light and nutrient stress. Synthetic phytohormones have been used in agriculture for over 50 years with great success, but recent changes in regulation and consumer preferences have created a greater need for natural and sustainable alternatives. Whilst high-purity extracts may or may not prove a viable high-value product in a seaweed biorefinery in their own right, their presence in fractions to be used as agricultural fertilisers, soil conditioners and bio-stimulants could provide an economic premium for such products. Cytokinins (including TZR, CZR and 2iP) are amongst the most valued phytohormones in agriculture as they are directly related to plant growth, flowering and fruit set, and there are currently not many natural sources available in the market.
The levels of four different bioactive molecules (abscisic acid (ABA), trans-zeatin riboside (TZR), cis-zeatin riboside (CZR) and N6-(2-isopentenyl)adenosine (2iP)) within the dried seaweed biomass were assessed (Figure 1). The levels of phytohormones were species-specific, with no trend across taxa; whilst several seaweeds showed strong phytohormone production, four had levels that were extremely low and of no commercial relevance. The highest level of ABA was seen in Ulva lactuca, which also had strong levels of TZR, CZR and 2iP. Sargassum muticum and Himanthalia elongata both had high levels of 2iP and TZR, whilst CZR was highest in Ulva lactuca and Rhodomela confervoides.
Conclusions
Seaweeds contain many varied and commercially valuable elements, from individual pigments through to the whole biomass, and yet they remain an under-cultivated and underutilised commodity. Currently, commercial exploitation of seaweeds is mostly limited to whole biomass consumption or single-product extracts for the food industry. The development of a seaweed biorefinery, based around multiple products and services, could provide an important opportunity to exploit new and currently underexplored markets. This is especially so within countries such as the UK, which have large stretches of coastline where natural harvest or commercial cultivation could be carried out alongside other marine activities such as windfarms. By taking a holistic biorefinery approach to fractionate and utilise multiple components of the biomass, not only can a significant level of revenue be achieved, but seaweeds may fuel the replacement of synthetically manufactured compounds often derived from petroleum oils. We have demonstrated in this work that native and invasive seaweeds on the South West coast contain valuable products and have the potential to be exploited through a biorefinery pipeline.
Seaweed is a primary food source in many Asian countries, but very little is consumed in western diets. Seaweeds can be, without doubt, nutritious due to the high abundance of pigments and oils that are beneficial to our health. This can be exploited by both the artisanal and traditional food industries in Europe and beyond. However, the shift in cultural attitude required for greater consumption of seaweed-derived foods may hinder the expansion of this market in the west. A biorefinery approach, generating multiple high- to low-value products for multiple markets (such as pigments, PUFAs and high-quality fertilisers), could alleviate limitations to the expansion of these traditional industrial seaweed activities. Indeed, a more radical approach than that outlined here employs hydrothermal processing of biomass or extracts to create multiple low-value product streams.
As an established industrial process, such processing can be applied to "waste" fractions generated after high-value products have been isolated in a seaweed biorefinery, providing an intrinsic value to every component of the biomass. Hydrothermal liquefaction (HTL) uses a high-temperature, high-pressure process to convert any biomass into four primary outputs (bio-crude oil, gas, ash, from which metals can be recovered, and an aqueous fraction, to which soluble minerals such as nitrogen, phosphorus and potassium partition) and is an excellent example of a biorefinery process. Indeed, two of the seaweeds harvested as part of this study (Ulva lactuca and Sargassum muticum) were subjected to HTL in a separate investigation [39], suggesting that extracts or whole biomass of insufficient value or with no obvious market can be successfully converted into low-value products with established markets. Here, we have shown that the seaweeds of the South West coast, currently overlooked, unexploited and often unappreciated, naturally contain a plethora of metabolites and compounds of high commercial relevance and interest. Individually, these are unlikely to reach commercial exploitation, yet when exploited together in a biorefinery approach, the potential exists for a burgeoning seaweed-based bio-economy to develop in areas such as the South West of the United Kingdom. | 2019-10-24T09:02:45.467Z | 2019-10-21T00:00:00.000 | {
"year": 2019,
"sha1": "1b35292842a919691b7bbea5ad5db2509a0ef07f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/9/20/4456/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "cbc915aeb321d8c8ea3f7c65004d04e1a362ded0",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Are Chinese Residents Willing to Recycle Express Packaging Waste? Evidence from a Bayesian Regularized Neural Network Model
While enriching people’s lives, the rapid development of online shopping has posed a severe challenge to the environment. Questionnaires focusing on the intention to recycle packaging waste are designed. These questionnaires contain first-level variables such as recycling behavior attitude, recycling behavior cognition, situational factors, historical recycling behavior, and recycling behavior intention. With the collected questionnaire data, a regression analysis is first conducted on the selection of variables and the effect of variable prediction. After ensuring the validity of the variables, 15 second-level variables are extracted into eight principal components using principal component analysis. These components serve as input to a Bayesian regularized neural network. Subsequently, a three-layer (8-15-1) neural network model is constructed; the trained neural network model achieves a high degree of fit between the predicted and measured values of the test set, thus further proving the rationality of the selected variables and the neural network model. Finally, this study uses the connection weights matrix of the neural network model and the Garson formula to analyze in depth the specific impact of each second-level variable on the intention to recycle packaging waste. Note that given the particularity of packaging waste recycling behavior, the impact on social norms, recycling behavior knowledge, values, and publicity on behavioral intentions in second-level variables is different from that obtained in similar previous studies.
Introduction
Thanks to the rapid development of China's economy and urbanization [1] as well as the improvement in people's living standards, China has become the most developed online shopping market in the world. This has brought convenience to people's lives, but has also been accompanied by a large increase in packaging waste that exerts tremendous pressure on the environment [2]. According to the "Report on Status and Trends of Green Packaging Development in China's Express Industry" (2017) [3] issued by the China National Post Office, in 2016 China's express service directly consumed about 3.2 billion woven bags, about 6.8 billion plastic bags, 3.7 billion packaging boxes, and a total of 330 million rolls of sticky tape. The corrugated box consumption was equivalent to 72 million trees. However, the overall recycling rate of China's express packaging waste is less than 20%. Residents discard express packaging waste as ordinary garbage because of the low profitability of recycling express packaging and the lack of convenient recycling options [4]. Express packaging waste consists of renewable resources, such as packaging boxes, as well as non-degradable substances such as plastic bags and tapes. Therefore, the impact of discarding express packaging waste should not be underestimated from either the economic or the environmental point of view.
From a macro perspective, the greenhouse effect has aroused global attention in recent years, especially carbon dioxide emissions, which contribute seriously to global warming [5]. The final product structure and final demand structure are the factors that hinder the reduction in carbon emission intensity [6]. Thus, recent years have witnessed strengthened control over the recycling industry in China as governments have gradually attached importance to the problem of proliferating express packaging waste [7]. Various measures have been proposed to support China's developing concept of New-type Urbanization [8], such as regulating express packaging production, optimizing recycling systems, and building a recycling platform, but they are all at the preliminary recommendation stage rather than existing legal regulations. Although the state is optimizing the external environment for Express Packaging Waste Recycling (EPWR), it is also important to pay attention to the behavioral intentions of the participants in express packaging waste collection. Hence, it is necessary to analyze the relevant psychological variables of the participants and predict their behavioral intentions [7].
Among common methods, multiple linear regression may ignore the interactions between dimension variables and the nonlinear causal relationships considered in this paper. Logistic regression is sensitive to multicollinearity among the independent variables in the model: if two highly correlated independent variables are placed into the model at the same time, the sign of the weaker one may be reversed. Structural equation modeling is a confirmatory approach based on existing theory; since extended variables are added to our questionnaire, the relationships between variables need to be explored rather than merely validated. Therefore, this paper selects a neural network, which can learn and store a large number of input-output mapping relationships and automatically adjusts its internal neuron weights to predict and analyze variables of different dimensions. Moreover, there are many factors affecting BIRC (the behavioral intention of recycling and conservation), and these factors may interfere with each other, so principal component analysis (PCA) is used to screen them, simplify the complexity of the neural network, and improve prediction accuracy. PCA has been effectively integrated with neural networks in the fields of electricity [9], agriculture [10], tourism [11], and industrial manufacturing [12], but not yet in research on environmental behavior. Furthermore, the fields mentioned above have rarely considered regularization of the neural network when combining principal component analysis and neural network models. Therefore, based on principal component analysis, this study predicts behavioral intentions using a Bayesian regularized neural network.
By referring to the contributions of researchers in environmental behavior, this study designs a questionnaire about intentions in EPWR behavior. After the questionnaire is distributed, collected, and tested for reliability and validity, principal components are extracted from the relevant variables using PCA. Based on these results, a Bayesian regularized neural network model is employed to simulate intentions of EPWR. Finally, the influence of each main component on behavioral intention is analyzed by calculating the sensitivity of the output of the neural network model. The results can contribute to public participation in EPWR in China.
This paper is organized as follows. Section 2 presents a review of relevant literature. Section 3 provides an introduction to the methods used in this study. The preliminary analysis and pre-processing of the data required by the neural network model are described in Section 4. Section 5 introduces the construction and training of the Bayesian regularized neural network as well as a measurement and discussion of the sensitivity coefficient of each variable. Finally, we conclude this study.
Literature Review of Packaging Waste and Recycling Behavior
China's economy has grown faster than any other in the world, and this growth has resulted in a burgeoning waste management problem [13]. Municipal solid waste in China mainly consists of residential, institutional, street cleaning, commercial, and industrial wastes. Chinese municipal solid waste increased from 31.3 million tons in 1980 to 113.0 million tons in 1998, an annual increase of 3-10%. Chinese municipal solid waste comprises kitchen wastes, paper, plastic, glass, batteries, metal, brick and stones, fabric, pottery, and discarded domestic appliances [14].
Some researchers have realized that the rapid development of express delivery and online shopping is imposing a serious burden on the environment [15] and have tried to mitigate the pollution caused by express packaging through low-carbon design [16], yet studies focusing on EPWR are rare. Most studies concern the design and recycling of food packaging [17,18] and plastic packaging [19,20]. Among these, European researchers have carried out more studies on the recycling of packaging waste: Rui et al. [21] study the economic feasibility of a packaging waste recycling system and compare the possibilities between Portugal and Belgium; Mrkajić et al. [22] use quantitative and qualitative methods to evaluate the effectiveness of the Serbian packaging waste recycling system and find that prolonging the producer responsibility system could effectively improve the operating efficiency of recycling; Yıldız-Geyhan et al. [23] measure different packaging waste recycling systems from the perspective of the social life cycle and discover that a regular recycling system scores better than existing recycling systems and informal recycling channels.
Recycling behavior has gradually become a topic of global concern as an easy-to-implement and enforceable environmentally responsible behavior [24,25]. As early as a decade ago, Tonglet et al. [26] and Robinson and Read [27] investigated the recycling behavior of residents in the London Borough and Brixworth areas using questionnaires. In recent years, researchers have studied recycling behavior at a more microscopic scale. In predicting recycling behavior, Chan and Bishop [28] examine how the moral code extends TPB theory. Similarly, Wan et al. [29] expand the model of recycling attitude and recycling behavior, then propose a new research variable known as policy effect perception. Taking the point at which recycling attitude affects recycling behavior as an entry point, Huffman et al. [24] compare the various effects of social factors and worldview on both self-reported and observed recycling behavior. Miliute-Plepiene et al. [30], Oztekin et al. [31], and Poškus and Žukauskienė [25] focus on the effects of maturity, gender, and personality type on the recycling mechanism, respectively. With the continuous advancement of society, research on recycling behavior is no longer limited to traditional recyclables: Hu and Yu [32] and Wang et al. [33] study the intention to recycle e-waste. In studying recycling behaviors, new research methodologies are rare, and researchers focus mainly on structural equation models [29,31,32,34], linear or logistic regression [24,35-37], or a combination of the above methods [38].
Theoretical Framework of Behavioral Science
As a necessary part of the process by which behavior occurs, behavioral intention is the decisive factor before the behavior occurs [39], as well as the psychological tendency and subjective probability of the individual before performing the behavior [40]. Behavioral intention is an important mediator of behavior because other subjective psychological factors indirectly affect actual behavior through behavioral intention. Researchers often apply the Theory of Planned Behavior (TPB) and Attitude-Behavior-Condition (ABC) theory to predict behavior and behavioral intentions. TPB, proposed by Ajzen, is a theory of the relationship between attitude and behavior. It is the inheritance and continuation of rational behavior theory and attitude theory and is an influential theoretical framework in various fields of behavioral research. A large number of empirical studies have proven that it can significantly improve the ability to interpret and predict behavior [41]. According to TPB, human behavior is planned, and recycling behavior is determined by behavioral intention. Behavioral attitudes, subjective norms, and perceived behavioral control are the three major factors influencing behavioral intentions. Unlike TPB, ABC theory treats external conditions as an important factor promoting and restricting behavior. External conditions mainly refer to behavioral convenience, namely situational factors, and ABC theory holds that behavior will occur when the cumulative effect of external conditions and attitudes is positive [42]. Mannetti et al. [38] propose that these two theoretical frameworks can be used to study people's participation in recycling and that their respective incentives are attitudes and material incentives. Previous studies in psychology have focused on the attitude framework; in recent years, researchers in different fields have been more willing to let these two frameworks learn from each other.
Behavioral attitude is an important psychological variable for predicting environmental behavior, and a positive environmental attitude will significantly promote the generation of environmental behavior [43]. Sia et al. [44] hold that attitude variables include values, beliefs, and environmental concern, whereas Kaiser et al. [45] argue that the environmental attitude variable includes environmental knowledge, environmental values, and environmental behavioral tendencies. Values are the foundation of attitude formation [46], and environmental issues always involve conflicts between individual and collective interests; therefore, values play an important role in predicting environmental behavior. Stern et al. [47] divide values into ecological, egoistic, and altruistic values. Later researchers discover that different values can form different new ecological paradigms: for example, altruistic and ecological values are positively related to environmental behavior, whereas egoistic values are the opposite [48]. Environmental concern is also important to forming environmental attitudes, and improved attitude can further consolidate recycling behavior; hence, environmental concern is a positive latent variable of recycling intention [34,49]. Based on relevant behavioral theory, even as individual personality varies, there is a positive correlation between knowledge and behavior [50]. Environmental knowledge is an important antecedent variable of behavior, which has a significant impact on the intention to recycle and, thus, promotes the generation of behavior [32,45,51].
On the cognitive level of recycling behavior, Stern et al. [47] find that environmental responsibility is of great importance for predicting recycling behavior and that individual behavioral intention is also restricted by other individuals or groups. Castronova [52] and Robinson and Read [25] consider that role models are conducive to promoting emulation and learning; that is, a herd mentality can lead to the generation of recycling behavior. Regarding the important components of the behavioral system, Davies et al. [53], Tonglet et al. [26], and Wan et al. [29] believe that behavioral perception has an impact on behavioral intention and that reaction results mainly affect behavior through their information and motivation functions, that is, through psychological cognition [54]. Perceived behavioral control, a key variable in the formation of TPB theory, has a positive effect on behavioral intentions: a strong perception of behavioral control can enhance an individual's willingness to carry out a behavior [55]. The conclusions of Davies et al. [53] and Tonglet et al. [26] support this theory, and Oztekin et al. [56] further find that, compared with men, women's recycling behavior is more susceptible to perceived behavioral control.
As stated above, environmental behavior is also affected by the external environmental context [37,57]. Among situational factors, social norms are the basic principles for determining and adjusting people's common activities and the relationships between people, and they are the necessary code of conduct for the entire society and the members of its various social groups [58]. Whitmarsh [59] discovers that public pressure from families and neighbors is highly effective in directing the environmental behavior of residents and can be a significant factor in predicting behavioral intention; therefore, the role of social norms in environmental behavior should be emphasized. Miliute-Plepiene et al. [30] also propose that social norms are particularly important in the early stages of recycling systems. In addition, the perceived pressure of social norms is particularly significant in the Chinese cultural environment, which encourages people to adopt relevant behavior to integrate into society smoothly [60,61]. Chen et al. [62] and Poortinga et al. [63] also find that economic incentives are important external variables that influence behavioral intentions. Policy institutions are an important manifestation of government constraints on individual behavior, and states can adopt persuasive or mandatory mechanisms to increase the enthusiasm of public participation [64]; these are an important inducement for residents to participate in specific behavior [65]. At the same time, publicity can enhance residents' perceptions and understanding, thereby improving residents' behavioral choices and regulating the influence of behavioral intention on recycling behaviors [32,66,67]. However, some studies find that the impact of publicity on behavioral intention varies due to differences in social and cultural background [68].
In addition, the interpersonal behavior theory proposed by Triandis [69] states that behavioral habits and rules also have an impact on the occurrence of behavior. In other words, the more entrenched a specific habit is, the fewer obstacles there are to implementing the behavior and the easier it is to generate the behavior. Michiyo [70] finds that when predicting recycling behavior, historical recycling experience is a better predictor than behavioral attitude; Tonglet et al. [26], Klöckner and Oppedal [71], and Knussen and Yule [72] also consider recycling habits and lifestyle as important predictive variables and find that the recycling habits of men have a greater effect on behavioral intentions [56].
In summary, researchers hold many different opinions on the factors affecting the intention to recycle, yet most are based on the theoretical frameworks of TPB and ABC. TPB allows variables to be added to the theoretical model to enhance its explanatory power and predictive validity [31,39,73]. Thus, referring to TPB and ABC theory, this study augments indicator variables such as knowledge of recycling, concern about recycling problems, herd mentality, behavioral result perception, social norms, and historical recycling behavior to improve the accuracy of behavioral intention prediction. The questionnaire selects a set of variables that predict the behavioral intention of recycling, as shown in Figure 1: recycling behavioral attitudes (including environmental values, concern about recycling problems, and knowledge of recycling), recycling behavior recognition (including environmental responsibility, herd mentality, behavioral result perception, and perceived behavioral control), situational factors (including social norms, economic incentives, perceived effectiveness of policy, and publicity), and historical recycling behavior (including habit adjustment behavior and interpersonal facilitation behavior). The specific logical hypothesis is that the psychological characteristic factors, namely recycling behavior attitude (RBA) and recycling behavior recognition (RBR), as well as historical recycling behavior (HCB), have a direct impact on the behavioral intention of recycling and conservation (BIRC); the situational factor (SF) is the adjustment factor of the psychological characteristics affecting BIRC; and the social population variable (SPV) affects people's recycling behavioral habits. In addition, the normative nature of the recycling system [29] and the availability of recycling facilities [36,41,74] also have impacts on behavioral intentions. Therefore, these considerations are included when designing the questionnaire content. Table 1 presents the meanings of the abbreviations for these variables in the text.
Regression Analysis
Since the theoretical model in this study adds extended variables based on TPB and ABC theory, a hierarchical regression analysis is first carried out on the variables to ensure the validity of the predictive model. Based on the variance (R²) explained by the models, it is evaluated whether adding variables is reasonable. Finally, a brief analysis is performed on the predictive effect of each variable on BIRC to provide a reference for the prediction results of the neural network model.
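As a rough sketch of this procedure in Python (using statsmodels, with hypothetical column names standing in for the first-level scale scores), the four dimensions can be added block by block while tracking R²:

```python
# Hierarchical regression: add one predictor block at a time and track R^2.
# The columns (rba, rbr, sf, rbh, birc) are hypothetical stand-ins for the
# first-level scores described in the text; the data here are random.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((526, 5)),
                  columns=["rba", "rbr", "sf", "rbh", "birc"])

predictors = []
for block in (["rba"], ["rbr"], ["sf"], ["rbh"]):
    predictors += block
    model = sm.OLS(df["birc"], sm.add_constant(df[predictors])).fit()
    print(f"after adding {block[0]}: R^2 = {model.rsquared:.3f}")
```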
Principal Component Analysis (PCA)
As there are many variables in the questionnaire and a certain degree of collinearity between them, inputting all of the indicators into the neural network would increase network complexity and reduce training performance, whereas abandoning some variable indicators would result in a loss of information. With principal component analysis, feature dimensionality reduction can be achieved by orthogonally transforming many features into a few integrated features, so that the new principal components derived from the original variables can describe or explain most of the features of the multivariate variance-covariance structure [75]. This approach decreases the correlation between neural network input variables, streamlines the neural network structure, and improves neural network prediction accuracy [76].
BP Neural Network
As a feed-forward network using the error back propagation (BP) algorithm, a BP neural network is usually composed of an input layer, one or more hidden layers, and an output layer [77]. According to the Kolmogorov theorem, a three-layer BP network can achieve an approximation of arbitrary precision [78]. The neurons between the layers are connected by the corresponding network weights. The process of weight adjustment is the process of network learning, which continues until the network error reaches the convergence criterion; as a result, through back propagation, the output approaches the expected output. The essence is to discover the mapping relations between input and output contained in the finite sample data, so that an appropriate output can be given for an untrained input. This generalization ability is important for measuring the performance of a neural network [79].
Although the BP neural network has strong nonlinear mapping ability, the gradient descent method it uses depends on the initial conditions, and the network may converge on a local minimum instead of the global minimum. To achieve a better fit, the network needs to be retrained on the data multiple times, which can lead to over-fitting [80]. Moreover, its learning speed, accuracy, and generalization ability are not ideal.
Bayesian Regularized Neural Network
Regularization refers to limiting the scale of the weights and thresholds to improve the generalization ability of the neural network. In other words, on the basis of the neural network error function MSE, a penalty term, which can approximate the complex function, is added, improving the neural network objective function as in Equation (1):

F = αE_W + βE_D, (1)

where the sum of the squared network weights is described by Equation (2):

E_W = Σ_i W_i², (2)

W_i is the weight of the ith neural network connection, E_D = Σ_{i=1..n} (t_i − y_i)² is the sum of the squared residuals between the expected values and target values of the neural network over the n samples, and α and β represent the regularization parameters that determine the training target of the neural network and control the degree of fit achieved.
Bayesian regularization takes the objective function of the traditional neural network model as a likelihood function. The regularizer corresponds to a prior probability distribution on the network weights, and the network weights are regarded as random variables [81]. A Bayesian regularized neural network is a forward neural network trained with Bayesian regularization [82]. Using a hypothesized parameter probability distribution, this network learns in the whole weight space and evaluates the relevant parameters; it then adaptively adjusts the regularization parameters using Bayesian inference based on the posterior distribution [83]. The optimal weights are determined according to the probability density of the weights and, under the premise of ensuring the smallest squared network error, the weights are minimized to provide effective control of network complexity and to improve network generalization ability [84]. Bayesian regularization optimizes the fit of the neural network to the training samples and minimizes model complexity by improving the training performance function of the neural network.
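The objective in Equations (1) and (2) can be illustrated with a minimal numpy sketch; note that α and β are fixed constants here purely for illustration, whereas Bayesian regularization re-estimates them from the posterior distribution during training:

```python
# Regularized objective F = alpha * E_W + beta * E_D (Equations (1)-(2)).
# alpha and beta are fixed for illustration; Bayesian regularization
# would re-estimate them each epoch from the posterior distribution.
import numpy as np

def regularized_objective(weights, targets, outputs, alpha=0.01, beta=1.0):
    e_w = np.sum(np.square(weights))            # sum of squared weights, E_W
    e_d = np.sum(np.square(targets - outputs))  # sum of squared errors, E_D
    return alpha * e_w + beta * e_d

w = np.random.randn(137)             # e.g., the 137 effective parameters reported
t = np.random.rand(446)              # hypothetical training targets
y = t + 0.05 * np.random.randn(446)  # hypothetical network outputs
print(regularized_objective(w, t, y))
```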
Questionnaire Survey and Scale Test
The Likert 5-point evaluation method is employed for the questionnaire, where one indicates that the description of an item is completely inconsistent and five indicates that the item description is completely consistent. Upon finishing the questionnaire design, a small-scale pre-study is carried out to ensure the validity of the questionnaire. A total of 187 questionnaires are distributed in the pre-study, of which 151 are valid. The pre-study questionnaire is tested for reliability and validity; except for the two variables "environmental responsibility" and "social norms", the Cronbach's alpha coefficients of the variables are all above 0.72. After checking the test results, it is found that the item-total correlation coefficient of each question under these two variables is less than 0.2, and therefore these two items are deleted from the formal questionnaire. After re-testing the reliability, the Cronbach's alpha coefficients of all variables are between 0.71 and 0.92, indicating that the modified questionnaire has high reliability. In the structural validity test, all variables are divided into independent, dependent, and regulatory variables; the KMO values of the three are all around 0.8, the Bartlett sphericity test chi-square value is sufficiently large, and the significance probability Sig is 0.000, indicating that the structure of the pre-study questionnaire is good.
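The Cronbach's alpha statistic used in the reliability test above can be computed directly; the following numpy sketch applies the standard formula to a hypothetical item-response matrix:

```python
# Cronbach's alpha for one scale: items is an (n_respondents, n_items) array.
# alpha = k / (k - 1) * (1 - sum of item variances / variance of scale total)
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                           # number of items in the scale
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(151, 4))  # hypothetical Likert responses, 1-5
print(round(cronbach_alpha(demo), 3))
```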
After this, the formal questionnaire is distributed. A total of 628 questionnaires are collected, of which 526 are valid, an effective recovery rate of 84%. Table 2 shows the reliability and validity test results of the questionnaire. These results indicate that the credibility and structure of the questionnaire design improve on the pre-study questionnaire and that the questionnaire data can be used for further regression and prediction.
Descriptive Statistical Analysis
Gender, age, education level, occupation type, monthly income level, city of residence, number of permanent residents in the household, and family type are incorporated into the demographic sociological variables in the questionnaire (see Table 3). The proportion of men and women is essentially balanced. Age is mainly concentrated in the interval between 18 and 50 years old, accounting for 98.2% of the sample. The proportion of people with education from junior college to a Master's degree is 93.6%, which is roughly consistent with the age distribution and academic level of online shopping customers in China. The cities of residence cover the eastern, central, and western regions of China, including Hong Kong, Macao, and Taiwan. The rest of the demographic social variables provide more comprehensive income and family type information. Therefore, from the perspective of demographic sociological variables, the questionnaire respondents have a certain degree of representativeness, indicating that they can be used as a microcosm to study the intention of EPWR in China. To test the explanatory power of each variable before predicting BIRC, hierarchical regression of the data is performed by adding one dimension each time according to RBA, RBR, SF, and RBH.
Table 4 shows that R², representing the degree of fit of the model, gradually increases as variables are added, and the path coefficient Sig values of the four regressions are all below 0.005. The increase in R² is largest when the psychological dimension of recycling is added; R² also increases when historical recycling behavior is added, but not to a marked extent. This indicates that, on the basis of TPB and ABC theory, it is reasonable to add the extended variables to the questionnaire. As for the significance of the path coefficient of each variable (see Table 5), the predictive effects of ECV and SN are not obvious among the variables under the four dimensions, but the other variables have significant predictive effects on BIRC. It is worth noting that KR and ER have a negative predictive effect on BIRC. Analysis of the predictive effect also suggests that, apart from these two individual variables, the approach used in this study is effective in selecting and designing variables to predict BIRC. Since the regression prediction results are used as a reference in this study, the two factors with less dramatic predictive effects are not discarded.
Principal Component Analysis
Although the data in this study share the same dimensions, they are normalized and mapped to the 0-1 range to ensure the convergence speed and accuracy of the iterative solution in later calculations.
The normalization formula is described by Equation (3):

X = (X* − X_min) / (X_max − X_min), (3)

where X* is the raw data of a variable, X_min and X_max are, respectively, the minimum and maximum values in the original data, and X is the normalized data. The principal components of the variables are then extracted by principal component analysis. When extracting the principal components, the eigenvalue threshold is set to >0.6 to improve the contribution rate of the principal components and to ensure the accuracy of the behavioral intention prediction. As a result, there is a certain discrepancy between the factor loading distribution dimension of each variable and the scale design. The cumulative variance explanation rate is 81.625%, which means that the eight factors explain 81.625% of the information in the 15 variables. The variables are compressed and integrated while preserving the information in the raw data. In Table 6, the maximum loading of each variable index is extracted to identify the specific variable meaning of the principal component in which it is located, and each component is marked with a different color. Then, according to the principal component equation and the eigenvector matrix, the principal component values can be calculated. The principal component equation is Equation (4):

Y = Σ_j a_j X_j, (4)

where a_j is the variable factor loading in the eigenvector matrix, X_j is the normalized value of each variable, and Y is the principal component value; the eight principal component values Y1-Y8 are calculated in sequence. Pearson correlation analysis is carried out, and there is no correlation between the principal components. Therefore, Y1-Y8 can be utilized as inputs to the BP neural network prediction model.
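A minimal sketch of Equations (3) and (4) with scikit-learn follows, assuming a hypothetical 526 × 15 score matrix; the eigenvalue threshold of 0.6 is taken from the text, though with random demo data the number of retained components will differ from the eight reported:

```python
# Equation (3): min-max normalization, followed by PCA as in Equation (4).
# X_raw is a hypothetical 526 x 15 matrix standing in for the variable scores.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.random((526, 8))             # hypothetical latent traits
X_raw = latent @ rng.random((8, 15))      # 15 correlated indicator scores

X = (X_raw - X_raw.min(axis=0)) / (X_raw.max(axis=0) - X_raw.min(axis=0))

pca = PCA()
scores = pca.fit_transform(X)           # principal component values (Equation (4))
keep = pca.explained_variance_ > 0.6    # eigenvalue threshold from the text
Y = scores[:, keep]                     # retained components, the network inputs
print(keep.sum(), "components retained;",
      f"{pca.explained_variance_ratio_[keep].sum():.1%} of variance explained")
```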
Discussion of the Bayesian Regularized BP Neural Network Model

Construction of the Bayesian Regularized BP Neural Network Model

Determination of the Number of BP Neural Network Layers

The topology of a neural network typically consists of an input layer, one or more hidden layers, and an output layer. Generally speaking, as long as a sufficient number of hidden-layer neurons are present, a three-layer network can fully approximate any nonlinear function with finite discontinuities to arbitrary precision and thus achieve an arbitrary nonlinear mapping. Therefore, this study constructs a three-layer neural network consisting of an input layer, a hidden layer, and an output layer.

Selection of BP Neural Network Nodes
In general, the number of input and output variables determines the number of nodes in the input and output layers. The input data in this study are a matrix consisting of the eight principal component values, and the output data are the BIRC score matrix. Therefore, the numbers of nodes in the input and output layers are eight and one, respectively.
The number of nodes in the hidden layer is especially important to neural network performance. With too few nodes, the network may not be able to learn and identify the input information fully; with too many nodes, excessive fitting and poor fault tolerance may result, and the model training time may be extended. While ensuring the accuracy of model prediction, the minimum number of hidden-layer nodes should be selected. There is still no clear and unified formula for selecting the number of nodes; usually, after repeated trials by operators, the optimal number of hidden-layer nodes is identified by measuring the network training errors and the quality of the network fit. Using the empirical rule of Equation (5) as a starting range together with repeated trials, this study eventually determines the number of hidden-layer nodes as 15, at which the training effect is optimal, and a three-layer 8-15-1 neural network model is established:

h = √(n + m) + α, (5)

where h is the number of hidden-layer nodes, n and m are the numbers of nodes in the input and output layers, respectively, and α is an arbitrary constant between 1 and 10.
Selection of BP Neural Network Training Function and Training Parameters
The training function of the network is the Bayesian regularization algorithm trainbr, the training performance function is MSE, the transfer function from the input layer to the hidden layer is the tangent sigmoid function tansig, and the transfer function from the hidden layer to the output layer is the linear function purelin. The network learning rate Lr is set to 0.05, the maximum number of training iteration steps is 1000, and the training convergence criterion is 0.001.
Training of Neural Network Models
At the 172nd epoch, the maximum MU value of 7.85 × 10^10 is reached and the network stops learning. MU is a damping coefficient that increases when further iterations would make the error increase; reaching the maximum MU value indicates that the minimum error has been found and the training has converged [81]. Figure 3 shows the training convergence process. The convergence curve reveals that after 172 iterations, the fitting accuracy reaches 0.00054935, and the number of effective network parameters is 137. In the model training process, convergence speed is fast and learning efficiency is high, and the trained network can be used for test prediction.
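The network above is trained with MATLAB's trainbr; a rough Python analogue is sketched below using scikit-learn's MLPRegressor, where the tanh hidden layer and identity output mirror tansig and purelin, and a fixed L2 penalty alpha only approximates the adaptive Bayesian α/β re-estimated by trainbr:

```python
# Rough analogue of the 8-15-1 network: tanh hidden layer (cf. tansig),
# identity output (cf. purelin). The fixed L2 penalty `alpha` only
# approximates Bayesian regularization, which adapts alpha/beta each epoch.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
Y_train = rng.random((446, 8))   # hypothetical principal component inputs
birc_train = rng.random(446)     # hypothetical normalized BIRC targets

net = MLPRegressor(hidden_layer_sizes=(15,), activation="tanh",
                   solver="lbfgs", alpha=0.01, max_iter=1000, tol=1e-3)
net.fit(Y_train, birc_train)
print("training MSE:", np.mean((net.predict(Y_train) - birc_train) ** 2))
```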
Predictive Simulation by the Neural Network Model
To verify the validity of the trained network, 80 test data points are entered into the trained neural network model. The expected output of the test is the normalized measured value of BIRC, and the simulated prediction output is the result of inputting the 80 principal component value vectors into the network.
Figure 4 shows that among the 80 samples, 69 of the errors fall in the interval [−0.1, 0.1], which indicates that the prediction accuracy is high. The predicted and measured values of most of the samples in Figure 5 coincide or show a similar trend. However, the positive errors in Figure 4 are relatively greater, which leads in Figure 5 to a larger proportion of predicted values that exceed the measured values.
Since the current predicted values are obtained from the normalized input matrix, the predicted values are anti-normalized and compared with the measured BIRC values of the 80 samples. Figure 6 shows the difference between the two values. Many decimal places appear in the data after normalization; to predict the participants' BIRC more intuitively, the anti-normalized data are rounded upward so that the predicted result corresponds to the five-level Likert scale of the questionnaire. After anti-normalization, there are 64 samples with zero error between predicted and measured values, and the errors in the remaining samples are contained within the interval [−1, +1]. This further proves that the Bayesian regularized neural network model based on the principal components constructed in this study has high robustness and a good ability to predict behavioral intentions.
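The anti-normalization step is simply the inverse of Equation (3) followed by upward rounding onto the 1-5 Likert scale; a small sketch with hypothetical predictions:

```python
# Invert Equation (3) and round upward so predictions map back onto the
# five-point Likert scale used in the questionnaire.
import numpy as np

def denormalize(x_norm, x_min, x_max):
    return x_norm * (x_max - x_min) + x_min

pred_norm = np.array([0.31, 0.62, 0.88])  # hypothetical normalized predictions
pred_likert = np.ceil(denormalize(pred_norm, 1.0, 5.0))  # rounded upward
print(pred_likert)  # [3. 4. 5.]
```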
When analyzed from the perspective of degree of fit (see Figure 7), the training set has the highest fitting level, and the fitting degree of the test set also shows the high generalization ability of the model, because the overall fitting degree of the model is close to 95%. Moreover, this model does not exhibit the phenomenon in which the prediction error increases again after the error has been reduced to a certain value (also called over-fitting). These results all prove the reasonableness of the neural network model design, as well as the selection of training methods and parameters, which further demonstrates that a good choice of input variables can help achieve ideal training results.
Results of Sensitivity Calculation for Principal Components
According to Garson [85], the influence, or relative contribution, of an input variable can be calculated from the products of the connection weights automatically adjusted by the neural network during training, which reflect the degree of influence of the input variable on the output variable, i.e., the sensitivity. The sensitivity coefficient formula is shown in Equation (6):

I_j = Σ_{h=1..N_h} [ (|W_jh| / Σ_{i=1..N_i} |W_ih|) · |W_ho| ] / Σ_{i=1..N_i} Σ_{h=1..N_h} [ (|W_ih| / Σ_{k=1..N_i} |W_kh|) · |W_ho| ], (6)

where I_j is the weight of influence of the jth input variable on the output variable, N_i and N_h are the numbers of input-layer and hidden-layer nodes, W_ih is the weight from the input layer to the hidden layer, and W_ho is the weight from the hidden layer to the output layer. A larger I_j value indicates a greater impact on the output and a higher sensitivity. Table 7 shows the weights upon completion of neural network training, and Figure 8 presents the sensitivity coefficients of principal components 1-8 obtained by substituting the weights and the variables of each principal component into the formula. Of the eight principal components, the sensitivity coefficients of the second to sixth principal components are all greater than 12%, and that of the third principal component reaches 15.83%. This result shows that, in this study, behavioral result perception and perceived behavioral control under the psychological cognitive dimension of EPWR have the most significant predictive effects on behavioral intention. The other four main components with a sensitivity coefficient greater than 12% comprise the following variables: KR, CRP; ER, HAB, SN; EI, PEP, and HM. Their predictive influence on behavioral intentions is more dramatic, and the variables just named explain most of the psychological characteristics of recycling, situational factors, and historical recycling behavior. Contrary to general expectation, although the sensitivity of the three values in the recycling behavior attitude dimension is greater than 10%, their predictive effect is not as good as that of the other variables. The sensitivity coefficient of the seventh principal component is the lowest, indicating that the publicity index has limited effectiveness in predicting BIRC relative to the other variables.
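Equation (6) can be implemented directly from the trained weight matrices; the following numpy sketch of Garson's algorithm uses hypothetical weights with the 8-15-1 shape reported in this study:

```python
# Garson's algorithm (Equation (6)): relative contribution of each input,
# computed from the absolute values of the connection weights.
import numpy as np

def garson(W_ih, W_ho):
    """W_ih: (n_inputs, n_hidden) weights; W_ho: (n_hidden,) weights."""
    abs_ih = np.abs(W_ih)
    abs_ho = np.abs(W_ho)
    # share of each input at each hidden node, scaled by that node's output weight
    contrib = (abs_ih / abs_ih.sum(axis=0)) * abs_ho  # (n_inputs, n_hidden)
    importance = contrib.sum(axis=1)
    return importance / importance.sum()              # normalize so shares sum to 1

rng = np.random.default_rng(2)
W_ih = rng.standard_normal((8, 15))  # hypothetical 8-15-1 network weights
W_ho = rng.standard_normal(15)
print((garson(W_ih, W_ho) * 100).round(2))  # sensitivity coefficients in %
```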
Analysis and Discussion of Sensitivity Results
Generally speaking, among all the variable dimensions used to predict BIRC, the psychological cognitive dimension of recycling behavior has the most influential effect on behavioral intention prediction, which is consistent with the theory that behavior is an external activity dominated by psychology. The predictive utility of BRP and PBC also proves that the generation of behavior is regulated by the perceived behavioral results [54,55], which play a decisive role in the generation of BIRC through psychological cognition in EPWR. In other words, the richness of the resources and opportunities required by the individual to complete EPWR behavior largely determines their BIRC. According to social psychology, the individual's social psychology is also restricted by other people or groups, which expresses certain social characteristics [25,58]; therefore, the higher sensitivity coefficients of HM and SN are also supported. This indicates that residents will learn from and imitate the EPWR behavior of others or follow relevant social practice when performing recycling behavior, thereby shaping a correct recycling awareness and an understanding of the importance of their behavior being recognized by other people and by society. It is worth mentioning that, according to the regression analysis described in Section 3.3.2, the predictive effect of SN on BIRC is not obvious. However, the principle of regression prediction is the influence of each single variable on the dependent variable after controlling for other variables, and SN is an important part of SF. In the theoretical model used in this study, SF regulates attitude and the psychological cognition of recycling behavior. Therefore, when predicting BIRC in EPWR, the single impact of social norms is limited, but it influences other variables to achieve more significant regulatory effects.
Behavioral attitude also plays a decisive role in behavioral intentions as an important dimension in the theory of planned behavior. The coefficients of KR and CRP are relatively normal, but the regression prediction results show that the degree of mastery of KR is a negative predictor of BIRC. A likely reason is that, compared with other environmentally friendly behaviors, EPWR is regarded as lower-grade work with meager returns. EPWR has therefore not been officially and comprehensively promoted in China, which leaves it in a gray zone of citizen perceptions. In other words, people who acquire more and deeper knowledge of recycling or enjoy higher education levels may be less involved in EPWR because of their busy work schedules, high salaries, and the Chinese traditional concept of face [55]. Nevertheless, people with lower education levels, owing to unstable work, poor income, and the benefits available from EPWR, treat recycling as a part of their income and actively participate in recycling for money. People who are willing to recycle may thus be driven by the economic benefits rather than environmental values, while those with positive values may find it either too troublesome to participate in EPWR or inconsistent with their status, especially when the economic benefits of recycling are far less than their own wealth. This results in an attitude-behavior gap, which further leads to lower predictive efficacy for BIRC. This line of reasoning can also explain the negative predictive effect of ER on BIRC in the regression prediction.
Apart from the psychological factors of the actors themselves, the external stimulation system also plays a regulatory role and is treated as a primary factor determining behavior. Therefore, the government's adoption of economic means or the formulation and implementation of relevant policies and regulations can play a positive guiding role in EPWR behavior, but can also harden persistence in non-recycling behavior. Publicity is not as much of an incentive as economic means, nor is it as binding as policy means, making it the variable index with the lowest sensitivity coefficient when predicting BIRC in EPWR; this finding differs from the conclusions of some other researchers [26,56]. From the perspective of historical recycling habits, both HAB and IFB are found to have more than 10% sensitivity to recycling behavior intentions, and HAB combined with ER and SN is found to have a greater degree of influence. This shows that active persuasion and encouragement have an impact on BIRC, but an individual's own EPWR experience and habits have a more profound effect on their BIRC. When the behavior is completed, a "warm glow" effect is generated, which can promote further environmentally friendly behavior [86], especially for low-cost environmentally friendly activities like EPWR.
Briefly, in the future EPWR process, more attention should be paid to the psychological cognition of residents: ensuring that they have a strong sense of environmental responsibility, setting role models to provide reference templates for residents' recycling behavior, and shaping a social atmosphere that encourages residents to participate in recycling. Those who do not have comprehensive environmental values and knowledge of recycling should be enabled to realize, on a fundamental level, that participating in EPWR is not only an act leading to certain economic benefits but also an environmentally friendly behavior that can effectively protect the environment. Residents who have higher environmental awareness should be brought to realize that participating in EPWR will not make them lose face but is rather worthy of promotion, thus enabling them to translate correct environmental awareness and attitudes into environmental behavior. In addition, a focus is also needed on cultivating and enhancing residents' positive BRP and PBC, increasing news media exposure of the status quo of express packaging waste to enhance residents' attention to this issue, and strengthening the popularization of relevant recycling knowledge, thereby subtly influencing the environmental values of residents. In terms of external situational factors, relevant departments should not increase publicity intensity excessively, but should put more energy into implementing economic instruments and policy means and into establishing social norms that guide parents to train their children from an early age to recycle waste for environmental purposes. As a result, recycling behavior becomes habituated and rooted; this will also have a positive impact on individuals and even groups within the children's social circles, and a virtuous circle will be shaped.
Conclusions
This study has predicted the intention of express packaging waste recycling behavior based on data collected from a questionnaire. The questionnaire is designed based on TPB theory and ABC theory, and relevant literature is used to expand the set of variables: recycling knowledge, concern about recycling problems, herd mentality, behavioral result perception, social norms, and historical recycling behavior. Recycling behavioral attitudes, recycling psychological cognition, situational factors, and historical recycling behavior constitute the variable dimensions that measure behavioral intentions. In this study, a regression analysis is carried out on the rationality of extending the set of variables, and 15 variables are extracted into 8 principal components; this avoids collinearity between variables while simplifying the variable set, thus improving the training efficiency and predictive accuracy of the neural network model. Subsequently, the eight principal component values are entered into the neural network model, and the 526 questionnaires are classified into a training set and a test set, with the former used to train the neural network model and the latter to verify the validity of the model after training. Finally, the sensitivity of each principal component to the output result is analyzed based on the weights in the neural network model. The main conclusions are as follows:
1. The extended variable of historical recycling behavior effectively improves the predictive power of the intention to recycle.
2. The input of the neural network can be effectively streamlined by extracting the principal components from the variables.
3. A neural network based on Bayesian regularization can optimize the generalization ability of the network: the fitting precision is 0.00054935 after 172 iterations, and an ideal training effect is achieved. The simulation results from the verification set reveal that this study shows certain rationality in the selection of variables and training models. In the future, BIRC could be accurately predicted from metrics of the related variables.
4. According to the calculation results of the Garson formula, the sensitivity coefficients of behavioral result perception and perceived behavioral control are the highest among the second-level variables, whereas the sensitivity coefficient of publicity is the lowest. The predictive effect of values on behavioral intention is low, indicating a behavior-attitude gap in the recycling behavior of citizens. The sensitivity of cognitive behavior among the first-level variables is the highest, highlighting the importance of psychological cognition in recycling practice. As for recycling behavior attitude, concern about recycling problems and knowledge of recycling have a good predictive effect on behavioral intention. Social norms, economic incentives, and perceived effectiveness of policy among the situational factors all have higher sensitivity to behavioral intentions. Historical recycling behavior also makes a good contribution to behavioral intention prediction, and the forecasting accuracy for habit adjustment behavior is better.
As China's express packaging waste problem worsens, this study fills some of the gaps in research on EPWR behavioral intentions. Unlike previous studies that used regression or standard BP neural network models to predict behavior or behavioral intentions, this study employs a Bayesian regularized neural network fed with the principal components of the variable indices to predict behavioral intention. This methodology has strong generalization ability and high prediction accuracy, achieving a sound balance between the complexity and the goodness of fit of the neural network model. By measuring and analyzing the sensitivity of the eight principal components to behavioral intention, this study can provide a reference for the government and relevant departments in conducting EPWR activities and encouraging the public to become involved in EPWR.
Figure 1. Research model of intention of express packaging waste recycling.
5. Discussion of the Bayesian Regularized BP Neural Network Model
5.1. Construction of the Bayesian Regularized BP Neural Network Model
5.1.1. Determination of the Number of BP Neural Network Layers
Figure 2 presents the model map used in this study. The 15 variable indicators are compressed into eight principal components by principal component analysis, and the eight principal component values are then used as the input to the 8-15-1 three-layer Bayesian regularized BP neural network model. The 526 collected questionnaires are divided into training and validation sets, of which 446 form the training set and 80 the validation set. The training set is input into the established neural network model. At the 172nd epoch, the maximum MU value of 7.85 × 10^10 is reached and the network stops learning. MU is a friction coefficient that increases when further iterations would increase the error; reaching the maximum MU value indicates that the minimum error has been found and the training has converged [81]. Figure 3 shows the training convergence process. The convergence curve reveals that after 172 iterations the fitting accuracy reaches 0.00054935, and the number of effective network parameters is 137. In the model training process, convergence is fast and learning efficiency is high, and the trained network can be used as the prediction network for testing.
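The route from indicators to prediction can be illustrated with a short sketch. Python with scikit-learn is assumed; since scikit-learn provides no Bayesian-regularization trainer of the kind used in the study, an L2-penalized MLP (the alpha parameter) stands in for it, and the data below are random placeholders rather than the survey responses.

```python
# Minimal sketch of the described pipeline (illustrative, not the authors' code):
# 15 survey indicators -> 8 principal components -> 8-15-1 feed-forward net.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(526, 15))   # placeholder for the 526 questionnaires
y = rng.normal(size=526)         # placeholder for behavioral intention scores

# Compress the 15 indicators into 8 principal components.
pcs = PCA(n_components=8).fit_transform(X)

# 446 training / 80 validation samples, mirroring the split reported above.
X_tr, X_val, y_tr, y_val = train_test_split(pcs, y, train_size=446,
                                            random_state=0)

# 8-15-1 topology; alpha adds an L2 penalty as a stand-in for Bayesian
# regularization, which scikit-learn does not implement.
net = MLPRegressor(hidden_layer_sizes=(15,), alpha=1e-2, max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
print("validation R^2:", net.score(X_val, y_val))
```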
Figure 2. Route map of PCA and Bayesian BP network.
Figure 3. Training process of neural network.
Figure 4. Comparison error between predicted value and measured value.
Figure 5. Coincidence graph of predicted value and measured value.
Figure 6. Comparison error between predicted value and measured value after anti-normalization.
Figure 7. Fit of training and test sample.
5.3. Sensitivity Analysis Based on Neural Network Output Weights
5.3.1. Results of Sensitivity Calculation for Principal Components
Figure 8. Sensitivity coefficients and variable meanings of the main components for the output.
Table 2. Reliability and validity test results of questionnaire.
Table 3. Descriptive analysis of questionnaire population variables. Note: 1 means married without children or with children not living together; 2 means married and living with children.
4.3.2. Analysis of the Predictive Effect of BIRC
Furthermore, the predictive effects of RBA, RPR, SF, and RBH on the BIRC are analyzed using linear regression as a preliminary check before the neural network modeling.
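A minimal sketch of this preliminary regression step, assuming Python with scikit-learn; the feature matrix and scores are random placeholders standing in for the aggregated RBA, RPR, SF, and RBH block scores and the BIRC measure.

```python
# Illustrative preliminary regression (placeholder data, not the survey data):
# predict BIRC from the four aggregated blocks RBA, RPR, SF, RBH.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(526, 4))   # columns: RBA, RPR, SF, RBH (placeholders)
y = rng.normal(size=526)        # BIRC scores (placeholder)

model = LinearRegression().fit(X, y)
print("coefficients (RBA, RPR, SF, RBH):", model.coef_)
print("R^2:", model.score(X, y))
```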
Table 5. Analysis of the predictive effects of RBA, RPR, SF, and RBH on the BIRC.
Table 6. Orthogonal rotation matrix for components.
Table 7. Connection weight matrix of input layer to hidden layer and hidden layer to output layer.
| 2019-04-30T12:14:39.372Z | 2018-11-12T00:00:00.000 | {
"year": 2018,
"sha1": "b38d559f400cdc11bd63d43aecc616efcb1b14d2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/10/11/4152/pdf?version=1542013634",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "b38d559f400cdc11bd63d43aecc616efcb1b14d2",
"s2fieldsofstudy": [
"Environmental Science",
"Business"
],
"extfieldsofstudy": [
"Economics"
]
} |
7913374 | pes2o/s2orc | v3-fos-license | Roles of the WHHL Rabbit in Translational Research on Hypercholesterolemia and Cardiovascular Diseases
Conquering cardiovascular diseases is one of the most important problems in human health. To overcome cardiovascular diseases, animal models have played important roles. Although the prevalence of genetically modified animals, particularly mice and rats, has contributed greatly to biomedical research, not all human diseases can be investigated in this way. In the study of cardiovascular diseases, mice and rats are inappropriate because of marked differences in lipoprotein metabolism, pathophysiological findings of atherosclerosis, and cardiac function. On the other hand, since lipoprotein metabolism and atherosclerotic lesions in rabbits closely resemble those in humans, several useful animal models for these diseases have been developed in rabbits. One of the most famous of these is the Watanabe heritable hyperlipidemic (WHHL) rabbit, which develops hypercholesterolemia and atherosclerosis spontaneously due to genetic and functional deficiencies of the low-density lipoprotein (LDL) receptor. The WHHL rabbit has been improved to develop myocardial infarction, and the new strain was designated the myocardial infarction-prone WHHL (WHHLMI) rabbit. This review summarizes the importance of selecting animal species for translational research in biomedical science, the development of WHHL and WHHLMI rabbits, their application to the development of hypocholesterolemic and/or antiatherosclerotic drugs, and future prospects regarding WHHL and WHHLMI rabbits.
Introduction
According to the WHO, the major cause of death within member nations is cardiovascular disease, which accounts for about 30% of all deaths [1]. This report indicates that cardiovascular diseases are among the most important classes of diseases to be overcome. The main risk factors for cardiovascular diseases are hypercholesterolemia, hypertension, disorders of glucose metabolism, smoking, aging, male gender, and social stress. In particular, control of serum lipid levels is thought to be the most important factor in the prevention of cardiovascular diseases. Currently, in the Japanese population, the upper limits of the normal ranges for serum total cholesterol and LDL cholesterol levels are 220 mg/dL and 140 mg/dL, respectively, and the lower limit of the normal range of HDL cholesterol is defined as 40 mg/dL [2]. According to studies conducted during the 1980s, the incidence of cardiovascular events increases as the serum cholesterol level rises and decreases with hypocholesterolemic treatment [3]. One potent class of hypocholesterolemic compounds is the statins, competitive inhibitors of 3-hydroxy-3-methylglutaryl (HMG)-CoA reductase, the rate-limiting enzyme in cholesterol synthesis. The first statin (compactin) was initially developed by a Japanese pharmaceutical company, Sankyo Co. Ltd. [4], and this accelerated the development of cholesterol-lowering drugs. The hypocholesterolemic effect of compactin was initially examined in rats. However, the anticipated cholesterol-lowering effect was not observed [5], and development of the compound was halted. On the other hand, since compactin showed a potent inhibitory effect on cholesterol synthesis in vitro and in chickens, researchers looked for other mammalian species suitable for the assessment of this agent. They found a report of a mutant rabbit strain showing hyperlipidemia, published in a Japanese university's bulletin [6]. This rabbit strain contributed greatly to the development of the compound. The strain was the Watanabe heritable hyperlipidemic (WHHL) rabbit. This was in 1979. Currently, there are seven statins in widespread clinical use. It is estimated that statins are prescribed to more than 40 million patients worldwide, and statin therapy has decreased mortality from cardiovascular diseases by 20-50% [7]. Thus statins have become essential agents for the treatment of hypercholesterolemia and cardiovascular diseases. These results demonstrate the importance of selecting appropriate animal species and/or animal models for translational research to develop therapeutic agents.
This review highlights the importance of selecting animal species and/or animal models for translational research by describing the history of the WHHL rabbit and its contribution to studies of hypercholesterolemia and atherosclerosis.
The Development of the WHHL Rabbit and Its Characteristics
The history and characteristics of the WHHL rabbit were described in a previous article [8]. In 1973, Dr. Yoshio Watanabe (1927-2008) found one male Japanese white rabbit showing hyperlipidemia. From this mutant, he established a strain, the WHHL rabbit, after seven years of selective breeding. At first, this strain was designated the hyperlipidemic rabbit (HLR) [9]. He submitted a study on this strain to an international journal and renamed it the Watanabe heritable hyperlipidemic (WHHL) rabbit [10], following a suggestion by the editor. The strain has 300-700 mg/dL of total cholesterol and 300-400 mg/dL of triglyceride in plasma. There were atherosclerotic lesions in the aorta and xanthomas in the digital joints. The serum glucose level and blood pressure were within normal ranges. In WHHL rabbits, the function of low-density lipoprotein (LDL) receptors on the cell membrane is almost completely deficient, and the clearance of LDL from the circulation is delayed [11]. These symptoms closely resemble human familial hypercholesterolemia (FH), which develops spontaneously, and thus the WHHL rabbit is recognized as the first animal model of this disease. Later, the Nobel Prize winners Goldstein and Brown used WHHL rabbits to verify their hypothesis of an LDL receptor pathway for the metabolism of lipoproteins and clarified human lipoprotein metabolism [12][13][14][15]. Their studies revealed that lipoprotein metabolism in the WHHL rabbit closely resembles that in human FH. Consequently, WHHL rabbits came to be used as an animal model for the development of cholesterol-lowering agents.
One of the most important features of an animal model for hyperlipidemia is the occurrence of myocardial infarction, the final event of human hypercholesterolemia. The development of severe atherosclerotic lesions in the coronary arteries is a prerequisite for the occurrence of myocardial infarction, but the incidence of coronary atherosclerosis in the WHHL rabbit was initially very low. To establish a new strain that develops coronary atherosclerosis, serial selective breeding was conducted, and in 1985 the coronary atherosclerosis-prone WHHL rabbit was developed [16]. A strain with severe coronary atherosclerosis followed in 1992 [17]. Despite such long-term efforts, the incidence of myocardial infarction remained very low. After a further seven years of selective breeding with improved criteria, such as the use of descendants of rabbits with macrophage-rich coronary lesions, a new strain of WHHL rabbit was established: the myocardial infarction-prone WHHL (WHHLMI) rabbit, which spontaneously develops myocardial infarction through the progression of coronary atherosclerosis followed by occlusion of the coronary arteries [18]. The characteristics of WHHLMI rabbits are described in a previous review [19]. During their establishment, marked differences in the composition of atherosclerotic plaques were found between the aorta and the coronary arteries [20], and the WHHLMI rabbit became an animal model with which to examine the inhibitory effects of drugs on coronary atherosclerosis. These studies suggested that genetic factors other than hypercholesterolemia are important in myocardial infarction and coronary atherosclerosis. Figure 1 shows the changes in serum lipid levels with aging and the distribution of cholesterol among lipoproteins in WHHLMI rabbits [8]. Serum cholesterol levels are 900-1,400 mg/dL at weaning (3 months old) and at 6 months old, and then decrease gradually (700-1,200 mg/dL at 12 months old, 600-1,100 mg/dL at 18 months old, and 500-1,000 mg/dL at 24 months old). Serum triglyceride levels are 150-500 mg/dL, and the change with aging is small. HMG-CoA reductase activity (cholesterol biosynthesis) in WHHLMI rabbits does not decrease with aging, and the precise mechanism of the age-related decrease in cholesterol is still unknown [21]. About 70% of cholesterol occurs in the LDL fraction, 16% in the very low-density lipoprotein (VLDL) fraction, 13% in the intermediate-density lipoprotein (IDL) fraction, and 0.8% in the high-density lipoprotein (HDL) fraction. Figure 2 shows the extent of atherosclerotic lesions in the coronary arteries and aorta of WHHLMI rabbits [8]. The main coronary artery is the left circumflex artery, in which the atherosclerotic lesions are more advanced than in the left anterior descending artery and the right coronary artery. Therefore, the degree of coronary atherosclerosis (cross-sectional narrowing) has been evaluated using the left circumflex artery. The degree of aortic atherosclerosis is expressed as the ratio of the surface lesion area to the luminal surface area of the aorta. Atherosclerotic lesions develop from 2 months old. At 12 months of age, coronary cross-sectional narrowing was about 80%, and about 60% of the aortic luminal surface was covered by atherosclerotic lesions. At 18 months old, coronary cross-sectional narrowing and aortic lesion coverage increased to 90% and 80%, respectively [22].
Prior to the development of the WHHLMI strain, WHHL rabbits were used to investigate the mechanisms of atherosclerosis development, and many aspects of atherogenesis have been clarified.
Species Differences in Lipid Metabolism and Atherosclerosis
As mentioned, lipoprotein metabolism in rabbits closely resembles that in humans. However, representative laboratory animals such as mice and rats have lipoprotein metabolism very different from that in humans (Table 1). Some examples of major species differences in lipid metabolism are the following. (1) In mice and rats, the apoB editing enzyme is observed in both the intestine and the liver, but in humans and rabbits, this enzyme is expressed only in the intestine [33].
In humans and rabbits, apoB-48 is the major apolipoprotein of chylomicrons and chylomicron remnants, which carry exogenous lipids derived from food, and apoB-100 is the major apolipoprotein of VLDL, IDL, and LDL, the endogenous lipoproteins derived from the liver. In mice and rats, however, endogenous as well as exogenous lipoproteins contain apoB-48, because of the expression of the apoB editing enzyme in the liver [34]. Since the metabolic clearance of lipoproteins containing apoB-48 is very rapid, apoB-48-containing VLDL particles disappear rapidly from the circulation in mice and rats. As a result, LDL lipid levels in mice and rats are very low compared with those in humans.
(2) In mice, hepatic lipase circulates in the bloodstream, so the degradation of neutral lipids and the transport of free fatty acids into tissues differ from those in humans [35]. (3) In mice and rats, there is no cholesteryl ester transfer protein (CETP) activity in plasma, which transfers cholesterol from HDL to VLDL, IDL, and LDL [36], although CETP plays an important role in humans and rabbits. As a result, in mice and rats, the proportion of cholesterol in the HDL fraction is high compared with the other lipoprotein fractions. Therefore, the lipoprotein profiles of mice and rats are markedly different from that of humans, even in knockout mice lacking apoE or the LDL receptor [8]. (4) Statins, competitive inhibitors of the rate-limiting enzyme for cholesterol synthesis, showed potent hypocholesterolemic effects in WHHL rabbits [37][38][39][40][41][42][43][44][45] but not in mice and rats [5]. In humans, statins are the most effective hypocholesterolemic drugs. These results demonstrate how important it is to choose appropriate species in translational research. (5) C-reactive protein (CRP), a major inflammatory marker in humans and rabbits that increases in patients with acute coronary syndrome [46], is not responsive to inflammation in mice and rats, due to a lack of complement activation [47]. The major inflammatory marker of mice is serum amyloid P component (SAP) instead of CRP. (6) The types of myocardial fibers in mice also differ from those of humans and rabbits [48]. (7) Moreover, the ECG waveforms of mice and rats are clearly different from those of humans, whereas rabbit ECG shows waveforms similar to those of humans [49,50]. As such, mice and rats differ greatly in the sets of factors relevant to lipoprotein metabolism and cardiovascular diseases. Therefore, when employing mice and rats for studies on cardiovascular diseases and lipid metabolism, great care is required in the analyses and/or the interpretation of the experimental results. Figure 3 shows the features of WHHLMI rabbits that resemble humans and the applicable translational research fields.
Translational Research on the Development of the Lipid-Lowering Agents
Since the WHHL rabbit is close to humans in lipoprotein metabolism, it was used for the development of various lipid-lowering and atherosclerosis-suppressing agents [8]. The hypolipidemic effects of various drugs have been investigated in WHHL rabbits (Table 2): cholesterol synthesis inhibitors, such as HMG-CoA reductase inhibitors and squalene synthetase inhibitors; inhibitors of microsomal triglyceride transfer protein, which functions in the assembly of VLDL particles in the liver; anion exchange resins, which block the enterohepatic circulation of bile acids; omega-3 fatty acids, which are a component of fish oil; and fibrates, which lower serum triglyceride levels. In studies with a cholesterol synthesis inhibitor (statin), the serum total cholesterol levels of WHHL rabbits were decreased dose-dependently by 10-30% compared with the control group [37,39]. The mechanisms for the reduction in serum cholesterol levels by statins are an increase in the expression of LDL receptor mRNA in the liver [39] and, at high doses, a decrease in the excretion of VLDL cholesterol from the liver [38]. Agents that inhibit squalene synthetase, another rate-limiting enzyme in cholesterol synthesis, also decreased the serum cholesterol level by similar mechanisms [51]. Since a small amount of LDL receptor protein can be processed from a precursor to a mature form in WHHL fibroblasts [52], inhibition of cholesterol synthesis in the liver is expected to cause LDL receptors to accumulate on the surface of hepatocytes. Anion exchange resins absorb bile acids in the duodenum and block the enterohepatic circulation [53]. As a result, cholesterol is utilized in the hepatocytes for the synthesis of bile acids, and the hepatocytes, having exhausted their cholesterol pool, increase the number of LDL receptor molecules to acquire external cholesterol [39]. Therefore, the combination of a cholesterol synthesis inhibitor and an anion exchange resin can decrease the serum cholesterol level markedly, and this was demonstrated using WHHL rabbits [40]. Since microsomal triglyceride transfer protein (MTP) inhibitors are also effective in WHHL rabbits [54], they may have potential benefit for human FH. Successful treatment in WHHL rabbits means that patients with FH, other than those with the LDL-receptor-negative type, can be treated with these agents.
Translational Research on Antiatherosclerotic Effects
The purpose of lowering serum cholesterol levels is to inhibit atherogenesis and thereby circumvent cardiovascular and cerebrovascular events. The WHHL rabbit contributed to proving the effects of cholesterol-lowering therapies in delaying the progression of atherosclerosis. Statin treatment decreased serum total cholesterol levels by 20-30%, and the cross-sectional narrowing of the coronary arteries was significantly decreased [41][42][43][44][45].
In several clinical studies, the incidence of cardiovascular events was significantly reduced in the statin-treated groups despite little or no improvement in coronary stenosis evaluated by coronary angiography [55]. The WHHL rabbit contributed to the clarification of this paradox [42][43][44][45]. When statin was administered for one year to 10-month-old WHHL rabbits, in which coronary atherosclerosis had already developed to a mature stage, treatment not only prevented further progression of the coronary atherosclerotic lesions but also stabilized the coronary plaques in various ways, such as reducing the macrophage and extracellular lipid content of the lesions, increasing their collagen fiber content, and preserving the smooth muscle cells within them. Thus it was clarified that statin administration makes atherosclerotic lesions more stable, that is, less likely to rupture. This study confirmed that the stabilization of atherosclerotic lesions is important for the prevention of coronary events. Nowadays, more than 40 million patients worldwide are prescribed statins. Another type of cholesterol synthesis inhibitor, the squalene synthesis inhibitors, which act downstream in the cholesterol synthesis pathway, also showed similar hypocholesterolemic and atheroma-stabilizing effects in WHHLMI rabbits [56].
Using WHHLMI rabbits, antiatherosclerotic effects have also been evaluated for other compounds: omega-3 fatty acids, which decrease serum triglyceride levels by changing the composition of fatty acids [57][58][59][60][61]; antioxidants such as probucol, vitamin C, and vitamin E [62][63][64][65]; agents that regulate the function of macrophages [66,67]; and drugs that inhibit the renin-angiotensin pathway [68][69][70][71]. Interestingly, the antiatherosclerotic effects of antihypertensive agents varied in WHHL and WHHLMI rabbits. Angiotensin converting enzyme (ACE) inhibitors and angiotensin-II receptor blockers (ARBs) showed antiatherogenic effects [69][70][71][72], but calcium antagonists and beta-blockers were not effective [73,74]. Systolic blood pressure in WHHL and WHHLMI rabbits is 100-120 mmHg, which is only slightly higher than normal [75]. This may be why calcium antagonists and beta-blockers did not show distinct antiatherosclerotic effects. In contrast, the antihypertensive effects of ACE inhibitors and ARBs are mediated by suppressing the effects of angiotensin II. Angiotensin II stimulates atherogenesis by impairing the function of arterial endothelial cells and by promoting the proliferation of arterial smooth muscle cells and inflammation [76]. These pleiotropic effects of angiotensin II are considered to be mediated by reactive oxygen species. Thus, the WHHL rabbit is indispensable for studies on the antiatherosclerotic effects of various compounds.
Imaging Technology for Evaluation of Atherosclerotic Lesions
Although it is important to evaluate drug efficacy in clinical use, it is difficult to evaluate the atheroma-stabilizing effects of drugs in clinical practice. With coronary angiography, it is possible to see the degree of stenosis, but it is difficult to evaluate the severity of lesions if they are spread diffusely along the coronary arteries, or if the coronary arteries are expanded due to outward remodeling of the vessels. Furthermore, it is very important to develop noninvasive technologies and equipment to detect dangerous lesions, that is, vulnerable plaques that are prone to rupture, not only for diagnosis but also for the prevention of cardiovascular events. Among vulnerable plaques that cause cardiovascular events, soft-type plaques rich in macrophages and large lipid deposits covered with a thin fibrous cap are important. To detect such soft-type plaques, computed tomography (CT) [77], positron emission tomography (PET) [77], CT plus PET [78], magnetic resonance imaging (MRI) [78,79], and intravascular ultrasound (IVUS) [80] have been applied to WHHLMI rabbits. One successful example was the evaluation of the antiatherosclerotic effect of probucol, a potent antioxidant, in WHHLMI rabbits by imaging with CT plus PET [81]. Ogawa et al. demonstrated clearly that CT plus PET imaging is a powerful technology for detecting the antiatherosclerotic effects of compounds. Once imaging technologies for the evaluation of atherosclerotic lesions are established, they can be used not only for the assessment of drug effects but also for the detection of dangerous coronary lesions that could lead to cardiovascular events such as acute coronary syndromes, and consequently for the prevention of ischemic heart disease.
Perspectives
To overcome cardiovascular diseases, many research issues remain unresolved, despite diligent efforts to develop diagnostic methods and lipid-lowering agents. Particularly important are clarifying the mechanism of the disruption of coronary lesions (arterial plaque rupture and the subsequent formation of a thrombus), which triggers the onset of acute coronary syndromes, and establishing treatments. No suitable animal model compatible with the study of human acute coronary syndromes has yet been developed. To develop such a model, trial experiments such as the enhancement of vulnerable coronary lesions and the application of physical pressure to coronary lesions are currently underway with WHHLMI rabbits. To destabilize coronary lesions, serial selective breeding with new criteria, such as the formation of vulnerable plaques, is also ongoing, in parallel with the development of genetically modified WHHLMI rabbits overexpressing matrix metalloproteinases (MMPs), and so forth. The established strain would then be analyzed to identify the genes/loci responsible for the phenotype. In the near future, with advances in gene-targeting technologies using ES or iPS cells capable of germ-line transmission, in combination with the nuclear transfer technique, more precise manipulation of the rabbit genome may also become available. Since the composition and severity of coronary lesions differ even among WHHLMI rabbits, despite no difference in serum cholesterol levels, it will be important to explore marker proteins and/or risk factors affecting coronary lesions. Once markers and risk factors related to vulnerable coronary atheromas are found, the mechanism of cardiovascular events may be clarified. Such findings would contribute to the development of new clinical diagnostics and thence to the prevention of cardiovascular events.
In conclusion, selecting an appropriate animal model is important in translational research. WHHL and WHHLMI rabbits have contributed to the development of hypocholesterolemic and antiatherosclerotic compounds and of medical devices, such as imaging technologies for atherosclerosis and diagnostic techniques for acute coronary syndromes, in addition to the elucidation of the mechanisms of atherogenesis and coronary plaque rupture. These studies support the advancement of therapeutics. | 2014-10-01T00:00:00.000Z | 2011-04-19T00:00:00.000 | {
"year": 2011,
"sha1": "3e1b869a98fedbb5ca75adec387951f194d1c779",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2011/406473.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3e1b869a98fedbb5ca75adec387951f194d1c779",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
238107666 | pes2o/s2orc | v3-fos-license | Comparison of strain and ferroelectric properties of La,Nb and Li,Nb co-modified BNKT–ST ceramics
A comparison of the influence of concurrent A- and B-site acceptor-donor (Li,Nb) and donor-donor (La,Nb) doping in Pb-free Bi0.5(Na0.84K0.16)0.5TiO3–SrTiO3 (BNKT–ST) on the structure, ferroelectric properties and high-voltage-induced strain was made. In this work, lithium was chosen as the acceptor and lanthanum as the donor dopant on the A-site, with Nb5+ as the B-site donor. Both modifier impurities promoted a phase evolution from the coexistence of two phases (rhombohedral-tetragonal) to a more symmetric single pseudocubic structure. Nonetheless, (La,Nb) donor-donor doping induced the phase transition at 1 mol%, while (Li,Nb) acceptor-donor doping did so at 2 mol%. Interestingly, acceptor-donor (Li,Nb) co-substitution results in a broadening of the ergodic relaxor-ferroelectric (ER-FE) mixed phase boundary, which is favourable for multidevice applications. This comparative study shows that A-site vacancies play a substantial role in inducing the nonergodic-to-ergodic relaxor (NR-ER) phase transformation and in improving the high-voltage-induced strain.
Introduction
The emphasis of piezoelectric research during the last decade has been on the development of lead-free alternatives to Pb-based piezoceramics (PZT and its solid solutions), owing to environmental issues and the laws enacted against them [1][2][3]. Bismuth (Bi3+)-based perovskites are lead-free ceramics that have drawn considerable attention due to the similarity of their electronic structure to that of Pb2+. Bismuth sodium titanate (BNT) ceramic has attracted special interest owing to its environmental friendliness and good piezoelectric performance. Nonetheless, difficult poling, low resistivity and a high coercive field are the downsides of unmodified BNT [2][3][4]. Various elemental modifications of pure BNT have been prepared to improve the desired electromechanical properties [5][6][7][8][9]. Among BNT-based binary systems, BNKT is a highly attractive system, with relatively good electromechanical performance and a morphotropic phase boundary (MPB) similar to that of PZT ceramics.
Previous studies on BNKT-based piezoelectric ceramics found that cation doping induces hardening and softening effects that can improve the piezoelectric and ferroelectric properties [10][11][12][13][14][15]. Soft donor doping disrupts the ferroelectric (FE) order through the generation of A-site vacancies and increases the dielectric, piezoelectric and electromechanical responses [16,17]. On the contrary, the hardening effect of acceptor doping may generate oxygen vacancies, decreasing the dielectric constant and loss while enhancing the coercive field and mechanical quality factor [16,17]. Then, the mixture of powders was ball-milled again for 24 h and completely dried in an oven.
Uniaxial pressing at 98 MPa was used to produce disc-shaped samples (10 mm in diameter). These disc samples were annealed at 1160 °C with a dwell time of 2 h.
Samples were coated with silver paste and placed in a furnace at 650 °C for a dwell time of 0.5 h to form permanent electrodes. Poling was carried out in a silicone oil bath by applying a dc field of 3-4 kV/mm. X-ray diffraction (XRD; X'pert MPD 3040, Philips, The Netherlands) was used for crystal structure analysis. Polarization versus electric field (P-E) hysteresis was measured using a ferroelectric tester (Precision LC, Radiant Technologies Inc., Albuquerque, NM). The piezoelectric strain response was recorded using a contact-type displacement sensor (Millitron, Model 140).
In the case of La-Nb co-doping, the structural phase transition was observed at x,y = 0.010, while the Li-Nb co-doped system showed the phase transition at x,y = 0.020. A similar phenomenon was observed previously in BNKT-based piezoceramics modified with Hf [10], Zr [11] and Nb [13].
A comparison of the high-voltage-induced polarization hysteresis (P-E) loops and the related composition dependence of the remnant polarization (Pr), coercive field (Ec) and maximum polarization (Pm) for La,Nb and Li,Nb co-modified BNKT-ST piezoceramics at room temperature (RT) is presented in Fig. 2 and | 2020-04-30T09:11:53.042Z | 2020-04-29T00:00:00.000 | {
"year": 2020,
"sha1": "9bb6229c1716e47437ad66bd6711c79a80893ff4",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-24755/v1.pdf?c=1631848066000",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "e8828ee28223c8a84040a0f2fc8e3928fa9983a9",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |