Implications of publicly available genomic data resources in searching for therapeutic targets of obesity and type 2 diabetes

Obesity and type 2 diabetes (T2D) are two major conditions that are related to metabolic disorders and affect a large population. Although there have been significant efforts to identify their therapeutic targets, few benefits have come from comprehensive molecular profiling. This limited availability of comprehensive molecular profiling of obesity and T2D may be due to multiple challenges: these conditions involve multiple organs, and collecting tissue samples from subjects is more difficult in obesity and T2D than in diseases where surgical treatments are common. While there is no repository of comprehensive molecular profiling data for obesity and T2D, multiple existing data resources can be utilized to cover various aspects of these conditions. This review presents studies with available genomic data resources for obesity and T2D and discusses genome-wide association studies (GWAS), a knockout (KO)-based phenotyping study, and gene expression profiles. These studies, based on their assessed coverage and characteristics, can provide insights into how such data can be utilized to identify therapeutic targets for obesity and T2D.

Introduction

Obesity and T2D are major public health problems, and their rates are increasing. It has been reported that 40% of adults in the UK will have obesity by 2025 [1] and that the worldwide population with T2D will approach 600 million in the next 20 years [2]. Understanding the molecular mechanisms of these conditions is important for identifying their therapeutic targets, but there has been limited success in identifying target genes because, aside from rare cases of clear genetic abnormalities such as maturity-onset diabetes of the young, Donohue syndrome, or Rabson-Mendenhall syndrome [3], they are not genetic disorders in general. Another challenge is that, unlike cancer, they are generally not initiated from a single organ. For example, a major mechanism in the development of T2D is the acquisition of insulin resistance, which may involve the accumulation of various environmental factors, and multiple organs, such as adipose, liver, and muscle, participate in that process. These characteristics imply that obesity and T2D result from abnormal dynamic states of relevant biological functions rather than from aberrations of certain driver genes, which has created challenges in searching for simple therapeutic targets. For this reason, medical approaches to obesity or T2D focus more on controlling the phenotypes of subjects, such as reducing caloric intake or appetite for obesity and decreasing blood glucose levels, increasing sensitivity to insulin, increasing insulin secretion, or using insulin therapy for T2D, rather than on curing the disease by eliminating its drivers or restoring the metabolic status to a normal state. Considering that obesity and T2D are due to abnormal dynamic states of relevant biological functions, it can be challenging to find therapeutic targets that apply to all subjects, and it may be necessary to identify different points of intervention for different subjects, as an abnormality of the same biological function can arise from multiple points of aberration in molecular activities.
For this reason, understanding the overall mechanisms and identifying therapeutic candidates for obesity and T2D in the general population requires studying cohorts large enough to include the variance in metabolic phenotypes and potentially diverse driving mechanisms, along with comprehensive data that can represent the exact status of individual subjects, such as detailed phenotypes and multi-omic profiles (genomic, epigenetic, metabolic, and proteomic). However, research communities studying obesity and T2D lack such comprehensive data resources, unlike other diseases, such as cancer, for which many comprehensive multi-omic data resources are publicly available. Even though there are no comprehensive data resources for obesity and T2D, individual studies can constitute certain aspects of comprehensive data collections. This review will discuss currently available genomic data resources that can be utilized to identify therapeutic candidates for obesity and T2D, including GWAS, a KO-based phenotyping study, and gene expression studies that have observed expression changes in subjects with obesity and T2D across relevant organs. The included data sets range from individual studies to large data sets curated by international consortia. Utilizing these data sets with consideration of their characteristics can be an alternative approach that mimics comprehensive molecular profiling and provides a useful reference by curating customized genomic data sets to study therapeutic candidates for specific phenotypic conditions.

DNA-level susceptibility to obesity and T2D

Many early approaches to identifying genetic effects on obesity and T2D were GWAS. A GWAS observes known or candidate single-nucleotide polymorphisms (SNPs) and phenotypes related to obesity and T2D, and the statistical association between each SNP and phenotype is evaluated. Based on GWAS, it is possible to identify genes that contain, or are close to, loci associated with susceptibility to the studied phenotypes. Unlike rare cases of diabetes with clear genetic drivers, variants at these susceptibility loci can have subtle effects on the function of relevant genes, as previous studies reported rather modest effect sizes of genetic variants on T2D, ranging from 10 to 35% [4,5]. Nevertheless, T2D is known to have a notable genetic basis, as the co-occurrence of T2D in monozygotic twins is significantly higher, at ~70% frequency, whereas dizygotic twins showed a frequency of only 20-30% [6]. In normal populations with susceptibility loci, these subtle effects can generate long-term phenotypic differences in conjunction with other non-genetic, often environmental, factors. Table 1 lists selected popular GWAS that assessed phenotypes related to obesity or T2D. Most consortia or studies are based on a collection of cohorts, and it should be noted that some cohorts are occasionally included in multiple consortia or studies. Phenotype information available from the individual cohorts may not be completely coherent within a consortium or study. Thus, the phenotypes listed in Table 1 are those that each consortium or study made an effort to generate via coherent collection and analysis.
Some consortia or studies directly analyzed the association with disease outcomes (obesity or T2D; DIAGRAM, InterAct, GoT2D, and T2D-GENES), whereas others studied associations with more detailed phenotypes, such as body measurements or fat composition (EPIC-Norfolk, Fenland, GENESIS [7], GIANT, UK Biobank, and UKHLS), lipid profiles (EPIC-Norfolk, Fenland, GENESIS [7], GLGC [8], InterAct, UK Biobank, and UKHLS), and insulin resistance/sensitivity (Fenland, GENESIS [7], and MAGIC). Individual-level genetic data are rarely available except from a few resources that accept applications; thus, it is difficult to collect individual-level raw genetic data from multiple cohorts, together with phenotypic information, to conduct an association analysis. However, the summary statistics of associations between SNPs and phenotypes are often publicly available, generally including p-values of statistical significance, frequencies in cohorts, and effect sizes, and this information is useful for designing and conducting a meta-analysis of interest. Table 2 lists selected major GWAS publications that assessed genetic associations with phenotypes relevant to obesity or T2D. Studied phenotypes are listed for each work, but it should be noted that most studies consider additional phenotypes for the adjustment of statistical associations or the prioritization of associated variants. Most studies are meta-analyses that utilize multiple cohorts from several consortia or studies. A general approach of these meta-analyses is to identify novel susceptibility loci by increasing the population size with multiple cohorts or to provide independent evidential support for identified novel loci by using extra cohorts as independent validation data. Another approach of meta-analysis is to systematically integrate the results of multiple GWAS of various phenotypes to model certain types of conditions or diseases. A good example of this type of meta-analysis is the work by Lotta et al. [9], in which candidate loci associated with lipodystrophy-like phenotypes were identified by integrating the results of several GWAS consortia. Most studies provide a list of identified loci, and some also provide more detailed summary statistics through their related consortia. A novel meta-analysis of GWAS can be designed to study genetic loci susceptible to specific combinations of phenotypes by integrating GWAS summary statistics that were derived from analyzing associations with individual phenotypes.
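As a concrete illustration of how such per-study summary statistics can be combined, the sketch below applies the standard inverse-variance fixed-effects formula to one SNP reported by several cohorts. The numbers are invented for illustration; this is a generic textbook computation, not the procedure of any specific consortium.

```python
import numpy as np

def fixed_effects_meta(betas, ses):
    """Inverse-variance fixed-effects meta-analysis for one SNP:
    combine per-study effect sizes (betas) and standard errors (ses)."""
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2                      # precision weights
    beta = np.sum(w * betas) / np.sum(w)  # pooled effect size
    se = np.sqrt(1.0 / np.sum(w))         # pooled standard error
    return beta, se, beta / se            # effect, SE, z-score

# Hypothetical effect sizes for the same SNP-trait association in three cohorts
print(fixed_effects_meta([0.12, 0.08, 0.15], [0.04, 0.05, 0.06]))
```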
In addition to GWAS-derived data sources from individual consortia or studies, there are online data resources in which previous GWAS results are curated and can be accessed through user-friendly interfaces. The NHGRI-EBI GWAS Catalog [10] provides search and visualization of published SNP-trait associations and bulk download of its contents for systematic analysis. It currently contains 63,205 unique SNP-trait associations from >3200 publications, including GWAS on phenotypes other than obesity and T2D. The Type 2 Diabetes Knowledge Portal [11] is a T2D-focused online data portal in which 22 GWAS/exome chip/whole-genome sequencing/exome sequencing data sets are curated with association information for 47 traits. It provides user interfaces that can simulate the systematic integration of multiple GWAS with various phenotypes, where users can search for variants of interest in individual GWAS data sets from participating consortia and form combinations. However, it does not provide bulk download of entire integrated data sets. These data portals provide the functionality of various searches on diseases, genes, phenotypes, or variants. Available GWAS results cover associations with various phenotypes that are related to obesity or T2D, most of which belong to one of four categories: insulin resistance/sensitivity-related phenotypes, lipid profile-related phenotypes, outcome of obesity, and outcome of T2D. For a better understanding of the gene coverage associated with these phenotypes, genes that have been associated with any of the four phenotype categories were collected from the NHGRI-EBI GWAS Catalog [10]. Specifically, the bulk GWAS result data of all 63,205 SNPs that have ever been reported to be associated with phenotypes were obtained, and SNPs that were associated with phenotypes of at least one of the four categories were collected. For each SNP with such an association, the gene that includes the SNP was determined to be associated with the corresponding phenotype, or, if the SNP was in an intergenic location, the gene closest to the SNP was determined to be associated. Fig. 1 shows a Venn diagram of the 2375 genes that are associated with at least one of the four obesity/T2D-related phenotype categories. A certain degree of overlap is shown, but each phenotype category also has genes with exclusive associations. The six genes that show associations with all four categories of phenotypes include the well-known peroxisome proliferator-activated receptor gamma (PPARG), a regulator of adipocyte differentiation [12] that has been implicated in numerous diseases, including obesity [13] and T2D [14]. Another such gene is peptidase D (PEPD), which is known to play an important role in collagen metabolism [15].

Fig. 1 Genes that were ever reported to be associated with phenotypes relevant to obesity or T2D, as assessed from the NHGRI-EBI GWAS Catalog [10]

As already mentioned, the direct effect size of GWAS-identified loci on obesity/T2D-related phenotypes is relatively small. It should be noted that the genes related to GWAS-identified loci point to biological functions with certain roles in the development of metabolic disorders rather than being decisive disease drivers. For this reason, considering the genes from GWAS generally requires further direct validation of the mechanisms that drive these metabolic disorders.
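A minimal sketch of the SNP-to-gene assignment described above is given below. The file layout and column names (trait, overlapping gene, nearest gene) are simplified stand-ins for the GWAS Catalog bulk download format, and the trait keywords are illustrative, not the curation actually used in this review.

```python
import pandas as pd

# Assumed simplified export of GWAS Catalog associations; the real bulk
# files use different column names and richer gene-mapping fields.
assoc = pd.read_csv("gwas_catalog_associations.tsv", sep="\t")

CATEGORIES = {
    "insulin": ["insulin resistance", "insulin sensitivity"],
    "lipid": ["cholesterol", "triglyceride", "HDL", "LDL"],
    "obesity": ["obesity", "body mass index"],
    "t2d": ["type 2 diabetes"],
}

genes_by_category = {}
for cat, keywords in CATEGORIES.items():
    hit = assoc["trait"].str.contains("|".join(keywords), case=False, na=False)
    # Use the overlapping gene when the SNP is intragenic, otherwise fall
    # back to the nearest gene, as described in the text.
    genes = assoc.loc[hit, "gene_in_snp"].fillna(assoc.loc[hit, "nearest_gene"])
    genes_by_category[cat] = set(genes.dropna())
```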
Causal gene identification with gene KO mouse models

GWAS takes a passive, observational approach that searches for associations between phenotypes of interest and genetic variants in real populations. For this reason, it is challenging to uncover specific mechanisms of action from the identified susceptibility loci, as they generally explain marginal effect sizes. In comparison, understanding the function of genes by knocking them out in model species and observing the resulting phenotypes is an extreme interventional approach. In this approach, each gene is knocked out in a model species, and the resulting phenotypes are observed based on predefined protocols. A good example of this approach is the International Mouse Phenotyping Consortium (IMPC) [16], whose objective is to produce KO mouse lines for >20,000 known genes and observe the various resulting phenotypes with standardized protocols. It is an international consortium of multiple institutions, and these institutions produce germ-line transmissions of targeted KO mutations in embryonic stem cells for known/predicted mouse genes. Each mutant mouse line is tested through a standardized primary phenotyping pipeline (see the consortium's website for a complete list of studied phenotypes) covering all major adult organ systems and most areas of major human disease. Briefly, phenotypes are observed from embryonic status until the 16th week and include lethality, body measurements and compositions, metabolic profiles, insulin-related phenotypes, and pathological, physical, and physiological phenotypes. It is an ongoing project, and the current release (Release 6.1) includes phenotype information from knockouts of 3371 mouse genes. IMPC provides online search functionality for genes, diseases, and phenotypes, and detailed phenotype information is provided where available for queried KO models. Among the phenotypes studied by IMPC, those relevant to obesity or T2D can likewise be grouped into three categories: insulin resistance/sensitivity-related phenotypes, lipid profile-related phenotypes, and obesity-related phenotypes, such as weight changes. Among the 3371 studied IMPC genes, genes that showed statistically significant changes in phenotypes belonging to any of the three categories were assessed from IMPC Release 6.1.
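The category assessment just described can be sketched as a simple filter over a table of per-gene statistical results. The file name, column names, and significance threshold below are assumptions for illustration; IMPC's actual release files and genotype-phenotype significance calls follow the consortium's own schema and procedures.

```python
import pandas as pd

# Hypothetical flat export of IMPC statistical results, one row per
# (gene, phenotype) test; column names are illustrative only.
res = pd.read_csv("impc_release_6_1_results.csv")

CATEGORIES = {
    "insulin": ["insulin", "glucose"],
    "lipid": ["cholesterol", "triglyceride", "HDL", "LDL"],
    "obesity": ["body weight", "fat mass", "lean mass"],
}

significant = res[res["p_value"] < 1e-4]     # assumed significance call
genes_by_category = {
    cat: set(significant.loc[
        significant["phenotype_term"].str.contains("|".join(kw), case=False, na=False),
        "gene_symbol"])
    for cat, kw in CATEGORIES.items()
}
```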
Fig. 2 shows the Venn diagram of the 856 genes that caused statistically significant phenotypic changes in each phenotype category. As with the GWAS-identified genes, the genes from the KO-based phenotyping study show a certain degree of overlap as well as genes unique to each phenotype category. Thirty genes show changes in all three phenotype categories, and they include previously known genes involved in energy transfer and metabolism. CHN1 is a GTPase-activating protein [17]; BNIP2 is related to myogenesis [18] and GTPase activator activity [19]; and HBS1L and GIMAP6 [20] are related to GTP binding. NCOA1 is involved in controlling the energy balance between white and brown adipose tissues [21]. CYP17A1 and CYP27B1 are members of the cytochrome P450 superfamily of enzymes [22]; they are monooxygenases that catalyze many reactions involved in drug metabolism and the synthesis of cholesterol, steroids, and other lipids. LEPR is a receptor for leptin and is involved in the regulation of fat metabolism [23].

Fig. 2 Genes that showed statistically significant phenotype changes after KO, from IMPC (based on Release 6.1)

The advantage of this KO-based phenotyping approach is its direct observation of the phenotypes resulting from an individual gene KO, which minimizes the undesirable effects of other factors in analyzing the biological function of the target gene. However, there are a few challenges with this approach. Establishing KO mouse models is itself a challenging task, often requiring significant time and effort. Controlling the quality of the standardized phenotyping protocol can also be a technical obstacle, especially when multiple independent organizations collaborate internationally. There is also an inherent limitation in that lethal genes are hard to study with this approach, as KO of these genes prevents the production of adult mouse lines and the subsequent phenotyping. In addition to such challenges, a few characteristics should be noted before utilizing the phenotyping results of gene KO. Current phenotyping protocols are focused on identifying phenotypes in normal environments (for example, feeding normal chow); thus, these studies do not represent possible phenotypic changes under environmental stresses of interest (for example, a high-fat diet) that were not considered in the phenotyping protocols. As this approach is based on model species, potential discrepancies between the model species and humans should be considered. Another issue is that this approach performs KO of genes in the whole body rather than tissue-specific silencing, whereas in realistic situations several relevant organs can have individual roles, via specific biological functions, in the development of metabolic disorders. Thus, consideration of the genes from KO-based phenotyping studies requires an understanding of these pros and cons and of their relationships with human disease mechanisms.

Human gene expression profiling of obesity and T2D

A metabolic disorder is a condition in which the dynamic status of in vivo metabolism falls into disorder throughout the body (for example, the insulin-resistant state of T2D). Thus, developing effective therapeutic approaches can require an understanding of the exact dynamic states of the metabolic systems within the body of individual patients. This understanding requires the following considerations. First, comprehensive molecular profiling is necessary to form broad multi-omic observations, including gene expression, protein expression, and metabolic profiles. Second, this comprehensive molecular profiling needs to be conducted on the various relevant organs, such as adipose, liver, and muscle in the case of insulin resistance. However, gene expression profiling is the only relatively popular approach for high-throughput molecular profiling, owing to its higher reliability and lower costs compared with other techniques. There are also certain challenges in acquiring the human tissue samples needed for molecular profiling, as surgical treatment is not a general treatment for obesity or T2D. For these reasons, few studies are currently available that have conducted comprehensive molecular profiling across the relevant organs, even when only gene expression is considered. Nevertheless, some studies have conducted gene expression profiling in specific organs under certain conditions of interest. As in the case of GWAS with various phenotypes, appropriate integration of these data sets can enable assessment in a way that mimics comprehensive multi-organ profiling. To integrate multiple gene expression profiles from independent studies, normalization between data sets is required to achieve data-level coherency. Such normalization is most straightforward when all data sets are generated on the same platform; however, gene expression profiling has been performed with various microarray and next-generation sequencing platforms. Among the many different platforms for gene expression profiling, the platform with the largest number of studies is the Affymetrix GeneChip Human Genome U133 Plus 2.0 microarray, despite recent advances in next-generation sequencing. Table 3 lists the studies on obesity or T2D with available gene expression profiles based on the Affymetrix GeneChip Human Genome U133 Plus 2.0 microarray. Most studies profiled samples of only one tissue, except for two data sets (GSE13070 and GSE41168).
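One common way to achieve the between-data-set coherency mentioned above is quantile normalization, which forces every sample to share the same empirical expression distribution. The sketch below shows the idea; it is a generic technique offered as illustration, not necessarily the normalization procedure used by the studies in Table 3.

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize a (genes x samples) expression matrix so that
    every column shares the same empirical distribution. Ties are handled
    crudely here; production pipelines average tied ranks."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)   # rank of each value per column
    mean_sorted = np.sort(X, axis=0).mean(axis=1)       # mean value at each rank
    return mean_sorted[ranks]
```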
The approaches of the studies vary: studying gene expression profiles of disease only, comparing disease profiles with normal control profiles, comparing profiles across different stages of disease, comparing profiles before and after certain interventions, and comparing profiles from siblings or twins to reduce the effect of genetic backgrounds. From this collection of expression profiles of various conditions generated on the same profiling platform (as listed in Table 3), gene expression profiles from multiple studies can be integrated into a single normalized data set such that the subject conditions of the studies match the conditions of interest. As a simple example of integrating gene expression profiles from several studies with subjects of interest, differentially expressed genes (DEGs) between lean healthy subjects and obese healthy or obese diabetic subjects were identified in a tissue-specific way. Of the 20 data sets listed in Table 3, 17 studies (all except E-TABM-325, GSE27916, and E-MTAB-1895) provide BMI information together with metabolic profiles or insulin resistance/sensitivity information. A total of 602 gene expression profiles of adipose, liver, and muscle samples from the 17 studies were integrated into a single data set, where the lean/obese condition of each sample was determined based on BMI, and the healthy/diabetic condition was determined based on the metabolic profiles and insulin resistance/sensitivity information. For each tissue type, a gene was declared a DEG if it showed more than a 1.5-fold change in expression with an FDR-adjusted p-value < 1E-6 (t-test) between lean healthy samples and obese/diabetic samples.
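A minimal sketch of this DEG criterion is given below, assuming log2-scale normalized expression matrices for one tissue; the log scale and the Benjamini-Hochberg FDR procedure are assumptions, since the text specifies only a t-test, a 1.5-fold-change cutoff, and FDR adjustment.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def find_degs(expr_lean, expr_obese, fc_cut=1.5, q_cut=1e-6):
    """expr_*: (genes x samples) log2 expression for one tissue type.
    Returns a boolean mask: >1.5-fold change and FDR-adjusted
    t-test p-value < 1e-6, as described in the text."""
    _, p = stats.ttest_ind(expr_lean, expr_obese, axis=1)
    q = multipletests(p, method="fdr_bh")[1]              # FDR adjustment
    log2fc = expr_obese.mean(axis=1) - expr_lean.mean(axis=1)
    return (np.abs(log2fc) > np.log2(fc_cut)) & (q < q_cut)
```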
Fig. 3 shows the Venn diagram of the 2334 DEGs identified from the three tissue types. Because of tissue-specific gene expression, many DEGs are differentially expressed in a tissue-specific manner. For example, PPARG, a regulator of adipocyte differentiation, is an adipose-specific DEG. There are 34 common DEGs that show differential expression in all three tissue types. Five of these 34 DEGs are known to be related to metabolism or mitochondria. FAHD1 is a mitochondrial enzyme related to tyrosine metabolism [24], and THRSP is related to the regulation of lipid metabolism and lipogenesis [25]. DNAJC15 [26] is a negative regulator of the mitochondrial respiratory chain that prevents mitochondrial hyperpolarization and restricts mitochondrial generation of ATP; MRPS10 is a mitochondrial ribosomal protein; and LIAS is localized in mitochondria and known to be associated with hyperglycinemia [27]. Note that these are DEGs common to all tissue types, and their relevance to mitochondria and metabolism may not be tissue-specific. Compared with DNA-level genetic variants, which make a relatively small contribution to effect sizes, DEGs with significant expression changes under the phenotypes of interest can more directly represent the biological mechanisms that drive such phenotypes, because these expression changes are a snapshot of the current biological dynamic status. Thus, searching for therapeutic targets based on gene expression profiles may provide higher chances of identifying points of intervention than searching solely based on DNA-level susceptibility variants. However, it should be noted that gene expression profiles are based on transcription profiles and thus have their own limitations. First, there can be discrepancies between transcription-level activities and protein levels or metabolic activity levels, as there are many post-transcriptional regulatory mechanisms, such as small RNA activities. Second, identifying the key driver events of these transcriptional changes is still a challenge. Nevertheless, publicly available gene expression profiles from relevant studies of obesity and T2D are important and beneficial resources, as they provide unique information.

Comparing biological coverage of GWAS, KO-based phenotyping, and gene expression profiles

To compare the coverage of obesity/T2D-related genes that can be identified from currently available GWAS, KO-based phenotyping, and gene expression profile data, the genes identified from the different data types were compared with one another. Fig. 4 illustrates the Venn diagram of the obesity/T2D-related genes that were identified from each data type in the previous sections and the amount of overlap between them. The identified genes show very little overlap between data types, and the DEGs from gene expression profiles show significantly low overlap with the other two data types (p-value of low overlap: DEG-GWAS = 7.73E-17, DEG-IMPC = 0.026). The overlap between the genes identified from GWAS and those from the KO phenotyping study is also very low, but its statistical significance is not as strong as in the other cases. This low commonality between the obesity/T2D-related genes from different data types suggests that their different approaches to assessing the relationships between genes and phenotypes introduce biases in the coverage of identified genes. The discrepancy is clearer between the DEGs from gene expression profiles and the genes from the other two data types, suggesting that gene expression-level changes and DNA-level genetic effects may cover different biological aspects. This difference in coverage between the results of studying gene expression profiles and those of studying DNA-level genetics becomes more evident when their enriched biological functions are compared with one another. For each list of genes identified from gene expression profiles, GWAS, and KO-based phenotyping, the statistical enrichment of known biological functions was evaluated to identify the most strongly relevant biological functions for each list. The Molecular Signatures Database [28,29] is a collection of annotated gene sets, in which 17,774 gene sets are curated, each with a related list of genes (Molecular Signatures Database v5.2). Among these, each of the 6659 gene sets that represent known biological pathways (curated from pathway databases such as KEGG [30] and REACTOME [31,32]) and Gene Ontology [33,34] biological processes and molecular functions was evaluated for its overlap with each list of genes identified from gene expression profiles, GWAS, and KO-based phenotyping, and the statistical significance of the overlap was computed as a hypergeometric p-value. For the list of genes from each data type, biological functions with an FDR-adjusted p-value < 1E-10 were declared the most strongly relevant functions.
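The hypergeometric overlap test just described can be sketched in a few lines; the FDR step across all 6659 gene sets mirrors the adjustment described above, while the Benjamini-Hochberg method specifically is an assumption.

```python
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

def enrichment_p(gene_list, gene_set, background_size):
    """Hypergeometric p-value for observing at least the actual overlap
    between an identified gene list and one annotated gene set."""
    k = len(gene_list & gene_set)                    # observed overlap
    return hypergeom.sf(k - 1, background_size,      # P(X >= k)
                        len(gene_set), len(gene_list))

def enriched_functions(gene_list, gene_sets, background_size, q_cut=1e-10):
    """FDR-adjust across all gene sets; keep the strongly enriched ones."""
    names = list(gene_sets)
    p = [enrichment_p(gene_list, gene_sets[n], background_size) for n in names]
    q = multipletests(p, method="fdr_bh")[1]
    return [n for n, qv in zip(names, q) if qv < q_cut]
```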
Fig. 5a shows the Venn diagram of the most strongly relevant biological functions for the three data types. The biological functions that are very strongly enriched in the genes that showed obesity/T2D-related phenotypes in the KO-based phenotyping study (IMPC) were, except for one function, also discovered by the other data types: 37 biological functions were discovered by both GWAS and gene expression profile-based analysis, and three further functions were discovered by gene expression profile-based analysis. The biological functions from gene expression profile-based studies show large discrepancies with those from GWAS, which strongly implies differences in the biological coverage of gene expression profiles and DNA-level genetic susceptibility information. Fig. 5b illustrates the very strongly enriched biological functions relevant to obesity/T2D for the different data types, showing the biological mechanisms that are specifically enriched in the DEGs from gene expression profiles. As shown in Fig. 5b, the gene lists from gene expression profiles, GWAS, and KO-based phenotyping share strongly enriched biological functions related to metabolism, differentiation, homeostasis, and lipids. However, biological functions related to muscle, immunity, catabolism, cytokines, epigenetic modification, and inflammation are, in general, specifically enriched in the genes from gene expression profiles. This finding implies that genes involved in such biological functions are more affected by dynamic gene expression changes than by static genetic backgrounds. These results emphasize the need to consider the discrepancies in gene coverage and biological functions identifiable with different data types when searching for therapeutic targets and strategies.

Conclusion

Many efforts have been made to understand obesity and T2D and to find their therapeutic targets. However, few data resources exist with comprehensive high-throughput molecular profiles for obesity or T2D, although such comprehensive molecular information is essential for understanding these conditions. In this review, publicly available genomic data resources for obesity and T2D were discussed, covering major GWAS, a KO-based phenotyping study, and studies with gene expression profiles based on a popular microarray platform. While no single comprehensive data resource is available, systematic integration of these individual data sources, based on their associated phenotypes and experimental conditions, gives us a chance to mimic comprehensive collections of genomic data. GWAS and the KO-based phenotyping study provided insights into the function of individual genes, whereas gene expression profiles provided complementary opportunities to observe dynamic, system-level changes in biological functions that could not be observed with DNA-level information. A comparison of the obesity/T2D-associated genes identified from the different data types showed differences in the coverage of identifiable genes, and a comparison of their enriched biological functions provided stronger clues to the biological discrepancies that can be recognized with different data types. Thus, utilizing these data resources in one's own studies with specific disease models requires consideration of such discrepancies in data characteristics and coverage. From this point of view, a desirable approach to building a comprehensive molecular profile for obesity or T2D requires consideration of the following.
First, a cohort must be collected broadly enough to represent a wide range of metabolic conditions, as metabolic conditions such as obesity and T2D develop continuously through varying states of metabolic dynamics. Second, a comprehensive collection of phenotypes must be monitored to precisely model the progression of these metabolic conditions. Third, tissue samples from the relevant organs must be collected from the individuals in the cohort, as several organs participate in the development of metabolic conditions. Lastly, efforts should be made to render the molecular profiles of the tissue samples as comprehensive as possible by covering various levels of molecular mechanisms, including information at the DNA, transcript or gene expression, epigenetic, protein, and metabolic profile levels. Such comprehensive molecular profiling from multiple human organs (where possible), or even from the organs of model species, will provide information on molecular activities in obesity and T2D at an unparalleled level of resolution, and this rich information will become a solid basis for searching for therapeutic targets and developing treatment strategies.
Current status of contact lenses usage in Korea: A population-based cohort study 2021

Purpose: To investigate trends in contact lens usage in a nationally representative sample of the Korean population in 2021.

Methods: For this retrospective study, we analyzed data of 3,601 Korean participants aged 10-59 years from the Korea National Health and Nutrition Examination Survey (KNHANES, 2021 version) who underwent eye examination, of whom 1,136 individuals (274 men and 862 women) were contact lens users. The demographic trend among Korean contact lens wearers was examined using statistical analyses to investigate the changes in their contact lens-wearing experience, duration of lens use, type of lens used, location of purchase, presence of an Eye Care Practitioner (ECP)'s prescription, lens-related ophthalmic complications, and type of lenses worn at the time of complications, according to sex. Multivariable logistic regression analysis was conducted to examine the association of each variable with the rate of complications and the use of soft lenses.

Results: The average age of the contact lens users was 33.42±0.33 years, with 70.36% (weighted percentage) of users being women, who used contact lenses for significantly longer periods than men (p<0.001). Additionally, only the wearing of cosmetic lenses was significantly correlated with the occurrence of complications (p = 0.006), and 6.76% of users purchased lenses without a prescription. Multivariate analysis among the contact lens users revealed a significant relationship between the complication rate and female sex (p = 0.002), pre-existing eye disease diagnosed by ECPs (p = 0.0288), and duration of contact lens use (p<0.0001).

Conclusion: We identified sex differences in contact lens usage trends in Korea. The main changes observed were an increase in middle-aged lens users and a decrease in female users compared with the early 2000s. In addition, contact lens complications were significantly associated with sex and pre-existing eye disease. Therefore, those wearing contact lenses for extended periods should exercise caution and consult eye care specialists in the presence of any symptoms.

Introduction

Since the invention of contact lenses as an alternative to glasses in 1888 by Adolf Eugen Fick [1], significant advances have been made in the material and design of contact lenses. Many people today use contact lenses to address refractive errors and for cosmetic reasons [2-4]. The design, material, and manufacturer of the lenses that are easily accessible in each nation all shape actual trends in contact lens wear. In addition, the distribution of refractive errors, the level of training of contact lens dispensers (optometrists, ECPs, or unregulated lens sellers), the age and sex ratio of the population, and the socioeconomic conditions of the patients affect the status of contact lens prescriptions [5]. With the increasing incidence of myopia at younger ages [6], the use of contact lenses for myopia correction is becoming common. In addition, the age at which corrective lens usage is suggested is also reported to be decreasing, especially because of recent advances in slowing the progression of myopia, either with overnight orthokeratology (OK) or with multifocal soft contact lenses [5,7-10].
The yearly international contact lens prescribing report is the most extensive annual multinational study of contact lens prescriptions; it reports the results of large-scale surveys in 20 countries over 20 years, starting with a 1996 UK contact lens survey [5,11]. However, it is regrettable that, although the target countries differ in their contact lens prescription systems, these inter-country differences are not considered. Moreover, there may be bias, because contact lens practitioners in each country were only included in the dataset if they voluntarily responded to the survey. Although information on contact lens prescribing trends would be useful to facilitate national eye-care planning, there are only a few population-based or large-scale studies on prescribing status, especially in East Asia [12-15]. The concerns raised by a survey conducted by the Korean Contact Lens Society in 2008 were the decreasing age of contact lens users and the rapid increase in complications related to cosmetic colored lenses [16]. The Korea National Health and Nutrition Examination Survey (KNHANES) is an ongoing nationwide, population-based, cross-sectional health examination and survey that accumulates data on the health and nutritional status of the non-institutionalized population of South Korea. Ophthalmologic examinations were included in the survey in the latter half of 2008 to investigate the prevalence and risk factors of common eye diseases [17]. This study aimed to investigate contact lens prescribing trends using data collected from a recent large-scale Korean population-based survey of contact lens prescription status and complication history. In addition, we analyzed the characteristics of individuals who experienced contact lens complications. South Korea operates within a distinctive context in which the conventional concept of an "optometrist" is notably absent. The term "Eye Care Practitioner (ECP)" is used to prevent confusion, because it encompasses both ophthalmologists and optometrists who are capable of examining eyes and recognizing and managing eye diseases.

Study population

The KNHANES is a national surveillance system that has been assessing the health and nutritional status of Koreans since 1998. Contact lens-related investigations were conducted among participants aged >40 years in 2019 and 2020 and among those aged 10-59 years in 2021. Since the present study requires the investigation of various age groups, the 2021 data were utilized. Of the 7,090 individuals surveyed in the KNHANES in 2021, 3,603 individuals aged 10-59 years were subjected to an eye examination, and 3,601 were included in this study; two participants were excluded because of an anophthalmic state or prior ocular enucleation or exenteration. In total, 1,136 participants in this study were contact lens users. The KNHANES data are publicly available (https://knhanes.kdca.go.kr/knhanes/sub03/sub03_02_05.do); the authors did not have access to information that could identify individual participants during or after data collection. This study was approved by the Institutional Review Board of the Catholic University of Korea, and the requirement for informed consent was waived because of the retrospective cohort nature of the study (IRB number: SC23ZISE0053).
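The sample selection above can be expressed as a simple filter over the public KNHANES file. The file and column names below are illustrative stand-ins, not the actual KNHANES variable names.

```python
import pandas as pd

# Hypothetical column names; the public KNHANES files use coded variable names.
d = pd.read_csv("knhanes_2021.csv")

eye = d[d["age"].between(10, 59) & (d["eye_exam_done"] == 1)]   # 3,603 in the paper
eye = eye[eye["anophthalmia_or_enucleation"] == 0]              # 2 excluded -> 3,601
lens_users = eye[eye["ever_used_contact_lens"] == 1]            # 1,136 lens users
```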
KNHANES survey for contact lens use

The KNHANES surveys are conducted annually using a rolling sampling design that involves a complex, stratified, multistage, probability-cluster survey of a representative sample of the non-institutionalized civilian population of South Korea. The KNHANES comprises three component surveys: a health interview, a health examination, and a nutrition survey. Health interviews and examinations are conducted by trained medical staff and interviewers at mobile examination centers. After completion of the surveys, data are assembled and stratified based on socioeconomic status; health behaviors; quality of life; healthcare utilization; anthropometric measures; biochemical profiles using fasting blood serum and urine; measures of dental health, vision, hearing, and bone density; radiography results; food intake; and dietary behavior [18]. The survey items relevant to the present study included questions on the experience of wearing contact lenses, duration of lens use, type of lens used, place of purchase, presence of an ECP's prescription, lens-related ophthalmic complications (including keratitis), and the type of lenses worn at the time of complications. The types of lenses were classified as soft, hard (RGP), OK, and cosmetic lenses. The presence or absence of ocular conditions was identified based on self-reported responses from participants. Responses were solicited by selection from items pertaining to glaucoma, cataracts, macular degeneration, retinal vascular occlusion, diabetic retinopathy, and dry eye syndrome. Complications were likewise identified from self-reported responses. Participants were asked about their experience with contact lens-related "ophthalmic complications," including corneal inflammation, and responded with "Yes" or "No." Data on the types and severity of complications were not collected.

Statistical analyses

All statistical analyses were performed using SAS version 9.4 (SAS Institute, Cary, NC, USA). The KNHANES utilizes a multi-stage clustered probability design. Individual-level weights are assigned to estimate the population by accounting for the complex survey design, survey nonresponse, and post-stratification. Therefore, all statistics in this survey are presented as weighted percentages throughout the study [18]. The clinical and demographic characteristics of the study participants are presented as mean ± standard error (SE) for continuous variables and as numbers with weighted percentages (%) for categorical variables. The p-values for continuous variables were obtained using Student's t-test, and the p-values for categorical variables were obtained using the Rao-Scott chi-square test. Multivariable-adjusted logistic regression analysis was conducted to examine the odds ratio (OR) and 95% confidence interval (CI) for the association of each variable with the lens complication rate and with soft lens use. Model 1 was unadjusted, and Model 2 was adjusted for age, sex, income, region, duration of contact lens use, place of purchase, whether the lenses were prescribed by an ECP, and whether the participants had a history of eye disease. The p-values were calculated using hierarchical multivariate logistic regression analyses, and statistical significance was set at p < 0.05.
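For readers who want to reproduce the flavor of this analysis outside SAS, the sketch below fits a weighted logistic regression and converts coefficients to ORs with 95% CIs. The file and column names are hypothetical, and note that freq_weights reproduces weighted point estimates only; valid design-based standard errors for a stratified, clustered survey such as KNHANES require dedicated survey-analysis software.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("knhanes_2021_lens_users.csv")        # hypothetical analysis file

y = df["complication"]                                  # 1 = self-reported complication
X = sm.add_constant(df[["age", "female", "income",
                        "years_of_use", "preexisting_eye_disease"]])

# Weighted logistic regression (Model 2-style covariates, simplified).
fit = sm.GLM(y, X, family=sm.families.Binomial(),
             freq_weights=df["person_weight"]).fit()

ci = fit.conf_int()
print(pd.DataFrame({"OR": np.exp(fit.params),
                    "CI_low": np.exp(ci[0]),
                    "CI_high": np.exp(ci[1]),
                    "p": fit.pvalues}).round(3))
```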
Results

The general characteristics of the participants wearing contact lenses are summarized in Table 1. Regarding the duration of use, the group with 1-5 years of use accounted for the largest share of the sample, at 33.17%. Women had a significantly longer period of use than men (p<0.001), with 46.45% of women but only 17% of men reporting lens use for a duration of ≥5 years. Among male contact lens users, nearly half (45.59%) used the lenses for ≤1 year. The most commonly used lens type was the soft lens for both sexes, and women were significantly more likely to use cosmetic lenses than men (p<0.0001). Purchasing lenses from eyeglass stores was the predominant method of purchase for both sexes (88.61% of men and 81.83% of women). In the same context, only 21.01% of the population purchased lenses based on prescriptions from ECPs. It can be inferred that most lens users purchase lenses without consulting their ECPs. Additionally, among participants who experienced complications, the use of soft lenses was significantly higher among men, at 88.45% in men versus 60.38% in women (p = 0.0151). The overall prevalence of contact lens use was 32.77%; only 18.96% of all men but 47.28% of all women had used lenses, indicating that women were the predominant lens wearers. The prevalence of contact lens use was highest in the 30-39-year age group for men and the 20-29-year age group for women (Fig 1). Among contact lens users, 70.36% (weighted percentage) of the participants were women. The mean age of participants with contact lens use was 33.42±0.33 years (p<0.0001). Fig 2 shows the prevalence of contact lens use in South Korea stratified according to various factors. To investigate whether urbanization, location of residence, and the population percentages of women and of those aged 20-39 years were associated with the prevalence of contact lens use, the percentage of contact lens users was stratified according to the population of city dwellers and other geographical areas. In addition, the percentages of women contact lens users and of users aged 20-39 years were stratified into five groups (Fig 2B and 2C, respectively). Seoul, Gwangju, Busan, and Daejeon exhibited the highest usage rates; the lowest values were observed in Jeonnam and Ulsan. Soft lens use accounted for 48% of total usage, followed by cosmetic lenses at 33%, corneal refractive lenses at 10%, two or more types at 5%, and hard lenses at 4% (Fig 3). Table 2 presents a quantitative comparison of patients with contact lens complications. Although there was no statistically significant correlation between age and the frequency of complications, the frequency of complications was highest in the 30-39-year age group. The frequency of complications was also significantly higher among women than among men (chi-square p<0.0001). In addition, there were some positive correlations between the duration of contact lens wear and the frequency of complications. Among the various types of contact lenses used, only cosmetic lens usage showed a significant correlation with the occurrence of complications (p = 0.006). The mode of purchase and type of prescription did not appear to be significantly correlated with the occurrence of complications. Finally, soft lenses were associated with the highest rate of complications, possibly because they were the most commonly used type.
Tables 3 and 4 present an analysis of the factors affecting contact lens use and complications, controlling for other variables through logistic regression. In addition to the previously mentioned factors, various other factors, such as region, household income, and previously diagnosed eye disease, were included as influencing factors. In the univariate logistic regression analysis, older age (p = 0.0177), female sex (p<0.0001), pre-existing eye disease diagnosed by an ECP (p = 0.001), and duration of contact lens use (p<0.0001) were associated with the complication rate after contact lens use (Table 3). Additionally, among pre-existing eye diseases, we conducted further analyses to examine the correlation between ocular surface-related conditions, specifically dry eye syndrome, and lens complications. Dry eye syndrome exhibited a significant association with the complication rate (data not shown, p = 0.0303). In the multivariate analysis, female sex (p = 0.002), pre-existing eye disease diagnosed by an ECP (p = 0.0288), and duration of contact lens use (p<0.0001) showed significant relationships with the complication rate among contact lens users (Table 3). No other variables were identified as risk factors associated with an increased complication rate among contact lens users. In the univariate analysis for soft lens usage, the 20-39-year age group (p<0.0001), absence of pre-existing eye disease diagnosed by an ECP (p = 0.004), duration of contact lens use (p<0.0001), and place of purchase (p = 0.0002) were associated with soft lens usage (Table 4). No other factors were significantly associated with soft lens usage. In the multivariate analysis, the 20-39-year age group (p<0.0001), male sex (p = 0.0048), absence of pre-existing eye disease diagnosed by an ECP (p = 0.0011), duration of contact lens use (p<0.0001), and place of purchase (p<0.0001) showed significant relationships with soft lens usage among contact lens users (Table 4). No other variables were identified as risk factors for increased soft lens usage.

Discussion

With an increasing number of contact lens users in South Korea, research on contact lens usage trends is being conducted consistently. A multi-institutional, nationwide survey on contact lens use was conducted from October 2000 to September 2002 with 482 individuals [19], and a similar study involving 920 high school students in a specific region was published in 2011 [20]. To the best of our knowledge, this is the first large population-based study to investigate trends in contact lens use in South Korea. Considering the advancements in medical technology and the improvement of socioeconomic conditions, this study presents a comprehensive report on the status and complication trends of contact lens use on a nationwide scale, encompassing individuals of all ages, with a sample size of 1,136 cases. In addition, while previous studies were limited to analyzing contact lens usage in a specific cohort of contact lens wearers, this study evaluated the proportion of contact lens users in the entire population, providing a more accurate understanding of trends in contact lens use. The most notable observation from this study was the increase in the age of contact lens users and the decrease in the rate of female users compared with the data from previous studies.
We indirectly compared our study with the report published in 2004, which investigated contact lens user trends using questionnaires distributed in universities and hospitals [19]. Although the previous study was not a population-based study, we were able to outline approximate changes in contact lens usage patterns. According to that report [19], the main contact lens users were in the 20-29-year age group, accounting for 72% of the total sample, followed by the 30-39-year group at 16.8%, with those aged ≥40 years constituting only 2.7%. In the present study, however, the average age of contact lens users was 33.42 years; 28.88% were aged 30-39 years, and 30.7% were aged ≥40 years, indicating that the average age of contact lens users is increasing. Further, women accounted for 70.36% of our sample, a reduction from the 88% reported in the 2004 study [19]. We also compared our findings with the 2020 Global Contact Lens Prescribing Report by Morgan [5]. Morgan's study comprises a much larger sample size, having collected data from prescribing practitioners encompassing over 400,000 contact lens fits from 71 countries. The study's primary objective was to document global trends in contact lens prescriptions; thus, it focused on a limited number of parameters, including material, modality, frequency of wear, type of correction, and care system. In contrast, our study involved a smaller number of lens users, approximately 3,000, who voluntarily selected their lens type and were drawn from a single national study. Furthermore, our study gathered various data types, such as lens type, duration of lens wear, place of lens purchase, and the presence of complications. Despite these differences, we found it valuable to compare the two studies to gain insight into the general similarities and differences in lens-wearing patterns. In 2020, a multi-institutional survey was conducted with 13,311 individuals aged 1-75 years across 24 countries. The average age of the participants was 32.4±15.6 years, similar to that in our study. However, the proportion of women contact lens users was 65%, lower than that reported in our study (70.36%), indicating that a large proportion of young women are using contact lenses for cosmetic purposes and that there are more middle-aged contact lens users than in the past. This is probably because middle-aged women started using contact lenses during the 2000s, when they were in their 20s, and have continued to wear them since. Figs 2 and 3 present the purpose of lenses used in South Korea by comparing the regional lens user ratio and the percentage of lens types currently in use. Fig 2B and 2C present the percentages of users aged 20-39 years and women users, who were the majority of lens users in previous studies [16,19,20], stratified by region. The contact lens usage rates in Seoul, Gwangju, Busan, and Daejeon were similar. The percentage of contact lens use experience in the past one month was 48% for soft lenses, 33% for cosmetic lenses, 10% for OK lenses, and 4% for RGP lenses (Fig 3). We further compared the lens usage trends of our study with those in Japan, a country with similar ethnic characteristics where a primary purpose of contact lens use is cosmetic enhancement. In a study by Itoi et al.
[10] conducted at Kyoto Medical University, Japan, contact lens users aged 16-60 years were surveyed between 2003 and 2016; the prevalence of cosmetic lens use was ≤15% during the study period, lower than the prevalence in South Korea (33%). This indicates that the use of cosmetic lenses in South Korea is high, especially among the young female population. We analyzed the contact lens usage patterns separately for men and women. We found distinct differences in usage patterns between the two sexes, which, to our knowledge, have not been reported previously. Teenage women were more likely to be contact lens users (p = 0.1379), significantly more likely to wear lenses for long periods of time (p<0.0001), and significantly more likely to use cosmetic lenses than men (p<0.0001) (Table 1). Additionally, we analyzed contact lens-related complications based on population factors. Because of limitations on the number of survey questions, we were unable to include inquiries about the specific types and severity of complications, as well as lens modality. Therefore, the information presented regarding contact lens complications should be viewed with caution. However, this study is significant as one of the first attempts to explore trends in contact lens complications within the Korean population, considering the lack of population-based research on this subject in Korea. Accordingly, our study analyzed the impact of various population-based factors on lens-related complications. Incidences of contact lens complications were significantly more common in women (p<0.0001), although complications from soft lenses were significantly more common in men than in women (p = 0.0151) (Table 1). Table 2 shows that female contact lens wearers had a significantly higher frequency of complications than males (p<0.0001). Most female contact lens users in South Korea have been wearing lenses for at least five years (46.45%), as shown in Table 1. This can explain the high rate of complications among female contact lens users. More than 50% of male contact lens users had been wearing contact lenses for less than a year, and within the contact lens user group, men preferred soft lenses. Complications occurred even after a short period of use, usually less than one year, suggesting that insufficient care may also be a contributing factor. We conducted a multivariate analysis to identify factors that were strongly correlated with complications after contact lens use and found that the percentage of complications among women was higher even after adjusting for the duration of contact lens use. This suggests other potential contributing factors, such as make-up, a high prevalence of cosmetic lens use, or a lower threshold for discomfort among women. This finding aligns with the results of a study conducted by Kim in 2008-2009 [16], in which a significant increase in complications was observed among teenage patients using cosmetic colored contact lenses, with a prevalence of 32.33%. The percentage of complications also increased with household income, but this may be due to the proportional relationship between income and use of medical services. Conversely, the correlation between income and complications may have been underestimated owing to the tendency of the lower-income group to have a lower level of medical service use.
The group diagnosed with ophthalmic diseases by an ECP showed a 1.8-fold higher rate of complications than the other groups (Table 3), indicating a correlation between underlying ophthalmic diseases and lens use complications. Therefore, individuals with ophthalmic diseases require proper care from an ECP to prevent complications while using contact lenses. However, our study showed that over 80% of contact lens users purchased lenses from eyeglass stores without a prescription, which means that their underlying conditions may not have been addressed before using contact lenses (Table 1). Given the increased risk of ocular diseases such as macular degeneration, cataracts, and glaucoma in individuals aged over 40 years, it is crucial to enhance patient education about the significance of an examination by an ECP before contact lens use. As this was a cross-sectional study, limitations such as the risk of recall or information bias cannot be denied. In addition, we could not determine the causal relationship between the variables and the rate of complications, because we investigated odds ratios rather than hazard ratios. Unfortunately, the KNHANES is a survey that investigates various health conditions and nutritional status and is not limited solely to ophthalmic examinations. Owing to the constraints on the number of survey questions for specific conditions, questions related to the various complication types and severity and to lens modality were not covered in the survey. Contact lens modality and the types of lens care solutions are critical factors that can affect lens use complications; however, for the reasons mentioned above, these aspects were not investigated in the present study. Therefore, in-depth research on the specific risk factors that may be related to critical complications is warranted, to focus on and provide care in these specific areas. Finally, questions related to contact lenses were included in the KNHANES only from 2020 to 2022. Only the 2021 survey included all age groups; the 10-39-year age groups were not included in the 2020 or 2022 surveys. Therefore, it would be beneficial to conduct further research with longer study durations to discover trends over longer periods of time. Despite these limitations, the significance of this study cannot be overlooked. Foremost, this is the first study on contact lens usage in a uniform population of a single ethnicity and country, conducted at a single research institute. The advantages of conducting epidemiological research within a single ethnicity are notable. This approach enhances internal consistency, allowing for more coherent and reliable findings. Furthermore, it provides valuable insights into the population's specific characteristics, sensitivities, and unique healthcare needs, which can inform future research and individualized patient care. Second, this study is the largest and latest of its kind conducted in South Korea. The sample size of this study was 1,136, the largest population in a single country covered in a study since 2016. Additionally, this is the latest large-scale study of contact lens usage trends in South Korea since 2014, as Korean national data have only recently become available. In addition, trends in lens use can be studied precisely by directly identifying the percentage of contact lens users within the entire population.
Conclusion

It has been more than 30 years since individuals in South Korea began wearing contact lenses, and the average age of lens users has increased since then. The frequency of wearing lenses for aesthetic purposes among women in South Korea remains much higher than in other countries. Nevertheless, the demand for multifocal soft contact lenses among middle-aged and elderly individuals is expected to increase for myopic regression, considering the high rate of myopia in the East Asian population. To ensure safe contact lens use, it is advisable to emphasize the importance of precise prescriptions, screening for eye conditions, and guidance from eye care professionals.

Fig 2. Prevalence of contact lens use in South Korea. (A) Overall prevalence of contact lens use in South Korea. (B) Prevalence of lens use among women in South Korea. (C) Prevalence of lens use in the 20-39-year age group in South Korea. *Reprinted from the Statistical Geographic Information Service of Korea under a CC BY license, with permission from Lee Ju Won, Director of the Geospatial Information Service Division (Statistics Korea), original copyright 2023. https://doi.org/10.1371/journal.pone.0296279.g002

Table 2. Comparison of quantitative variables in patients with and without complications after using contact lenses (n = 1136).

In a comparison study, contact lens users aged 16-60 years were surveyed between 2003 and 2016; the prevalence of cosmetic lens use was ~15% during the study period, lower than the prevalence in South Korea (33%). This indicates that the use of cosmetic lenses in South Korea is high, especially among the young female population.

Table 3. Logistic regression of the relationship between the variables and increasing complication rate in contact lens users. Outcome: complication, yes.

Table 4. Logistic regression of the relationship between variables and increasing soft lens use rate in contact lens users. Outcome: soft lenses, use.
Free association transitions in models of cortical latching dynamics

Potts networks, in certain conditions, hop spontaneously from one discrete attractor state to another, a process we have called latching dynamics. When continuing indefinitely, latching can serve as a model of infinite recursion, which is nontrivial if the matrix of transition probabilities presents a structure, i.e. a rudimentary grammar. We show here, with computer simulations, that latching transitions cluster in a number of distinct classes: effectively random transitions between weakly correlated attractors; structured, history-dependent transitions between attractors with intermediate correlations; and oscillations between pairs of closely overlapping attractors. Each type can be described by a reduced set of equations of motion, which, once numerically integrated, match simulation results. We propose that the analysis of such equations may offer clues on how to embed meaningful grammatical structures into more realistic models of specific recursive processes.

Introduction

Complex thought processes, as well as their uniquely human expression through language, appear to be based on the same cortical machinery that we essentially share with other mammals, despite major variations in absolute size and relatively minor variations in internal organization [1]-[4]. This suggests, in our view, that in order to understand cognitive capacities that are apparently uniquely human, one should consider the possibility that they arise 'spontaneously', through phase transitions induced by quantitative changes in certain parameters of cortical organization. If so, it would not be unreasonable to utilize simple generic models of cortical processing in order to model language processes [5], provided proper consideration is taken of quantitative aspects.

Following a suggestion by Chomsky and colleagues [6], we have focused on the emergence of latching dynamics in models of cortical networks, as a simplified model of a recursive process [7]. A computational mechanism for recursion provides, as pointed out in [6], for the generation of an infinite range of expressions of arbitrary length out of a finite set of elements. Such sequences of elements, if not spanning uniformly the space of all possibilities but rather constrained by a non-trivial structure of transition probabilities at each recursion step, in fact implement a syntax, comparable in principle to those observed in natural language or in reasoning [8], although in the reduced models that we can simulate such syntactic structure is a rather abstract and apparently pointless statistical characterization of the latching transition matrix. A transition from finite to infinite recursion may have occurred in the human species tens of thousands of years ago, as a sudden result of a gradually expanding cortical connectivity [7], and it may later have been bootstrapped and refined, of course, by additional complex processes of cultural evolution [8].

In previous reports, we have considered a simplified model of a semantic memory system, implemented as a Hopfield associative network with Potts variables [7, 9, 10]. We have shown how to analyse the storage capacity of the model [11], which characterizes it even in regions of parameter space in which no latching dynamics occurs. We have also provided a first description of the structure of latching transitions [12], which we aim to characterize in more detail in the present study.
We refer to these earlier publications for a more extensive introduction to semantic memory, to the representation of concepts through features, and to cortical organization, all crucial elements motivating the analysis of the model. We also refer to a recent paper that discusses the storage of correlated representations, a necessary trigger for latching dynamics to occur [13].

The Potts-Hopfield memory model

The model can be regarded as an attractor neural network whose units themselves represent local attractor networks, realized in small patches of cortex, each of which can converge dynamically into one of S local attractors. The activity of local network i can then be described synthetically by an analog 'Potts' unit, i.e. a unit that can be correlated to various degrees with any one of S local attractor states. The state variable of the unit, $\sigma_i$, is thus a vector in S dimensions, where each component measures how well the corresponding feature is being retrieved by the local network. The possibility of no significant retrieval (no convergence, and hence no correlation with any local attractor state) can be added through an additional 'zero' state. Because the local state cannot be fully correlated, simultaneously, with all S features and with the zero state, one can use a simple normalization, $\sum_{k=0}^{S} \sigma_i^k = 1$. Having introduced such Potts units as models of local network activity, in the following we use the terms 'local network' and 'unit' as synonyms.

The global network comprises N (Potts) units connected to one another through tensor sets of weights, which represent collections of long-range synaptic connections between distant patches of cortex. The network has stored p Potts activity patterns, as global attractor states that represent concepts in semantic memory. When global pattern $\xi^\mu$ is being retrieved, the state of local network i is in the local attractor state $\sigma_i \equiv \xi_i^\mu$, retrieving feature $\xi_i^\mu$, a discrete value ranging from 0 to S (the zero value standing for no contribution of this group of features to concept µ). As shown in [11], such a compositional representation of concepts as sparse constellations of features (with a global sparsity parameter a measuring the average fraction of features active in describing a concept) leads to the desired global attractor states when long-range connections have associated weights $J_{ij}^{kl}$ that can be interpreted as resulting from Hebbian learning. In the learning rule, each element of the connection matrix $c_{ij}$ is 1 if there is a connection between units i and j, and 0 otherwise (the diagonal of this matrix is filled with zeros), while $c_M$ stands for the average number of connections arriving at a Potts unit. In this model, the maximum number of patterns, or concepts, that the network can store and retrieve scales roughly like $c_M S^2/a$. We refer to [11] for an extensive analysis of the storage capacity of the Potts model.
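For concreteness, the sketch below builds tensor weights of the covariance ('Hebbian') form standard for Potts networks. Since the exact learning-rule equation is not reproduced here, the normalization used is an assumption based on common Potts-Hopfield conventions, not necessarily the one used in our simulations.

```python
import numpy as np

def hebbian_potts_weights(patterns, S, a, C):
    """Tensor weights J[i, j, k-1, l-1] for active states k, l = 1..S.

    patterns: (p, N) int array with entries in 0..S (0 = quiescent)
    C:        (N, N) 0/1 connectivity matrix with zero diagonal
    The covariance normalization below is an assumption based on standard
    Potts-Hopfield conventions, not necessarily the paper's exact rule.
    """
    p, N = patterns.shape
    c_M = C.sum(axis=0).mean()           # mean number of connections per unit
    J = np.zeros((N, N, S, S))
    for k in range(1, S + 1):
        u = (patterns == k).astype(float) - a / S       # (p, N)
        for l in range(1, S + 1):
            v = (patterns == l).astype(float) - a / S
            # sum over stored patterns, masked by the connectivity matrix
            J[:, :, k - 1, l - 1] = (u.T @ v) * C / (c_M * a * (1 - a / S))
    return J

# Illustrative usage with small random patterns
rng = np.random.default_rng(1)
N, S, a, p = 50, 3, 0.2, 5
patterns = np.where(rng.random((p, N)) < a, rng.integers(1, S + 1, (p, N)), 0)
C = (rng.random((N, N)) < 0.5).astype(float)
np.fill_diagonal(C, 0)
J = hebbian_potts_weights(patterns, S, a, C)
```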
Basic conditions for latching dynamics

Here, we are interested in studying not the storage capacity but rather the dynamics of such a Potts model of a semantic network. Latching dynamics emerges as a consequence of incorporating two additional crucial elements in the Potts model: neuronal adaptation and correlation among attractors. Intuitively, latching may follow from the fact that all neurons active in the successful retrieval of some concept tend to adapt, leading to a drop in their activity and a consequent tendency of the corresponding Potts units to drift away from their local attractor state. At the same time, though, the residual activity of several Potts units can act as a cue for the retrieval of patterns correlated with the current global attractor. As usual with autoassociative memory networks, however, the retrieval of a given pattern competes, through an effective inhibition mechanism, with the retrieval of other patterns. One can then imagine a scenario in which two conditions are fulfilled simultaneously: the global activity associated with a decaying pattern is weak enough to release, in part, the inhibition preventing convergence toward other attractors; but, as an effective cue, it is strong enough to trigger the retrieval of a new, sufficiently correlated pattern. In such a regime of operation, after the first, externally cued retrieval, the network state experiences the concatenation in time of successive memory patterns, i.e. it latches from attractor to attractor (see figure 1). In a previous report [12], we offered a first description of the complexity of latching dynamics and discussed which parameters control it. Latching transitions were seen to be neither deterministic nor random, nor to depend solely on the correlation between consecutive attractor states. Furthermore, a marked asymmetry was observed in the transition matrix, controlled by a threshold parameter U.

Introducing a model of adaptation

In retrieval dynamics without adaptation, units are updated under the influence of a tensorial local 'current' signal, which sums the weighted inputs from other units, with a fixed threshold U favouring the zero state. To model firing rate adaptation, however, we introduce a modification in the individual Potts unit dynamics. The update rule is now mediated, for $k \neq 0$, by the vectors r (the 'fields', or 'local potentials', which integrate the currents h) and θ (the dynamic thresholds specific to each state), which are integrated in time with rate constants $b_1$ and $b_2$, where the fields are assumed to change more rapidly than the thresholds, i.e. $(b_1)^{-1} \ll (b_2)^{-1}$. We also include a nonzero local field for the zero state, driven by the (slow) integration of the total activity of unit i in all nonzero directions, $(1 - \sigma_i^0)$, with rate constant $b_3$ and $(b_1)^{-1} \ll (b_3)^{-1}$. The local field for the zero state, which is taken to be initially equal to U, eventually increases towards U + 1 for active units, down-regulating their activity and thus preventing local 'overheating', while at the same time destabilizing ordinary fixed-point attractors. Note that a fixed threshold U of order 1 is crucial to ensure a large storage capacity (as shown in [14]) and to enable unambiguous memory retrieval, precisely by stabilizing the fixed-point attractors that here we destabilize over a slower timescale, $(b_3)^{-1}$. A final element we include, partially correcting the effect of the field for the zero state, is an effective self-coupling $J_{ii}^{kk}$, constant for every i and $k \neq 0$, which adds stability to the local network.
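The sketch below illustrates, in a minimal and deliberately simplified form, the kind of adaptive single-unit update just described: fast integration of the currents, slower state-specific thresholds, a slowly rising zero-state field, and a softmax-like activation. The specific functional forms and the softmax readout are assumptions; the equations used in the simulations may differ in detail.

```python
import numpy as np

def update_unit(h, r, theta, r0, sigma, b1, b2, b3, U, beta=10.0):
    """One adaptive update of a single Potts unit (S active states + zero state).

    h, r, theta: length-S arrays (currents, local potentials, dynamic thresholds)
    r0:          scalar field of the zero state
    sigma:       length-(S+1) activation, sigma[0] = zero state
    Functional forms (and the softmax readout) are simplifying assumptions.
    """
    r = r + b1 * (h - theta - r)                # fast integration of the currents
    theta = theta + b2 * (sigma[1:] - theta)    # slower state-specific thresholds
    r0 = r0 + b3 * ((1.0 - sigma[0]) + U - r0)  # zero-state field creeps from U toward U + 1
    v = beta * np.concatenate(([r0], r))
    e = np.exp(v - v.max())                     # softmax over zero + active states
    sigma = e / e.sum()
    return r, theta, r0, sigma

# A unit receiving a steady current in state 1 first activates, then adapts away
S, U = 3, 0.5
h = np.array([1.0, 0.1, 0.0])
r, theta, r0 = np.zeros(S), np.zeros(S), U
sigma = np.full(S + 1, 1 / (S + 1))
for _ in range(2000):
    r, theta, r0, sigma = update_unit(h, r, theta, r0, sigma, 0.1, 0.005, 0.002, U)
```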
Generating correlated distributions

A standard mathematical procedure to introduce model correlations in a group of p patterns is through a hierarchical algorithm, which may be parametrically varied from producing independent to highly correlated patterns. Patterns are defined using one or more generations of parents, from which they descend, emulating a genetic tree. Since many patterns share the same parents, the generation process introduces correlations among descendant patterns, which are simpler for one-parent families and more complex in the case of multiple parents. We adopt a multi-parent scheme; in particular, we allow for up to 200 parents, which we call factors [7]. They represent semantic category generators, relating the correlation between patterns directly to categorization in a real semantic system, so as to preserve the possibility of linking the correlational statistics of our model to observations in the cognitive neuroscience of semantic memory, which we pursue in a cognate report [13]. These factors are defined simply as distinct random subsets of the entire set of Potts units. In the simulations, each subset includes $N_f$ units out of the total N units, and a total of 200 such factors are generated. The overlaps in the spatial distribution of different factors are therefore purely random, clustered around their mean value $N_f^2/N$.

Next, global patterns are generated from the factors, which are indexed by n in order of decreasing mean importance. For each global pattern, the specific importance of each factor is given by a coefficient $\gamma_{\mu n}$, obtained by multiplying the overall factor $\exp(-\zeta n)$ by a random number, taken to be 0 with probability 1 − a and otherwise drawn from a flat distribution between 0 and 1, specifically for pattern µ. A value taken by factor n, $\sigma_n$, is randomly drawn among the S 'genuine' attractor states, and a contribution $\gamma_{\mu n}$ is added to the field onto each Potts unit over which factor n has been defined, in the direction $\sigma_n$. After accumulating contributions from all factors, the direction in which each unit has received the largest field is computed, and the Na units receiving the largest maximal fields are assigned the corresponding direction $\sigma_n$ in pattern µ, while the remaining N(1 − a) units are assigned the null state in pattern µ.

With this procedure, pairs of Potts units have uncorrelated activity when averaged across patterns (because the different patterns that engage both members of the pair span nearly evenly the different local states). Pairs of patterns, instead, can be highly correlated once averaged across units, particularly if they share one or a few of the most important factors, and positively correlated if these factors have been assigned the same direction in Potts space. Thus, correlations among patterns will be higher if the importance of different factors decreases rapidly (e.g. in the simulations the value ζ = 0.02 was used, equivalent to assuming on the order of 50 'important' factors), and they will tend to vanish if all factors are equally important (ζ = 0). When correlations are very high, each pattern tends to be significantly correlated with a specific subset of the others, those sharing the main factor that influences them, and positively correlated with a fraction 1/S of this subset. In this scheme, the number of memory items significantly overlapping with one recently retrieved, and which can be the target of a non-random transition, scales up as p/S and does not depend on the connectivity. By contrast, the storage capacity for retrieval can still scale up as in the case of uncorrelated patterns, if a proper learning rule is used [13].
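A simplified sketch of this multi-parent generation algorithm follows; the parameter values are the illustrative ones quoted in the text, and some details (e.g. tie-breaking among equal fields) are arbitrary choices.

```python
import numpy as np

def generate_correlated_patterns(p, N, S, a, n_factors=200, N_f=50, zeta=0.02, seed=0):
    """Multi-parent ('factor') generation scheme, simplified.

    Each factor is a random subset of N_f units with a preferred Potts state;
    pattern-specific factor weights decay as exp(-zeta * n) and are zeroed
    with probability 1 - a. The aN units with the largest fields become active.
    """
    rng = np.random.default_rng(seed)
    factor_units = [rng.choice(N, N_f, replace=False) for _ in range(n_factors)]
    factor_state = rng.integers(1, S + 1, n_factors)   # preferred direction of each factor
    patterns = np.zeros((p, N), dtype=int)
    for mu in range(p):
        field = np.zeros((N, S + 1))
        for n in range(n_factors):
            gamma = np.exp(-zeta * n) * rng.uniform() * (rng.random() < a)
            field[factor_units[n], factor_state[n]] += gamma
        best_state = field[:, 1:].argmax(axis=1) + 1    # strongest active direction per unit
        best_field = field[np.arange(N), best_state]
        active = np.argsort(best_field)[-int(a * N):]   # aN units with the largest fields
        patterns[mu, active] = best_state[active]
    return patterns

patterns = generate_correlated_patterns(p=20, N=300, S=3, a=0.2)
```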
To characterize statistically the correlations among the resulting set of patterns, we introduce for each pair of patterns µ and ν the quantities

$C_0^{\mu\nu} = \sum_i \delta_{\xi_i^\mu 0}\, \delta_{\xi_i^\nu 0}, \qquad C_1^{\mu\nu} = \sum_i \sum_{k \neq 0} \delta_{\xi_i^\mu k}\, \delta_{\xi_i^\nu k}, \qquad C_2^{\mu\nu} = \sum_i \left(1 - \delta_{\xi_i^\mu 0}\right)\left(1 - \delta_{\xi_i^\nu 0}\right) - C_1^{\mu\nu}.$

For any two patterns µ ≠ ν (in the following we drop their indices for simplicity), $C_0$ is the number of inactive units they share, $C_1$ the number of shared active units that are in the same state, and $C_2$ the number of shared active units that are in different states. We also use the corresponding fractions, normalized to their respective maximum values. In [12] it was shown how to estimate the means and variances of these quantities, given the hierarchical algorithm for generating correlations.

The statistics of the latching transitions

We ran a large set of simulations using the dynamics explained in section 3.1. First of all, we created sets of p patterns using the algorithm described in section 3.2. Each simulation started by giving an initial cue to the network (as an additional term in the local field) in order to induce the retrieval of one of the stored patterns. The network was then left free to evolve until one of two stop conditions was reached: either the activity decayed to zero or each unit had been updated 50 000 times, keeping track of latching events along the way. The simulation was run 5 or 100 times for each cued pattern, with different random seeds, and all p patterns were used as cued patterns. In this way, we collected datasets of latching events, from which we constructed an estimate of the transition probability matrix M. Since we found that all statistical quantities stabilize within the shorter simulations, the longer ones were used merely as control data. For the simulations in this section, the adaptation rate parameters b were set to implausibly fast values, so as to speed up the simulations and collect sufficient statistics.

The probability matrix is a square matrix with p + 1 rows and columns, the additional one corresponding to the 'null' attractor, with each unit in the zero state. To estimate the transition probability between states µ and ν, we counted the times a latching event between these two attractors appeared in the dataset. We added a transition to the 'null' state whenever global activity decayed to zero, and assumed a probability of 1 for the transition from the null state to itself. Finally, given that $M_{ij}$ represents the probability of a latching transition from global attractor i to global attractor j, the sum of the matrix elements over each row was normalized to 1.
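The counting procedure for M can be implemented directly, as in the sketch below, where each latching sequence is a list of pattern indices and decay to the null attractor is recorded as a final index p.

```python
import numpy as np

def transition_matrix(latching_sequences, p):
    """(p+1) x (p+1) transition matrix M; index p is the 'null' attractor.

    Each sequence is the list of attractors visited in one run; runs whose
    activity decayed to zero end with the index p.
    """
    M = np.zeros((p + 1, p + 1))
    for seq in latching_sequences:
        for mu, nu in zip(seq[:-1], seq[1:]):
            M[mu, nu] += 1
    M[p, p] = 1.0                  # the null state maps to itself with probability 1
    rows = M.sum(axis=1, keepdims=True)
    return np.divide(M, rows, out=np.zeros_like(M), where=rows > 0)

# e.g. three runs with p = 4 stored patterns; 4 denotes decay to the null state
M = transition_matrix([[0, 2, 3, 4], [1, 2, 0, 2, 4], [3, 1, 3, 4]], p=4)
```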
The role of the threshold

In [12], we studied the way latching dynamics depends on the threshold U. We reproduce figure 1, which shows examples of latching behaviour for U = 0.3, 0.4 and 0.5. In terms of the transition matrix M, we observed that, as expected, a high threshold U selects a subset of transitions, and the matrix has a few large and many zero or vanishingly small elements. As U decreases, more elements of M become nonzero, and they tend to span more of a continuum of values. We also found that M is far from symmetric (even though the correlations between patterns are symmetric by definition). As the threshold U decreases and randomness grows, the transition probability matrix was observed to become somewhat more symmetric.

The complexity of the transitions was also quantified with Shannon's information measure, computed over each row of M and normalized,

$I_\mu = -\frac{1}{\ln p} \sum_\nu M_{\mu\nu} \ln M_{\mu\nu},$

so that $I_\mu \sim 0$ both if attractor µ generates no latching (and thus decays to zero) and if it latches to another fixed attractor deterministically, while $I_\mu = 1$ if the latching process is completely random (a sketch of this computation is given at the end of this subsection). In terms of such complexity, we observed that, on decreasing the threshold from U = 0.5 to U = 0.3, $I_\mu$ increases from a nearly deterministic mean value of $I_\mu \simeq 0.03$ to a largely random mean of $I_\mu \simeq 0.7$, suggesting that raising the threshold can effectively span the entire range from random to deterministic.

Diversity of latching dynamics

Here, we study the distinct types of latching transitions that can be observed even at a fixed value of the threshold U. Since correlations between patterns are obviously a major determinant of the transitions, we considered the distribution of correlations, parametrized for each µ and ν pair by the quantities $C_0^{\mu\nu}$, $C_1^{\mu\nu}$ and $C_2^{\mu\nu}$ defined above. We computed these distributions using (i) the whole set of patterns and (ii) the dataset of latching events. In the first case, each pair of patterns enters the average once and only once. In the second case, only pairs of attractors visited one after another in at least one latching event are considered, with a weight proportional to their frequency of occurrence in the dataset. In [12] we found that the distribution of $C_0$ values does not vary appreciably between (i) and (ii) (to a large extent, quite probably, because its variance is limited). Hence, we focus here on the distributions of $C_1$ and $C_2$, which vary significantly across pairs.

We have found three different kinds of latching behaviour in the space formed by $C_1$ and $C_2$, as shown in figure 2. We characterize these three regions based on another variable, λ, defined as the value of the retrieval overlap at which two consecutive latching patterns cross over each other (see e.g. figure 1). When the value of λ is high (between 0.6 and 0.8), the value of $C_1$ is generally higher than that of $C_2$: latching occurs between patterns that are significantly correlated (note that, for uncorrelated patterns, $C_1 \simeq C_2/S$). This region consists of the points shown in violet in figure 2. The points in orange are the ones with λ small (less than 0.2), and these fall, with a few exceptions, into the region where $C_1 < C_2$. The line $C_1 = C_2$ roughly separates the two classes. The dichotomy can be intuitively understood because a high value of λ is expected when there is a large overlap between the corresponding patterns of activity (see figure 1), and this implies a high value of $C_1$, since $C_1$ is the number of units shared between the two patterns, i.e. active and in the same state. The lone green data point falls into yet another 'region', since in this case the two latching patterns oscillate between themselves in activity and meet at a very high value of λ, of around 0.85 or more.
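Returning to the complexity measure $I_\mu$ introduced above, a minimal sketch of its computation follows; the normalization by $\ln p$ is our reading of the convention implied in the text (a uniformly random row gives 1, a deterministic or decaying row gives 0).

```python
import numpy as np

def latching_entropy(M):
    """Row-wise Shannon measure I_mu, normalized so that a uniform row over
    the p non-null targets gives 1 (our reading of the text's convention)."""
    p = M.shape[0] - 1
    # log only where M > 0; zero entries contribute nothing to the sum
    logM = np.log(M, out=np.zeros_like(M), where=M > 0)
    return (-M * logM).sum(axis=1) / np.log(p)

I_mu = latching_entropy(M)   # M as estimated in the previous sketch
```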
Eigenvalue analysis

As M is a transition probability matrix, its eigenvalues can be shown to have modulus lower than or equal to one. By construction of the matrix, the eigenvalue corresponding to the null pattern, which projects entirely into itself, is $\lambda_0 = 1$. In the general case, when applying the transition matrix n times to an initial pattern µ, the result may as usual be decomposed as

$M^n \hat{x}_\mu = A D^n A^{-1} \hat{x}_\mu = \sum_k \lambda_k^n \left(A^{-1}\hat{x}_\mu\right)_k v_k,$

where D is the diagonal matrix with the same eigenvalues as M, A is the basis-change matrix with the eigenvectors of M as columns, $\lambda_k$ is the kth eigenvalue of M, $v_k$ the corresponding eigenvector, and $\hat{x}_\eta$ is the unit versor with elements $(\hat{x}_\eta)_i = \delta_{i\eta}$. Thus, we see that for large values of n activity will eventually decay to the 'null' attractor, unless some non-null eigenvector of M has an eigenvalue of 1. Whenever this is not the case, the decay time is given by the second largest eigenvalue of M. More specifically, for any eigenvalue $\lambda_k$, the number of transitions for its eigenspace to decay, for example, to 0.1 of its original amplitude is given by

$n_{\rm dec} = \frac{\ln 0.1}{\ln |\lambda_k|}.$

In table 1, we show $n_{\rm dec}$ for the second and third largest eigenvalues, for three different random number seeds. For each of the three seeds, the second largest eigenvalue corresponds to modes that do not decay over the entire length of the simulation (the convergence to an attractor and subsequent drift away from it take, with our parameters, between 300 and 500 updates of each unit, which multiplied by $n_{\rm dec} \simeq 100$ is of the same order as the 50 000 updates we set as the maximum duration of the simulation). So in each of the three examples, some sort of latching dynamics did occur indefinitely, although in the case of the first seed it was clearly of a peculiar type. The third largest eigenvalue, when also close to 1, indicates that there are at least two groups of states that dominate the long-term behaviour and are dynamically kept separate for a long time.

In general, the emergence of unitary eigenvalues in the matrix, apart from the one corresponding to the null state, is of great interest, because it indicates the transition from high-order (but finite) recursion to infinite recursion. More analysis is obviously required to understand this phase transition, but it appears from our simulations that quenched disorder (the random seed of the generator, determining the exact realization of correlated attractors) can bring the system into either phase, even when all parameters take the same values. It remains to be seen whether in a large enough system the variability due to quenched disorder progressively vanishes. The way the probability of observing indefinite latching depends on connectivity parameters, like $c_M$ and S (which is indirectly a connectivity parameter in the local cortical network interpretation of each Potts unit), has been sketched in [7].
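The decay times $n_{\rm dec}$ of table 1 can be computed directly from the eigenvalue moduli of M, as sketched below; modes with $|\lambda_k| = 1$ (such as the null attractor) are reported as non-decaying.

```python
import numpy as np

def decay_times(M, fraction=0.1):
    """n_dec = log(fraction) / log|lambda_k| for each eigenmode of M; modes
    with |lambda_k| = 1 do not decay and are reported as infinity."""
    mods = np.sort(np.abs(np.linalg.eigvals(M)))[::-1]   # moduli, largest first
    n_dec = np.full(mods.shape, np.inf)
    decaying = mods < 1.0
    with np.errstate(divide="ignore"):
        n_dec[decaying] = np.log(fraction) / np.log(mods[decaying])
    return mods, n_dec

mods, n_dec = decay_times(M)   # inspect the second and third largest moduli
```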
Non-ergodicity appears as distinct latching behaviours

In the case of one particular seed (seed 1), latching was dominated by two patterns falling in the green region of figure 2. This can be taken to be a somewhat pathological case, determined in part by the high correlation between the two patterns and in part by the lack of a suitable 'escape route' from the limit cycle they effectively comprise. For the other two seeds, the latching patterns ranged through all the values of $C_1$ (note that we ran the simulations twice, once for 5 cycles with each external cue and once for 100 cycles with each external cue, without noting any appreciable difference). These results are shown in figure 3.

The interesting feature to note in these three examples is that the frequencies of $C_1$ values over latching pairs, relative to the general distribution, show an initial dip beyond values of $C_1$ of 0 and 1: there are fewer transitions between pairs with $C_1 = 2$ and onward than with $C_1 = 0$ or 1. Though this decreased frequency is a small effect, it stands in contrast with the notion, suggested by previous analyses, that the latching transition probability simply increases monotonically with correlation, i.e. with $C_1$ [12], an effect that is still valid for high values of $C_1$. The dip is seen from the graphs to be due to the many transitions with very small λ, i.e. the random transitions falling in the orange region of figure 2. Such transitions occur preferentially at very low correlation, $C_1 = 0$ or 1, and relatively less frequently between pairs of patterns with higher values of $C_1$. This observation motivates the study of the detailed dynamics of individual transitions, both in the low- and in the intermediate-correlation regimes.

Transition dynamics

In order to study the dynamical behaviour of the system during a single latching transition, we may complement the simulation approach with an analytical one. Simulations were performed following the same updating rules defined in section 3.1, but on larger networks (N = 10 000) and with parameters more appropriate for following the detailed dynamics of individual transitions. The analytical approach considers the field affecting each unit as a sum of 'signal' and 'noise' terms, as described in the following, and derives differential equations that govern the dynamics of each subgroup of units receiving the same signal.

Signal-to-noise analysis

We start by writing the expression of the field affecting each nonzero state of unit i, from section 3.1, in terms of the overlaps between the state of the system and each pattern µ, defined as

$m_\mu = \frac{1}{N a (1 - a/S)} \sum_i \sum_{k \neq 0} \left(\delta_{\xi_i^\mu k} - \frac{a}{S}\right) \sigma_i^k. \qquad (15)$

Neglecting both the self-interaction terms and the quenched disorder implied by the sparse connectivity among Potts units, the mean-field expression for the field becomes

$h_i^k = \sum_\mu \left(\delta_{\xi_i^\mu k} - \frac{a}{S}\right) m_\mu. \qquad (16)$

The signal-to-noise analysis [11] proceeds by singling out any pattern with macroscopic overlap with the current state and treating other patterns as contributing only to the noise. For example, when focusing on a latching transition between two patterns µ = 1 and ν = 2, one may write

$h_i^k \simeq \left(\delta_{\xi_i^1 k} - \frac{a}{S}\right) m_1 + \left(\delta_{\xi_i^2 k} - \frac{a}{S}\right) m_2 + \sqrt{\alpha q}\, \eta_i^k, \qquad (17)$

where the noise amplitude q reflects the global activity of the network,

$q = \frac{1}{N a} \sum_i \sum_{k \neq 0} \left(\sigma_i^k\right)^2, \qquad (18)$

$\alpha = p/c_M$ parametrizes the storage load, and the Gaussian variable η has zero mean and unit variance [11]. Note that the description in terms of equations (17) and (18), which is a reasonable approximation when patterns are uncorrelated, is much more delicate in the presence of correlations. Even when $p \ll c_M$, the effect of the 'noise' term may remain important, due to correlations with other patterns, which make equation (17) inappropriate. Choosing an asynchronous updating procedure, in which a unit i is randomly selected and all relevant dynamical variables (its own $h_i$, $r_i$ and $\theta_i$, plus the global m and q) are updated at each micro-step, the update of the network takes of order N single updates. We take this timescale as the unitary (macroscopic) time step, and we focus on the equations detailing the changes occurring within a micro-step of duration $\Delta t = 1/N$.
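As a direct implementation of the overlap definition (15), the sketch below computes $m_\mu$ for a given network state; the normalization follows the reconstructed equation above and should be checked against whichever convention one adopts.

```python
import numpy as np

def overlaps(sigma, patterns, a, S):
    """Potts overlaps m_mu between the network state and each stored pattern.

    sigma:    (N, S+1) activations, column 0 = zero state
    patterns: (p, N) int array with entries 0..S
    Normalization follows the reconstructed equation (15) above.
    """
    p, N = patterns.shape
    m = np.zeros(p)
    for mu in range(p):
        acc = 0.0
        for k in range(1, S + 1):
            acc += (((patterns[mu] == k) - a / S) * sigma[:, k]).sum()
        m[mu] = acc / (N * a * (1 - a / S))
    return m
```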
Considering the definition of m, we can write the overlap value at time $t + \Delta t$ in terms of its old value when unit i, randomly chosen, is updated at time t; averaging over such random choices yields a differential equation for $dm_\mu/dt$, in which the notation $t^+$ means after updating the whole network. A similar procedure can be followed for the variable q. Together with equation (19) and the updating of individual units in terms of the current values of m and q, this description completely specifies the dynamics of the system, for example when we focus on a transition between two patterns with macroscopic overlap at time t. The information we have about the patterns is only statistical, as explained in section 3.2. Thus, the key to using the above equations is to group the last three sets (which comprise $(2S+1) \times N$ individual equations) for units and states that receive the same signal.

Memory retrieval dynamics

As a simple example, we may consider the retrieval of a single memory pattern by an external cue. In this case, we have to follow separately the dynamics of units that are active or quiescent in the memory pattern to be retrieved, by tracking the fields affecting their quiescent and active states. In the simulation shown in figure 4, the network is prepared at time 0 in a quiescent activity state, with zero fields and thresholds for the active states and $r^0 = 1.5$. An external cue arrives at time step t = 200, providing a signal term pointing at pattern µ = 1 in the field to each unit. We used p = 5 patterns, hence a low memory load regime, in which the noise due to interference with the other patterns, which were constructed without correlations, may be safely neglected in the analytical treatment; thus we neglect the variable q. Other parameters were S = 3, a = 0.1, U = 1.5, $b_1 = b_2 = 0.01$ and $b_3 = 0$ (for simplicity we thus also omit the evolution of the field affecting the zero state). Even in this simple situation, we need to distinguish:

1. units that are active in $\xi^1$ and their corresponding fields in the state $k = \xi_i^1$ (magenta), in other active states $l \neq \xi_i^1$ (blue dots, barely distinguishable) and in the zero state (light blue);
2. units that are inactive in $\xi^1$ and their corresponding fields in active states (green, not distinguishable) and in the zero state (orange).

In figure 4, the variables that we identify as 'not distinguishable' do not contribute to the dynamics, since their values remain close to zero. With these simplifications, we are left with five differential equations to integrate for the fields, five for the thresholds, and one for the single relevant overlap. Their integration leads to a time evolution of the different quantities (fields and thresholds are shown as black curves in figure 4) in excellent agreement with the simulations (the coloured data points; each data point corresponds to a unit being updated). The latter show some unit-to-unit variability, which can be reduced by taking a moving average over units updated at similar times (not shown). After retrieving the cued pattern, in this simulation the network relapses into the quiescent state. A toy version of this kind of reduced integration is sketched below.
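The sketch below is a drastically reduced caricature of such an integration for single-pattern retrieval: one effective field/threshold pair for the foreground units, an overlap variable, and Euler stepping. It is meant only to convey the structure of the reduced equations; the actual system integrated for figure 4 comprises eleven equations with the groupings described above, and the relaxation rate coupling m to the activation is an arbitrary illustrative choice.

```python
import numpy as np

def toy_retrieval(T=3000, b1=0.01, b2=0.01, U=1.5, beta=5.0, relax=0.05,
                  cue_on=200, cue_off=700, cue=2.0):
    """Euler integration of a two-variable caricature of cued retrieval:
    one field/threshold pair for the foreground units plus an overlap m.
    All rates and the 'relax' coupling are arbitrary illustrative choices.
    """
    r = theta = m = 0.0
    trace = []
    for t in range(T):
        h = m + (cue if cue_on <= t < cue_off else 0.0)  # recurrent signal + transient cue
        r += b1 * (h - theta - r)                        # fast field integration
        theta += b2 * (m - theta)                        # slower adaptive threshold
        # activation of the foreground units against a zero state pinned at U
        sigma = np.exp(beta * r) / (np.exp(beta * r) + np.exp(beta * U))
        m += relax * (sigma - m)                         # overlap follows the activation
        trace.append((r, theta, m))
    return np.array(trace)

trace = toy_retrieval()   # m rises while the cue is on, then relapses to quiescence
```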
Quasi-random transitions

One can extend the analysis above to the more interesting case of latching transitions between pairs of patterns. The grouping of units and states into coherent mean-field ensembles is much more tedious, however. We focus here on the relatively simple case of only two correlated patterns, between which we may observe latching, given $C_1$ units that share the same active state in both patterns, and we neglect any other pattern. The $C_1$ units may be expected to be active during the retrieval of the first pattern and to remain active and in the same state during the retrieval of the second. In any case, this group of units obviously follows its own dynamics, which differs from that followed by other groups of units. In total, we need to consider five distinct groups of units:

1. units active in $\xi^1$ and in $\xi^2$, in the same state;
2. units active in $\xi^1$ and in $\xi^2$, but in different states;
3. units active in $\xi^1$ but not in $\xi^2$;
4. units inactive in $\xi^1$ but active in $\xi^2$;
5. units inactive both in $\xi^1$ and in $\xi^2$.

For these different groups we must distinguish the relevant states (among the S + 1 available) and consider their evolution separately; including equations for the two overlaps and for the variable q, even in this most simplified situation we obtain 28 integrable differential equations, which we do not write down here (see Eleonora Russo, unpublished MSc thesis, for the full derivation).

In the simulation shown in figure 5(b), we identified an example of a latching transition characterized by a low correlation between the two patterns (in fact, an anticorrelation: they share in the same state only $C_1 = 25$ of their 2500 active units, and another $C_2 = 25$ in different states). At time 0, all network units take the value $\frac{1}{S+1}$ for all states, with thresholds $\theta_i^k = \sigma_i^k$, and an external cue arrives at time step t = 500, providing a signal term pointing at pattern µ = 1 in the field to each unit. Other parameters were p = 2 (hence a single other pattern is present, to simplify the analysis of the noise), S = 3, a = 0.25, U = 0.1, $b_1 = 0.05$, $b_2 = 0.001$ and $b_3 = 0.0005$. The figure shows the overlaps with the two successive patterns crossing over at a vanishingly small value of λ (slightly negative, in fact), characteristic of a random transition. Remarkably, the fields in the direction of the second pattern build up slowly, in that the decaying first pattern does not provide any useful cue in the direction of the second. Once the fields reach a given effective value, however, a self-regenerating process is initiated and the second overlap rises very rapidly towards 1 (without fully reaching it).

Transitions between correlated attractors

In another simulation (figure 5(a)), we identified a high-crossover latching transition between two substantially correlated patterns, constructed to share $C_1 = 475$ of their 2500 active units. Again, at time 0 the network units take the value $\frac{1}{S+1}$ for all states, with thresholds $\theta_i^k = \sigma_i^k$, and an external cue arrives at time step t = 500, providing a signal term pointing at pattern µ = 1 in the field to each unit. Other parameters were p = 2 (again, a single other pattern is present), S = 3, a = 0.25, U = 0.1, $b_1 = 0.005$, $b_2 = 0.001$ and $b_3 = 0.0001$. One observes in figure 5(a) the latching transition occurring at a fairly high crossover value $\lambda \simeq 0.68$. In fact, the overlap with the second pattern starts effectively at a nonzero value, already imposed by the cue to the first pattern, with which it is significantly correlated. Its overlap then builds up gradually, eventually reaching the self-regeneration threshold.
Interestingly, its accrual appears to sustain the overlap with the first pattern, rather than speeding up its demise. The overlap with the second pattern rises relatively slowly even after the self-regeneration process has started, and does not reach particularly high values either. Clearly, much more extensive work is required to confirm the generality of these observations.

Finally, to connect with the results of section 4, we searched for latching dynamics in the solutions of the differential equations across the region of parameters spanned by all possible values of $C_1$ and $C_2$. To achieve a complete search, we chose two combinations of these parameters that each span the range between 0 and 1. Figure 6 shows the regions of latching for $0 \le (C_1 + C_2)/(aN) \le 1$ on the y-axis and $0 \le C_1/(C_1 + C_2) \le 1$ on the x-axis. Region L1 shows latching guided by $C_1$, in other words by shared active units in the same state; it appears when the total number of shared units $C_1 + C_2$ is rather small in comparison with aN and, at the same time, $C_1$ is larger than $C_2$. The pathological L2 region is associated with $C_1 \sim aN$, i.e. the number of shared units in the same active state is close to its saturation value. Finally, region L3 corresponds to the condition $C_1 \lesssim C_2$ (uncorrelated or anticorrelated patterns).

Figure 6. Three latching regions are found, parallel to the ones described in section 4. The axes show the two described combinations of $C_1$ and $C_2$, chosen so as to span the range between 0 and 1. For each combination of parameters, a numerical integration of the dynamical equations, similar to those shown in figure 5, was performed. The colour at each point indicates the maximum value of $m_2$ during this integration. Region L1 corresponds to the kind of transition shown as violet points in figure 2, region L2 to the green point, and region L3 to the orange points. In each of the marked regions, $m_1 < m_2$ at the time at which $m_2$ reaches its maximum, but not in the area between L1 and L2, which could otherwise be regarded as a latching region itself.

This picture fits exactly what has been described in figure 2 through a completely different approach, suggesting a strategy to follow in future developments: while the dynamical equations can give a mechanistic description of latching in 'noiseless' situations with only two patterns, the observations can be extrapolated to more general simulations, for which only a statistical approach is possible given the high dimensionality.

Discussion

The notion of dynamical attractors has recently emerged, in cognitive neuroscience, as having the potential to bridge the gap between the analog processing performed by individual neural elements and the digital operations subsumed in cognitive descriptions. In this context, dynamics that take place largely in the neighbourhood of 'quasi-attractor' states (states that would be stable attractors were it not for a simple mechanism, such as firing rate fatigue, that destabilizes them) offer a model for free associations in semantic space [15], perhaps including the highly constrained trajectories expressed in natural language. This emerging view calls for a quantitative, first-principles modelling of higher-order attractor-based processes that has so far been only partially explored.
Here, following up on our previous reports [7, 11, 12], we have begun to analyse the transitions between attractor states demonstrated by a simple Potts associative memory model, in the region of parameter space where it shows latching dynamics. The model itself is based on the idea that associative memory retrieval operates throughout the cortex at two levels [16], and as a generic functional mechanism rather than as a separate dedicated system [17]. In this spirit, we have earlier suggested a rough description of how attractor dynamics in the network model gives rise to a complex and structured set of transitions, which could be regarded as a model of infinite recursion. This complexity, grounded in the correlations between patterns, was shown to be controlled mainly by the threshold, which also sets the global activity in the network. An appropriate value of the threshold ensures the transient coexistence of decaying and newly emerging attractors at critical points in the retrieval process, when latching between attractors takes place.

Here, we show that even for a given value of the threshold, one observes considerable diversity among latching transitions. Apart from the extreme case of oscillations between nearly overlapping attractors, latching transitions can be roughly categorized into random ones and those driven by positive correlations. It appears that the latter are responsible for embedding structure of a potentially usable form into the dynamics. There might, however, be finer structure also in the random transitions, as suggested by the prevalence, among those, of transitions between anticorrelated patterns ($C_1 = 0$ or 1). Understanding such finer structure is essential if one aims at embedding real syntactic or semantic constraints in latching dynamics. Features that require continuity between successive elements, like number or gender in the syntax of predicates, or topic in semantic concatenation, have to be engineered to sustain their activation across latching events, while features like subject markers, if any, have to be engineered to terminate at latching, and perhaps to activate complementary markers as distinct local states of the same units. These aspects obviously require massive additional study, preliminary to which is a much more complete analysis of latching dynamics, which we could only begin to sketch here.
The Phenomenon of Human Trafficking in Indonesia: A Case Study in Kupang, East Nusa Tenggara, Indonesia

Human trafficking cases are now a serious problem in Indonesia, one that has reached even remote areas. Victims of the crime of trafficking in persons (TPPO) come from low-educated and poor families. Victimization typically begins with an invitation from someone, often from the victim's own family, who persuades the prospective victim to work abroad in the hope of earning a large income that will change the family's life. Most TPPO victims are about 15-20 years old and have not finished school; they are tempted by the promise of working abroad, but in reality what is promised is not realized. Kupang is one of the many regions in Indonesia that send TPPO victims abroad, and among the largest. TPPO cannot be eradicated because of the involvement of law enforcement officials in Indonesia, in addition to the lack of understanding among law enforcement officials of how to apply TPPO legislation. This study uses a qualitative method with a case study approach; data were collected in Kupang, East Nusa Tenggara. The study concludes that TPPO law enforcement in the area is still not functioning optimally, with shortcomings in the field in addition to inconsistency among law enforcement officials. The study recommends providing intensive training so that law enforcement officers have a good understanding of TPPO, and that the heads of law enforcement agencies dare to dismantle trafficking mafia practices within their institutions.

INTRODUCTION

In the past 10 years, Indonesia has not only been a source country and transit point for victims of human trafficking but has also become a destination country for traffickers to trade victims from outside Indonesia (Prawira, 2019). Today, most countries in the world experience trafficking in persons, albeit in different modes: some are destination countries, some are transit countries, and some are countries where trafficking originates. Trafficking in persons today is a form of modern slavery that violates human dignity and humanity in general; therefore, a country adopting legal policy in its law enforcement efforts must orient it toward the protection of human rights. The mode of trafficking that most often occurs in Indonesia is through agents of the Indonesian Labor Service Distributors (PJTKI). Other forms of trafficking in Indonesia include child trafficking, human smuggling, pressured migration, child prostitution, adult female prostitution, forced sex work, and domestic servitude. TPPO occurs widely in Indonesian regions such as Manado, Batam, Nunukan, Talaud, Kupang, and Medan. The International Organization for Migration (IOM) has said that Indonesia ranks first in the number of TPPO victims, at 79.25 percent. Malaysia is the main destination for TPPO victims. According to IOM data, TPPO is widely carried out by recruitment agencies or through visa-abuse modes, such as visas for umrah, pilgrimage, or even tourism (Kemenko PMK, 2017). According to the IOM, the number of TPPO victims in Indonesia in 2005-2017 was 8,876 (www.indonesia.iom). Women were the primary victims of trafficking; minors accounted for 15%; and some men were TPPO victims who worked on foreign fishing boats.
Meanwhile, the Indonesian Child Protection Commission (KPAI) stated that the number of TPPO cases targeting and exploiting children had reached 154 by August 2019. This trend is quite worrying; according to KPAI, child victims are lured through social media. The average victim is employed, then exploited, and becomes a victim of prostitution in apartments located in major Indonesian cities. Boys are mostly victims of commercial sexual exploitation of children (ESKA) and of marital prostitution (bespoke brides, serial marriages, and contract marriages).

The dangers of human trafficking have begun to penetrate remote areas, even those classified as poor. One of the areas that is the object of this research is East Nusa Tenggara Province (NTT), especially the Kupang region, covering both the city and the district. The crime and threat of human trafficking has been a persistently prominent issue in NTT in recent years, during which NTT has ranked first as the region best known for victims of trafficking crimes. Eradicating human trafficking in NTT is not easy and has been a concern of various circles. The Ministry of Social Affairs has signaled that the problem of Indonesian Migrant Workers (PMI) in NTT has reached a critical point, making coordinated handling with all relevant stakeholders urgent (Ever, 2017). Trafficking in NTT is currently said to be dangerous because many NTT citizens, especially girls as young as 15 years old, are sent as migrant workers to Malaysia, Singapore, Taiwan, and other countries.

Referring to the background described above, the problems identified in this study are the many modes of TPPO occurring in Indonesia through Indonesian labor supplier services, and the targeting of victims aged 15-20 years who have limited knowledge and education. NTT, especially Kupang, is one of the cities/districts producing many TPPO victims, who on average come from poor, low-educated families. These conditions explain why so many TPPO victims come from NTT, and why most are victims of deception by relatives who initially lured them with promises of working abroad for a large salary. Law enforcement is also a main focus of efforts to eradicate trafficking crimes, given the presence of certain persons who help and facilitate trafficking in NTT. In addition, a problem rooted in Kupang is the mindset of people who want to change their fortunes by any means, including becoming Indonesian migrant workers either legally or illegally.
The condition of law enforcement against human trafficking in Indonesia cannot be said to be ideal. News reports in Indonesian print and electronic media describe many trafficking cases in which TPPO victims are identified, yet the perpetrators are difficult to reach: the modus operandi shows that perpetrators operate as accomplices, involving more than one person as well as corporations. Even when one perpetrator is found, it is difficult to bring the case to court and obtain criminal sanctions, because proof must follow the Criminal Procedure Law, which states that a judge cannot convict a person unless there are at least two items of valid evidence and the judge is convinced that a criminal offense actually occurred and that the defendant committed it, as stipulated in Article 183 of the Criminal Procedure Law (Kamea, 2016). On this basis, the researchers conducted a study entitled 'The Phenomenon of Human Trafficking in Indonesia (A Kupang, NTT Case Study)', considering that the phenomenon of human trafficking has become a serious concern in Indonesia, with cases occurring almost every year.

B. METHOD

The researchers used qualitative methods. According to Strauss and Corbin, as cited in Prof. Dr. Afrizal, M.A., 'Qualitative Research Method: An Effort to Support the Use of Qualitative Research in Various Disciplines', qualitative research methods are types of research whose findings are not obtained through statistical procedures or other forms of calculation. This research is descriptive research with a case study approach. Descriptive research aims to describe in detail a problem in an area chosen by the researcher or at a certain time. The authors try to reveal the facts in the field in accordance with the rules of existing research. Through case study research, the 'cases' studied can be expressed in detail and thoroughly, not only in terms of their characteristics but also through the process of discovering the characteristics of the cases chosen by the researchers. This case study research explains and reveals the case that is the object of the research as a whole and comprehensively (Arifianto, 2016).

Law Enforcement

Jimly Asshiddiqie has said that law enforcement is a process of efforts to enforce and operationalize legal norms, in real terms, as guidelines for behavior in public or state life. Judging from the subject, law enforcement can be carried out by a broad range of subjects, or it can be interpreted as law enforcement efforts by subjects in a limited or narrow sense. In the broad sense, the law enforcement process involves all legal subjects in any legal relationship: anyone who follows normative rules, doing or refraining from doing something on the basis of the norms of the rule of law, is running or enforcing the law. In the narrow sense, law enforcement refers only to the efforts of certain law enforcement officials to ensure and guarantee that a rule of law runs as it should; in ensuring the upholding of the law, law enforcement officials are allowed, if necessary, to use force. Soekanto (2019) said that law enforcement is the activity of harmonizing the relationships of values embodied in rules, or in established views of values, with attitudes and actions, as a series of elaborations of final-stage values, in order to create, maintain, and preserve peaceful social life.
According to Harun M. Husen, law enforcement can also be interpreted as the implementation of the law by law enforcement officers and by everyone who has an interest, in accordance with their respective authorities under the rule of law. Criminal law enforcement is a whole process that begins with the stages of investigation, arrest, and detention, proceeds through the trial of defendants, and ends with the correctional treatment of the convicted (Husen, 1990). Lawrence M. Friedman asserts that the success or failure of law enforcement depends on the legal substance, the legal structure, and the legal culture. The explanation is as follows.

Legal substance: in Friedman's theory, the substantial system determines whether or not the law can be implemented. Substance also means the products produced by the people within the legal system, including the decisions they issue and the new rules they draft. Substance is also the living law, not just the rules that exist in law books. Indonesia adheres to the civil law (continental European) system, which influences the Indonesian legal system; one of its influences is the principle of legality in the Criminal Code.

Legal structure: the legal structure is referred to as the structural system that determines whether or not the law can be implemented properly. Under Law No. 8 of 1981 on Criminal Procedure, the legal structure runs from the police, to the prosecutors, to the courts, and to the criminal implementing bodies (prisons). The authority of law enforcement agencies is guaranteed by law, so that they can carry out their duties and responsibilities free from the intervention of government power and other influences. The law will not run if no law enforcement officer is credible, competent, and independent. No matter how good a rule of law may be, without the support of good law enforcement officials, justice remains only an ideal; the weak mentality of law enforcement officials results in law enforcement not running as expected.

Legal culture: Friedman approaches legal culture in terms of human attitudes toward the law and the legal system: beliefs, values, thoughts, and expectations. Legal culture is the atmosphere of social thought and social forces that determines how laws are used, avoided, or abused. Legal culture is closely related to public legal awareness: the higher the legal awareness of the community, the better the legal culture that will be created, and this can change people's mindset about the law (Friedman, 1975).

Criminal Acts of Trafficking in Persons

In the past, trafficking in persons was a symbol of social status: people of high social class (in economic standing, power, and political position) would have slaves, that is, people who were bought and made into slaves, servants, or menials. In ancient times, everyone who employed slaves was considered a person of high social status, and this was regarded as normal, long before the phenomenon was examined scientifically (Nuraeny, 2013). Historically, the first objects of trafficking were women. In ancient Greek society, women were used in buying and selling transactions in the market, just like animals or other merchandise. In a later development, women in the ancient Greek era served as outlets for sexual lust; women were treated as completely worthless.
This is evidenced in a famous Greek legend, the story of the goddess Aphrodite. Indonesia has regulations on trafficking in persons, set out in Law No. 21 of 2007 on the Eradication of the Criminal Act of Trafficking in Persons (the TPPO Law). This law explains that TPPO is 'the act of recruitment, transportation, harboring, sending, transfer, or receipt of a person by means of threat of violence, use of force, abduction, confinement, forgery, fraud, abuse of power or of a position of vulnerability, debt bondage, or the giving of payments or benefits so as to obtain the consent of a person having control over another person, whether committed within a country or between countries, for the purpose of exploiting that person or causing the person to be exploited.'

Law No. 21 of 2007 also describes the scope of the criminal act of trafficking in persons, namely:
1. Any action or set of actions that meets the elements of a criminal act specified in this law; in addition, the law prohibits anyone from bringing a person into the territory of the Unitary State of the Republic of Indonesia (NKRI) to be exploited;
2. Taking Indonesian citizens (WNI) outside the territory of the Republic of Indonesia for exploitation;
3. Adopting a child by promising something or giving something for the purpose of exploitation;
4. Sending children into or out of the country by any means, and any person who uses or exploits TPPO victims through sexual intercourse or indecency, employs victims for exploitation, or takes advantage of them;
5. Any person who provides or enters false information into state documents or other documents to facilitate TPPO;
6. Any person who gives false testimony, submits false evidence, or unlawfully influences witnesses;
7. Any person who physically attacks a witness or officer in a TPPO court case; any person who directly or indirectly prevents, obstructs, or thwarts investigations, prosecutions, and trials against suspects, defendants, or witnesses in TPPO cases; and anyone who assists the escape of TPPO perpetrators;
8. Any person who discloses the identity of a witness or victim when it should be kept secret.

If one refers to the above definition, there is no restriction of trafficking in persons to a certain gender or age. Trafficking in persons is not a new phenomenon in Indonesia, and although the criminal act can involve anyone, it is often identified with the trafficking of women and children. This is quite reasonable because, in many cases, trafficking victims are women and children.

TPPO in NTT

TPPO in NTT originated with the issue of migrant workers departing abroad illegally; starting out illegal, these migrant workers ended up as TPPO victims. The deaths of migrant workers abroad have also been a concern. The mindset of many people in NTT is to improve their fortunes by working abroad, by whatever means necessary. In 2018, 105 PMI victims died in Malaysia, with causes of death including work accidents, drunkenness, illness, and even being eaten by crocodiles; almost all of the 105 were non-procedural PMI. In 2019, up to October, 89 people died in Malaysia, 100% of whom were non-procedural PMI. Non-procedural PMI is the forerunner of TPPO in the PMI sector. Close family members such as parents or uncles hold the main keys in TPPO: parents give permission to touts who are looking for children to be sent abroad, and uncles also play a role in sending child TPPO victims abroad.
The tradition of betel nut money is used as a recruitment model by TPPO recruiters. This betel nut money is given to the parents of prospective victims in exchange for permission for their children to join touts working abroad. The amount varies, ranging from Rp 500,000 up to Rp 2,000,000. Once a tout gives betel nut money and the victim's parents accept it, a bond is created whereby the parents indirectly feel indebted to the TPPO recruiter. The existence of people trafficking in NTT is influenced most prominently by economic factors: the income of NTT people is relatively low, and the lack of available jobs in NTT has led many people to consider working abroad through legal or illegal means. In addition to economic factors, there are educational factors; the level of education in NTT is relatively low. Victims of TPPO in NTT mostly come from children who only graduated from elementary or junior high school; these victims did not have enough knowledge and did not understand that they were being used as objects of people trafficking. Victims of TPPO in NTT are mostly promised at the outset that they will work as Indonesian Migrant Workers (PMI) abroad, but in reality they do not work as PMI; many of them end up working in the prostitution sector and are exploited abroad. The incidence of TPPO in NTT has decreased considerably since Governor Victor Laiskodat took office and implemented a moratorium on sending NTT PMI abroad. Previous regional heads did not impose such a moratorium, which allowed many TPPO cases in NTT.

The impact of TPPO on NTT residents
Trauma: trauma is a psychological problem that afflicts an individual or group as a result of a traumatic act. Such trauma can arise from acts of violence, torture, and other repressive acts that exert psychological distress. NTT residents, especially women who were victims, experienced deep trauma, alienation, and social problems as a result of what they went through. These social problems, in the form of psychological pressures, affect social activities and interactions in the community. Violence and death toll: the accumulation of problems arising from the torture and violence experienced by victims of human trafficking creates a variety of social problems in society. The problems experienced by TPPO victims in NTT are not only psychological; in the most extreme cases the victims die. Law enforcement is still colored by unhealthy games played by people or groups (syndicates, court mafia) involved in orchestrating cases at all stages of the judicial process, even arranging cases before a TPPO case is reported to the police. As a result, there is a large difference between the number of cases arising in the community and the number of cases that enter the criminal justice process. It is therefore common for a TPPO case that initially attracts wide attention to gradually shrink until no case at all is processed and brought to justice.

TPPO Law Enforcement in Kupang
Law Enforcement Officials (APH) have had difficulty collecting evidence on TPPO victims. For example, someone who is not yet 17 years old may be given an identity card (KTP) stating an age of 20, while the person's birth certificate has been withheld by touts or discarded; moreover, the KTP is often made not in the victim's city of origin but in other cities, such as Surabaya, Medan, or Batam.
The prosecutor always asks police investigators to collect the original evidence; if the KTP documents are fake, the prosecutor refuses to accept duplicated documents bearing a legalization stamp from the village.

D. CONCLUSION
Based on the explanations above about TPPO law enforcement in NTT, especially in Kupang, it was found that some law enforcement officers are themselves involved in TPPO cases; this involvement can occur anywhere from the investigation process to the court stage. In addition, there are officials in NTT who help create false identities. People trafficking in NTT has taken the form of a network, and so far only the touts operating in NTT have been caught; the agents and companies have not been caught, and the big bosses in Malaysia have not been exposed. As for the ability of law enforcement officials to apply the articles related to TPPO, to date some have not mastered the legislation governing TPPO; there have been TPPO cases prosecuted instead under migrant worker protection laws. This study concludes that TPPO law enforcement, especially in Kupang, cannot be said to have gone as expected; several things must be seriously improved, especially within the Law Enforcement Apparatus itself, because no matter how good the law is, if those enforcing it lack integrity and professionalism, the existing laws will be useless. The results of this study offer the following recommendations: 1) Law Enforcement Officials who have mastered TPPO should not be transferred prematurely; 2) training is needed to equalize the perception of TPPO among Law Enforcement Officers; 3) restrictions are needed on PMI deployments from the region; 4) local governments need to provide widespread counseling on the dangers of TPPO in the community.
On the Kinetic Behavior of Recycling Precious Metals (Au, Ag, Pt, and Pd) Through Copper Smelting Process

The recycling and recovery of precious metals from secondary materials, such as waste-printed circuit boards, are an important area of circular economy research due to the limited existing resources and the increasing amount of e-waste produced by the rapid development of technology. In this study, the kinetic behavior of the precious metals Au, Ag, Pt, and Pd between copper matte and iron-silicate slag was investigated at a typical flash smelting temperature of 1300 °C in both air and argon atmospheres. Advanced analysis methods (SEM-EDS, EPMA, and LA-ICP-MS) were used for sample characterization. The results indicate that precious metals favor the matte phase over slag, and the deportment to matte occurred swiftly within a short time after the system had reached the experimental temperature. With increasing contact times, the precious metals were distributed increasingly into the sulfide matte. The distribution coefficients, based on experimentally measured element concentrations, followed the order palladium > platinum > gold > silver in both air and argon, and the matte acted as an efficient collector of these precious metals. The obtained results can be applied to industrial copper matte smelting processes, and they also help in upgrading CFD models to simulate the flash smelting process more precisely.

Introduction
Due to technological advances and the improvement of economic and social conditions around the globe, the number of electronic devices, in total and per capita, is increasing. Meanwhile, the lifecycle of devices has become shorter as a result of rapid technology updates and increasing disposable income [1]. The United Nations University has estimated that the global generation of WEEE (waste electrical and electronic equipment) was 44.7 million tons in 2016 [2] and 49.8 million tons in 2018 [3], and it is expected to reach 52.2 million tons by 2021 [4]. WEEE consists of a wide range of materials, some of which have very high economic value. Due to the huge amounts produced every year, WEEE is considered a valuable urban resource. Various methods, such as physical separation, electrostatic separation, and pyrometallurgical and hydrometallurgical processes, have been investigated with the aim of recycling the valuable metals from these secondary resources [5, 6]. Reprocessing WEEE to recover the valuable metals is not a simple task. WEEE is a heterogeneous mixture of metallic and non-metallic fractions, where the components containing valuable metals are strongly bonded to other components that may have little or no value [7, 8]. Furthermore, common electronic devices may include lead, mercury, and arsenic, which are hazardous elements. Proper management of e-waste is therefore essential, as mismanaged e-waste can pollute groundwater, acidify the soil, generate toxic fumes and gases when burned, accumulate in municipal disposal areas, and release carcinogenic substances [9]. Waste-printed circuit boards (WPCBs) are an important part of WEEE: they account for 2-3% of its total mass [4, 10]. WPCBs consist mostly of nonconductive insulator materials and several metals, such as copper and precious metals (PMs). The concentrations of PMs, specifically gold, silver, platinum, and palladium, in WPCBs can be considerably higher than in their primary sources [11]. Moreover, as primary precious metal resources are steadily depleting, the demand for these metals is growing in many sectors.
Therefore, recycling of precious metals from WPCBs is of significant interest. The value of precious metals comprises 80% of the value of scrap electronic devices, despite their small quantities [12]. For these reasons, manufacturers, environmental agencies, and governments around the globe are attempting to find systematic and environmentally sustainable ways of recycling precious metals from WPCBs. Pyrometallurgical processes are capable of dealing with complex e-waste streams [13-16]. WPCBs, in addition to their high precious metal content, also contain more than 20% copper [17]. Considering the economic and technical aspects, copper smelting is one of the predominant routes for WPCB and e-waste recycling [18]. Copper flash smelting is a mature technology that has been studied and employed by industry for decades [19-21], and the combustion reactions of chalcopyrite and other sulfides in it have also been widely investigated [22-25]. WPCBs can be processed along with the primary feed of copper concentrates. However, the elemental composition of WPCBs and other e-waste is often such that it can affect the processing conditions, for instance by changing the liquidus temperature or viscosity of the slag or by making the slag more corrosive towards refractories. Another issue related to WPCB smelting is that the mix of elements in WPCBs does not match that in common primary mineral concentrates, for which the smelting technology has been optimized over many years and for which a large database of information has accumulated [7, 26-28]. To obtain the optimal process parameters for WPCB smelting and precious metal recovery, the thermodynamics and kinetic behavior of the valuable elements and their reaction mechanisms must be known. Equilibrium research regarding the precious metal distribution between matte and different slags has been carried out intensively during the past few decades. The investigated slag systems include, for example, FeOx-CaO [29], FeOx-SiO2 [30, 31], FeOx-SiO2-MgO [32], and SiO2-CaO-FeOx-MgO [33]. The distribution characteristics have also been studied as a function of matte grade with various sulfur and oxygen potentials [34-36]. However, studies on the kinetic behavior of precious metals in the matte-slag system are lacking [37], and more research is urgently needed to optimize the methods to recover precious metals from WPCBs through the copper smelting process. In this study, the time dependency of precious metal (Au, Ag, Pt, and Pd) distribution was experimentally investigated under laboratory-scale copper matte smelting conditions. The results contribute to a deeper understanding of PM distribution behavior during the settling process in the flash smelting furnace (FSF) and provide guidance in finding better strategies and methods for recovering the precious and platinum group metals from e-waste via pyrometallurgical routes.

Materials
The raw materials used in this study were industrial chalcopyrite copper concentrate, synthetic iron oxide-silica slag, and PM (Ag, Au, Pt, Pd) powders. The concentrate was provided by Boliden Harjavalta, Finland. Its chemical composition was analyzed by X-ray fluorescence (XRF) spectrometry (Malvern Panalytical B.V., Almelo, The Netherlands) and is shown in Table 1. The iron oxide-silica slag was prepared by mixing 65 wt% of hematite powder (Alfa Aesar, 99.99% purity) with 35 wt% of silica powder (Umicore, 99.99% purity). The preparation of the slag has been described in detail in a previous study [38].
Instead of real WPCBs, pure Ag, Au, Pt, and Pd powders (Alfa Aesar, ≥ 99.9% purity) were used as sources of precious metals in the experiments.

Apparatus and Procedures
The experimental apparatus used in this study is shown in Fig. 1 and has been described in detail earlier [24]. The silica crucible, filled with sample material, was raised into the hot zone (temperature 1300 °C) with a Pt hanging wire. After a preset time, the reactions were stopped by dropping the sample into a quenching vessel filled with ice water. For the experiments in an argon atmosphere, the work tube was sealed from the bottom, and a flow of 400-500 mL/min argon (AgA Linde, 99.999% purity) was used. A slag-to-concentrate ratio of 1.116 was chosen in accordance with industrial operating practice [24]; this corresponds to a SiO2/Fe flux ratio of 0.53 (see the short calculation sketch at the end of the next section). Usually, the PM concentrations in real WPCBs are relatively low (varying from 80 to 3300 ppm) [39]. In this study, the amount of each metal (gold, silver, palladium, and platinum) was set at 2.5 wt% of the amount of concentrate (Table 2), considering the detection limits of the EPMA and LA-ICP-MS analyses [30, 34]. All the powders were ground together in a mortar to obtain a homogeneous slag-concentrate-PM mixture. The mixture was then inserted into cone-shaped silica crucibles (Finnish Special Glass, Finland), with a mixture sample weight of 0.5236 g for each experiment, measured by an analytical balance (AB204, Mettler Toledo, USA). The contact times were chosen to be 10, 20, 30, 60, 150, 300, and 600 s in air. For the experiments in an argon atmosphere, 5, 10, 20, and 40 min were chosen. Each experiment was repeated at least twice, and a sufficient number of phase areas was analyzed in every polished section of the samples in order to ensure the reproducibility and reliability of the results.

Analyses
The samples were prepared with basic metallographic methods and first analyzed with a Mira 3 scanning electron microscope (SEM, Tescan, Brno, Czech Republic) equipped with an UltraDry silicon drift energy dispersive X-ray spectrometer (EDS, Thermo Fisher Scientific, Waltham, MA, USA) coupled to NSS microanalysis software (Thermo Fisher Scientific, Waltham, MA, USA). Considering the insufficient detection limits of EDS [40, 41] and the PM concentration results from a previous equilibrium study, the chemical compositions of the matte phase were investigated by electron probe micro-analysis (EPMA), and the slag phase was analyzed by laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS). LA-ICP-MS was essential for the slag phase due to its very low PM concentrations. The EPMA used in this work for matte phase analysis was a Cameca SX100 equipped with five wavelength dispersive spectrometers (Cameca SAS, Gennevilliers, France). The selected accelerating voltage was 20 kV, beam current 60 nA, and beam diameter 100 µm. The analyzed lines and standards (Astimex) used were as follows: Fe Kα and O Kα (hematite), Mg Kα (diopside), S Kα (pentlandite), Cu Kα (Cu), Zn Kα (sphalerite), Pb Lα (galena), Pd Lα (Pd), Ag Lα (Ag), Pt Lα (Pt), and Au Lα (Au). The equipment used for slag trace element analysis was a Photon Machines Analyte Excite laser ablation system with a 193 nm wavelength, 4 ns ArF excimer laser (Teledyne CETAC Technologies, Omaha, USA) coupled to an Nu AttoM single collector sector field ICP-MS (Nu Instruments Ltd., Wrexham, UK).
The laser spot size was selected as 65 µm for the air-atmosphere experiments and 40 µm for the argon-atmosphere samples, due to the limited size of the slag areas available. The obtained detection limits of EPMA and LA-ICP-MS are shown in Table 3, and more detailed information about the techniques adopted in our study can be found in the references [34, 40, 42-44].

Smelting Process and Sample Microstructures
In this study, copper concentrate, iron oxide-silica slag, and PMs (Au, Ag, Pt, and Pd) were pre-mixed and reacted at a typical copper smelting temperature of 1300 °C in both air and argon atmospheres. The purpose of the air-atmosphere experiments was to simulate the reaction shaft processes of the flash smelting furnace, while the argon atmosphere was used for simulating the reactions between matte and slag at the low oxygen partial pressures (typically around 10^-8 to 10^-7 atm) prevailing at the slag-metal interface in the settler [45]. The SEM backscattered-electron micrographs of samples after different contact times in air are shown in Fig. 2. The reactions proceeded quickly, as the matte (A) and fayalite FeOx-SiO2 slag (B) phases already began to form after 10 s, which is consistent with the rapid reaction phenomena occurring in the flash smelting process [46]. The structure of the sample was at first extremely heterogeneous, and the interfaces between the matte and slag phases were not clear. Unreacted ferric oxide (D) was also detected. After 20 s, the fayalite slag areas had increased in size and started to segregate more clearly from the sulfide matte. The matte was rather randomly distributed, which showed that coalescence and settling had not taken place at this point. After 30 s of contact time, most of the remaining un-melted silica (C) had reacted and dissolved into the slag, and a clear interface between matte and slag had formed. With increasing contact time, phase coalescence and matte settling progressed further. Some areas were detected in the form of PM-Cu-Fe droplet clusters (E), and after 30 s, these droplets were found only in the matte phase. As seen in the micrographs in Fig. 2e-f, platinum preferred to form these metallic droplets with copper and iron enclosed within the matte phase, while gold, silver, and palladium dispersed more evenly into the matte phase. However, a few Au-, Ag-, and Pd-containing clusters were also detected in the matte phase. In the sample at 150 s, a larger high-platinum-content area was found. At 300 s, a large droplet with a platinum core surrounded by a shell formed of copper, iron, and small amounts of PMs was observed in the sample. However, these larger PM clusters and droplets were avoided in the EPMA analyses, which were used to detect the average concentrations of PMs in the matte. Figure 3 shows the SEM micrographs of samples after different contact times in an argon atmosphere. As shown in Fig. 3a, the matte (A) and fayalite slag (B) phases had already formed and separated from each other after 5 min. PM-rich clusters (D) were detected at all contact times, and they were randomly distributed in the matte domains. Figure 3b-d illustrates the small change in the structure of the samples with increasing contact time from 10 to 40 min. At 40 min of contact time (Fig. 3d), the PM-rich clusters had coalesced into larger droplets and were found specifically near the slag-matte interfaces. As is to be expected, with longer contact times the PMs have more time to migrate and coalesce to form bigger droplets within the matte.
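As a side note on the batch make-up referred to earlier, the short sketch below shows the mass-balance logic behind a SiO2/Fe flux ratio derived from the slag-to-concentrate ratio of 1.116 and the 65/35 wt% hematite/silica slag mix. The iron content of the concentrate is an assumed placeholder (the actual XRF composition is in Table 1, not reproduced here), so this is a hedged illustration of the calculation rather than the authors' exact one:

```python
# Sketch: estimate the SiO2/Fe flux ratio of the smelting charge.
# Assumption: the chalcopyrite concentrate carries ~28 wt% Fe (placeholder;
# the real composition is given in Table 1 of the paper).

M_FE = 55.845      # g/mol, atomic mass of Fe
M_FE2O3 = 159.688  # g/mol, molar mass of hematite

def flux_ratio(slag_to_conc, fe_in_conc, hematite_frac=0.65, silica_frac=0.35):
    """SiO2/Fe mass ratio for 1 g of concentrate plus slag_to_conc g of slag mix."""
    sio2 = slag_to_conc * silica_frac                              # SiO2 from the slag mix
    fe_slag = slag_to_conc * hematite_frac * (2 * M_FE / M_FE2O3)  # Fe carried by hematite
    return sio2 / (fe_slag + fe_in_conc)                           # Fe from slag + concentrate

print(round(flux_ratio(1.116, 0.28), 2))  # ~0.50 with the assumed Fe content
```

With the assumed 28 wt% Fe this lands near 0.50; the reported 0.53 would follow from a somewhat lower concentrate iron content, which is plausible for an industrial chalcopyrite concentrate.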
Behavior of Major Elements in Matte and Slag Phases
The concentrations of major elements in the matte (Cu-Fe-S) and slag (FeOx-SiO2) are shown in Figs. 4 (air atmosphere) and 5 (argon atmosphere) as a function of contact time. In air, the matte grade (wt% Cu) increased continuously, seen also as a decrease in its iron concentration. The sulfur concentration remained relatively stable, with a minor decreasing trend as a function of increasing contact time. After 300 s, the reaction rate decreased, and the matte grade reached a value of approximately 50 wt% Cu at 600 s. In the slag phase, shown in Fig. 4b, the silica concentration kept increasing until 60 s, after which the slag composition remained constant. Corresponding to the microstructures shown in Fig. 2, during the first 60 s the primary silica crystals reacted and dissolved into the slag, and more silica was also dissolved from the crucible. In the argon atmosphere, the concentrations of major elements in the matte and slag phases remained relatively constant throughout the time series. The matte grade stabilized at a considerably lower level than in air, due to the absence of free oxygen. The slag composition was almost the same as that in air. These results were similar to the results of previous experiments conducted in the same conditions but with different trace elements [40].

Behavior of Precious Metals in Matte and Slag Phases
The concentrations of the PMs Au, Ag, Pt, and Pd in matte as a function of time are shown in Fig. 6. In the air atmosphere (Fig. 6a), the Au, Ag, and Pd concentrations in matte consistently increased with longer contact times. The platinum concentration began to decrease after 150 s, which can be explained by the formation of large platinum-rich droplets in the matte, shown in Fig. 2e-f. In the argon atmosphere (Fig. 6b), the PM concentrations remained relatively constant as a function of time. After the longest contact times, all the PM concentrations in matte were lower in argon than in air. The sequence of concentration levels in both atmospheres was Ag > Pd > Au > Pt. The concentrations of the PMs Au, Ag, Pt, and Pd in slag as a function of time are shown in Fig. 7. The slag chemistry plays an important role in the elimination of impurity elements and the recovery of precious metals in copper smelting. In the FeOx-SiO2 slag used in this study, all precious metal concentrations initially decreased as a function of time and then stabilized at a constant level. The sequence of PM concentrations in the slag was Ag > Au > Pd > Pt in both air (Fig. 7a) and argon (Fig. 7b) atmospheres. However, the PM concentrations were higher in air than in the argon atmosphere after reaching stable concentration levels.

The distribution reaction for a precious metal Me between matte and slag, when the metal deports into the slag phase through oxidation, can be described by Eq. (1), where square brackets denote the matte phase and parentheses the slag phase:

[Me] + (x/2) O2(g) = (MeOx)    (1)

Equation (1) can be characterized by an equilibrium constant consisting of the activities of the species and the prevailing oxygen partial pressure, shown in Eq. (2):

K = a(MeOx) / [a(Me) · p(O2)^(x/2)]    (2)

The activities can be expressed with the concentration of each species and its activity coefficient, as in Eqs. (3) and (4):

a(Me) = γ(Me) · N(Me)    (3)
a(MeOx) = γ(MeOx) · N(MeOx)    (4)

where K is the equilibrium constant, a(Me) and a(MeOx) represent the activities of metal Me and metal oxide MeOx, respectively, and γ(Me) and γ(MeOx) are the corresponding activity coefficients. The molar fraction is obtained from the analyzed concentrations as N(Me) = (wt% Me / M(Me)) / nT, where N(Me) is the molar fraction of metal Me, nT is the total number of moles of monocationic constituents in 100 g of each phase, M(Me) is the atomic mass of Me, and wt% Me is the weight percentage in matte or slag. In the equilibrium studies, the distribution coefficient of a PM Me between copper matte and slag is defined by Eq. (5), where [wt% Me] refers to the equilibrium concentration of Me in copper matte and (wt% Me) is the equilibrium concentration in slag, in weight percent [30]:

L^(m/s)(Me) = [wt% Me] / (wt% Me)    (5)

Combining Eqs. (1)-(5) expresses the distribution coefficient in terms of the system parameters, Eq. (6):

L^(m/s)(Me) = [γ(MeOx) · nT(matte)] / [γ(Me) · nT(slag) · K · p(O2)^(x/2)]    (6)

As shown by Eq. (6), the distribution coefficient is a practical and independent thermodynamic parameter of the matte-slag system (influenced only by the temperature, oxygen potential, and phase properties), so the initial precious metal concentrations will not influence the distribution results. Even though the noble metal concentrations in this study are higher than those in real WPCBs, the distribution ratios between the matte and slag should therefore be thermodynamically independent of them. Accordingly, the distribution coefficients between matte and slag are essential parameters for evaluating the recycling efficiencies of the PMs during copper smelting, utilizing copper matte as a medium for their collection. In this kinetic study, the distribution coefficient values of the PMs were determined at selected times by Eq. (7), where t refers to the contact time after which the sample was quenched and the reactions stopped [45]:

L^(m/s)(Me, t) = [wt% Me]t / (wt% Me)t    (7)

The logarithmic values of the distribution coefficients L^(m/s)(Me) between matte and slag during the oxidation reactions in air are shown in Fig. 8. All PMs were found to deport strongly into the matte rather than into the slag, and with increased contact time their distribution coefficients followed a similar increasing tendency until 300 s, after which the values stabilized in the order Pd > Pt > Au > Ag. At 20 s contact time, the average distribution coefficient values for Au, Ag, Pt, and Pd were 80, 70, 230, and 200, respectively, and at 300 s these values surged to 600, 100, 1300, and 1200. During this time period the matte grade also gradually increased, which helped improve PM migration into the matte phase [36]. Tags a, b, and c on the x-axis of Fig. 8 refer to the PM distribution results from equilibration studies between matte and iron-silicate slag in which the temperature and matte grade were closest to the parameters of this work (a: Avarmaa et al. [30]; b: Chen et al. [34]; c: Shishin et al. [36]). The equilibration times in their studies were 3 h, 4 h, and 24 h, respectively. After 300-600 s, the distribution coefficients of the PMs obtained in this work agreed well with these recent equilibrium studies. Therefore, it seems that the equilibrium distribution coefficient values for the PMs can be reached even within relatively short contact times. The distribution coefficients of the PMs in argon are shown in Fig. 9. The values are lower than those in air, but they still follow the same order, Pd > Pt > Au > Ag. As shown in Fig. 5a, the matte grade (wt% Cu) was relatively constant in the argon atmosphere; however, the distribution coefficients continued to increase as a function of time. This means that sufficient time is needed for the PMs to migrate to the matte phase at low oxygen partial pressures. In air, the obtained distribution coefficients stabilized after 300 s, while the values in argon continued to increase slightly until the longest investigated contact time of 40 min.
The average distribution coefficient values after 40 min for Au, Ag, Pt, and Pd were approximately 200, 90, 710, and 750, respectively, i.e., somewhat lower than the corresponding results in air.

Conclusions
No previous data exist on the kinetic behavior of precious metals (PMs) in copper flash smelting conditions. In this study, the behavior of Au, Ag, Pt, and Pd was investigated in laboratory-scale experiments at a typical smelting temperature of 1300 °C in both air and argon atmospheres. The samples were analyzed by SEM-EDS for visual and preliminary compositional information, and by EPMA (matte) and LA-ICP-MS (slag) for more accurate phase composition data. All the PMs studied in this work strongly preferred to deport into the matte rather than the slag phase. The novelty of the present research is the experimental proof that the studied PMs migrate to the matte phase almost instantly when the molten matte and slag begin to form. The PMs approached their reported equilibrium distribution coefficients after only 300 s of contact time in the air atmosphere. However, it should be noted that the experimental conditions, mainly the gas atmospheres, in the earlier equilibrium studies did not fully correspond to those utilized in this work, and industrial copper concentrate was used in this study. In the argon atmosphere, the distribution coefficient values increased slightly over the entire contact time of 40 min. The calculated distribution coefficients L^(m/s)(Me), based on experimentally measured element concentrations, followed the order Pd > Pt > Au > Ag in both air and argon atmospheres. The distribution coefficients of the PMs in air were higher than the results in argon, and the PM concentrations in matte increased with longer contact time and higher matte grade. The oxygen potential had an impact on the PM migration rates and distribution coefficients: based on the results in argon, considerably longer times are needed for the complete transfer of PMs to the matte at low oxygen potentials. After reaching stable levels, the PM concentrations in the slag were relatively low, at approximately 408 ppm (Ag), 27 ppm (Au), 17 ppm (Pd), and 9 ppm (Pt) in air, and 346 ppm (Ag), 42 ppm (Au), 24 ppm (Pd), and 3 ppm (Pt) in argon. These low concentrations of chemically dissolved PMs in the slag indicate that copper matte can perform as an excellent collector for recovering precious metals. This suggests that the highest PM losses in industrial operations most likely arise from mechanical matte entrainment. These new experimental results regarding the effect of time on PM distribution behavior can be used for updating databases related to secondary raw materials processing via copper smelting, and also for CFD models to simulate the behavior of precious metals in smelting processes more precisely.
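As a worked illustration of Eq. (7), the snippet below computes distribution coefficients from paired matte and slag analyses. The slag concentrations are the stabilized air-atmosphere values quoted above; the matte concentrations are assumed placeholders chosen only to reproduce the reported magnitudes and ordering, since the full data tables are not included here:

```python
import math

# Slag PM concentrations in air after stabilization (ppm), quoted in the text.
slag_ppm = {"Ag": 408.0, "Au": 27.0, "Pd": 17.0, "Pt": 9.0}

# Matte PM concentrations in wt% -- assumed placeholder values, not measured data.
matte_wt_pct = {"Ag": 4.1, "Au": 1.6, "Pd": 2.0, "Pt": 1.0}

def distribution_coefficient(me):
    """L^(m/s)(Me) = [wt% Me]_matte / (wt% Me)_slag, as in Eq. (7)."""
    slag_wt_pct = slag_ppm[me] * 1e-4   # 1 ppm = 1e-4 wt%
    return matte_wt_pct[me] / slag_wt_pct

for me in ("Pd", "Pt", "Au", "Ag"):
    L = distribution_coefficient(me)
    print(f"{me}: L = {L:.0f}, log10(L) = {math.log10(L):.2f}")
```

With these placeholders the output reproduces the reported ordering Pd > Pt > Au > Ag and magnitudes of order 10^2 to 10^3, i.e. log10(L) of roughly 2-3, matching the stabilized values in Fig. 8.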
Review of the Early–Middle Pleistocene boundary and Marine Isotope Stage 19

The Global Boundary Stratotype Section and Point (GSSP) defining the base of the Chibanian Stage and Middle Pleistocene Subseries at the Chiba section, Japan, was ratified on January 17, 2020. Although this completed a process initiated by the International Union for Quaternary Research in 1973, the term Middle Pleistocene had been in use since the 1860s. The Chiba GSSP occurs immediately below the top of Marine Isotope Substage (MIS) 19c and has an astronomical age of 774.1 ka. The Matuyama–Brunhes paleomagnetic reversal has a directional midpoint just 1.1 m above the GSSP and serves as the primary guide to the boundary. This reversal lies within the Early–Middle Pleistocene transition and has long been favoured to mark the base of the Middle Pleistocene. MIS 19 occurs within an interval of low-amplitude orbital eccentricity and was triggered by an obliquity cycle. It spans two insolation peaks resulting from precession minima and has a duration of ~28 to 33 kyr. MIS 19c begins ~791–787.5 ka, includes full interglacial conditions which lasted for ~8–12.5 kyr, and ends with glacial inception at ~774–777 ka. This inception has left an array of climatostratigraphic signals close to the Early–Middle Pleistocene boundary. MIS 19b–a contains a series of three or four interstadials, often with rectangular-shaped waveforms and marked by abrupt (< 200 year) transitions. Intervening stadials, including the inception of glaciation, are linked to the calving of ice sheets into the northern North Atlantic and the consequent disruption of the Atlantic meridional overturning circulation (AMOC), which by means of the thermal bipolar seesaw caused phase-lagged warming events in the Antarctic. The coherence of stadial–interstadial oscillations during MIS 19b–a across the Asian–Pacific and North Atlantic–Mediterranean realms suggests AMOC-originated shifts in the Intertropical Convergence Zone and pacing by equatorial insolation forcing. Low-latitude monsoon dynamics appear to have amplified responses regionally, although high-latitude teleconnections may also have played a role.

Introduction
On January 17, 2020, the Executive Committee of the International Union of Geological Sciences ratified the Global Boundary Stratotype Section and Point (GSSP) defining the base of the Chibanian Stage and Middle Pleistocene Subseries at the Chiba section, Japan (Suganuma et al. in press), with an astronomically calibrated age of 774.1 ± 5.0 ka. This gave official recognition to the Middle Pleistocene, a term in use since the 1860s. The primary guide to this boundary is the Matuyama-Brunhes (M-B) paleomagnetic reversal, which falls within Marine Isotope Stage (MIS) 19. Not only does MIS 19 allow the base of the Middle Pleistocene to be recognized independently of the M-B reversal and at millennial-scale resolution, but its earliest substage, MIS 19c, also serves as an orbital analogue for our own interglacial (e.g. Pol et al. 2010; Tzedakis et al. 2012a, 2012b; Yin and Berger 2015). This review examines the history behind the use of the term Middle Pleistocene, documents the procedure leading to the selection and ratification of the GSSP, examines and critiques the development of terminology used for MIS 19 and its subdivision, and synthesizes its climatic evolution on a global scale.

The Middle Pleistocene and formal chronostratigraphy
A GSSP is an internationally designated point within a stratotype.
It serves as a global geostandard to define the base of an official unit (or coterminous units) within the International Chronostratigraphic Chart (Cohen et al. 2013, updated). This chart is administered by the International Commission on Stratigraphy (ICS), a constituent body of the International Union of Geological Sciences (IUGS), and it provides an officially approved framework for the geological time scale. The International Chronostratigraphic Chart is hierarchical in topology, with the base of each unit of higher rank defined by the base of the unit of next lower rank, a pattern that repeats down to the lowest unit definable by a GSSP, the stage (Salvador 1994; Remane et al. 1996). Accordingly, the Chiba GSSP defines both the Chibanian Stage and Middle Pleistocene Subseries (Fig. 1). A GSSP technically defines only the base of a chronostratigraphic unit, but in practice it marks also the top of the subjacent unit, in this case the Calabrian Stage and Lower Pleistocene Subseries. The top of the Chibanian Stage and Middle Pleistocene Subseries are presently not officially defined, except nominally by ratification of the term Upper Pleistocene Subseries, which awaits official definition by a GSSP. The base of the Upper Pleistocene has a provisional age of ~129 ka (Head et al. in press). It remains to be determined whether the Chibanian Stage will always precisely equate in extent with the Middle Pleistocene Subseries. There are good grounds for introducing a second stage for the Middle Pleistocene Subseries, defined at its base by the onset of a major climatic event known as the "Mid-Brunhes Event" (Jansen et al. 1986) or "Mid-Brunhes Transition" (Yin 2013), which marks a step change in Quaternary climate. This climatic shift corresponds to an increase in the amplitude of quasi-100 kyr glacial-interglacial cycles and is marked by increases in interglacial sea-surface and Antarctic temperatures, and in atmospheric CO2 and CH4 levels, all beginning with Marine Isotope Stage (MIS) 11 (Barth et al. 2018). The onset of this transition is globally synchronous and corresponds to that between MIS 12 and MIS 11 (Termination V), dating to ~430 ka (Barth et al. 2018) (Fig. 2). It is readily identified in successions where astrochronology can be applied, including deep-ocean, ice-core, and European and Chinese loess records, and coincides with the base of the Holsteinian Northwest European Stage, Likhvinian Russian Plain Stage, and Zavadivian Ukrainian Loess Plain Stage (Cohen and Gibbard 2019). The Bermuda geomagnetic excursion, which lies at a prominent relative paleointensity minimum at 412 ka in MIS 11c (Channell et al. 2020), could serve as an additional stratigraphic marker (Fig. 2). However, for now the Chibanian Stage extends upwards to the base of the Upper Pleistocene Subseries (Fig. 1). The International Stratigraphic Guide distinguishes only between formal and informal stratigraphic terms. Formal terms "are properly defined and named according to an established or conventionally agreed scheme of classification ... The initial letter of the rank- or unit-term of named formal units is capitalized" (Salvador 1994, p. 14; see also p. 24). Those unit-terms appearing in the International Chronostratigraphic Chart are not merely formal terms but have also been approved by the ICS following extensive deliberation and then ratified by the Executive Committee of the IUGS (see Head 2019 for details of this process).
These terms are here treated as "official" or "ratified" to distinguish them from formal terms lacking this approval (Head and Gibbard 2015a).

History of the term Middle Pleistocene
Charles Lyell in 1839 introduced the term Pleistocene (Greek, pleīstos, most; and kainos, recent) as a substitute for his Newer Pliocene (Lyell 1839, p. 621), but unlike his other series of the Cenozoic (Head et al. 2017), he refrained from dividing it into subseries. Indeed, in 1863 Lyell proposed abandoning Pleistocene altogether on grounds that Forbes (1846) had popularized this term not in the sense of Lyell's Newer Pliocene but almost precisely with reference to the subsequent interval of time for which Lyell was now introducing the term Post-pliocene (Lyell 1863, p. 6). By 1865, Lyell had conceded that if the term Pleistocene continued to be used, then it should not be as originally intended but in place of his "Post-pliocene" (Lyell 1865, footnote to p. 108). By the time Lyell had unconditionally accepted the Pleistocene in place of his Post-pliocene (Lyell 1873, p. 3, 4), this suggestion had already been generally adopted, with subdivision quickly following. The term "middle Pleistocene", for instance, was employed informally by Harkness as early as 1869 (Harkness 1869), and the positional modifiers "early", "middle", and "late" have been used for the Pleistocene since at least the 1870s (e.g. Dawkins 1878). By 1900, this tripartite subdivision had become formalized in the English literature, with Osborn using the terms Lower Pleistocene (preglacial), Middle Pleistocene (glacial and interglacial, itself subdivided into lower, middle and upper), and Upper Pleistocene (postglacial and Recent) (Osborn 1900, p. 570, charts I and II) (Fig. 3). This use of subseries for the Pleistocene had become entrenched by the time of the Second International Conference of the Association pour l'étude du Quaternaire européen (the forerunner of the International Union for Quaternary Research [INQUA] meetings) held in Leningrad in 1932, and subseries terms were used in a formal sense by Zeuner (1935, 1945), who in 1935 was already applying Milankovitch cyclicity and insolation curves to provide absolute dates for Pleistocene successions.

[Fig. 2 caption: The time scale is based on Fig. 1; geomagnetic polarity reversals and field paleointensity data are from Cohen and Gibbard (2019) and Channell et al. (2016, 2020), with ages of reversals based on orbital tuning of the sedimentary record (Channell et al. 2020); the marine isotope record and numbering of marine isotope stages are from Lisiecki and Raymo (2005), with ages of terminations from Lisiecki (undated) and selected substages from Railsback et al. (2015); orbital parameters representing precession (Laskar et al. 2004), obliquity (Laskar et al. 2004), and eccentricity (Laskar et al. 2011) are from Head and Gibbard (2015b). Updated from Cohen and Gibbard (2019).]

In 1945, he considered the base of the Middle Pleistocene to have an age of ~425 ka. The Japanese geophysicist Motonori Matsuyama (1884–1958; the name is spelled and pronounced Matsuyama but was mistransliterated as Matuyama in his own and other publications) was the first to document clearly, from basalts in the Genbudō (basalt caves), Japan (Matuyama 1929), the reversed magnetic polarity interval from 2.58 to 0.773 Ma that we now call the Matuyama Reversed Polarity Chron. However, it was the emergence of the geomagnetic polarity reversal time scale for the Pleistocene in the 1960s (Cox et al. 1963, 1964; Opdyke et al. 1966; Ninkovich et al. 1966; Glass et al. 1967; see Watkins 1972 for historical review), and particularly the recognition and radiometric dating of the M-B reversal and Jaramillo "event" (Doell and Dalrymple 1966), that created new possibilities for global stratigraphic correlation and Pleistocene time scale calibration. Accordingly, participants at the Burg Wartenstein Symposium "Stratigraphy and Patterns of Cultural Change in the Middle Pleistocene", held in Austria in 1973, recommended that "The beginning of the Middle Pleistocene should be so defined as to either coincide with or be closely linked to the boundary between the Matuyama Reversed Epoch and the Brunhes Normal Epoch of paleomagnetic chronology" (Butzer and Isaac 1975, appendix 2, p. 901), as noted by Pillans (2003). In the same year, the INQUA Working Group on Major Subdivisions of the Pleistocene was established at the IX INQUA Congress in Christchurch, New Zealand, 1973, with its primary aim to define globally recognizable boundaries for the lower, middle, and upper Pleistocene subseries (Richmond 1996). The rank of subseries was adopted in preference to stage as the latter term was already used widely in Quaternary stratigraphy for locally and regionally defined units. At the XIIth INQUA Congress in Ottawa in 1987, the Working Group submitted a proposal, which was accepted by INQUA's stratigraphic commission and approved by the congress, that "As evolutionary biostratigraphy is not able to provide boundaries that are as globally applicable and time-parallel as are possible by other means, the Lower-Middle Pleistocene boundary should be taken provisionally at the Matuyama-Brunhes palaeomagnetic reversal ..." (Anonymous 1988, p. 228; Richmond 1996, p. 320). From then on, the M-B reversal became the preferred and indeed de facto marker for the Early-Middle Pleistocene boundary (e.g. Bowen 1988; Berggren et al. 1995; Pillans 2003; Gradstein et al. 2005; Head and Gibbard 2005, 2015a, 2015b; Cita et al. 2006; Cita 2008; Head et al. 2008). Nonetheless, the Early-Middle Pleistocene boundary did not have official standing because this required the selection and approval of a GSSP.

Selecting a primary guide for the base of the Middle Pleistocene Subseries
The XIVth INQUA Congress in Berlin in 1995 focused on three potential candidate GSSPs: Chiba in Japan, Montalbano Jonico in Basilicata, Italy, and the Wanganui Basin in New Zealand (Pillans 2003), the latter being discounted because it contained unconformities (Head et al. 2008). Meanwhile, the ICS Subcommission on Quaternary Stratigraphy (SQS), in 2002 after a period of inactivity, established a Working Group to review all aspects of the Early-Middle Pleistocene boundary, including the selection of a suitable GSSP (Head et al. 2008). At the 32nd International Geological Congress in Florence in 2004, the Early-Middle Pleistocene boundary Working Group recommended that (1) the boundary be defined in a marine section at a point "close to" the Matuyama-Brunhes palaeomagnetic reversal, where the definition of "close" was agreed to mean within plus or minus one isotope stage of the reversal; and (2) the GSSP should be located in a marine section exposed on land, not in a deep-sea core (Head et al. 2008). A third potential candidate GSSP emerged at the Florence congress: the Valle di Manche section in Calabria, Italy (Capraro et al. 2004, 2005) (Fig. 4).
Deciding upon the primary guide to the boundary should be made prior to the consideration of candidate sections, because the expression of this guide in the GSSP must be exemplary (Remane et al. 1996). The Working Group's decisions at Florence were therefore crucial in moving the process forward. The M-B reversal, with an age of ~773 ka (Singer et al. 2019; Channell et al. 2020; Haneda et al. 2020a; and earlier reviews by Head and Gibbard 2005, 2015b), was chosen in part because it (1) has an isochronous expression in most marine and terrestrial sediments and even in ice cores, (2) is the most prominent geomagnetic field reversal in the past 773 kyr, and (3) occurs within the Early-Middle Pleistocene transition (1.4-0.7 or 1.4-0.4 Ma; Fig. 2), aligning the Early-Middle Pleistocene boundary with a fundamental shift in Earth's history. This shift from a 41 kyr to a quasi-100 kyr orbital rhythm was marked by increases in the amplitude of climate oscillations and in long-term average global ice volume, and by strong asymmetry in global ice volume cycles. It resulted in progressive and fundamental physical, chemical, climatic, and biotic adjustments to the planet (Head and Gibbard 2015b).

Voting on candidates for the Middle Pleistocene Subseries GSSP
The three final candidates for the Early-Middle Pleistocene GSSP were the Valle di Manche section in Calabria and the Ideale section at Montalbano Jonico in Basilicata, both in Italy, and the Chiba section in Japan (Head and Gibbard 2015a) (Fig. 4). Following field trips that allowed members of the Working Group to visit all three sites in advance of voting (Okada and Suganuma 2018), the vigorous and exhaustive process of selecting a GSSP began on July 11, 2017, with the circulation of proposals to the membership of the Working Group (Table 1). It had been decided by all three proponents in advance that the proposals should remain confidential because they contained unpublished material. This confidentiality was respected through the entire selection process. Discussions started on July 25 and ended at the close of October 3, 2017, allowing an extended opportunity to exchange views. Discussions were wide-ranging, in acknowledgement that a GSSP must record an array of stratigraphic markers, but inevitably focused on the M-B reversal. A detailed commentary on these discussions is given in Head (2019), and only key aspects will be presented here. The M-B reversal in the Chiba composite section (CbCS) is expressed by directional changes (virtual geomagnetic pole [VGP] latitudes) and changes in the geomagnetic field intensity, based on both the paleomagnetic record and a coherent record of its proxy, the authigenic 10Be/9Be record (Okada et al. 2017; Simon et al. 2019; Haneda et al. 2020a). These studies are based on an astronomical age model introduced by Suganuma et al. (2015) and refined by Okada et al. (2017) and again by Suganuma et al. (2018). Okada et al. (2017) determined the directional midpoint at 771.7 ka with a duration of 2.8 kyr; these values were revised to 772.9 ka and 1.9 kyr on the age model of Suganuma et al. (2018). Simon et al. (2019), using new paleomagnetic data, reported a directional switch between 773.9 and 771.9 ka, with a duration therefore of 2.0 kyr. Haneda et al. (2020a), using new paleomagnetic data combined with earlier studies (Hyodo et al. 2016; Okada et al. 2017), determined the average directional midpoint at 772.9 ka with a duration of 1.1 kyr based on the age model of Suganuma et al.
(2018). Allowing for a 5 kyr chronological uncertainty in the orbital tuning of the CbCS (4 kyr from Lisiecki and Raymo 2005, and 1 kyr from Elderfield et al. 2012; see Suganuma et al. in press) and a stratigraphic uncertainty of 0.4 ka (Haneda et al. 2020a), the astronomical age of the directional midpoint of the M-B reversal is 772.9 ± 5.4 ka, with a duration of up to ~2 kyr. The close match between the geomagnetic field intensity and the 10Be/9Be record confirms that any lock-in depth offset (Roberts and Winklhofer 2004; Suganuma et al. 2010, 2011) at this high-sedimentation-rate site (89 cm/kyr across the boundary) is minimal. This age closely accords with ages of around 773 ka from other well-constrained sites (Channell 2017; Channell et al. 2020; Singer et al. 2019; Valet et al. 2019; Haneda et al. 2020a; earlier records reviewed in Head and Gibbard 2015b). The geomagnetic field intensity record shows two pronounced minima, one at 772 ka near the polarity switch and the other at 764 ka. It is therefore evident that the position of the VGP switch cannot be precisely predicted using geomagnetic field intensity data alone. Montalbano Jonico lacks a paleomagnetic record owing to late diagenetic remagnetization associated with the growth of greigite (Sagnotti et al. 2010). The authigenic 10Be/9Be record at this site serves as a proxy for the geomagnetic field intensity and reveals a peak (field intensity minimum) at the approximate position of the M-B reversal as determined by the marine isotope record (Nomade et al. 2019), and dated by a 40Ar/39Ar age of 774.1 ± 0.9 ka for tephra layer V4, which coincides with the 10Be/9Be peak. While this corroborates the position and age of the M-B reversal in this part of the Mediterranean, the geomagnetic field intensity alone is insufficient to identify the precise position of the polarity switch (see Channell et al. 2020), as demonstrated for Chiba and elsewhere. The M-B reversal as recorded at the Valle di Manche (Capraro et al. 2017) has been astronomically dated at 786.9 ± 5 ka, an anomalously old age when compared with most global records, including the 10Be/9Be proxy record of the Montalbano Jonico section just 135 km to the north (Head 2019). A 10Be/9Be record at the Valle di Manche section gives a peak in 10Be concentration ~3.5 m above the reported M-B reversal. This translates to a difference of ~12 kyr and is coincident with the age of this reversal elsewhere. Lock-in depth seems unable to explain the spuriously low position of the reversal, because sedimentation rates, at ~27 cm/kyr in this part of the Valle di Manche section, are reasonably high. When the 10Be/9Be curves for the Valle di Manche and Montalbano Jonico sections are compared, they show strong agreement (Capraro et al. 2019). The 10Be/9Be peak therefore most likely marks the true position of the M-B Chron boundary at both sections, with the Valle di Manche paleomagnetic reversal ~3.5 m below representing diagenetic overprinting and remagnetization (Head 2019; but see Capraro et al. 2019 for an alternative interpretation). This explanation would also account for the unusually rapid directional transition of this reversal, in the order of 100 years or less, at the Valle di Manche section.
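The ~12 kyr offset quoted above is a straightforward depth-to-time conversion, and the sketch below reproduces that arithmetic, together with the analogous check for the much higher sedimentation rate at Chiba. The sedimentation rates and the 3.5 m offset are taken from the text; a constant sedimentation rate over the interval, and the 10 cm lock-in depth used for the Chiba check, are illustrative assumptions:

```python
# Convert a stratigraphic offset into a time offset, assuming a constant
# sedimentation rate over the interval (a simplification of real age models).

def offset_kyr(depth_offset_m: float, sed_rate_cm_per_kyr: float) -> float:
    """Time equivalent (kyr) of a depth offset (m) at a given rate (cm/kyr)."""
    return depth_offset_m * 100.0 / sed_rate_cm_per_kyr

# Valle di Manche: 10Be/9Be peak ~3.5 m above the reported reversal, ~27 cm/kyr.
print(f"Valle di Manche offset: {offset_kyr(3.5, 27):.0f} kyr")  # ~13 kyr (~12 in text)

# Chiba: at 89 cm/kyr, even an assumed 10 cm lock-in offset is only ~0.1 kyr,
# consistent with the statement that lock-in effects there are minimal.
print(f"Chiba, 10 cm lock-in: {offset_kyr(0.10, 89):.2f} kyr")
```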
A similar, relatively old (786.1 ± 1.5 ka) M-B reversal, perhaps with an even more rapid transition, reported from the Sulmona basin in central Italy (Sagnotti et al. 2014, 2016), has been restudied and appears to carry an unreliable signal (Evans and Muxworthy 2018; but see Sagnotti et al. 2018). Another relatively old age (~779 ka) for the reversal has been reported from IODP Site U1385 off Portugal (Sánchez-Goñi et al. 2016). The position of this reversal has since been revised, and it is now provisionally placed higher in the core than reported from shipboard analysis (Xuan Chuang, pers. comm. 2018). Moreover, a reported M-B reversal age of 783.4 ± 0.6 ka at ODP Site 758 in the Indian Ocean has been challenged on grounds that the sedimentation rates, and hence the resolution of the isotope and magnetic stratigraphies, are all too low for precise age determination (Channell and Hodell 2017). It had been decided in advance that the choice individual members made when voting within the Working Group would not be revealed, contrary to usual practice within ICS. Because of active and potential research collaborations within the group, to do otherwise might have compromised the vote. Voting by the SQS Working Group commenced on October 10, 2017, and concluded on November 10, 2017. As noted in Head (2019), the Chiba proposal was passed by supermajority, gaining 73% of the total votes cast.

Final approval and ratification of the Chiba GSSP
Following minor revision, the Chiba proposal was submitted to the SQS voting membership for discussion and voting, this process concluding on November 16, 2018, with a supermajority of 86% in favour of the Chiba proposal. Discussion within the ICS voting membership began on August 16, 2019, and closed on October 28, 2019. Voting concluded on November 28, 2019, with the results as follows: 17 in favour, 2 against, no abstentions, all ballots returned. The proposal was therefore carried with a supermajority of 89.5%. This ICS-approved proposal for the Chibanian Stage/Age and Middle Pleistocene Subseries/Subepoch was ratified in full by the IUGS EC on January 17, 2020, drawing to a close a process initiated by INQUA in 1973, some 47 years earlier. The GSSP is placed at the base of a regional lithostratigraphic marker, the Ontake-Byakubi-E (Byk-E) tephra bed (Takeshita et al. 2016), in the Chiba section. It has an astronomical age of 774.1 ka (Suganuma et al. in press) and a zircon U-Pb age of 772.7 ± 7.2 ka, occurring immediately below the top of Marine Isotope Substage 19c. The directional midpoint of the M-B reversal, serving as the primary guide to the boundary, is just 1.1 m above the GSSP and has an astronomical age of 772.9 ± 5.4 ka (Haneda et al. 2020a; Suganuma et al. in press). The numerous climatostratigraphic signals associated with the MIS 19c/b transition, which represents the inception of glaciation for MIS 19 (see below), provide additional means to identify this boundary precisely on a global scale. IUGS ratification of the Middle Pleistocene Subseries officially legitimized a unit-rank term already in wide and formal use within the Quaternary community (Head et al. 2017), and the ratification of an accompanying stage complied with the requirements of the International Commission on Stratigraphy. INQUA fully supported ratification of both stage and subseries (van Kolfschoten 2020). This also provided Japan with its first GSSP, coincidentally based on a paleomagnetic reversal first clearly documented in Japan by Motonori Matsuyama, an early Japanese pioneer of magnetostratigraphy.
The achievements of the Japanese geophysicist Naoto Kawai may also be recalled, as he was the first to record a paleomagnetic reversal in sedimentary rocks (Kawai 1951).

Marine Isotope Stage 19
MIS 19 has long been associated with the M-B reversal, and this interglacial stage therefore provides a well-documented cluster of additional stratigraphic signals to identify the base of the Chibanian Stage on a global scale. Its climatic evolution is also significant because MIS 19c serves as an orbital analogue for the present interglacial (e.g. Berger and Loutre 1991; Pol et al. 2010; Tzedakis et al. 2012a, 2012b; Yin and Berger 2015) and therefore provides a natural baseline for assessing our future climate.

History of MIS 19
In labelling fluctuating percentages of carbonate in marine cores from the equatorial Pacific Ocean, Arrhenius (1952) introduced a numbering system in which even/odd numbers represent glacial/interglacial cycles. Arrhenius correctly surmised that carbonate-rich layers represent increased productivity linked to upwelling driven by strengthened trade winds during glacial intervals. Arrhenius labelled 18 carbonate cycles, recording although not labelling older cycles, including the equivalent of what was to become known as MIS 19 (Fig. 5). Hays et al. (1969) continued this research through additional cores in the Pacific. They labelled as B17 (where B = Brunhes) a carbonate-poor interglacial cycle coinciding with the base of the Brunhes Chron (Fig. 5). Emiliani's (1955, 1966) original oxygen isotope stages followed the numbering scheme of Arrhenius. Shackleton and Opdyke (1973), in their now famous oxygen isotope and magnetostratigraphic analysis of the Vema 28-238 core from the western equatorial Pacific Ocean (V28-238 in Fig. 4), extended to Stage 22 Emiliani's original oxygen isotope stages 1-14 (Emiliani 1955) and then 1-17 (Emiliani 1966). In doing so, Shackleton and Opdyke (1973) were the first to label MIS 19 (Fig. 6). They equated cycle B17 of Hays et al. (1969) with their MIS 19, confirming the association of this interglacial stage with the M-B reversal.

Subdivision of MIS 19
The division of marine isotope stages into substages has a long history, beginning with Shackleton (1969), who subdivided MIS 5 into five lettered substages, a-e (Railsback et al. 2015). As noted by Railsback et al. (2015), a parallel system of subdividing marine isotope stages into decimal-style numbered "events" has its roots in the labelling system of Arrhenius (1952) and was first applied to marine isotope stages by Prell et al. (1986; but see Railsback et al. 2015 for historical development), who reasoned that defining events (peaks and troughs) rather than stages (intervals of sediment or time) was more useful in applying tie points for age models. Although the two approaches tended to be used rather indiscriminately and interchangeably, Shackleton (2006) remarked that conceptually they are different and not interchangeable. He noted that "events" relied upon peak values in analyses that are more difficult to replicate in practice, and hence reliably correlate, than the midpoints of transitions that define substage boundaries. This midpoint approach is indeed how stages themselves are defined, following Emiliani (1955). Accordingly, Shackleton (2006), Railsback et al. (2015) in their extensive review, and the present study have all favoured contiguous lettered subdivisions for marine isotope stages.
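The even/odd convention inherited from Arrhenius is mechanical enough to state in a few lines of code; the toy snippet below illustrates the numbering logic only and is not a stratigraphic tool:

```python
# Toy illustration of the MIS numbering convention inherited from Arrhenius (1952):
# counting back from the present interglacial (MIS 1), odd-numbered stages are
# interglacials and even-numbered stages are glacials.

def mis_character(stage: int) -> str:
    return "interglacial" if stage % 2 == 1 else "glacial"

for stage in (11, 12, 19, 22):
    print(f"MIS {stage}: {mis_character(stage)}")
# MIS 19 comes out odd, i.e. an interglacial, matching the carbonate-poor
# interglacial cycle B17 of Hays et al. (1969).
```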
Subdivision used in the present study
The scheme used here is illustrated by its application to the CbCS record (Fig. 7). Three substages, 19c, 19b, and 19a, are recognized. MIS 19c comprises full interglacial conditions together with the rise to lighter foraminiferal δ18O values at the beginning of MIS 19 (Termination IX) and the decline to heavier values towards the end of MIS 19c, terminating with a glacial inception (Tzedakis et al. 2012a, 2012b). MIS 19b represents a single interval of heavier foraminiferal isotopic values which is recognized at the CbCS within the benthic record (Haneda et al. 2020b). The benthic foraminiferal δ18O record of MIS 19a is represented by a series of millennial-scale oscillations, with as many as four peaks of lighter isotopic values here labelled as MIS 19a-o1 to MIS 19a-o4, where "o" refers to benthic isotope oscillation. MIS 19a begins with MIS 19a-o1 (Fig. 7). Superimposed on this benthic foraminiferal isotopic record through MIS 19b-a are as many as four stadial-interstadial alternations, here labelled MIS 19-s1 to MIS 19-s4 (stadials) and MIS 19-i1 to MIS 19-i4 (interstadials). MIS 19-s1 is the first of these millennial-scale climatic episodes and broadly coincides with the glacial inception marked by MIS 19b. They are recognized primarily in planktonic records including planktonic foraminiferal δ18O, but may be observed in pollen spectra and other terrestrial proxies. Figure 7 shows how the labelling scheme presented here differs from those of Nomade et al. (2019) and Haneda et al. (2020b) as applied to the isotopic record of the CbCS. The present scheme does not preclude the use of additional biozones and informal event stratigraphy through all or part of MIS 19 where such detail is needed. The rationale for this subdivision is discussed in Section 3.2.3.

[Fig. 5 caption: correlation, after Hays et al. (1969), between carbonate percentage in equatorial Pacific core RC11-209 and that of east equatorial Pacific core 58 of Arrhenius (1952). Carbonate cycle B17 in core RC11-209 corresponds to an unlabelled cycle in Arrhenius' core 58; this would have been cycle 19 had Arrhenius continued labelling. Cycle B17 aligns with the Matuyama-Brunhes paleomagnetic reversal and represents MIS 19.]

[Fig. 6 caption: reproduction of fig. 9 in Shackleton and Opdyke (1973) showing the δ18O record of the planktonic foraminifera Globigerinoides sacculifera from core V28-238, western equatorial Pacific, from which MIS 19 was defined for the first time. This study confirms the links between MIS 19, carbonate cycle B17 of Hays et al. (1969), and the Matuyama-Brunhes paleomagnetic reversal.]

Division into substages
Bassinot et al. (1994) were the first to subdivide MIS 19 formally, defining MIS 19.1, 19.2, and 19.3 (Fig. 8b) on the basis of two pronounced planktonic foraminiferal δ18O peaks recorded from Core MD900963 in the tropical Indian Ocean (Fig. 4). No explanation was given for these two peaks, although precession is strongly expressed in this core. Tzedakis et al. (2012a, 2012b) seem to have initiated the application of lettered substages for MIS 19, with Tzedakis et al. (2012b, their fig. 4) applying MIS 19a, 19b, and 19c with reference to the Antarctic Isotope Maxima (AIMs) documented in the EPICA Dome C ice-core record (EPICA Community Members 2006). Hence, the two lowest interstadials, assigned to MIS 19b, were correlated to AIM C and B, and the uppermost interstadial, assigned to MIS 19a, was correlated to AIM A (Fig. 8c). Railsback et al.
(2015) similarly subdivided MIS 19 into substages a, b, and c, but defined them with respect to the LR04 global benthic foraminiferal δ18O stack of Lisiecki and Raymo (2005), which only clearly distinguishes two peaks in the upper part of MIS 19. Railsback et al. (2015) assigned both peaks to MIS 19a and the preceding trough to MIS 19b (Fig. 8d). This scheme therefore differed significantly from that of Tzedakis et al. (2012b). Ferretti et al. (2015) published detailed benthic and planktonic foraminiferal δ18O records from IODP Site U1313 in the central North Atlantic (Fig. 4), although the upper part of MIS 19 could not be unambiguously resolved into three interstadials. MIS 19c and MIS 19a were therefore labelled only approximately and MIS 19b was omitted (Fig. 9b). Sánchez-Goñi et al. (2016) adopted the midpoint method used by Shackleton et al. (2003) for establishing the MIS 6/5e and MIS 5e/d boundaries. They then applied the same method to determine the positions of the MIS 19c/b and MIS 19b/a boundaries, finding that the midpoints were broadly similar to positions they had statistically identified by the "change point method" of Zeileis et al. (2002, 2003). Therefore, these limits and the substage classification they embody may reflect significant changes in global ice volume (Sánchez-Goñi et al. 2016). This method does not appear to have been used on other benthic foraminiferal records of MIS 19 to test such a possibility, but the shape of the benthic foraminiferal δ18O record at the CbCS in Japan (Fig. 7), for example, is rather different from that at Site U1385, especially across the MIS 19b-a interval. Regattieri et al. (2019), in their study of the lacustrine Sulmona basin in central Italy (Figs. 4 and 10b), broadly followed the lettered substage classification of Sánchez-Goñi et al. (2016). Their MIS 19b, the base of which is placed between two reduced-precipitation events (V and VI), includes stadial s1, interstadial i1, and part of the following stadial, s2. The MIS 19b-a boundary is drawn midway through their event IIX, here labelled stadial s2. In total, three interstadials are recognized within the MIS 19b-a interval at Sulmona (Fig. 10b), as with IODP Site U1385. The lacustrine Piànico-Sèllere basin of northern Italy (Fig. 4) contains a finely resolved pollen record established by Moscariello et al. (2000). Although previously assigned to MIS 11, it most likely represents MIS 19 based on tephrochronology (Pinti et al. 2001, 2007; Roulleau et al. 2009).

It is evident from the foregoing that most records of MIS 19 naturally allow subdivision into two parts: an earlier, relatively stable phase representing MIS 19c and occurring within one precession cycle, and a later phase (the inconsistently applied MIS 19b and MIS 19a) usually featuring three or four millennial-scale isotopically light peaks and occurring within a second precession cycle. There might be merit in dividing MIS 19 into just two substages separated by the current MIS 19c/b boundary, as this most reasonably represents the inception of glaciation (Tzedakis et al. 2012a), and indeed Nomade et al. (2019) considered MIS 19b as the first bipolar seesaw oscillation. However, the tripartite subdivision first introduced by Bassinot et al. (1994) has become entrenched. The approach used here is therefore to follow the substage classification of Nomade et al. (2019) and Haneda et al. (2020b), in which MIS 19b is restricted to the first interval of high benthic isotopic values following MIS 19c.
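For readers who wish to experiment with statistically placed limits of the kind mentioned above, the toy sketch below captures the spirit of change point detection on a sampled record. Zeileis et al.'s own methods are implemented in the R package strucchange; the minimal least-squares search here is a simplified stand-in, not their algorithm, and its inputs are hypothetical:

```python
import numpy as np

def single_change_point(y, min_seg=5):
    """Toy single change point detector: try every admissible split of the
    series into two segments and keep the one that minimizes the summed
    squared deviations of each segment about its own mean. Returns the
    index of the first sample of the second segment."""
    y = np.asarray(y, dtype=float)
    n = y.size
    best_k, best_rss = None, np.inf
    for k in range(min_seg, n - min_seg + 1):
        a, b = y[:k], y[k:]
        rss = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
        if rss < best_rss:
            best_k, best_rss = k, rss
    return best_k
```

Applied to a benthic δ18O series spanning a substage boundary, the returned split index can be compared with a midpoint-based placement to test whether the two approaches agree, as Sánchez-Goñi et al. (2016) found for Site U1385.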
Fine-scale subdivision of MIS 19
Millennial- to centennial-scale changes occurring within MIS 19 include both local and global signals. Recognizing events, for example of warming or drying, and numbering them consecutively without reference to their substage is the simplest approach. Sánchez-Goñi et al. (2016) applied constrained cluster analysis to subdivide their pollen record of Site U1385 into 20 numbered pollen biozones through MIS 19, and then used the relative abundances of Mediterranean pollen taxa to indicate intervals of Mediterranean forest contraction (Fig. 9c). Regattieri et al. (2019), in their study of the lacustrine Sulmona basin in Italy, recognized nine events of reduced precipitation inferred from multiple paleoenvironmental proxies. These events were labelled I-IX and occur throughout MIS 19 (Fig. 10b).

An additional approach is the recognition of numbered stadials and interstadials. These have been used traditionally to describe cooler and warmer episodes within glacial cycles and are therefore climatic subdivisions. Their use in the Pleistocene (Penck and Brückner 1909) considerably predates that of marine isotope stages and substages. The two systems, while obviously complementary, are often based on different criteria. Stadials and interstadials are then more logically used alongside marine isotope substages than to subdivide them. The benthic marine isotope record at Montalbano Jonico, Italy (Fig. 10d) shows at least three sharply defined lighter isotopic phases within MIS 19a, and Nomade et al. (2019) defined these as interstadials 19a-1, 19a-2, and 19a-3 (Fig. 10d). These interstadials are numbered in stratigraphically ascending rather than descending order, allowing them to be labelled consistently since a "fourth" interstadial at the end of MIS 19a is less pronounced and may not always be recognized. The Montalbano Jonico succession was deposited on the shelf in relatively shallow (~100-200 m) waters, and the benthic isotope record closely resembles other climatic proxies. Hence, in this case, the benthic isotopic record incorporates a localized climatic signal and serves to indicate stadial and interstadial conditions. Nomade et al. (2019) did not number adjacent stadials. Haneda et al. (2020b) extended the scheme of Nomade et al. (2019) by subdividing the latter half of MIS 19 at the CbCS in Japan into both stadials and interstadials, labelling them as MIO-Stadial 1 to 4 (MIO-S1 to MIO-S4) and MIO-Interstadial 1 to 4 (MIO-I1 to MIO-I4), where MIO stands for Millennial Isotopic Oscillation (Fig. 11i). The first stadial was understandably assigned to the cooling event of MIS 19b that marks the inception of glaciation (Tzedakis et al. 2012a). These stadial-interstadial designations are based on the planktonic isotopic record, which is largely a sea-surface temperature signal. It is essentially a climatic subdivision for which a stadial-interstadial designation is indeed appropriate. However, the MIS stage and substage boundaries at Chiba are based on the benthic foraminiferal δ18O record following the approach of Lisiecki and Raymo (2005) and Railsback et al. (2015), and consequently may not align precisely with the stadial-interstadial boundaries, which are based on surface to near-surface (planktonic) properties. Hence, the first stadial (MIO-Stadial 1) begins just after the start of MIS 19b, and the first interstadial straddles the MIS 19b-a boundary (Fig. 11i). The labelling scheme proposed here (Section 3.2.1; Fig. 7) extends and modifies the schemes of Nomade et al.
(2019) and Haneda et al. (2020b). It treats separately the benthic isotopic record, in which as many as four millennial-scale oscillations (MIS 19a-o1 to MIS 19a-o4) may be discerned, and the planktonic/terrestrial record, in which as many as four stadials (MIS 19-s1 to MIS 19-s4) and four interstadials (MIS 19-i1 to MIS 19-i4) may be identified. It resolves the incompatibility between the benthic isotopic record, which may contain a strong regional to global signal especially at deep-ocean sites and upon which MIS substages are often based, and the planktonic and terrestrial signals that emphasize more localized climatic variations and permit the most reliable characterization of stadials and interstadials. This approach can be used alongside informal climatic schemes that in some cases already facilitate the recognition of stadials and interstadials in the latter part of MIS 19 (e.g. Sánchez-Goñi et al. 2016; Regattieri et al. 2019; Figs. 9c and 10b).

Climatic fluctuations associated with deglaciation across Termination IX have also been labelled as stadials and interstadials using terminology developed for the last deglaciation (Mangerud et al. 1974; Björck et al. 1998). Giaccio et al. (2015) labelled an abrupt cold and dry interval from the Sulmona basin as a Younger Dryas-like event (Fig. 10b). Maiorano et al. (2016) applied the terms Heinrich-like, Bølling-Allerød-like, and Younger Dryas-like to similar climate oscillations recorded at the Montalbano Jonico section, and this terminology (Med-H TIX, Med-BA TIX, Med-YD TIX, referencing Termination IX in the Mediterranean) was continued by Marino et al. (2020; Fig. 10d). With respect to the CbCS in Japan, Suganuma et al. (2018) documented a single cooling phase they labelled as a Younger Dryas-type cooling event, and Haneda et al. (2020b) distinguished two closely separated cooling episodes they labelled as Younger Dryas-type cooling sub-events 1 and 2, abbreviated to YDt-1 and YDt-2 (Fig. 11i). These are effectively stadial-interstadial alternations, but their precise expression is perhaps too uncertain at present to warrant a standardized terminology.

Age model calibration of MIS 19 records
Lisiecki (undated) gives the bounding ages for MIS 19 as 790 and 761 ka, based on the Lisiecki and Raymo (2005) global benthic foraminiferal δ18O stack (LR04), which is tuned to the insolation curve for 65°N (Laskar et al. 1993). Several studies of MIS 19 have used the LR04 record as the tuning target for their age models, including Ferretti et al. (2015) for the central North Atlantic Site U1313 (Fig. 9b) and Sánchez-Goñi et al. (2016) for IODP Site U1385 off Portugal (Fig. 9c). However, a primary limitation of LR04 in this regard is its weak expression of the millennial-scale oscillations that characterize MIS 19b-a (Fig. 13). It should also be noted that the LR04 stack, while ostensibly a globally averaged record, is in fact heavily biased towards the Atlantic and also contains a significant temperature component (Elderfield et al. 2012). Moreover, Pacific records lag the Atlantic by as much as ~4 kyr; an alternative tuning of the Chiba record yields ages including 756.9 ka for the MIS 19a-MIS 18 boundary (Table 2). Nonetheless, the original astrochronological age model of Suganuma et al. (2018) was used in Haneda et al. (2020b) and Suganuma et al. (in press). This age model is subject to an uncertainty of about 5 kyr, allowing for an uncertainty of 4 kyr in the Lisiecki and Raymo (2005) target curve used for ODP Site 1123 (supplementary material in Elderfield et al.
2012), and an estimated 1 kyr uncertainty in tuning the Chiba record to the ODP Site 1123 sea-level curve. In addition to the limitations discussed above of using a single global stack such as LR04 as an alignment target, Lisiecki and Stern (2016) cautioned that the LR04 stack appears to be 1 to 2 kyr too young throughout the Pleistocene. The radiometric dating of interbedded tephras, correlation to radiometrically dated regional climatic events in speleothem records, and, for the Mediterranean, the use of sapropels and sapropel-like beds should therefore be incorporated into age model construction wherever possible. Varve counting, where available, is also invaluable for precisely estimating the duration of events within MIS 19. Nomade et al. (2019) implemented a hybrid chronology for the Ideale section of Montalbano Jonico in southern Italy that integrates both astronomical tie points, including a ghost sapropel tentatively representing insolation cycle 74 (Maiorano et al. 2016; Marino et al. 2020), and 40Ar/39Ar-dated tephra layers (Fig. 10d). Regattieri et al. (2019), for the lacustrine Sulmona succession in central Italy, used an age model based exclusively on 40Ar/39Ar-dated tephra layers, six of these occurring through an 805-753-ka interval spanning MIS 19 (Fig. 10b). The lacustrine deposits at Piànico-Sèllere in northern Italy allow a floating varve chronology to be combined with a K/Ar-dated tephra layer (Pinti et al. 2001; Roulleau et al. 2009; Nomade et al. 2019; Fig. 10c). All these approaches are subject to uncertainties, some of which cannot presently be estimated. Table 2 shows the age and duration of MIS 19c, 19b, and 19a for each of the sites discussed, using their own time scales, to illustrate the variation recorded, which reflects tuning uncertainties as well as local and regional influences superimposed on a global ice volume signal.

Other sites have been studied at lower stratigraphic resolution through MIS 19. Lake Baikal in southeastern Siberia (~53°N; Figs. 4 and 12d) represents an area with the highest sensitivity to insolation forcing on Earth, owing to its central position within Asia. By correlating biogenic silica peaks, representing lake productivity maxima, with precessional cycles (Laskar et al. 2004), an astronomically tuned composite record of the biogenic silica was obtained over the entire Pleistocene. Magnetostratigraphic boundaries enabled the cross-checking of this chronology (Prokopenko et al. 2006). The Pliocene to Holocene record of Lake El'gygytgyn (67°30′ N, 172°05′ E; Figs. 4 and 12c), located in the Far East Russian Arctic, was dated using a combination of magnetostratigraphic reversals and palaeoclimatic records tuned to summer insolation at 65°N (Laskar et al. 2004) and to the Lisiecki and Raymo (2005) LR04 global stack (Nowaczyk et al. 2013). It is worth reiterating here concerns about using the LR04 global stack for this kind of tuning (Lisiecki and Stern 2016).

The climatic evolution of MIS 19
MIS 19 has been studied intensively owing to the close similarity between its substage c and the present interglacial with respect to orbital configuration, rapid deglaciation history, and early peak Antarctic temperatures (Tzedakis et al. 2012b; Fig. 13). This similarity allows unambiguous alignment of MIS 19c with the present interglacial, thereby offering insights into our future climate (Tzedakis et al. 2012b). Earlier overviews of the climatic evolution of MIS 19 are given by Tzedakis et al.
(2012a, 2012b) and most recently, with inter-site comparisons, by Suganuma et al. (2018, in press), Nomade et al. (2019), Regattieri et al. (2019), and Haneda et al. (2020b). Sites yielding highly resolved paleoclimatic records are listed in Table 3 and show a concentration of sites in the northern hemisphere. The modelling study of Vavrus et al. (2018) adds spatial detail to this picture.

[Fig. 12 caption, in part: (b) ... (Laskar et al. 2004; Ferretti et al. 2015; Haneda et al. 2020b). (c) Lake El'gygytgyn, northeast Siberia: XRF core scanning-derived Si/Ti ratio (Wennrich et al. 2014). (d) Lake Baikal, southern Siberia: biogenic silica contents (Prokopenko et al. 2006). (e) Normalized Yimaguan and Luochuan (China) stacked loess-palaeosol proxy records for the East Asian Summer Monsoon (EASM; frequency-dependent magnetic susceptibility, orange line) and East Asian Winter Monsoon (EAWM; >32 μm particle content, blue line) (suppl. fig. 12 of Hao et al. 2012). (f) The MIS 19 subdivisional scheme used here (Fig. 7): interstadials i1 and i2 are labelled in red. All records are plotted on their own published time scales.]

MIS 19 compares with both MIS 11 and the present interglacial in having a reduced-amplitude 400-kyr eccentricity cycle and consequent suppression of precessional forcing (Fig. 2). Precession is in phase for all three interglacials. However, whereas the obliquity peak closely aligns with the precession minimum for both MIS 19c and the present interglacial, it leads the precession minimum by about 9 kyr in MIS 11. As a result, June insolation at 65°N increases more slowly for MIS 11 than for MIS 19c and the present interglacial (fig. 6 of Tzedakis 2010). MIS 19c is therefore the closest orbital analogue for the present interglacial, and even though the amplitude of obliquity is lower for MIS 19c, the alignment of their onset is unambiguous (Tzedakis et al. 2012b). This close similarity will begin to diverge in the future, as the amplitude of precession will decline more strongly than for late MIS 19, and June insolation at 65°N will be lower (Fig. 13). Although the phasing between precession and obliquity is closely similar for MIS 19c and the present interglacial, with the obliquity maximum close to the precession minimum, obliquity during MIS 19c increases less rapidly and hence to a lower amplitude than during the beginning of the present interglacial. Moreover, the LR04 foraminiferal isotopic record shows lighter peak values for MIS 1 (figs. 6 and 7 of Tzedakis 2010; Fig. 13) and agrees with observations from the CbCS that temperatures were cooler during MIS 19 than today. Ganopolski et al. (2016) proposed that higher CO2 levels of around 280 ppm during the pre-industrial Late Holocene explain this temperature difference, and Studer et al. (2018) discussed reasons for the exceptional rise in CO2 from 8 ka (Middle Holocene) onwards (Fig. 13). However, for the Early Holocene, CO2 levels reached a maximum of only 270 ppm, which is very close to the 269 ppm maximum for MIS 19 based on corrected CO2 records for the EPICA Dome C core (Bereiter et al. 2015). Indeed, Early Holocene CO2 levels were closely comparable to those of MIS 19 (Fig. 13). MIS 19c therefore presents a good analogue at least for the Early Holocene based on CO2 as well as orbital criteria. It should be noted that residual global ice volume might have been higher during MIS 19 than MIS 1 (Elderfield et al. 2012; Regattieri et al. 2019; Vavrus et al. 2018).
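Orbital comparisons of the kind made above ultimately rest on computing insolation for specified values of eccentricity, obliquity, and precession. The sketch below implements the standard closed-form expression for daily-mean top-of-atmosphere insolation (the form often attributed to Berger 1978); the numerical values in the usage comment are illustrative placeholders, not values from the Laskar orbital solutions:

```python
import numpy as np

def daily_mean_insolation(lat_deg, ecc, obliq_deg, lon_perih_deg,
                          true_lon_deg, S0=1361.0):
    """Daily-mean top-of-atmosphere insolation (W m-2) for an arbitrary
    orbital configuration. true_lon_deg is the Sun's true longitude
    measured from the northern spring equinox (90 = June solstice);
    lon_perih_deg (longitude of perihelion) uses the same frame."""
    phi = np.radians(lat_deg)
    eps = np.radians(obliq_deg)
    lam = np.radians(true_lon_deg)
    varpi = np.radians(lon_perih_deg)

    # Solar declination from obliquity and true longitude.
    delta = np.arcsin(np.sin(eps) * np.sin(lam))

    # Squared inverse Sun-Earth distance, (a/r)^2, from the orbit equation.
    dist2 = ((1.0 + ecc * np.cos(lam - varpi)) / (1.0 - ecc**2)) ** 2

    # Sunset hour angle, clamped for polar day (h0 = pi) and night (h0 = 0).
    cos_h0 = np.clip(-np.tan(phi) * np.tan(delta), -1.0, 1.0)
    h0 = np.arccos(cos_h0)

    return (S0 / np.pi) * dist2 * (h0 * np.sin(phi) * np.sin(delta)
                                   + np.cos(phi) * np.cos(delta) * np.sin(h0))

# June solstice at 65 N with roughly present-day orbital values
# (illustrative numbers only): about 477 W m-2.
# daily_mean_insolation(65.0, 0.0167, 23.44, 283.0, 90.0)
```

Evaluating such a function along an orbital solution makes the phasing arguments above concrete: shifting the obliquity peak relative to the precession minimum directly changes how quickly June insolation at 65°N rises into an interglacial.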
A higher residual ice volume may have increased climate sensitivity during MIS 19, given the nonlinear relationship between astronomical forcing and ice volume during the Quaternary (Past Interglacials Working Group of PAGES 2016) and the fact that polar ice volume provides one of the most important feedback mechanisms in the climate response to radiative forcing (Berger et al. 2017; Westerhold et al. 2020).

MIS 20-19 transition
There is widespread evidence of climatic and oceanographic instability during late MIS 20 and across Termination IX. A Younger Dryas-type cooling event interrupts the deglaciation of Termination IX in several records, notably at Montalbano Jonico (Maiorano et al. 2016; Simon et al. 2017; Marino et al. 2020) and Sulmona (Giaccio et al. 2015; Regattieri et al. 2019) in Italy, and the CbCS in Japan (Suganuma et al. 2018; Haneda et al. 2020b), and is dated at around 785-790 ka. A similar cooling event is also recorded in the Lake Baikal record. The pattern suggests a direct Mediterranean response to climate dynamics in the high-latitude North Atlantic. Cooling at Montalbano Jonico is followed by a Bølling-Allerød-like warming phase (Med-BA TIX) and then Younger Dryas-like cooling (Med-YD TIX) prior to final rapid warming early in MIS 19c. This same succession is observed at Sulmona (Regattieri et al. 2019; Fig. 10b) and in Core KC01B in the central Mediterranean Sea (Trotta et al. 2019; Marino et al. 2020; Fig. 4), suggesting a pattern of oscillations that is at least regional in extent. A subsequent brief climatic reversal at ~785 ka is observed in the pollen record of ODP Site 976 and in southern Italy (Capraro et al. 2004, 2005). A combination of obliquity phasing with low precessional forcing amplitude may have been a precondition for the instability seen across the MIS 20-19 transition. The actual trigger likely reflects a short-term disruption of the Atlantic meridional overturning circulation (AMOC) and would have connected to the Pacific through shifts in the Intertropical Convergence Zone (ITCZ) (Haneda et al. 2020b; Fig. 4). The presence of two closely separated Younger Dryas-type sub-events at the CbCS (Haneda et al. 2020b; Fig. 11i) attests to the complexity of processes in operation at the global scale.

MIS 19c
MIS 19c has among the lightest isotopic values and spans full interglacial conditions, the duration of which holds considerable interest in assessing the natural length of our own interglacial. MIS 19c extends from around 791-787.5 ka to the expansion of ice sheets (glacial inception) at 774-777 ka and represents a more stable episode mostly coinciding with full interglacial conditions (Table 2). In spite of weak eccentricity at this time, planktonic and benthic foraminiferal isotope data for central North Atlantic Site U1313 (Fig. 9b) reflect variation concentrated in the half-precession bandwidth (~11 kyr), and also the quarter-precession bandwidth for the benthic foraminiferal isotope data, indicating low-latitude insolation forcing particularly when the amplitude of precession is at its greatest, which is during MIS 19c. The second harmonic of precession occurs when the perihelion of Earth's orbit coincides with the spring or autumn equinox (fig. 10 of Ferretti et al. 2015; Fig. 9a). Dinoflagellate cyst analysis indicates peak interglacial conditions between 790.5 and 784.0 ka, during which time Site U1313 was fully under the influence of the subtropical gyre (Abomriga 2018).
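A simple way to visualize variance concentrated in such a period band is to isolate that band directly. The sketch below (Python/SciPy, hypothetical inputs; the resampling interval and filter order are choices of this illustration rather than of the studies cited) resamples an unevenly spaced record onto an even grid and applies a zero-phase band-pass filter:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_component(age_ka, values, p_short_kyr, p_long_kyr,
                       dt_kyr=0.1, order=4):
    """Isolate one periodicity band (e.g. a ~9-13 kyr half-precession
    band) from an unevenly sampled proxy record: resample onto an even
    grid, then apply a zero-phase Butterworth band-pass filter. Band
    edges are given as periods in kyr; ages must be ascending."""
    age_ka = np.asarray(age_ka, dtype=float)
    values = np.asarray(values, dtype=float)

    grid = np.arange(age_ka.min(), age_ka.max(), dt_kyr)
    even = np.interp(grid, age_ka, values)
    even -= even.mean()

    nyquist = 0.5 / dt_kyr                    # cycles per kyr
    low = (1.0 / p_long_kyr) / nyquist        # long period -> low frequency
    high = (1.0 / p_short_kyr) / nyquist
    b, a = butter(order, [low, high], btype="band")
    return grid, filtfilt(b, a, even)
```

The amplitude envelope of the filtered series then shows where in MIS 19 the half-precession (or quarter-precession) component is strongest.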
At Site U1313, coccolithophore assemblages reflect modern-type warmer North Atlantic Transitional Waters between ~788 and 782 ka, with glacial inception occurring at ~779 ka (Emanuele et al. 2015). Glacial inception appears to have occurred quite early, but with enhanced iceberg discharges from ~776 ka onwards suggesting multiple ice-sheet calving events.

The pollen record of IODP Site U1385 off southern Portugal reveals episodes of forest contraction (Fig. 9c) that represent cooling and drying events not significantly reflected in the proxies for sea-surface freshwater input or temperature, which are relatively stable and warm through MIS 19c. This decoupling of terrestrial and marine climate through MIS 19c implies that the westerlies supplying moisture to southern Portugal through most of MIS 19c were periodically diverted northwards, along with moisture that would have contributed to the growth of high-latitude ice sheets. These were expanding progressively through MIS 19c, aided by decreasing boreal summer insolation. These forest contraction events occur throughout MIS 19 with a 5 kyr periodicity and appear to represent a response to the fourth harmonic of precession, implying, as with central North Atlantic Site U1313, the influence of low-latitude insolation forcing at this location. These decoupling events are not restricted to MIS 19, as they occur at other times during the Quaternary along the European margin (Sánchez-Goñi et al. 2018).

Within the Mediterranean region, ODP Site 976 in the Alboran Sea (Fig. 4) is influenced directly by Atlantic inflow and has yielded detailed pollen, coccolithophore, foraminiferal assemblage, and planktonic foraminiferal δ18O records through MIS 19 (Toti et al. 2020; Fig. 10e). Millennial-scale climate oscillations occur throughout MIS 19c and are registered synchronously in both marine and terrestrial proxies. Brief episodes of forest contraction centred at 781, 780, 777, and 775 ka through MIS 19c attest to short-term climate fluctuations (less humid winter conditions), presumably caused by the periodic northward deflection of westerlies as proposed by Sánchez-Goñi et al. (2016) for Site U1385 off Portugal. Similar fluctuations are observed in the Sulmona succession in Italy, suggesting that the entire western Mediterranean was affected by precession-driven episodes of drought through MIS 19c (Toti et al. 2020, and see below).

In Italy, the marine Montalbano Jonico composite section and the lacustrine Sulmona basin succession have provided exceptionally detailed and well-constrained records through MIS 19. Montalbano Jonico includes foraminiferal isotopes, various marine proxies, and pollen (Nomade et al. 2019; Marino et al. 2020; Fig. 10d). The onset of full interglacial conditions shortly after the start of MIS 19c is marked by a faint sapropel-like feature ("ghost sapropel") assigned tentatively to i-cycle 74 by Maiorano et al. (2016), which has a midpoint age of 784 ka based on a phase lag of 3 kyr relative to maximum insolation in June at 65°N (Lourens 2004). Emeis et al. (2000) assigned a comparable red horizon from the eastern Mediterranean to i-cycle 74, although this might represent an obliquity maximum rather than a precession minimum (Konijnendijk et al. 2014); the two are, in any case, in phase for MIS 19c. This recalls the often overlooked influence of high-latitude obliquity on Mediterranean climate (Konijnendijk et al. 2014) even in the absence of high-latitude ice sheet dynamics (Bosmans et al. 2015).
The ghost sapropel at Montalbano Jonico lasted for about 2.5 kyr and represents an interval of water-column stratification resulting from freshwater inflow related to strengthened North African summer monsoon conditions during the insolation maximum (Marino et al. 2020). The presence at ODP Site 975 in the western Mediterranean (Fig. 4) of an organic-rich layer within MIS 19c suggests this to be a basin-wide phenomenon. Pollen records from the Ideale section at Montalbano Jonico document an interval of full interglacial conditions (climatic optimum) marked by the expansion of temperate forests. These forests were dominated by broadleaved trees and indicate a warm and relatively humid climate. This climatic optimum extends from the sapropel-like layer to the top of MIS 19c at ~774 ka, with a duration of 11.5 ± 3.4 kyr (Fig. 10d). Three mesothermic forest expansions are recognized and are almost concurrent with higher sea-surface temperature phases, as reflected by alkenone SST reconstructions and the abundance of calcareous nannofossils. These phases are labelled I, II, and III in Marino et al. (2020; Fig. 10d). Spectral analysis also shows climate oscillations occurring with a periodicity of about 5.4 kyr throughout MIS 19, although these are dampened in MIS 19c. The increasing benthic foraminiferal isotope values towards the end of MIS 19c suggest a strong global component in spite of the shallow (not more than 180-200 m deep) marine setting. The benthic foraminiferal record registers a maximum depth between ~778.1 and 773.4 ka, which spans the 774-ka timing of maximum global sea level given by Elderfield et al. (2012).

The lacustrine Sulmona basin record (Regattieri et al. 2019; Fig. 10b) is just 295 km to the northwest of Montalbano Jonico and is based on a 40Ar/39Ar timescale fully independent of orbital tuning and with a mean uncertainty of ± 2.6 kyr. A duration of 11 kyr for full interglacial conditions agrees with the 11.5 ± 3.4 kyr duration recorded for the Ideale section at Montalbano Jonico. The stable isotope record at Sulmona has a temporal resolution of ~60 years, allowing MIS 19c to be analysed in exceptional detail. Rapidly increasing precipitation after 788 ka, reflecting deglaciation, reached a peak at ~786 ka but was interrupted by a prominent 0.8-kyr-long drier interval starting at 785 ka (event I of Regattieri et al. 2019), speculated to be analogous with the 8.2 ka event in the Holocene. This is followed by additional events of increasing dryness. Regattieri et al. (2019) linked event I with a reduction in deep-water ventilation at ODP Site 983 (Fig. 8c), implying a connection with a brief interruption of the AMOC. These authors therefore proposed that these drying events in MIS 19c were causally linked to deep hydrography in the northern North Atlantic, although they were not able to tie them specifically to precession forcing.

A detailed pollen record from the CbCS in Japan shows a well-defined rise and fall in deciduous broadleaved trees between 785.0 and 775.1 ka (pollen subzone CbCS-2a in fig. 7 of Suganuma et al. 2018), suggesting 9.9 kyr for the duration of full interglacial conditions at this site. The benthic foraminiferal δ18O record (Fig. 11g) shows a steep rise to lighter values at the beginning of MIS 19c and a more gradual decline towards the end, with otherwise relatively little variation (Haneda et al. 2020b; Suganuma et al. in press). In contrast, G. bulloides (planktonic) foraminiferal δ18O values show considerable fluctuations (Fig.
11g), and a high-resolution study of the dinoflagellate cysts across MIS 19c reveals instability and latitudinal shifts in the Kuroshio Extension system at this site (Balota et al. 2021). This instability presumably reflects the close proximity of the CbCS to the convergence of the warm Kuroshio and cold Oyashio currents that forms an extreme hydrographic gradient in this part of the western North Pacific. A spectral and wavelet analysis of the planktonic foraminiferal δ18O record and an index for water-column stratification reveal 9.6 kyr cycles throughout MIS 19, but expressed particularly strongly through MIS 19c (Haneda et al. 2020b). This is interpreted as the second harmonic of precession. As with North Atlantic Site U1313, it seems related to equatorial insolation forcing and similarly appears to have been greatest during MIS 19c, when precession at the Equator was at its highest amplitude (Fig. 11a).

MIS 19b and a
A critical feature of MIS 19b and 19a, together representing approximately the second half of MIS 19, is the establishment of three or four stadial/interstadial alternations broadly coinciding with a second precessional minimum that is in antiphase with obliquity, resulting in a damped insolation peak (Fig. 7a). This is transposed onto a longer-term trend of increasing global ice volume. Several mechanisms have been proposed for the bistability in MIS 19b-a. Tzedakis et al. (2012b) posited that the termination of full interglacial conditions at the end of MIS 19c would have coincided with the expansion of ice sheets, leading to iceberg discharges into the North Atlantic and hence disruption of the AMOC. This in turn will have led, after some delay, to warming over Antarctica by means of the thermal bipolar seesaw mechanism (Stocker and Johnsen 2003). This happens when heat normally transported northward into the North Atlantic Ocean instead accumulates to the south in the global interior ocean, resulting after a short delay in its advection southwards into the Southern Ocean (Pedro et al. 2018; Fig. 8c). They noted that for the MIS 19b-a interval, three minima in the planktonic foraminiferal δ18O records match temperature peaks in the Antarctic ice-core record, and that minima in the planktonic and benthic foraminiferal δ18O records are phase-shifted in a manner that invokes the thermal seesaw. Moreover, peaks of ice-rafted debris at ODP Site 983 occurring after the end of MIS 19c (Fig. 8c) seem to support the contention that ice-sheet calving had triggered AMOC disruption, leading to the proposal that thermal seesaw bistability caused the pronounced minima in the second half of MIS 19 at ODP Site 983. Further support for this mechanism comes from the sortable silt and benthic δ13C records at Site 983 (Fig. 8c) that point to a slowdown of North Atlantic Deep Water formation, which is an essential component of the AMOC (Kleiven et al. 2011). A characteristic feature of the planktonic foraminiferal δ18O record is the rectangular-shaped waveform that seems to result from abrupt ice-sheet calving followed by the rapid return to strength of the AMOC. This contrasts with the much slower response of Antarctica via AMOC perturbations and the bipolar seesaw, which translates via deep-water advection to a time-lagged and v-shaped corresponding benthic record (Shackleton et al. 2000; Tzedakis et al. 2012b). Ferretti et al. (2015), in their analysis of foraminiferal isotopes (Fig.
9b) and alkenones from IODP Site U1313 in the central North Atlantic, found strong variability at periodicities of ~11 kyr in both surface and deep-water records, and of 5.8 kyr in the benthic oxygen signal, suggesting forcing mediated by the second and fourth harmonics of precession. Because the harmonics of precession are important features of insolation in the tropics, resulting in two insolation peaks for every precessional cycle (Fig. 9a), this implies that low-latitude astronomical forcing and other processes are important drivers of millennial-scale climate variability even at a time when the effects of precession on insolation are subdued.

At IODP Site U1385 off southwestern Portugal, a forest contraction beginning at 775 ka marks the end of the Tajo interglacial (Sánchez-Goñi et al. 2016; Fig. 9c). Glacial inception terminating MIS 19c is marked by a significant marine cooling event at 769 ka on the time scale of Sánchez-Goñi et al. (2016), based on correlation with the independently dated Sulmona basin record in Italy by Regattieri et al. (2019). This event, which also represents a forest contraction, is dated at ~772 ka on the Sulmona time scale. It represents MIS 19b and is labelled as stadial s1 on Fig. 9c. This stadial is followed by a further two. All three stadials represent forest contractions and reflect cooling and drying episodes as with MIS 19c, but are accompanied by heavier planktonic and benthic foraminiferal δ18O values. Moreover, alkenone results indicate both cooling and freshening of the surface waters (Fig. 9c) and may align with three IRD peaks at ODP Site 983 on the Gardar Drift in the subpolar North Atlantic (Fig. 8c). With the expansion of ice sheets, a threshold must have been crossed allowing the triggering of successive iceberg discharges and accompanying freshening and cooling of the surface waters off Portugal (Sánchez-Goñi et al. 2016). Spectral analysis of records from Site U1385 reveals ~5 kyr periodicity throughout MIS 19, suggesting the fourth harmonic of precession and hence the influence of equatorial insolation. The relatively high ice-volume baseline conditions for MIS 19 may have increased the sensitivity of this interglacial to high-frequency climate oscillations. Sánchez-Goñi et al. (2016, their fig. 8a) also called into question some of the accepted similarities between the two marine isotope stages.

At Montalbano Jonico, MIS 19c ends at ~774 ka and is succeeded by four abrupt oscillations (o1-o4) that define two stable states between higher and lower benthic foraminiferal δ18O values (Fig. 10d). The transitions from one state to the other took less than 200 years. These oscillations almost precisely coincide with alkenone SST reconstructions and are supported by simultaneous expansions and contractions of the mesothermic forest (warm/wet versus cool/dry climates), allowing the recognition of three interstadials in Nomade et al. (2019) and Marino et al. (2020), and four here (i1-i4; Fig. 10d). These interstadials and their intervening stadials are superimposed on a longer-term trend of dryer and cooler climates through the latter part of MIS 19. They arise and decline with the same distinctive rapidity observed in the high-resolution records at Sulmona and Piànico-Sèllere, also in Italy. They occur within a single precession cycle and are the amplified part of a 5-6 kyr cyclicity detectable throughout MIS 19. Nomade et al.
(2019) emphasized that while a direct linkage of these oscillations to Northern Hemisphere ice sheet dynamics and North Atlantic IRD events is clear, local factors are also needed to explain the large amplitude of these oscillations and their abrupt nature. The influence of the African monsoon on Mediterranean climate is already well illustrated by the development of sapropels during precession minima (insolation maxima). Nomade et al. (2019) therefore raised the possibility that interstadials might reflect oscillatory northward shifts of the ITCZ (Fig. 4) over the Mediterranean region. This would have brought increased summer moisture and temperature during the African monsoon. Two lines of evidence support this connection. Firstly, the interstadials, along with MIS 19c, are represented by dark grey silty clays, whereas the intervening stadials and MIS 19b are light grey. The darker clays have more negative δ13C values and indicate reduced water-column ventilation, which may be explained by freshwater input through monsoon rains in the same way that sapropels are formed. Secondly, as noted by Nomade et al. (2019), the three most pronounced interstadials closely match the three methane peaks in the EPICA Dome C ice-core record. The West African monsoon has a major effect on global methane production (Kleinen et al. 2020) and provides a potential link between the Mediterranean interstadials and Antarctic ice-core methane during MIS 19a. Nomade et al. (2019) therefore suggested that the wet/warm oscillations found at Montalbano Jonico, but also at Sulmona and Piànico-Sèllere, correspond to worldwide climatic phenomena associated with the tropical monsoon regime modulated by latitudinal shifts in the ITCZ.

In the Sulmona basin record (Giaccio et al. 2013, 2015; Regattieri et al. 2019), at least three interstadials can be recognized within the second half of MIS 19, and these clearly correlate to interstadials i1, i2, and i3 at Montalbano Jonico, with a fourth possibly also recognizable at Sulmona (Fig. 10b). Regattieri et al. (2019) noted the concordance between their reduced-precipitation events VII, IIX [sic], and IX (stadials s1, s2, and s3) and the subpolar record of IRD, and linked these events to disruptions of the AMOC. Hence, whereas Nomade et al. (2019) proposed northward shifts in the ITCZ and increased influence of the monsoon to explain millennial-scale shifts in climate at Montalbano Jonico, Regattieri et al. (2019) invoked the direct influence of AMOC weakening on the Sulmona record, noting numerous examples of high-latitude forcing on the Mediterranean climate during the Quaternary. The influence of the African monsoon on Mediterranean hydrography nonetheless remains uncontestable.

ODP Site 976 in the Alboran Sea (Toti et al. 2020; Fig. 4) is situated close to the Strait of Gibraltar and is strongly influenced by hydrographic exchanges with the North Atlantic. Its pattern of stadial-interstadial alternations compares closely with that recorded in other Mediterranean sites (Fig. 10) and with Site U1385 off southwestern Portugal (Fig. 9c). Cyclic northern shifts of the ITCZ explain the increased winter precipitation needed for the expansion of Mediterranean forests during interstadials (Toti et al. 2020). These northern shifts also facilitated the inflow of warm waters from the Azores Current into the Alboran Sea, as evidenced by increases in the abundances of warm-water coccolithophore and foraminiferal taxa.
Stadials, conversely, are characterized by increases in polar to subpolar foraminifera indicating the inflow of reduced-salinity subpolar waters from the North Atlantic. ODP Site 976 has therefore recorded the combined influences of North Atlantic inflow and atmospheric processes throughout MIS 19.

Sites located beyond the direct influence of the North Atlantic circulation and the AMOC are crucial in determining additional factors that might have driven the development of MIS 19, especially the stadial/interstadial oscillations during MIS 19b-a. The CbCS provides the most detailed MIS 19 record in the Pacific realm (Suganuma et al. 2018, in press; Haneda et al. 2020b; Kameo et al. 2020; Izumi et al. 2021; Balota et al. 2021; Kubota et al. 2021; Fig. 11). Here, climatic oscillations characterizing MIS 19b-a are well defined and reveal harmonics of precession that point to low-latitude forcing. Latitudinal shifts of the ITCZ (Haneda et al. 2020b), along with fluctuations in the Siberian High-Aleutian Low atmospheric system that controls the East Asian winter monsoon and Westerly Jet (Kubota et al. 2021), seem to have been driving factors. Oscillations of the ITCZ in particular would explain the similar stadial-interstadial pattern recorded at the CbCS and the North Atlantic-Mediterranean sites (Haneda et al. 2020b). The CbCS is discussed in detail in the next section.

Further evidence for climatic oscillations in the latter part of MIS 19 is found in the higher-latitude biogenic silica records of Lake Baikal (Prokopenko et al. 2006) and Si/Ti records of Lake El'gygytgyn (Wennrich et al. 2014). These records reflect elevated diatom production during the spring-fall and show pronounced interstadials i1 and i2 (Fig. 12c, d), recalling similar oscillations from the CbCS, North Atlantic Ocean, Mediterranean region, and Antarctic ice-core records. The oscillations at Lake Baikal are paced by the harmonics of precession, the influence of precession reaching Lake Baikal at 51°-53°N (although declining with increasing latitude; Prokopenko et al. 2006). The pronounced oscillations in MIS 19b-a may represent amplifications connected to global ice volume that had been increasing since the latter part of MIS 19c. The Lake El'gygytgyn record of northeastern Siberia (Wennrich et al. 2014; Fig. 12c), while showing climate oscillations similar to those at Chiba during MIS 19b-a, should have been influenced directly by the northern Siberian ice sheet at this time (Vavrus et al. 2018). As with the IRD record of ODP Site 983 just south of Iceland (Fig. 8c), the mechanisms driving these high-latitude oscillations are not well understood but appear phase-linked across the northern hemisphere and suggest high-latitude atmospheric teleconnections, as discussed in Section 4.2 below.

The Chiba composite section and GSSP
The Chiba section, located in the central part of the Boso Peninsula within Chiba Prefecture, contains the GSSP (35°17′ 39.6″ N, 140°08′ 47.6″ E) for the Middle Pleistocene Subseries/Subepoch and Chibanian Stage/Age (Fig. 14). The Chiba section is a segment of the Yoro River section, which itself is one of five outcrops that comprise the CbCS (west to east): the Urajiro, Yanagawa, Yoro River, Yoro-Tabuchi, and Kokusabata sections (Table 1 of Haneda et al. 2020a; Suganuma et al. in press). A borehole, TB-2, near the Yoro-Tabuchi outcrop and 190 m northeast of the Chiba section (Hyodo et al. 2016, 2017), contributes to this composite section.
Collectively, these sections span a distance of 7.4 km along strike and are stratigraphically linked by a series of tephra beds. The GSSP is located at the base of the Byk-E tephra bed (Fig. 14d), a conspicuous regional marker 1 to 3 cm thick in the Chiba section (Nishida et al. 2016) that has been correlated with the YUT5 bed erupted from the Older Ontake volcano in the central part of Honshu, approximately 250 km to the west (Takeshita et al. 2016).

Geological background
The Chiba section exposes the middle of the Kokumoto Formation, which itself is within the middle part of the Kazusa Group. The Kazusa Group is approximately 3000 m in thickness and represents the Lower and Middle Pleistocene infill of a forearc basin, the Kanto Tectonic Basin (Fig. 14b), resulting from the west-northwestward subduction of the Pacific plate beneath the Eurasian plate at the Izu-Bonin trench (Ito and Katsura 1992). Uplift began about one million years ago, resulting in the deeply incised gorges that characterize the Boso Peninsula. Interbedded sandstones and siltstones dominate the lithology, with the sandstones typically being turbiditic. The depositional environment has been variously interpreted as basin plain, lower fan, and base of slope in the lower part of the Kazusa Group (Fig. 14b), shallowing gradually upwards to upper slope and shelf environments at the top. Deep-water massive sandstones in the middle and upper parts of the Kazusa Group represent hyperpycnal and sediment gravity flows originating from shelf-margin deltas or fan deltas. These flows were activated during the falling and lowstand stages of sea-level oscillations controlled primarily by glacioeustasy (Takao et al. 2020).

The Kokumoto Formation is approximately 350 m thick along the Yoro River and represents MIS 21-18 (~860-720 ka). It comprises thick silty beds along with alternating sand and thinner silt beds. Where exposed along the CbCS, it is a muddy unit deposited from suspension under stable and calm bottom-water conditions, with sedimentary structures and trace fossil assemblages together indicating a continental slope setting (Nishida et al. 2016). In particular, the presence of the ichnogenus Zoophycos in the CbCS (Nishida et al. 2016) implies a water depth of more than 800-1000 m based on its modern bathymetric distribution (fig. 4 in Löwemark and Werner 2001), as noted by Izumi et al. (2021). Parallel bedding observed at the CbCS (Fig. 14c) attests to continuous deposition without slumping. This muddy unit, underlain and overlain by deep-water massive sandstones, is a relatively condensed section formed in the uppermost part of a transgressive systems tract (Takao et al. 2020) on the continental slope.

Modern oceanography and climate
The area off the CbCS today experiences the highest oceanographic gradients in the western Pacific owing to the confluence of two major western boundary currents: the warm, nutrient-deficient, north-flowing Kuroshio Current and the cold, south-flowing, nutrient-rich, and less saline Oyashio Current. After converging, the Kuroshio Current flows eastwards as a jet known as the Kuroshio Extension along a frontal zone that separates the North Pacific Subpolar Gyre from the North Pacific Subtropical Gyre. This frontal system consists of the Subarctic Front to the north and the Kuroshio Extension Front to the south, forming an intervening zone called the Kuroshio-Oyashio Interfrontal Zone (KOIZ) (Komatsu and Hiroe 2019; Fig. 15).
This area is therefore highly sensitive to changes in the strength and latitudinal position of the Kuroshio Extension, which itself is driven by North Pacific, East Asian, and global climate oscillations on seasonal, decadal, and orbital time scales. The CbCS is then well positioned to record the evolving behaviour of this major oceanographic boundary system throughout MIS 20-18 (Haneda et al. 2020b; Kameo et al. 2020; Izumi et al. 2021; Balota et al. 2021; Kubota et al. 2021).

[Fig. 14 caption, in part: (b) ... (fig. 1 of Kazaoka et al. 2015). (c) Chiba section showing faint but discernible parallel bedding in this massive siltstone unit; the location of the GSSP is indicated by a red star (from fig. 10 of Suganuma et al. in press). (d) Detail of the Ontake-Byakubi-E (Byk-E) tephra bed in the Chiba section at the position of the GSSP, showing bioturbation. The GSSP is located at the base of the tephra bed (photo by the author). The GSSP has an astronomical age of 774.1 ± 5 ka and is 1.1 m below the directional midpoint of the Matuyama-Brunhes paleomagnetic reversal (Suganuma et al. in press).]

Monthly satellite imagery of the western North Pacific for 2019 shows the position of the Kuroshio Extension Front and pronounced seasonal changes affecting sea-surface temperature and primary productivity (Fig. 16). Although the latitudinal position and flow speed of the Kuroshio Extension change little through the year, the magnitude of the Kuroshio Extension Front strength, as measured by the horizontal temperature gradient, is greatest during the cold season and least during the warm season (Chen 2008; Kida et al. 2015; Yu et al. 2020). Mesoscale perturbations along the Kuroshio Extension Front are also greater in winter (Wei et al. 2017). Seasonal variation in frontal strength is greatest off Japan and hence relevant to the CbCS. The Oyashio Current as observed along the continental slope off Hokkaido flows more weakly in summer and autumn, its total volume transport reaching 20-30 Sv in winter and spring but only 3-4 Sv in summer and autumn (Qiu 2019). As a result of these seasonal variations, the mixed layer in the Kuroshio region is deep (> 100 m) during winter, promoting the supply of nutrients to the surface, whereas in summer it is shallower (< 15 m) and nutrient depleted (Komatsu and Hiroe 2019). These seasonal differences are significant for the interpretation of climate proxies in the CbCS.

The Kuroshio Extension Front varies significantly at interannual to decadal frequencies with respect to strength, latitude, and elongated versus convoluted pathway (Yu et al. 2020), and its annual mean position off Japan has shifted between 33° and 37°N over the period 1993 to 2013. This variability strongly correlates with the North Pacific Oscillation, a north-south seesaw between the Aleutian Low and the Pacific High to its south (Fig. 4). The Aleutian Low intensifies during its positive phase and shifts northwards (Sugimoto and Hanawa 2009; Yu et al. 2020), favouring a strengthened and northward-moving Kuroshio Extension Front. Changes in the latitude of the Aleutian Low, a cold-season phenomenon that dissipates almost entirely in summer, may therefore be linked to the position of the Kuroshio Extension Front during MIS 19. The Siberian High, North Pacific Oscillation, Arctic Oscillation, and North Atlantic Oscillation are major interlinked sea-level pressure systems in the northern hemisphere.
The Arctic Oscillation (Thompson and Wallace 1998) is a large-scale surface-pressure system linked to the stratospheric polar vortex and alternating between a negative and positive mode. In its positive mode, surface pressure is low in the polar region, which causes the encircling mid-latitude jet stream to intensify and confine cold air within this region. In negative mode, the polar surface pressure rises and the resulting zonal winds become weaker and more distorted, allowing cold arctic air masses to flow into the mid-latitudes. A coupling between the Siberian High and Aleutian Low has been proposed by Huang et al. (2016) and Kumar et al. (2019). Climate modelling experiments have also revealed linkages between the EAWM, which reflects the intensity of the Siberian High, and both the North Pacific Oscillation and Arctic Oscillation (Miao et al. 2020). Moreover, the coupling strength between the Arctic Oscillation and the EAWM is enhanced by increased ice cover over the East Siberian Seas (Wie et al. 2019). The Arctic Oscillation is also strongly linked to the North Atlantic Oscillation, comprising the Azores High and the Iceland Low (Hamouda et al. 2021; Fig. 4). The connections between these major systems are not straightforward and may vary with global temperatures (Hamouda et al. 2021). Nonetheless, the Arctic Oscillation represents a plausible high-latitude link between North Atlantic and North Pacific climate processes during MIS 19, especially at times of increased ice cover over Siberia. These high-latitude linkages, along with the modelled (e.g. Moreno-Chamarro et al. 2020) and historical (Chen et al. 2019) relationships between the North Atlantic Oscillation, AMOC, and position of the ITCZ, illustrate the tightly integrated nature of the global climate system.

[Fig. 16 caption, in part: ... (Fig. 15) is evident from sharply enhanced productivity around and north of ~30°N; greatest productivity in the mid-latitudes east and north of Chiba is during the boreal spring (March-May), with a subdued rise in the autumn (September-October). g-l Sea-surface temperature; the KEF moves only slightly northwards during summer-autumn, but the latitudinal gradient across the KEF lessens significantly in summer, with considerable northward diffusion of warm water during July-September. Note mesoscale eddies pinching off along the KEF and advecting heat northwards. NASA Earth Observations (2020).]

Chiba paleoceanography and paleoclimate through MIS 19
The CbCS has sedimentation rates of ~89 cm/kyr across the GSSP (Suganuma et al. in press) and represents one of the most intensely researched intervals available to understand the climatic development of MIS 19. A benthic and planktonic foraminiferal isotope record at high stratigraphic resolution (Haneda et al. 2020b) has a structure remarkably similar to that of North Atlantic and Mediterranean records throughout MIS 19. Similarities include power spectra containing the harmonics of precession, which implies low-latitude forcing throughout, and a gradual trend to higher benthic foraminiferal δ18O values in the latter part of MIS 19c that reflects a progressive increase in global ice volume. At the CbCS, a sharp increase in benthic and especially planktonic foraminiferal δ18O values marks the onset of MIS 19b and is taken to reflect the inception of glaciation. As with some Mediterranean records, MIS 19a is represented by four interstadials, i1-i4, and by four corresponding benthic marine isotope oscillations (MIS 19a-o1 to o4) (Fig. 11g, h). Haneda et al.
(2020b) invoked shifts in the ITCZ to explain the coherence between the Chiba and Mediterranean records, as noted above. The CbCS documents changes in the Kuroshio Extension system through MIS 19, including rapid north-south shifts associated with stadial/interstadial alternations during MIS 19b-a (Haneda et al. 2020b). Foraminiferal isotopes and other proxies reflect both warming and increased surface and near-surface water-column stratification during the interstadials (Fig. 11d-f). Because these proxies represent winter oceanographic conditions, they imply weaker mixing of the water column during winter. The East Asian Winter Monsoon (Fig. 4) largely controls winter wind strength today and was therefore likely weaker during the warmer intervals of MIS 19a. The intensity of the East Asian Winter Monsoon system is driven by the thermal contrast between the Siberian High (cold) and Aleutian Low (warm) pressure systems, both of which develop primarily during winter. A weak EAWM occurs when winter conditions in Siberia are relatively warm (low pressure) and those over the northwestern Pacific Ocean are relatively cold. A weak EAWM during the latter part of MIS 19 is supported by evidence from the Chinese loess record and has been attributed to a weak minimum in summer insolation at 65°N resulting in reduced ice accumulation over Siberia (Hao et al. 2012; Fig. 12e). Suganuma et al. (2018) proposed that under ice-free conditions, enhanced winter insolation at 50°N might also have contributed to a weak Siberian High, and consequently a weak EAWM, in the low-eccentricity configuration of MIS 19. A weak EAWM will then have caused the position of the westerly jet to advance northwards. The latitudinal position of the Kuroshio Extension system is strongly influenced by that of the westerly jet and will have shifted northwards accordingly. During interstadials, therefore, the Kuroshio Extension system would have migrated northwards, bringing warm, stratified water masses to the CbCS (fig. 15 in Suganuma et al. 2018; Fig. 17). While this mechanism explains the continued presence of warm, stratified waters at the CbCS late in MIS 19, it does not address the numerous abrupt stadial-interstadial alternations that characterize MIS 19b-a at this site or the similarity with North Atlantic-Mediterranean records, and it does not account for potential variation in the intensity of the Aleutian Low.

The loess records of China show a continually weak EAWM and EASM through the latter half of MIS 19 (Hao et al. 2012; Peng et al. 2020), and while this provides evidence for the persistence of unusually warm conditions late in MIS 19, the millennial-scale stadial-interstadial alternations that characterize MIS 19b-a are not clearly expressed. A combination of pedogenic processes and relatively low sedimentation rates likely accounts for this absence of abrupt changes. This itself introduces uncertainty into age models for the latter part of MIS 19 and hinders insights into the mechanisms driving stadial-interstadial alternations. The S7 palaeosol, now widely accepted as equating with MIS 19, has since been studied in detail by Zhang et al. (2020), who examined successions on the Chinese Loess Plateau for their mollusc content. They determined the earlier part of MIS 19 to be slightly warmer than at present, and the later part similar to present but with stronger climatic variability.
In the east, warm conditions continued ~15 kyr into MIS 18, suggesting a relatively strong summer monsoon through MIS 19 and well into MIS 18, contrary to Hao et al. (2012, suppl. fig. 12), who showed progressive weakening through MIS 19. Again, however, clear coherent oscillations in the latter part of MIS 19 are not apparent. The CbCS planktonic foraminiferal isotope data through MIS 19b-a (Haneda et al. 2020b) show four pronounced stadials (MIS 19-s1 to s4) and interstadials (MIS 19-i1 to i4) (Fig. 11). Less than 1000 years after the beginning of MIS 19b, the onset of stadial s1 is identified by a rapid and significant increase in planktonic δ18O values accompanied by increases in dinoflagellate cyst concentrations, and especially a sharp (within ~300 years) rise in the abundance of Protoceratium reticulatum (Fig. 11). The abundance of Protoceratium reticulatum appears to indicate the influence of cooler, mixed, nutrient-rich waters of the Kuroshio-Oyashio Interfrontal Zone resulting from a southward shift of the Kuroshio Extension (Balota et al. 2021; Fig. 15). Future ultra-high-resolution multiproxy studies are needed to fully understand these rapid climate changes and their respective planktonic and benthic expressions through MIS 19b-a. In any case, these stadials in the Chiba record appear to correspond well with cool/dry stadials in the Mediterranean and with intervals in the North Atlantic record characterized by IRD, meltwater, cooling, and reduced ventilation, all indicative of AMOC disruptions. This suggests a teleconnection between the North Atlantic and the western Pacific at this time. Haneda et al. (2020b) used very high-resolution benthic and planktonic foraminiferal δ18O analyses of the CbCS throughout MIS 19 to estimate water-column stratification by comparing δ18O values between benthic foraminifera, the deep-dwelling (~300-400 m depth in the subtropical North Pacific) Globorotalia inflata, and the surface-dwelling Globigerina bulloides. While all records show millennial-scale alternations during MIS 19b-a, the amplitude of the G. bulloides record reasonably suggests that the source of these alternations was from surface-water processes (Haneda et al. 2020b). However, the onset of benthic isotope oscillation o2 appears to lead slightly that of interstadial i2 (Fig. 11g). Detailed multiproxy studies at ultra-high stratigraphic resolution are needed to resolve these small discrepancies. Variations in δ18O values have been considered primarily to reflect temperature, with ΔT (δ18O of benthic foraminifera minus δ18O of G. bulloides; Fig. 11e) representing the difference between bottom and surface water temperature (Haneda et al. 2020b, 2020c). This measure (cf. fig. 2d of Kubota et al. 2021; Fig. 11f) and the relative abundance of the calcareous nannofossil Florisphaera profunda (Kameo et al. 2020; Fig. 11d) provide additional measures of surface and near-surface water-column stratification. Haneda et al. (2020b) showed that increasing stratification occurred during interstadials and vice versa for stadials, supporting the notion that stadial-interstadial oscillations represent latitudinal displacements of the Kuroshio Extension Front at the CbCS. Total organic carbon (TOC) values generally mirror the stadial-interstadial alternations of the latter part of MIS 19a, with increased values coinciding with interstadials i2, i3, and i4 (Fig. 11b).
Izumi et al. (2021) tentatively proposed that peaks at interstadials i2 and i3 represent enhanced organic matter preservation owing to water-column stratification rather than increased surface water productivity. This follows the reasoning that surface water productivity should be lower during interstadials, as they mark the northward shift of the Kuroshio Current, which is relatively poor in nutrients. Enhanced organic matter preservation is supported by geochemical data and evidence from trace fossils that indicate reduced oxygen levels in the bottom waters. However, some of the highest TOC values occur within stadial s1, which was influenced by the nutrient-rich Oyashio Current, as evidenced by the dinoflagellate cyst record (Balota et al. 2021; Fig. 11c). Izumi et al. (2021) acknowledged that more research is needed to understand the important relationship between surface water productivity, organic matter preservation, and stadial-interstadial alternations. Latitudinal changes in the ITCZ best explain the millennial-scale oscillations at the CbCS and their teleconnection to the North Atlantic and Mediterranean records (Haneda et al. 2020b; Fig. 4). Disruptions to AMOC by meltwater release into the North Atlantic, and consequent triggering of the thermal bipolar seesaw, create a strong thermal contrast between the northern and southern hemispheres. This causes the ITCZ to move southwards and the trade winds to intensify in association with a deepened Aleutian Low. The mid-latitude prevailing westerlies move in parallel, and the consequent atmospheric reorganization causes the Kuroshio Extension also to shift southwards during stadials. Using a combination of Antarctic ice-core records and climate modelling over the past 720 kyr, Kawamura et al. (2017) showed that Antarctic warming events, linked to activation of the thermal bipolar seesaw, are most frequent when the climate is intermediate between glacial and interglacial states, as occurs at the beginning of MIS 19 (the Younger Dryas-type interruption of Termination IX) and the stadial-interstadial states of MIS 19b-a. These instabilities recorded at the CbCS and elsewhere therefore appear to reflect an intrinsic oscillation within the Earth system amplified during an intermediate glacial-interglacial state. Modelling results suggest that reduced CO2 concentrations along with extended Northern Hemisphere ice sheets, as must have been developing after the glacial inception near the end of MIS 19c (see above, also Vavrus et al. 2018), are prerequisites for such instability (Kawamura et al. 2017). To understand the rapid warming from stadial to interstadial states during MIS 19a, it may be relevant to examine the similarly rapid transition from the Heinrich 1 stadial to the Bølling-Allerød interstadial during the last deglaciation. Modelling studies show that gradually changing conditions can lead to an abrupt recovery of the AMOC and consequent rapid warming that marks the onset of the Bølling-Allerød interstadial (Obase and Abe-Ouchi 2019). Time series analyses of planktonic foraminiferal δ18O records (Fig. 11g), including records of water-column stratification (ΔT; Fig. 11e), for the CbCS have revealed periodicities of a half-precession cycle throughout MIS 19 but also higher frequencies, including those of the fourth harmonic of precession for MIS 19b-a (Haneda et al. 2020b). These harmonics of precession in the time series are derived from equatorial insolation (Fig. 11a).
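The kind of spectral check underlying these statements can be illustrated with a minimal sketch. This is not the authors' analysis code; it assumes a δ18O record supplied as plain arrays (age_ka in kyr, d18o in per mil) and uses a simple periodogram after interpolation to a uniform grid, where more sophisticated multitaper or wavelet methods may have been used in the original studies.

```python
# Minimal sketch: looking for precession harmonics in an isotope series.
# Inputs are hypothetical arrays, not the actual CbCS data.
import numpy as np
from scipy.signal import periodogram

def spectral_periods(age_ka, d18o, dt_kyr=0.2):
    """Resample an irregular record onto a uniform grid and return
    spectral power as a function of period (in kyr)."""
    grid = np.arange(age_ka.min(), age_ka.max(), dt_kyr)
    series = np.interp(grid, age_ka, d18o)
    series = series - series.mean()              # remove the mean before the FFT
    freq, power = periodogram(series, fs=1.0 / dt_kyr)
    periods = 1.0 / freq[1:]                     # skip the zero-frequency bin
    return periods, power[1:]

# Power concentrated near ~11.5 kyr (half-precession) and ~5.75 kyr
# (quarter-precession) would be consistent with the harmonics reported
# for MIS 19 and MIS 19b-a, respectively.
```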
The influence of precession is strong at Lake Baikal at 51°-53°N, although it declines with increasing latitude (Prokopenko et al. 2006). The half-precession cycle at the CbCS is strongest within MIS 19c, presumably owing to the absence of major ice melting events and AMOC disruption in the North Atlantic at this time. These high-latitude North Atlantic events then account for the stadial-interstadial alternations of MIS 19b-a via latitudinal displacements of the ITCZ, although paced by low-latitude insolation variations (Haneda et al. 2020b). Despite the significant advances reviewed above, several aspects of the CbCS paleoenvironmental record remain incompletely known. Most proxies available at high stratigraphic resolution, including the foraminiferal isotopes, reflect the cooler seasons (Haneda et al. 2020b). Summer sea-surface temperatures and hence seasonal contrasts through MIS 19 are poorly understood but were presumably considerable then as they are now (Fig. 16). Over the western Pacific at present, a significant divergence exists between winter and summer positions of the ITCZ owing to summer heating across Asia. For example, the ITCZ occurs over northern India in summer but shifts to just north of Australia in winter (Fig. 4), along with its associated trade winds. However, northern hemisphere ice sheets will have depressed this divergence and reduced the northward shift of the ITCZ in summer (Chiang and Friedman 2012; Schneider et al. 2014). In the CbCS, sea-surface temperature and near-surface stratification proxies are based on calcareous microfossils that accumulate calcite during winter (Haneda et al. 2020b). These will have differed substantially from proxies representing the summer months, although both will have been affected by the bipolar thermal seesaw. More research is needed to characterize summer land and ocean temperatures in the CbCS, especially during MIS 19b-a. The dinoflagellate cyst record may represent in part a late spring-early autumn signal and shows significant fluctuations through MIS 19c and across stadial s1 into interstadial i1 within MIS 19a (zones Df7 and Df8 in Balota et al. 2021; Fig. 11c). These indicate an abrupt southward shift of the Kuroshio Extension during glacial inception (stadial s1) and a similarly abrupt northward shift within interstadial i1 (biozone Df7 of Balota et al. 2021). For the latter part of MIS 19, it would then appear that the dynamics of the westerly jet and ITCZ are also reflected by warm-season as well as cold-season proxies. While this is to be expected, the remaining alternations of MIS 19a have yet to be analysed in detail for indicators of warm-season sea-surface conditions, and therefore variations in seasonality are not known. A high-resolution pollen record at the CbCS similarly extends only to the MIS 19b-a boundary, although the terrestrial vegetation will have been sensitive to changes in spring-summer warmth and precipitation as well as winter frost during the stadial-interstadial oscillations of MIS 19a. An extension of this high-resolution pollen record through MIS 19a should yield information about variations in strength of the EASM across stadial-interstadial alternations. Indeed, many proxy records lack the ultra-high stratigraphic resolution needed to fully characterize changes through MIS 19b-a, although abundant opportunities are afforded by the high sedimentation rates (>89 cm/kyr; Suganuma et al. 2018) in this part of the succession.
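To make the last point concrete, a back-of-envelope calculation shows what a sedimentation rate of ~89 cm/kyr implies for achievable temporal resolution; the rate is from the text, while the helper itself is purely illustrative.

```python
# Temporal resolution implied by the CbCS sedimentation rate (~89 cm/kyr).
SED_RATE_CM_PER_KYR = 89.0

def years_per_cm(rate_cm_per_kyr=SED_RATE_CM_PER_KYR):
    return 1000.0 / rate_cm_per_kyr

print(f"1 cm of sediment ~ {years_per_cm():.1f} years")               # ~11.2 years
print(f"a 5 kyr cycle spans ~ {5 * SED_RATE_CM_PER_KYR / 100:.2f} m")  # ~4.45 m
```

At centimetre-scale sampling, then, the succession can in principle resolve roughly decadal variability, which is why it is well suited to characterizing the abrupt stadial-interstadial transitions of MIS 19b-a.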
Further evidence for climatic oscillations in the latter part of MIS 19 during summer is found in the higher-latitude biogenic silica records of Lake Baikal, which was strongly influenced by precessional variation (Prokopenko et al. 2006), and the Si/Ti records of Lake El'gygytgyn (Wennrich et al. 2014), which seem to represent direct high-latitude teleconnections with the northern North Atlantic. These records reflect warm-season productivity and show pronounced interstadials i1 and i2 (Fig. 12c, d).

Summary and conclusions

The GSSP defining the base of the Chibanian Stage and Middle Pleistocene Subseries at the Chiba section, Japan, was ratified on January 17, 2020, by the Executive Committee of the IUGS (Suganuma et al. in press). The GSSP occurs immediately below the top of MIS 19c and has an astronomical age of 774.1 ± 5.0 ka. The M-B reversal, with a directional midpoint just 1.1 m above the GSSP, an astronomical age of 772.9 ± 5.4 ka, and a duration of up to ~2.0 kyr, serves as the primary guide to the boundary. The two other candidate sections, the Ideale section at Montalbano Jonico and the Valle di Manche, both in Italy, were deemed to have equivocal (Valle di Manche) or imprecise (Montalbano Jonico) reversal records (Head 2019). This finalizes a process initiated by INQUA in 1973 to define the base of the Middle Pleistocene, a term in use since at least 1869. Although the Chibanian Stage is presently concurrent with the Middle Pleistocene Subseries, the introduction of a second stage for the Middle Pleistocene Subseries, with its base near the onset of the mid-Brunhes event (MIS 12-11 transition, ~424 ka; Fig. 2), should be considered. The M-B reversal facilitates global recognition in marine, terrestrial, and ice-core records, and places the GSSP appropriately in the middle of the Early-Middle Pleistocene Transition, an interval of profound and lasting climatic change (Head and Gibbard 2015b). MIS 19 was first labelled by Shackleton and Opdyke (1973; Fig. 6), who confirmed an earlier association of this interglacial stage with the M-B reversal (Hays et al. 1969; Fig. 5). Bassinot et al. (1994) were the first to subdivide MIS 19 formally, labelling events MIS 19.1, 19.2, and 19.3 (Fig. 8b). Tzedakis et al. (2012a, 2012b) were apparently the first to subdivide MIS 19 into lettered substages, MIS 19a, 19b, and 19c (Fig. 8c). Most subsequent authors have followed this three-lettered scheme, but the boundary between MIS 19b and 19a has been applied inconsistently. The approach here is to restrict MIS 19b to the first interval of high foraminiferal isotopic values following MIS 19c (Fig. 7), following Nomade et al. (2019). The fine-scale subdivision of MIS 19 has been treated in various ways. However, climatostratigraphic units based potentially on multiple paleoenvironmental criteria, for which stadial-interstadial terminology is appropriate, are conceptually different from the benthic isotope signal, which includes a global ice-volume component. For the latter part of MIS 19, it is proposed that stadial-interstadial labelling MIS 19-s1 to -s4 and MIS 19-i1 to -i4 be used independently of substages, with the three or four peaks in the benthic isotope record of MIS 19a separately labelled as benthic isotope oscillations MIS 19a-o1 to -o4 (Fig. 7).
MIS 19 is characterized by a reduced-amplitude 400 kyr eccentricity cycle similar to that of our present interglacial (Fig. 2), but obliquity increased less rapidly and to a lower amplitude, and peak temperatures seem to have been generally lower than for the pre-industrial Holocene. CO2 levels seem to have been similar until about 8000 years ago, when they began to rise in the Middle Holocene (Fig. 13). A Younger Dryas-like oscillation interrupts the deglaciation of Termination IX at several sites, possibly triggered by a brief AMOC disruption under these unusual orbital conditions (Haneda et al. 2020b; Marino et al. 2020). The onset of MIS 19 was driven by a steep rise in June insolation at 65°N, with maximum obliquity in phase with minimum precession. MIS 19c extends from its onset at 790-785 ka to the expansion of ice sheets (glacial inception) at 774-777 ka, the timing depending on the time scales used, and spans full interglacial conditions, which lasted for around 10 to 12.5 kyr. Records (Table 2) confirm the brevity of full interglacial conditions during MIS 19 compared with most later interglacials, including MIS 11, which has a similar orbital configuration; this brevity results from an unusually early glacial inception relative to the obliquity cycle (Tzedakis et al. 2012a, their fig. 6). Any comparisons with later interglacials should consider not just orbital configuration but also the causes and effects of increased quasi-100 kyr periodicity during and after the Early-Middle Pleistocene transition (Head and Gibbard 2015b). During MIS 19c, both Pacific and North Atlantic-Mediterranean planktonic records show variability in the half- or quarter-precession bandwidth, indicating the influence of equatorial insolation variation at low and mid-latitudes at a time when AMOC disruption was minimal. The interruption of westerlies carrying moisture to the western Mediterranean may have led to the northward transport of moisture, feeding ice sheets which were progressively expanding through much of MIS 19c as boreal summer insolation decreased (Sánchez-Goñi et al. 2016). The inception of glaciation (774-777 ka) at the end of MIS 19c presents a cluster of climatostratigraphic signals that can assist in identifying the Early-Middle Pleistocene boundary (774.1 ka) globally. MIS 19b-a corresponds to a second precessional minimum, in antiphase with obliquity, that results in a suppressed insolation peak (Fig. 13) transposed onto a longer-term trend of increasing global ice volume. Global ice accumulation during the latter part of MIS 19c appears to have crossed a climate threshold, with MIS 19b marking the first of three or four AMOC disruptions triggered by ice-calving and freshwater release into the northern North Atlantic, and indicated by ice-rafted debris and sortable silt records at ODP Site 983 and elsewhere. These AMOC disruptions led to activation of the thermal bipolar seesaw, explaining a slightly lagged phase relationship with three AIMs (warming events) in the Antarctic ice-core record (Tzedakis et al. 2012b; Fig. 8c). As a result of these oscillations, three or four interstadials with characteristically abrupt transitions are represented during MIS 19b-a, both in the Asian-Pacific and North Atlantic-Mediterranean realms. The coherence of these oscillations on a global scale is best explained by shifts in the ITCZ (Fig. 4) as a result of the thermal contrast between northern and southern hemispheres created by the bipolar seesaw (Chiang and Friedman 2012; Schneider et al. 2014; Nomade et al. 2019; Haneda et al. 2020b).
For example, a warming of the southern hemisphere would displace the ITCZ southwards along with the mid-latitude westerlies. This in turn would have shifted the Subarctic Front southwards, initiating stadial conditions in the northern hemisphere. Stadial-interstadial oscillations occur at or close to the harmonics of precession, and their transition from one state to the other may take less than 200 years. These AMOC-triggered oscillations may therefore have been paced by equatorial insolation forcing, even at a time when the effects of precession were subdued, and in some regions amplified by the monsoon system (Haneda et al. 2020b). Although MIS 19c represents a close orbital analogue to the present interglacial, the dominant ~5 kyr cyclicity in global records of MIS 19 contrasts with the dominant 2.5 kyr cyclicity of MIS 1, calling into question the assumed close similarity of these two interglacial stages (Sánchez-Goñi et al. 2016). The precise mechanism driving AMOC events in the northern North Atlantic, particularly during MIS 19b-a, and the teleconnections linking these events with stadial-interstadial oscillations in the eastern Siberian El'gygytgyn record (Fig. 12c), are not known. The Siberian High, North Pacific Oscillation, Arctic Oscillation, and North Atlantic Oscillation, representing the northern hemisphere's major atmospheric pressure systems, appear to be interlinked over historical time scales and may have provided high-latitude teleconnections during MIS 19. Additional high-resolution studies are needed to explore these potential connections during MIS 19. Detailed marine records of MIS 19 are conspicuously missing from the southern hemisphere (Fig. 4), yet such information is needed to test hypotheses involving interhemispheric processes.
Evaluation of microalbuminuria as a prognostic indicator after a TIA or minor stroke in an outpatient setting: the prognostic role of microalbuminuria in TIA evolution (ProMOTE) study

Objective Transient ischaemic attacks (TIA) and minor strokes are important risk factors for further vascular events. We explored the role of albumin creatinine ratio (ACR) in improving risk prediction after a first event. Setting Rapid access stroke clinics in the UK. Participants 2202 patients attending with TIA or minor stroke diagnosed by the attending stroke physician, able to provide a urine sample to evaluate ACR using a near-patient testing device. Primary and secondary outcomes Primary outcome was major adverse cardiac events (MACE: recurrent stroke, myocardial infarction or cardiovascular death) at 90 days. The key secondary outcome was to determine whether urinary ACR could contribute to a risk prediction tool for use in a clinic setting. Results 151 MACE occurred in 144 participants within 90 days. Participants with MACE had higher ACR than those without. A composite score awarding a point each for age >80 years, previous stroke/TIA and presence of microalbuminuria identified those at low risk and high risk. 90% of patients were at low risk (scoring 0 or 1). Their 90-day risk of MACE was 5.7%. Of the remaining 'high-risk' population (scoring 2 or 3), 12.4% experienced MACE over 90 days (p<0.001 compared with the low-risk population). The need for acute admission in the first 7 days was twofold elevated in the high-risk group compared with the low-risk group (3.23% vs 1.43%; p=0.05). These findings were validated in an independent historic sample. Conclusion A risk score comprising age, previous stroke/TIA and microalbuminuria predicts future MACE while identifying those at low risk of a recurrent event. This tool shows promise in the risk stratification of patients to avoid the admission of low-risk patients.

INTRODUCTION
Stroke is the second most common cause of death and a leading cause of disability worldwide.1 Despite, or possibly because of, recent trends in reducing stroke mortality, the health and social disability burden of stroke is increasing.2 After advancing age, transient ischaemic attacks (TIA) and minor strokes are the most important risk factors for recurrent stroke and predict long-term mortality.3 4 About 23% of patients presenting with stroke have a history of TIA in the 3 months prior to the index event.5 This is a key population to target for secondary prevention, but these patients represent <10% of all those who present to rapid access TIA clinics.6 7 Notably, half of all completed strokes occur within the first week after TIA or minor stroke.5 Accurate identification of those patients presenting to TIA clinics with TIA or minor stroke who are most at risk for future events is important to (1) intensify treatments such as giving dual antiplatelet therapy,8 9 (2) guide the need for urgent admission to facilitate the detection and urgent surgical correction of severe internal carotid artery (ICA) stenosis10 and (3) reassure patients with lowest risk.

Strengths and limitations of this study
► The pragmatic design of this study provides good generalisability to clinical practice.
► The predictive role of urinary albumin creatinine ratio (ACR) in combination with very basic demographics would be able to reassure 90% of the population that there was only approximately a 1 in 20 risk of an event recurring, whereas the higher-risk population had a 1 in 8 chance of a major adverse cardiac events (MACE) outcome.
► The study is limited to data that were available to the physicians at the time of the stroke appointment. This limits the ability to determine a mechanistic association between ACR and MACE outcome.
► Further work is required to determine whether therapies that reduce ACR can modify the risk of subsequent MACE.

Several risk stratification tools are already used in such clinics, such as ABCD2 (awarding points for Age, presenting Blood pressure, Clinical features of unilateral weakness or aphasia, Duration of symptoms and Diabetes),11 12 the California score6 and imaging-based scoring systems.10 13 These tools lack optimal sensitivity and specificity.12 14 Indeed, some studies suggest patients with a 'low-risk' ABCD2 score (<4/7) have a similar 90-day stroke risk to patients deemed high risk with an ABCD2 score >4/7,15 while missing up to 40% of patients with severe ICA stenosis.16 As such, recent consensus guidelines have advised against their use.17-19 Increased urinary albumin excretion rate (AER) has been shown to predict incident stroke and heart failure in people with diabetes. In the general population, AER predicts cardiovascular disease and mortality post stroke independent of conventional cardiovascular risk factors such as hypertension, diabetes and smoking.20-23 Urinary albumin creatinine ratio (ACR) is a well-recognised proxy for urinary AER, and can now be assessed using simple and inexpensive point-of-care equipment. In an earlier pilot study of 142 patients with minor stroke/TIA,24 we identified the potential role of urinary ACR in identifying those at highest risk of a recurrent event. Although statistically significant, this study was not large enough to explore potential confounding. We therefore performed a definitive study to assess whether urinary ACR improved the risk stratification of patients presenting with TIA or minor stroke to UK acute stroke clinics.

METHODS
Patients diagnosed with suspected minor stroke/TIA by a consultant stroke physician, attending a rapid access clinic in one of 12 UK hospitals, were recruited at the end of their clinic consultation. As these clinics are rapid access, often within the first 24 hours of symptoms, it was impossible to accurately differentiate between TIA (with symptoms <24 hours) and minor stroke in all cases. Only individuals with events that had occurred within the previous month were included in the study. Routine clinical care, including urgent revascularisation in those with severe ICA stenosis (>50% diameter stenosis by the North American Symptomatic Carotid Endarterectomy Trial method),25 was initiated prior to enrollment. If a cardioembolic source was identified, anticoagulants were initiated according to local protocols. After written, informed consent was obtained in the clinic, demographics, including age, sex, height, weight, medical history and ABCD2 score, were recorded. Times from onset of symptoms to assessment and enrollment in the study were documented. A clean specimen of urine was collected from participants in the clinic and tested with a Unistix dipstick and, if there was no indication of urinary tract infection (presence of any two of leucocytes, nitrites and/or protein), ACR was measured using a point-of-care analyser (The Afinion AS100 Analyzer; Axis Shield, Dundee, UK).
This system uses an immunometric membrane flow-through principle for albumin measurement and an enzymatic colorimetric test for creatinine quantification26 and reports an ACR in approximately 5 min within the range 0.1-140 mg/mmol with a coefficient of variation of 4.6%-6%.26 Participants with values of <0.1 mg/mmol were recorded as having the lowest recordable value of ACR (0.1 mg/mmol). Participants were followed up by telephone on days 7, 30 and 90, and any history of further vascular events, hospitalisation or death was obtained. If contact with participants was not possible initially, the presence or absence of further events was verified during additional attempts at contact or during future follow-up. If no further contact was possible with the participant (eg, in the case of significant stroke or death), medical history was collected from the next of kin and verified from the primary care physician and hospital records. All reported clinical events were adjudicated according to standardised diagnostic criteria by a data monitoring committee including two independent stroke physicians, blinded to the ACR, by examination of clinical records if patients attended hospital, or their primary care physician, or by verifying their clinical symptoms from the research records if patients did not seek further medical advice. In the event of disagreement, a third independent stroke physician acted as adjudicator. The primary outcome was to determine the utility of microalbuminuria in predicting the time to first major adverse cardiovascular event (MACE: recurrent stroke, myocardial infarction or death) within 90 days. Secondary outcomes were to explore the predictive role of microalbuminuria on time to recurrent stroke, total mortality (even if these were not the first MACE), the presence of first MACE within 7 days, and the need for hospitalisation within the first 7 days. All events were adjudicated blind to urinary ACR by an independent data monitoring committee.

Ethics statement
The study protocol was approved by the national research ethics committee (approval 14/EE/1106). All participants provided written informed consent prior to enrolment and confirmed their willingness to continue participation at each telephone consultation.

Patient and public involvement
Patients were involved in the design and conduct of this research. During the feasibility stage, priority of the research question and methods of recruitment were informed by discussions with patients through a focus group session and three structured interviews. During the trial, two patients joined the independent trial steering committee. Once the trial has been published, participants will be informed of the results through a dedicated newsletter suitable for a non-specialist audience, led by one of the patients on the steering committee.

Statistical analysis
Data were treated as continuous variables wherever possible to maximise power. All normally distributed data are presented as mean±SD. Skewed data were appropriately transformed and presented as geometric mean scores (with 95% CIs). Statistical significance was calculated using the χ2 test for categorical variables and the Student's t-test for continuous variables. Time to event was measured from the clinic consultation rather than the index event, in keeping with the pragmatic nature of the study. Independence of ACR (as a risk predictor) from diabetes and other measures of the ABCD2 score was assessed using logistic regression.
Microalbuminuria was defined using the currently accepted clinical thresholds for microalbuminuria used in people with diabetes: 3.5 mg/mmol for women and 2.5 mg/mmol for men. Identification of independent predictors of recurrent MACE was performed by backwards stepwise logistic regression, commencing with all information available for the complete dataset, representing all information that would routinely be available to the clinician in the clinic setting. Where multiple related measures were available (eg, systolic, diastolic and mean arterial blood pressure), the variable that accounted for the greatest degree of variance in a univariate analysis was included. Fractional polynomial regression modelling was used to identify sex-specific thresholds of ACR to define 'microalbuminuria' in the context of stroke risk stratification. Prediction of events was assessed using a Cox proportional hazards model with time to first MACE as the primary outcome. In keeping with the recommendations of Cupples et al27 and Rothman,28 the measured significance of the variables of interest is reported without adjustment for multiple testing. Statistical significance was considered at p<0.05. Statistical analysis was performed using Stata SE V14.2 (Mac version: StataCorp LLC, College Station, TX, USA).

RESULTS
A total of 2408 patients with a diagnosis of definite or probable minor stroke/TIA were recruited; 149 participants were subsequently excluded after the diagnosis was revised to that of a stroke mimic, 8 withdrew consent and a further 49 (2%) were excluded due to intercurrent urinary tract infection rendering the assessment of ACR invalid. No patient was lost to follow-up. Baseline characteristics are presented in table 1. Of the 2202 included in the final analysis, most were male (64.9%). All patients were commenced on secondary prevention with appropriate antiplatelet or anticoagulant therapy, and an appropriately dosed statin, in accordance with contemporaneous national guidelines. Over 90 days, 151 primary outcome events (MACE) occurred in 144 participants (6.7% of patients), including 8 cardiovascular deaths. All MACE were atherosclerotic, with no haemorrhagic events occurring in the 3 months. There were also eight non-cardiovascular deaths. Within 7 days, there were 38 MACE in 36 participants. Those with MACE were more likely to have had previous stroke/TIA; with this exception, however, there were no significant differences in routinely collected clinic data (table 1). There was no difference in mean ABCD2 score or the proportion of participants with a 'high-risk' score (4/7 or more) between those who did and did not have events by 90 days. When looking exclusively at the 1374 participants identified with a 'high-risk' ABCD2 score (63% of the study sample; 93% male), a high score did not distinguish those at high risk of MACE compared with those with an ABCD2 <4 (HR 1.15; figure 1).

Evaluation of the prognostic role of microalbuminuria alone
Fractional polynomial regression modelling identified sex-specific thresholds of risk for ACR aligned with the currently accepted thresholds for microalbuminuria used in people with diabetes of 2.5 mg/mmol for men and 3.5 mg/mmol for women. These thresholds identified 562 participants (25.4% of the study sample; 75% male) with microalbuminuria.
These participants were more likely than those with a low ACR to experience a primary outcome event (HR 1.66 (95% CI 1.19 to 2.34); p=0.003) and had a greater than fivefold increase in 90-day mortality (HR 5.44 (1.86-15.90); p=0.0003). Alone, however, microalbuminuria did not have sufficient positive or negative predictive value to justify its clinical utility (positive predictive value (PPV): 9.1%, specificity: 75.4%).

Generating a composite risk score for 90-day risk stratification
In a stepwise multivariate model, the only independent predictors of MACE were elevated ACR, age >80 years and history of stroke or TIA. Similar variance within the model was explained by microalbuminuria (ie, an ACR >2.5 mg/mmol and >3.5 mg/mmol, respectively, in men and women) as by being older than 80 years of age and history of stroke/TIA, so they were afforded equal weighting. This generated a 4-point scale between 0 and 3 (termed the 'APA' score, representing age, prior stroke or TIA and elevated ACR), with the lowest-risk participants having a score of 0 (online supplemental table 1). There was a sequential increase in risk with increasing score, such that those with a score of 0 had a 90-day risk of 4.86%, rising to a 30% risk for those with the highest score (online supplemental table 2). The population was then divided into a low-risk group scoring 0 or 1 and a high-risk group scoring 2 or 3. A total of 1985 (90.1%) of participants were identified as low risk compared with 217 (9.9%) in the high-risk group. Comparing the PPV, sensitivity and specificity of the APA score with the previously available parameters of age and previous stroke demonstrated a clinically meaningful improvement over each measure on its own (online supplemental table 3A). Compared with using previous stroke alone, the APA score had a similar PPV (12.4% for the APA score vs 11.9% for previous stroke) and specificity (91.2% vs 96.4%, respectively), but superior sensitivity (19.3% vs 7.1%, respectively). Sensitivity was improved using a composite of previous stroke and being above the age of 80 years; however, this was at the expense of a reduction of PPV to 8.4%. The high-risk group was older, with higher systolic blood pressure and pulse pressure compared with the low-risk group (table 2). The 90-day risk of MACE outcome was 5.7% in the low-risk group versus 12.4% in the high-risk group (p<0.001), translating to a HR of 2.12 (95% CI 1.38 to 3.25) for the high-risk score compared with the low-risk group (p<0.001; figure 2). This was predominantly driven by an increase in recurrent stroke/TIA (3.3% vs 8.8%; HR 2.67 (1.58-4.51); p<0.001). This difference was apparent within 7 days of the clinic, such that those with a high APA score had a 2.2-fold increased risk of a recurrent event.

Figure 1: Age- and sex-adjusted albumin creatinine ratio (mean±95% CI) stratified by the occurrence of MACE in the first 90 days post initial event. MACE, major adverse cardiovascular events such as recurrent cerebrovascular event, myocardial infarction or cardiovascular death.

One thousand and forty-nine participants were reviewed within 48 hours of their index event. For these individuals, the predictive role of the APA score was numerically superior (HR 2.84, 95% CI 1.32 to 6.14; p=0.008) compared with those who were seen after a longer delay. There was no significant interaction between time from event to appointment and the predictive role of the APA score.
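For clarity, the APA score described above reduces to a very small calculation; the thresholds and weighting below are those reported in the text, while the function and field names are illustrative rather than taken from the study's analysis code.

```python
# Minimal sketch of the APA score: one point each for age > 80 years,
# prior stroke/TIA, and microalbuminuria (ACR > 2.5 mg/mmol in men,
# > 3.5 mg/mmol in women). Names are hypothetical, not from the study.
def apa_score(age_years, prior_stroke_tia, acr_mg_mmol, sex):
    microalbuminuria = acr_mg_mmol > (2.5 if sex == "male" else 3.5)
    return int(age_years > 80) + int(prior_stroke_tia) + int(microalbuminuria)

def risk_group(score):
    return "high" if score >= 2 else "low"   # 0-1 = low risk, 2-3 = high risk

# Simple 2x2 performance metrics of the kind used to compare the APA score
# with its component predictors (tp/fp/fn/tn would come from follow-up data).
def ppv_sens_spec(tp, fp, fn, tn):
    return tp / (tp + fp), tp / (tp + fn), tn / (tn + fp)

print(apa_score(83, True, 4.1, "female"), risk_group(3))   # -> 3 high
```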
Validation of the APA score on a historic sample
To validate the APA score, we applied the tool to the individuals who participated in the pilot study but did not contribute their data to this dataset. Details of this pilot study have been published elsewhere.24 In brief, the pilot study recruited 139 participants over 9 months from one of the centres involved in the definitive study. In this population, there were 13 recurrent events (9.35%) within the 90 days, 9 of which occurred in the first 7 days after attendance at the stroke clinic (table 3). The APA risk score replicated a similar distribution of participants at low risk (73%) and high risk (27%) as in the larger study. Compared with its component parts, the APA score had superior positive predictive value (24.4% vs 18.2% for previous stroke or 13.3% for previous stroke and being aged >80 years) and superior sensitivity (76.9% vs 46.2% vs 15.4%, respectively; online supplemental table 3B). At 90 days, 5.9% of the low APA group had recurrent events compared with 18.4% of the high APA group (p=0.02). Again, this was apparent within the first 7 days, with only 1.9% of the low APA group experiencing the primary outcome, compared with 13.2% in the high APA group (p=0.04).

DISCUSSION
In this large UK multicentre prospective study with blinded outcome adjudication, we have demonstrated the utility of an elevated ACR, measured using a simple point-of-care analyser, in combination with simple clinical data (age and history of stroke or TIA) in the prediction of MACE and death. This differentiates between low-risk and high-risk individuals. Specifically, >90% of individuals were identified as low risk with a <1.5% risk of a MACE or death over the next 7 days, increasing to 5.7% by 90 days, whereas 9.9% of the population were identified as high risk with a 12.4% event rate over 90 days. Although only in a small number of participants, our findings were validated in a separate historic population who participated in the pilot study. The utility of APA scoring may go beyond simply guiding conversations regarding prognosis in a clinic setting. The Platelet-Oriented Inhibition in New TIA and Minor Ischaemic Stroke (POINT) trial8 and the Clopidogrel in High-Risk Patients with Acute Nondisabling Cerebrovascular Events (CHANCE) trial9 demonstrated the benefit of dual antiplatelet therapy in the first 21 days after high-risk TIA. There was, however, a 0.5% absolute increase in major intracranial haemorrhage. We would propose that the APA score could be evaluated as a stratification tool to determine whether it predicts those who benefit the most. In those with a low-risk APA score, even the 25% relative risk reduction in the POINT trial would result in only a marginally beneficial risk:benefit ratio, assuming that microalbuminuria is a predictor of thromboembolic disease processes. Those with a high APA score, however, would likely benefit from dual antiplatelet therapy with a number needed to treat of <30 to prevent an event. This suggestion, however, does require confirmation in an independent study. The APA score also gives additional information that is of use in the more acute setting. Over the first 7 days, a low-risk APA score had a negative predictive value of >98.5% for the risk of MACE, a number very similar to the predictive effect of troponin when evaluating suspected cardiac chest pain.29 When considering the populations identified by the APA score, there are similarities to the ABCD2 score.
Indeed, the demographics of the high-risk group were older, with higher blood pressure and a trend towards more diabetes. We would suggest that microalbuminuria represents vascular susceptibility to the adverse consequences of hypertension and diabetes rather than simply an indication of the presence of hypertension and/or diabetes in general. This would explain why the APA score predicted further MACE and total mortality at 7 and 90 days, whereas in our large sample the ABCD2 score did not. Elevated ACR is a recognised marker of generalised endothelial and microvascular dysfunction. The increased filtration of albumin through the renal glomerular filtration barrier is thought to be due to changes in the chemical and physical properties of this endothelial barrier and its glycocalyx.30-32 The mechanism explaining the association between microalbuminuria and incident cardiovascular events is thought to depend on its role as a marker of systemically increased vascular permeability and altered homeostasis, coagulation and endothelial function.33-38 However, after acute ischaemic events such as myocardial infarction or, as demonstrated here, TIA or minor stroke, it is likely that elevated urinary ACR is, at least in part, dependent on the systemic inflammatory response to the original insult. Although unlikely to be mechanistically involved in the progression of disease, the question as to whether it is a good surrogate of therapeutic effect remains. Improving systemic inflammation, such as through the pleiotropic effect of statins, has been associated with simultaneous improvements in urinary albumin excretion and cardiovascular event rates in patients with elevated high-sensitivity C reactive protein.39-41 Further studies are needed to determine whether microalbuminuria after acute stroke represents a therapeutic target amenable to treatment, and whether reductions in microalbuminuria are associated with reductions in the vascular events they predict.

Limitations and strengths of this study
The recurrent event rate was lower than anticipated from our pilot study and contemporaneous clinical trials such as CHANCE (11.7% in the control arm),9 although comparable with the recently reported randomised controlled POINT trial (6%).8 In usual practice, 24-48 hours typically elapse between the index event and clinic attendance. Given that data from previous studies have demonstrated that the highest risk occurs in the first 48 hours, it is likely that some events will have occurred between the index event and attendance at the clinic in our study.42 The pragmatic design of the study, however, is also a strength, given that the predictive role highlights those who may benefit from dual antiplatelet therapy or hospitalisation when the patient is seen in the acute stroke clinic. There is no consensus on the gold standard for diagnosing TIA in a rapid access stroke clinic.43 In order to maintain the generalisability of the study, all attendees of the stroke clinic who were diagnosed with a probable or definite TIA were invited to participate. Only a small number of patients were subsequently withdrawn as stroke mimics. Outcome events were rigorously adjudicated against published diagnostic criteria to ensure consistency and reliability. Finally, the use of a single urinary ACR sample obtained during the clinic assessment using a near-patient testing kit is not as robust as a sample tested in a central laboratory.
Although there is significant diurnal variability in urinary AER within individuals, potentially limiting the scientific or mechanistic merit of the study compared with studies using overnight or 24-hour urine collection, the pragmatic design makes this study more applicable to general clinic populations. Furthermore, the relatively low cost of the near-patient test kit makes it accessible in most emergency settings, in primary and secondary care.

CONCLUSION
We have demonstrated for the first time the potential utility of a single point-of-care test of urinary ACR in patients presenting with TIA and minor stroke as a prediction tool to assist in triaging patients. When used in combination with patient age and history of stroke or TIA, this generated a risk stratification score (the APA score: age >80 years, previous stroke/TIA and elevated ACR) that could reliably identify the large proportion of patients with a low risk of a recurrent vascular event over the next 90 days. The value of ACR that is most predictive is aligned with the conventional definition of microalbuminuria used in diabetes, suggesting that in clinical practice the existing technology for evaluating microalbuminuria, including urine dipsticks, may be used. A high APA score was associated with a doubling of risk of MACE outcome and a fivefold increase in mortality. This APA score was validated in an independent population. We believe the APA score may represent a practical tool for clinicians engaged in the acute ambulatory assessment of TIA and minor stroke, to assist in the assessment of the risk:benefit ratio for the use of dual antiplatelet therapy or for admission for more urgent investigation. Further work is required to determine whether the increase in ACR represents a therapeutic target or solely a prognostic indicator.
Identification and characterization of NanH2 and NanH3, enzymes responsible for sialidase activity in the vaginal bacterium Gardnerella vaginalis

Gardnerella vaginalis is abundant in bacterial vaginosis (BV), a condition associated with adverse reproductive health. Sialidase activity is a diagnostic feature of BV and is produced by a subset of G. vaginalis strains. Although its genetic basis has not been formally identified, sialidase activity is presumed to derive from the sialidase A gene, named here nanH1. In this study, BLAST searches predicted two additional G. vaginalis sialidases, NanH2 and NanH3. When expressed in Escherichia coli, NanH2 and NanH3 both displayed broad abilities to cleave sialic acids from α2-3- and α2-6-linked N- and O-linked sialoglycans, including relevant mucosal substrates. In contrast, recombinant NanH1 had limited activity against synthetic and mucosal substrates under the conditions tested. Recombinant NanH2 was much more effective than NanH3 in cleaving sialic acids bearing a 9-O-acetyl ester. Similarly, G. vaginalis strains encoding NanH2 cleaved and foraged significantly more Neu5,9Ac2 than strains encoding only NanH3. Among a collection of 34 G. vaginalis isolates, nanH2, nanH3, or both were present in all 15 sialidase-positive strains but absent from all 19 sialidase-negative isolates, including 16 strains that were nanH1-positive. We conclude that NanH2 and NanH3 are the primary sources of sialidase activity in G. vaginalis and that these two enzymes can account for the previously described substrate breadth cleaved by sialidases in human vaginal specimens of women with BV. Finally, PCRs of nanH2 or nanH3 from human vaginal specimens had 81% sensitivity and 78% specificity in distinguishing between Lactobacillus dominance and BV, as determined by Nugent scoring.

Bacterial vaginosis (BV) is a common condition in which the vagina contains few "healthy" lactobacilli and is overpopulated by diverse anaerobes (1, 2). BV has been associated with a wide array of adverse health outcomes, including increased risks of sexually transmitted infections, placental and amniotic fluid infections, and preterm birth (3-6). Several bacterial enzymes have been proposed to be virulence factors in BV, including phospholipases, cytolysins, proteases, and sialidases (7-10). In particular, sialidase (also referred to as neuraminidase, E.C.3.2.1.18) activity in vaginal fluids is considered a hallmark of BV (10-12). Sialidases act on glycan chains capped with sialic acid residues (13), which are abundant at mucosal surfaces, including the reproductive tract. Sialidase activity has been used as a diagnostic marker for BV (14, 15) and has been independently associated with adverse pregnancy outcomes, including ascending intrauterine infection and preterm birth (16-18). Sialidase production by isolates of BV-associated bacteria suggests that the enzyme activity in vaginal fluids is bacterial in origin (10, 19). It is widely believed that mucus degradation by BV bacteria contributes to the characteristically "thin" consistency of vaginal fluid in BV (9, 20) and has been postulated to contribute to the increased risks of sexually transmitted and ascending infections in women with BV (10, 16, 19, 21, 22). Gardnerella vaginalis is one of the most common bacterial species to overgrow in BV (2, 23-25).
Consistent with the notion that G. vaginalis is a pathogen, this bacterium has been isolated from invasive perinatal infections (26, 27) and, in one study, was found in 26% of infected placentas from cases of preterm birth (28). The pathogenic potential of G. vaginalis isolates has also been demonstrated in vitro (e.g. cell adhesion and invasion, cytolytic toxin production/pore formation, and biofilm formation) (29-32). We have shown that a G. vaginalis strain isolated from a woman with BV is sufficient to elicit several features of BV (or health complications that have been associated with BV) in mouse models of vaginal infection (33-35). These features include vaginal sialidase activity, evidence of mucus degradation, absence of overt histological inflammation, epithelial exfoliation, and the presence of "clue-like" epithelial cells with attached bacteria in vaginal washes (33, 34). Health complications that can be reproduced in murine models by administering G. vaginalis include ascending uterine infection with G. vaginalis and other potential pathogens as well as recurrent urinary tract infection caused by Escherichia coli (33, 35). Although many strains of G. vaginalis do not produce sialidase activity under laboratory conditions, others express sialidase activity that is both surface-bound and secreted (34). These strains are able to liberate sialic acids from glycoproteins in culture medium and then deplete the resulting free monosaccharide. In contrast, sialidase-negative strains cannot liberate or consume sialic acids provided in the bound form. For years, it has been assumed that the sialidase activity in G. vaginalis is encoded by the gene originally annotated in strain ATCC 14019 as "sialidase A" (34, 36-40). In support of this idea, a recent report demonstrated activity of recombinant sialidase A protein against a synthetic substrate in vitro (41). However, the lack of genetic tools in Gardnerella has prevented the construction of sialidase mutants to formally test the extent to which sialidase A contributes to the sialidase activity observed in cultured strains. Although sialidase A appears to be found in all sialidase-positive strains of G. vaginalis, the intact ORF is also present in many sialidase-negative isolates. This inconsistency has prompted multiple research groups to question whether sialidase A accounts for the enzyme activity observed in G. vaginalis cultures (37, 39, 42). Here we describe two previously unappreciated sialidases in G. vaginalis, NanH2 and NanH3, and show that these enzymes exhibit a broad range of activity not only against synthetic substrates but also against mucosal glycoproteins relevant to the human vaginal environment. Moreover, we show that the presence of nanH2 or nanH3 in the genomes of G. vaginalis strains perfectly reflects their ability to produce sialidase activity in culture. Thus, we conclude that NanH2 and NanH3 are the main sources of sialidase activity in G. vaginalis.

Results
Here we set out to identify the genetic basis for sialidase activity in G. vaginalis.
Given that sialidase A is present in many strains that do not produce sialidase activity in culture, we suspected that genes other than sialidase A might encode the activity produced by sialidase-positive isolates. Therefore, we performed BLASTp searches of the predicted G. vaginalis proteome to identify additional sialidase homologs. To our knowledge, Bifidobacterium longum is the species most closely related to G. vaginalis in which a sialidase has been functionally characterized. The NanH2 sialidase of B. longum subsp. infantis strain ATCC15697 cleaves sialic acids in both α2-3 and α2-6 linkages and is active against milk oligosaccharides (43). Using B. longum ATCC15697 NanH2 as a query sequence, a BLASTp search of the proteome of G. vaginalis JCP8151B (34), a sialidase-positive strain, revealed two sialidase homologs in addition to sialidase A. The first result (accession number WP_016798291) was 65% identical over 349 residues to B. longum NanH2 and was subsequently designated NanH2. The second result (accession number WP_016792322) was 60% identical to B. longum NanH2 over 372 residues and was designated NanH3. Aligning the three NanH homologs revealed that the regions of high identity centered around the sialidase domain of each protein (Fig. S1). Within JCP8151B, NanH2 and NanH3 displayed 49% identity over 572 residues, whereas sialidase A was only 29% identical over 251 residues to NanH2 and 24% identical over 292 residues to NanH3. B. longum ATCC15697 encodes another sialidase, NanH1, which can also cleave sialic acids in α2-3 and α2-6 linkages but is more than 100-fold less active than NanH2 (43). A BLASTp search of the JCP8151B proteome with the ATCC15697 NanH1 sequence identified sialidase A as the first result, with 43% identity between them. Thus, we propose renaming G. vaginalis sialidase A NanH1 and will refer to it as such from this point forward. To test whether G. vaginalis nanH1, nanH2, or nanH3 encode active sialidases, the genes were cloned from JCP8151B and expressed as His6-tagged proteins in E. coli. Kinetic assays on IPTG-induced cultures demonstrated that both NanH2 and NanH3 were able to cleave the fluorogenic substrate 4-MU-N-acetylneuraminic acid (N-acetylneuraminic acid, or Neu5Ac, is the most common type of sialic acid found in nature). In contrast, NanH1 activity was undetectable under these conditions (Fig. 1A). The absence of NanH1 activity could not be attributed to a lack of protein expression or stability because Western blot analysis with anti-His6 monoclonal antibodies revealed a prominent band at the expected molecular mass of 100 kDa (Fig. S2). Bacterial sialidase domains typically contain an N-terminal RIP (Arg-Ile/Leu-Pro) motif, four or five aspartate box repeats (Ser/Thr-X-Asp-X-Gly-X-Thr-Trp/Phe), and seven conserved active site residues (44). These features are present in the amino acid sequences of all three G. vaginalis JCP8151B NanH proteins, with the exception of NanH3, which lacks the C-terminal auxiliary glutamate residue (Fig. 1B). NanH1 has an N-terminal concanavalin A-like lectin domain and a C-terminal sialidase domain but lacks predicted secretion signals, transmembrane regions, or cell wall anchoring motifs. In contrast, the 96-kDa NanH2 protein displays a predicted 51-residue N-terminal Sec-dependent signal peptide, a region with homology to Sec-independent translocases, and a C-terminal transmembrane α-helix, suggesting that NanH2 may be secreted but remain tethered to the bacterial surface (Fig. 1B).
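The canonical sialidase motifs described above are regular enough to locate with a simple pattern scan. The sketch below is not the authors' pipeline; it assumes a protein sequence supplied as a plain string, and the demo fragment is made up, not NanH2.

```python
# Sketch: locating the RIP motif (Arg-Ile/Leu-Pro) and Asp-box repeats
# (Ser/Thr-X-Asp-X-Gly-X-Thr-Trp/Phe) in an amino acid sequence.
import re

RIP_MOTIF = re.compile(r"R[IL]P")          # Arg, then Ile or Leu, then Pro
ASP_BOX = re.compile(r"[ST].D.G.T[WF]")    # Ser/Thr-X-Asp-X-Gly-X-Thr-Trp/Phe

def sialidase_motifs(seq):
    """Return 0-based start positions of each motif in the sequence."""
    return {
        "RIP": [m.start() for m in RIP_MOTIF.finditer(seq)],
        "Asp-box": [m.start() for m in ASP_BOX.finditer(seq)],
    }

demo = "MKRIPAAASVDAGNTWQQSADTGKTWEND"   # hypothetical fragment
print(sialidase_motifs(demo))            # {'RIP': [2], 'Asp-box': [8, 18]}
```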
Homology modeling of NanH2 based on the sialidase crystal structure from Micromonospora viridifaciens (PDB code 1WCQ) (45), which also belongs to the Actinobacteria, revealed a β-propeller fold characteristic of sialidases. The model positioned amino acid side chains to create a predicted active site consisting of conserved catalytic residues, including Glu-407 and Tyr-515, in addition to positively charged arginine residues (Arg-206, Arg-423, and Arg-487), likely responsible for binding negatively charged sialic acid (Fig. 1C). Corresponding putative active-site residues were also found in NanH3. Sequence analysis of the 80-kDa JCP8151B NanH3 protein revealed a predicted C-terminal membrane helix but failed to identify an N-terminal signal peptide or other secretion signals. Bacterial sialidase genes are often found near genes encoding proteins involved in sialic acid uptake and catabolism (46, 47). In JCP8151B, nanH1 is found adjacent to such a gene cluster encoding putative enzyme activities involved in sialic acid foraging, including a GlcNAc-6-phosphate deacetylase (nagA), a glucosamine-6-phosphate deaminase (nagB), three ABC transporter subunits, and an N-acetylneuraminate lyase (nanA). In contrast, the genes flanking nanH2 and nanH3 appear to encode functions unrelated to glycan degradation or sialic acid catabolism (Fig. S3).

NanH2 and NanH3 act on sialoglycans relevant to the vaginal mucosa
Given the high sialidase activity of NanH2 and NanH3 against 4-MU-Neu5Ac in E. coli cultures, we next investigated the substrate specificity of these two proteins. Previous analyses of the sialidase activity in vaginal specimens from women with BV demonstrated a broad capacity for cleaving sialic acids in many different contexts, including α2-3- and α2-6-linked sialic acids present within both N-linked and O-linked glycan substrates (48). To determine whether NanH2 and NanH3 could account for this broad range of activity, we incubated preparations of the recombinant sialidases with several different substrates and measured the resulting free sialic acids by fluorescent derivatization and HPLC. To ensure similar amounts of activity between NanH2 and NanH3, dilutions of the two enzymes were normalized in 4-MU-Neu5Ac assays before each experiment. NanH2 and NanH3 both cleaved 3′- and 6′-sialyllactose, and neither enzyme exhibited a marked preference for one linkage over the other (Fig. 2). Both sialidases also cleaved sialic acid from secretory IgA (SIgA, Fig. 3A), which contains mostly N-linked sialoglycans, as well as mucin from bovine submaxillary gland (BSM), which contains mostly O-linked sialoglycans (Fig. 3B). In addition to removing Neu5Ac from these substrates, NanH2 and NanH3 were also effective in liberating N-glycolylneuraminic acid from BSM (Fig. 3C). We also assessed the ability of NanH2 and NanH3 to cleave α2-3-linked sialic acids on the capsular polysaccharide of the vaginal bacterium group B Streptococcus (GBS). GBS is an important potential pathogen during pregnancy (49) and is often found colonizing the vagina in women with high Nugent scores (50), a method of laboratory diagnosis for BV (23). As with the other sialoglycan substrates, both NanH2 and NanH3 cleaved Neu5Ac from the GBS capsule (Fig. 3D). Based on these experiments, we conclude that G. vaginalis NanH2 and NanH3 can cleave sialic acids from several substrates relevant to the vaginal mucosa.
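The activity normalization step mentioned above amounts to estimating initial rates from fluorescence time courses and diluting the faster enzyme accordingly. A minimal sketch follows; the time courses are illustrative numbers, not data from the paper, and the linear-fit approach is a generic way to treat the early portion of a 4-MU assay curve.

```python
# Sketch: initial-rate estimation from 4-MU-Neu5Ac fluorescence time courses
# and the dilution factor needed to equalize two enzyme preparations.
import numpy as np

def initial_rate(time_min, rfu):
    """Slope of a linear fit over the early, linear part of the curve (RFU/min)."""
    slope, _intercept = np.polyfit(time_min, rfu, 1)
    return slope

t = np.array([0, 1, 2, 3, 4, 5], dtype=float)
rate_a = initial_rate(t, np.array([0, 210, 395, 605, 810, 1000.0]))  # enzyme A
rate_b = initial_rate(t, np.array([0, 98, 205, 300, 402, 505.0]))    # enzyme B

# Dilute the faster enzyme by this factor so both give similar rates:
print(f"dilute enzyme A {rate_a / rate_b:.1f}-fold")
```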
NanH2 is more effective than NanH3 at cleaving 9-O-acetylated sialic acid

Sialic acids on mucosal sialoglycans may be modified with O-acetyl esters at carbon positions 7, 8, and 9. O-acetylated sialic acids are known to resist the action of many sialidases, with the extent of the inhibition depending on the position of the modification (51-53). To determine whether NanH2 or NanH3 could cleave O-acetylated sialic acids, we incubated each sialidase with BSM or intact GBS cells, both of which contain sialoglycan chains modified with 7-O- and 9-O-acetyl esters. For these experiments, we used a GBS strain with high levels of O-acetylation resulting from an active site mutation in the neuA esterase gene (54). In time course assays, NanH2 and NanH3 released similar quantities of the 7-O-acetylated sialic acid N-acetyl-7-O-acetylneuraminic acid (Neu5,7Ac2) from both substrates (Fig. 4, A and B). However, NanH2 was far more effective than NanH3 in releasing the 9-O-acetylated sialic acid N-acetyl-9-O-acetylneuraminic acid (Neu5,9Ac2) from both BSM and GBS cells (Fig. 4, C and D).

Activity of purified NanH1

In a study characterizing the NanH1 and NanH2 sialidases of B. longum subsp. infantis, Sela et al. (43) reported that NanH1 was much less active than NanH2 against α2-3- and α2-6-sialyllactosyl 4-methylumbelliferol and against a library of p-nitrophenol-tagged sialylgalactosides. However, when used at an 80- to 160-fold higher concentration than NanH2, purified B. longum NanH1 exhibited detectable activity against these substrates. Therefore, in a final attempt to examine possible sialidase activity of G. vaginalis NanH1, we used nickel purification to isolate large quantities of His6-tagged JCP8151B NanH1. When used at 10 µg/ml in 4-MU-Neu5Ac assays, NanH1 was able to cleave the fluorescent substrate at low levels in vitro (Fig. 5A). We next wanted to compare the activity of purified recombinant NanH1 with that of purified NanH2 and NanH3. However, in our initial experiments, recombinant NanH2 was localized to the supernatant of E. coli cultures, whereas NanH3 was found in the cytoplasm, and neither protein could be isolated in significant quantities using nickel affinity resin. We surmised that the soluble fraction of each protein had lost its C-terminal transmembrane region and His6 tag, whereas proteins retaining the tag were likely insoluble because of the hydrophobic transmembrane regions of both NanH2 and NanH3. Thus, we generated additional nanH2 and nanH3 constructs lacking the transmembrane regions but retaining the C-terminal His6 tags. The putative signal sequence of NanH2 was also removed, allowing all three proteins to be isolated from the E. coli cytoplasm using the same method. The truncated versions of NanH2 and NanH3 were expressed in E. coli, and protein was enriched by nickel purification similar to NanH1. SDS-PAGE followed by staining with Coomassie Blue revealed expected bands of ~100 kDa for NanH1 and NanH2 and 77 kDa for NanH3 (Fig. 5B). Next we incubated the three sialidases with 4-MU-Neu5Ac, BSM, colostrum IgA, or sialyllactose and measured the liberation of free sialic acids by DMB-HPLC. Although the amount of each enzyme was roughly equivalent, as determined by Coomassie staining and BCA assays, only NanH2 and NanH3 released large quantities of sialic acid from these substrates (Fig. 5C).
NanH1 released little Neu5Ac from 4-MU-Neu5Ac, BSM, or colostrum IgA and was completely inactive against BSM Neu5,7Ac2, BSM Neu5,9Ac2, or sialyllactose under the conditions tested. Because some sialidases require divalent cations for full activity, we also performed sialic acid release assays on 4-MU-Neu5Ac and colostrum IgA in the presence of 1 mM CaCl2 or MgCl2, but NanH1 showed no increase in activity in the presence of divalent cations (Fig. S4). Additional experiments ruled out the possibility that NanH1 preparations were contaminated with N-acetylneuraminate lyase activity (present in all E. coli strains), which could theoretically degrade liberated Neu5Ac and mask sialidase activity in NanH1-treated samples analyzed by HPLC.

Release of Neu5,9Ac2 by G. vaginalis strains suggests NanH2 expression

Given the in vitro data showing that NanH2 was better than NanH3 at cleaving 9-O-acetylated sialic acids, we hypothesized that if nanH2 is expressed in G. vaginalis, then NanH2-encoding strains might be better able to liberate 9-O-acetylated sialic acids than strains encoding only NanH3. To test this hypothesis, we used NCBI sequence data to identify G. vaginalis strains encoding NanH2 (JCP8151A, JCP8151B, JCP8522, and GED7760B; designated "nanH2+") and strains encoding NanH3 but not NanH2 (JCP7276, JCP8017, JCP8066, and JCP8070; designated "nanH3 only"). A nanH3 gene is also present in three of the four nanH2+ strains (JCP8151A, JCP8151B, and JCP8522), whereas GED7760B is nanH3-negative. G. vaginalis isolates were grown overnight in NYCIII medium supplemented with BSM as a source of O-acetylated sialic acids. Supernatants from spent cultures were then analyzed by DMB-HPLC to determine how much of each sialic acid species was liberated. A separate portion of each sample was treated with Arthrobacter ureafaciens sialidase (AUS) to release and measure sialic acids that remained bound to glycan substrates following overnight growth while preserving their acetylation patterns (55). As expected, all eight G. vaginalis strains were able to release and consume Neu5Ac and N-glycolylneuraminic acid from the medium (Fig. S5). The nanH3-only strains released and consumed significantly more Neu5,7Ac2 than the nanH2+ strains (Fig. 6, A and C). In contrast, and consistent with the in vitro enzyme data, nanH2+ strains released and consumed more Neu5,9Ac2 than the nanH3-only strains (Fig. 6, B and D), supporting the interpretation that NanH2 was expressed and secreted under these conditions.

The presence of nanH2 or nanH3 accounts for sialidase activity in G. vaginalis cultures

In light of our findings that recombinant NanH1 had only weak sialidase activity in vitro, we investigated the possibility that nanH2 and nanH3 may better account for culture sialidase activity among the 34 G. vaginalis isolates in our strain repository (Table 1). Many of these strains have been tested previously for sialidase activity (34). Additional strains were tested for activity against 4-MU-Neu5Ac in kinetic assays. In total, 19 strains were sialidase-negative, and 15 were sialidase-positive (Fig. S6A). Next we used NCBI sequence data to identify nanH1, nanH2, and nanH3 in the genomes of published isolates. PCR with primers specific for each nanH gene was used to determine their presence or absence in strains that had not been sequenced previously as well as in sequenced strains whose genomes are not closed (Fig. S6B).
In each case where both PCR and genome sequences were available, we found that the two strategies gave complementary results. None of the 19 sialidase-negative strains encoded either NanH2 or NanH3, although most of them possessed nanH1 (Table 2). Conversely, all 15 sialidase-positive strains encoded NanH2, NanH3, or both. We therefore conclude that nanH2 and nanH3 account for the sialidase activity observed in cultured G. vaginalis.

The presence of nanH2 or nanH3 in vaginal specimens from women with and without bacterial vaginosis

In a final set of experiments, we wanted to find out whether the presence of nanH2 or nanH3 in vaginal samples might also have diagnostic applications. To test this, we used genomic DNA isolated from a subset of a previously described cohort of women (56) as a template for PCR as described under "Experimental procedures." We evaluated a total of 67 specimens from women with BV (Nugent 7-10, n = 21), "no BV" (Nugent 0-3, n = 23), and intermediate phenotypes (Nugent 4-6, n = 24). We performed two distinct PCR assays for nanH2 and nanH3 with appropriate controls, including no template or genomic DNA from G. vaginalis isolates with neither nanH2 nor nanH3 (JCP8108), nanH2 only (GED7760B), or nanH3 only (JCP8066). We then categorized each sample in a blinded fashion by the presence or absence of nanH2 or nanH3 bands in at least two independent PCR experiments. Women were then classified as being positive for either nanH2 or nanH3 or as having neither nanH2 nor nanH3.

(Figure 6 legend: after growth of G. vaginalis clinical isolates in NYCIII medium supplemented with BSM, supernatants were analyzed by DMB-HPLC for free and total acetylated sialic acids. Released Neu5,7Ac2 (A) and Neu5,9Ac2 (B) were calculated by subtracting the concentration of bound sialic acid in the spent culture supernatants from that of the uninoculated medium control. Consumed Neu5,7Ac2 (C) and Neu5,9Ac2 (D) were calculated by subtracting the concentration of total sialic acid in each culture supernatant from the total sialic acid in the uninoculated medium control. ****, p < 0.0001; **, p < 0.008 in two-tailed Mann-Whitney tests.)

Comparison of these results in relation to Nugent categories (0-3, 4-6, and 7-10) revealed a striking pattern. Although 17 of 21 women with BV were positive for nanH2 or nanH3, only 5 of 23 Lactobacillus-dominant samples showed the presence of nanH2 or nanH3. Interestingly, among the intermediate samples, 13 of 24 were positive for nanH2 or nanH3 (χ², p = 0.0004). Considering women with BV or no BV, detection of nanH2 or nanH3 predicted BV status with approximately 80% sensitivity and specificity.

Discussion

Sialidase activity is consistently observed in the vaginal fluid of women with BV, and the enzyme could benefit vaginal microorganisms in several ways. First, sialidases provide a carbon source to Gardnerella by liberating terminal sialic acid residues from vaginal glycoproteins, thus allowing their uptake and catabolism (34). Second, sialidase activity in the vagina may reveal cryptic receptors for adhesins and toxins by uncapping underlying sugars, such as galactose residues, as they do for bacteria in the mouth and airway (57-59). Third, vaginal sialidases could alter the physical properties of mucus, allowing more intimate association with the epithelium, as occurs with viral sialidases (60). Finally, sialidase activity could have immunomodulatory consequences for receptors on mammalian cells.
Sialic acids serve as ligands for receptors called Siglecs expressed on innate immune cells, and removal of sialic acids changes the inflammatory potential of these immune cells (61, 62). One or more of these mechanisms could promote the growth or colonization of sialidase-expressing vaginal bacteria, including Prevotella and Bacteroides as well as G. vaginalis (9, 10, 19). In addition to influencing interactions between sialidase producers and the host, such functions could predispose the vaginal microbiome toward dysbiosis by shifting the physical and/or immunological milieu to favor BV-associated bacteria.

Here we report three lines of evidence that NanH2 and NanH3 (and not sialidase A, which we renamed NanH1) are the major contributors to sialidase activity in G. vaginalis cultures. First, NanH2 and NanH3 were much more similar in sequence to the enzymatically active B. longum sialidase NanH2 than was NanH1, whereas G. vaginalis NanH1 was more similar to the relatively inactive (43) B. longum NanH1. Second, although G. vaginalis NanH1 had minimal activity against numerous sialic acid substrates even at high concentrations, NanH2 and NanH3 were able to cleave sialic acids in many different molecular contexts, such as α2-3- and α2-6-linked sialic acids as well as N- and O-linked sialoglycans found on SIgA and mucin. Thus, both NanH2 and NanH3 have a sufficient range of substrate specificity to account for the full breadth of sialidase activities observed previously in BV specimens (48). Finally, among 34 G. vaginalis clinical isolates, the ability to cleave sialic acids corresponded with the presence of nanH2 or nanH3 in 100% of cases, whereas the presence of nanH1 in the genome was observed in many sialidase-negative strains.

We also demonstrate that a preliminary PCR assay targeting nanH2/nanH3 in human vaginal specimens performs reasonably well in BV diagnosis relative to the gold standard laboratory method (Nugent scoring). Our data show that PCR detection of nanH2/nanH3 in human vaginal specimens has approximately 80% sensitivity and specificity to predict BV diagnosis. Nugent scoring is widely used for laboratory BV diagnosis, and it has brought the field forward in several important ways. However, it is a crude metric that many in the field are trying to replace with more objective and quantitative molecular methods (24, 63, 64). Because the PCR assays presented here focus on genes that likely encode virulence functions, they may have advantages over Nugent scoring in the potential prediction of adverse gynecological and obstetric outcomes. Previous studies have shown that vaginal sialidase activity is associated with greater risks of pregnancy loss, preterm birth, and placental infection (16-18).

NanH2 and NanH3 are similar to each other and to other functionally characterized sialidases in many ways, but the two proteins also have some important differences. For example, most bacterial sialidases are either freely secreted into the extracellular environment or secreted and retained at the cell surface, where they can access host sialoglycans (65).

(Table 2. The presence of nanH2 or nanH3 corresponds with culture sialidase activity in G. vaginalis clinical isolates. The ability of each strain to produce sialidase activity is shown, as is the presence or absence of the three nanH genes. There are five (boxed) genotype-phenotype relationships.)

Sialidases typically
have an N-terminal signal peptide (66) that directs secretion through the Sec translocase (67). All four G. vaginalis NanH2 proteins in the NCBI database have a 51-amino acid signal peptide, as predicted by SignalP, a program that identifies putative signal sequences and distinguishes them from likely transmembrane regions (68), and are thus likely to be extracellular. Consistent with this prediction, when full-length JCP8151B NanH2 was expressed in E. coli, the majority of the sialidase activity was found in the culture supernatant. The NanH2 amino acid sequence also contains a 78-residue PRK00708 domain downstream of the sialidase domain. PRK00708 domains are associated with Sec-independent translocases (69), suggesting that NanH2 secretion may involve additional mechanisms besides that provided by the Sec machinery. In contrast, the JCP8151B NanH3 protein does not have a predicted signal peptide, nor do the NanH3 proteins in the other nanH2+, nanH3+ strains, JCP8151A and JCP8522. However, the NanH3 proteins in most G. vaginalis strains lacking nanH2 have longer N termini that include signal peptides according to SignalP, suggesting that NanH3 may be secreted in some G. vaginalis strains and intracellular or unexpressed in others.

Both NanH2 and NanH3 have predicted transmembrane α-helices at their C termini. Our initial attempts to purify recombinant NanH2 and NanH3 from E. coli involved full-length, C-terminal His-tagged versions of the two proteins. However, we were unable to isolate significant quantities of either enzyme from the soluble fraction of E. coli lysates or supernatants using Ni2+ affinity resin despite the presence of sialidase activity in these fractions (and its absence from fractions derived from E. coli with vector alone). We suspect that the full-length polypeptides were insoluble because of the C-terminal hydrophobic α-helices but that some percentage of molecules was proteolytically cleaved upstream of the transmembrane domain, separating them from the His6 tag and releasing them into solution. Indeed, plasmids encoding His-tagged NanH2 or NanH3 lacking the C-terminal transmembrane regions yielded soluble protein that readily bound to nickel affinity resin. These findings suggest that, following secretion, some portion of membrane-bound NanH2 and NanH3 may be released into the environment because of proteolytic sensitivity upstream of their putative transmembrane α-helices. This is consistent with our previous observation that a significant fraction of G. vaginalis sialidase activity is found in culture supernatants (34). Similarly, the membrane-bound NanA sialidase of Streptococcus pneumoniae can be proteolytically cleaved upstream of its LPXTG cell-anchoring motif without appreciable loss of activity (70).

The one significant functional difference we noted between NanH2 and NanH3 was that NanH2 was much more effective than NanH3 at cleaving 9-O-acetylated sialic acids on BSM and GBS whole cells. This in vitro difference was also evident in vivo, as G. vaginalis strains encoding NanH2 were better able to release and consume 9-O-acetylated sialic acids than strains encoding NanH3 only. Although GBS is a common vaginal commensal and often has high levels of O-acetylated sialic acids, elutions from human vaginal swabs had little, if any, 9-O-acetylated sialic acid. However, the ability to liberate 9-O-acetylated sialic acids may help nanH2+ G. vaginalis strains to colonize niches outside of the vagina.
For example, in men, women, and children, G. vaginalis has been reported to be an inhabitant of the distal gastrointestinal tract (anal swabs) (71-73), where 9-O-acetylation of sialic acid reaches high levels (colon, rectum, and anus) (52, 53). Given that 9-O-acetylation impedes the activity of many bacterial sialidases (74, 75), NanH2-expressing strains of G. vaginalis may have a competitive advantage in the rectum because of an expanded capacity to forage on 9-O-acetylated sialic acids.

NanH3 may be the protein reported by von Nicolai et al. (76), who described a G. vaginalis membrane-bound sialidase that was released into the soluble fraction of bacterial suspensions by sonication and was active against a broad range of substrates, including sialyllactose, fetuin, and BSM. Gel chromatography revealed a molecular mass of around 75 kDa, which is only slightly smaller than full-length G. vaginalis JCP8151B NanH3 (80 kDa) and almost identical in size to NanH3 lacking its putative transmembrane helix (77 kDa, Fig. 5B). To our knowledge, the G. vaginalis strain used in this previous study has not been sequenced, precluding efforts to confirm the presence of nanH3 in its genome, but the similarities in molecular mass, cellular localization, and enzymatic activity suggest that the purified protein was indeed NanH3.

Our data strongly suggest that NanH1 is not responsible for the sialidase activity observed in G. vaginalis. At least three other research groups have also found that the presence of nanH1 does not predict culture sialidase activity in many G. vaginalis strains (37, 39, 42). In particular, one study reported that fewer than half of 77 nanH1+ strains produced detectable sialidase activity (42). Nevertheless, to our knowledge, nanH1 is present in all sialidase-positive strains of G. vaginalis examined to date. A recent investigation speculated that the lack of sialidase activity observed in many nanH1+ strains could be due to transcriptional regulation of nanH1 (41). Although not an unreasonable hypothesis, we were unable to find a published analysis of transcription or upstream sequence differences to support this idea. Interestingly, the apparent deficit in NanH1 sialidase activity is not easily explained by its amino acid sequence alone. The protein contains five aspartate boxes and all seven catalytic residues typical of bacterial sialidases as well as the conserved RIP motif at the N terminus of the sialidase domain. Furthermore, the persistence of an intact nanH1 ORF in at least 20 distinct G. vaginalis isolates points toward an important function for this gene. NanH1 lacks a predicted N-terminal signal sequence, implying intracellular localization. In mammals, intracellular sialidases function in metabolism and regulation of inflammatory states (61). In some bacteria (e.g. Bifidobacterium), these sialidases are thought to play a purely metabolic role, such as cleavage of oligosaccharides after they are transported into the cytoplasm (43). Alternatively, intracellular sialidases sometimes escape into the extracellular environment through cell lysis. For example, the Clostridium perfringens NanH sialidase lacks a predicted signal sequence and accumulates intracellularly in log phase cultures (77) but can be found in the supernatant of death phase cultures. This mechanism is especially plausible in the case of G. vaginalis because of its thin cell wall (78). The NanH1 sialidase of B. longum may provide insight into the function of G. vaginalis NanH1.
The two proteins are 44% identical, and both have an N-terminal concanavalin A-like domain while lacking discernible secretion sequences and membrane-anchoring regions (43). Concanavalin A is a lectin, and therefore the N-terminal domain is likely involved in substrate binding. Also, both nanH1 open reading frames are found adjacent to a predicted catabolic gene cluster. Consistent with our finding that G. vaginalis NanH1 is a relatively inactive sialidase, B. longum NanH1 was shown to have a 175-fold lower kcat than B. longum NanH2 in the presence of α2-6-linked sialyllactosyl 4-MU and a 140-fold lower kcat in the presence of α2-3-linked sialyllactosyl 4-MU (43). However, sialidases containing lectin domains can exhibit a Km for polyvalent substrates that is 100-fold lower than their Km for monovalent derivatives (79). Although JCP8151B NanH1 failed to release significant quantities of sialic acid from the two polyvalent substrates used in this study (colostrum IgA and BSM), it may be more active against a different polyvalent sialoside that is recognized by NanH1's putative lectin domain.

Several other Gram-positive bacteria encode multiple sialidases in their genomes. S. pneumoniae has three sialidases, NanA, NanB, and NanC, all of which are secreted and have N-terminal lectin domains (80-83). C. perfringens also encodes three sialidases, NanH, NanI, and NanJ (84), and Tannerella forsythia encodes at least two sialidases, SiaHI and NanH (85). In each of these species, one sialidase accounts for the majority of the observed sialidase activity, and one sialidase is relatively inactive under the conditions tested (86, 87). Thus, G. vaginalis is not unique in having a sialidase homolog of unknown function. Future work should focus on defining the specific roles of the three G. vaginalis sialidases. Such work would be greatly aided by the development of effective genetic tools for this organism.

Sequence analyses

Conserved domains and active-site residues in protein sequences were predicted using RPS-BLAST (NCBI). Sequence alignments were performed with Clustal Omega (EMBL-EBI). SignalP and Phobius (EMBL-EBI) were used to identify putative signal peptides and transmembrane regions.

Strains and culture conditions

Strains, plasmids, and primers used in this study are shown in Table 1. Published strains of G. vaginalis were isolated as described previously (34). Unpublished strains were isolated from vaginal samples collected under University of Alabama IRB Protocol F140410006 (initial approval date, June 27, 2014). Briefly, samples were collected using BD BBL™ Liquid Amies Copan CultureSwab™ swabs and transported to the laboratory for processing the same day. Samples were streaked onto Gardnerella Selective Agar plates (Hardy Diagnostics) and incubated at 35°C for 24-48 h in an atmosphere containing 5% CO2. G. vaginalis colonies demonstrated yellowing of the medium surrounding the colonies. Three colonies per Gardnerella Selective Agar plate were used to inoculate BD HBT (Human Blood Tween™) bilayer plates and incubated at 35°C in a 5% CO2-enriched atmosphere for 24-48 h. Small white colonies surrounded by a β-hemolytic zone with a diffuse edge were selected for further purification and testing. BD BBL oxidase reagent droppers were used to perform an oxidase test on small, translucent, β-hemolytic colonies. These colonies were also subjected to a catalase test using anaerobic catalase reagent (15% hydrogen peroxide).
Oxidase- and catalase-negative specimens were Gram-stained and examined at high power (×100) under a microscope. Gram-negative to Gram-variable pleomorphic coccobacilli were selected for cryopreservation. Following isolation, G. vaginalis strains were grown in a vinyl anaerobic chamber (Coy Products) at 37°C in NYCIII medium (per liter: 15 g of proteose peptone no. 3, 3.75 g of yeast extract, 5 g of NaCl, 5 g of glucose, 17 ml of 1 M HEPES, 100 ml of heat-inactivated horse serum) or on HBT agar plates (Fisher Scientific). With both our own freezer stocks and those from multiple other investigators, we occasionally found that strains thought to be completely isolated had multiple colony types when streaked on solid medium. This was especially true when the stock was grown under different conditions or on a different medium type than the original isolation employed. To ensure the purity of G. vaginalis strains, they were streaked from −80°C stocks onto NYCIII or HBT bilayer plates and assessed visually for consistent colony size, color, and morphology. Colonies that varied by these criteria were picked and restreaked on fresh plates until uniformity was established. Then species identity was confirmed by colony PCR using the G. vaginalis-specific primers G. vag tuf F1 and G. vag tuf R1. E. coli was grown while shaking in lysogeny broth (LB) at 37°C or as indicated, with antibiotic selection where required. E. coli Top10 was used for cloning putative sialidase-encoding genes into pET101/D-Topo or pTrc99A as described below. E. coli BL21(DE3) or LSR4 (MG1655 ΔnanA (75)) was used for protein expression. For plasmid maintenance in E. coli, ampicillin was used at 100 µg/ml (pTrc99A- and pET101/D-Topo-based plasmids). GBS was grown standing aerobically at 37°C in Todd Hewitt broth with 5 µg/ml erythromycin to maintain the pDCerm plasmid.

DNA manipulations

PCR products for cloning were generated with Phusion polymerase (New England Biolabs) and a purified genomic DNA template. Restriction enzymes were also from New England Biolabs. The full-length sialidase A/nanH1 (accession number ATJH01000171), nanH2 (accession number ATJH01000056), and nanH3 (accession number ATJH01000033) genes from G. vaginalis JCP8151B were amplified with the primer pairs 8151B nanH1 F Nco/8151B nanH1 his R Bam, 8151B nanH2 F Nco/8151B nanH2 his R Bgl2, and 8151B nanH3 F Nco/8151B nanH3 his R Pst, respectively. For sialidase purification, truncated nanH2 and nanH3 genes lacking the predicted C-terminal transmembrane segment (both genes) and N-terminal signal peptide (nanH2 only) were amplified with the primer pairs 8151B nanH2 A51 F Nco/8151B S908 his R Bgl2 and 8151B nanH3 F Nco/8151B nanH3 T702 his R Pst. The resulting amplicons were cloned into the pTrc99A expression vector.

Sialidase activity assays on bacterial isolates

Most G. vaginalis strains were grown in NYCIII broth as described above. Strains that could not be cultivated in NYCIII were grown on HBT agar plates. For strains grown in broth, stationary phase cultures were used. Strains grown on plates were scraped off the agar, suspended in 100 mM sodium acetate buffer (pH 5.5), and adjusted to an A600 of 2.0. 20 µl of each sample was mixed with 100 µl of 100 mM sodium acetate buffer (pH 5.5) containing 300 µM 2-(4-methylumbelliferyl)-N-acetylneuraminic acid (4-MU-Neu5Ac, Gold Bio) in a black polypropylene assay plate (Eppendorf).
Fluorescence was measured at an excitation of 365 nm (bandwidth, 9 nm) and an emission of 440 nm (bandwidth, 20 nm) every 60 s for 2 h in a Tecan Infinite M200 plate reader at 37°C. For the experiment presented in Fig. S6, enzyme activity was calculated from the linear portion of each curve and expressed as the change in relative fluorescence units. E. coli BL21(DE3) expressing JCP8151B nanH1, nanH2, or nanH3 was inoculated into 2 ml of LB containing 200 µM IPTG (Gold Biotechnology) and grown while shaking overnight at room temperature. The next day, cultures were tested for activity as above, except that 20 mM sodium acetate buffer was used. To confirm NanH1-His6 expression, bacteria were lysed and analyzed by Western blotting and detection with an anti-His6 mAb (Covance).

Crude recombinant NanH2 protein preparation

In cultures of E. coli BL21(DE3) expressing full-length NanH2 from pLR34, high sialidase activity (4-MU-Neu5Ac hydrolysis) was detected in the culture supernatant, whereas culture supernatants from BL21(DE3) containing the empty vector had no such activity. Thus, for crude NanH2 protein preparations, E. coli was grown while shaking in LB overnight at room temperature, and cell-free supernatants were prepared by centrifuging cultures at 12,000 × g for 10 min and passing the supernatant through a 0.22-µm filter.

Crude recombinant NanH3 protein preparation

Full-length NanH3 was primarily intracellular when expressed in E. coli, so clarified whole-cell lysates were used as a source of recombinant NanH3. Although E. coli does not encode its own sialidase, it does encode an intracellular sialate lyase (NanA) that hydrolyzes free sialic acid to N-acetylmannosamine. In many experiments, we measured the generation of free sialic acid to detect sialidase activity. To prevent breakdown of free sialic acid when liberated by NanH3-containing E. coli lysates, the nanH3 expression plasmid pLR35 (and a parallel empty vector control) was transformed into an E. coli strain lacking the nanA gene (LSR4) (75). Briefly, clarified E. coli whole-cell lysates were prepared as follows. Bacteria were grown while shaking in 200 ml of LB broth at 37°C to an A600 of 1.0. After addition of 200 µM IPTG, bacteria were incubated while shaking overnight at room temperature. Then cells were pelleted, washed in 30 ml of PBS, resuspended in 10 ml of lysis buffer (50 mM NaH2PO4 (pH 7.4), 300 mM NaCl, and 10 mM imidazole), and sonicated five times for 10 s each in a Sonic Dismembrator (Fisher Scientific) at 35% amplitude on ice. The cell lysate was clarified by centrifuging three times (with transfer of the supernatant to fresh tubes) at 15,000 × g for 10 min.

Enzyme purification

For expression of NanH1, BL21(DE3) cells carrying JCP8151B nanH1-His6 in pTrc99A were grown while shaking in 800 ml of LB at 37°C to an OD of 0.5 and then induced with 1 mM IPTG for 4 h at 37°C. Truncated NanH2 and NanH3 (lacking the transmembrane segments of both proteins and the signal sequence of NanH2) were expressed similarly in BL21(DE3), except cultures were induced with 0.2 mM IPTG and shaken overnight at room temperature. Following induction, cells were pelleted at 12,000 × g, washed in 120 ml of PBS, resuspended in 7.5 ml of lysis buffer, and sonicated and centrifuged as above. The clarified lysate was transferred to a 15-ml Falcon tube containing 600 µl of His-Select nickel affinity gel (Sigma), rotated for 1 h at 4°C, applied to a 5-ml disposable polypropylene column (Thermo Scientific), and washed with 20 ml of lysis buffer.
Bound proteins were eluted in 110-µl fractions of imidazole elution buffer. Fractions were evaluated for purity by SDS-PAGE, followed by staining with Coomassie G-250 or Western blotting with anti-His6 antibody. Both methods revealed a prominent band at the expected molecular mass of 100 kDa in a subset of fractions. These fractions contained sialidase activity, as determined by 4-MU-Neu5Ac hydrolysis, and were used in substrate specificity assays. Protein concentration was measured using a Micro BCA Protein Assay Kit (Thermo Scientific). The absence of N-acetylneuraminate lyase activity, which might destroy liberated sialic acid, was confirmed by diluting each sialidase preparation 40-fold into a solution of 20 mM sodium acetate (pH 5.5) and 20 µM Neu5Ac. After a 3-h incubation at 37°C, DMB-HPLC revealed negligible Neu5Ac degradation.

Normalization of sialidase activity

In substrate specificity assays with recombinant NanH2, NanH3, and AUS (Figs. 2-4), similar amounts of enzyme activity were used to investigate their ability to cleave sialic acids from different substrates. To estimate activity, 0.5 µl of each sialidase preparation was used in a 4-MU-Neu5Ac assay as described above. After 20 min, slopes were calculated from the linear portions of the curves. Preparations with higher slopes were used in proportionally smaller quantities to ensure similar amounts of overall activity in assays testing each substrate. Enzyme activities used included: GBS with Neu5,7Ac2 (unmigrated), 15 milliunits/ml; GBS with Neu5,9Ac2 (migrated), 8 milliunits/ml. In time course assays, reactions were stopped by transferring tubes to dry ice for 5 min; samples were subsequently stored at −80°C until derivatization for HPLC. To test the effect of divalent cations on NanH1 activity, sialidase assays with 4-MU-Neu5Ac and colostrum IgA were also performed in the presence of 1 mM CaCl2 or MgCl2. For assays with GBS, strain COH1 ΔneuA expressing NeuA N301A from a plasmid was used because it accumulates high levels of sialic acid O-acetylation that originates at the C-7 position but migrates to C-9 under slightly alkaline conditions (54). GBS was grown to an OD of 0.4 in 800 ml of Todd-Hewitt broth with 5 µg/ml erythromycin. Cells were pelleted at 12,000 × g for 10 min, washed three times in ice-cold 100 mM sodium acetate buffer (pH 5.5), and stored dry at −80°C. Before each experiment, cells were resuspended in 20 mM sodium acetate buffer (pH 5.5) to an OD of 30. For experiments monitoring release of 9-O-acetylated sialic acids, GBS cells were first resuspended in 100 mM Tris (pH 9.0) and incubated at 37°C for 30 min to migrate O-acetyl groups from the 7-carbon to the 9-carbon position (74, 75). The bacteria were then washed once in 100 mM sodium acetate buffer (pH 5.5) and resuspended in 20 mM sodium acetate buffer (pH 5.5). For the end point experiment presented in Fig. 6C, recombinant NanH1, NanH2, and NanH3 were used at ~5 µg/ml. Human colostrum IgA and bovine submaxillary mucin were both used at 1 mg/ml, and 4-MU-Neu5Ac, 3′-sialyllactose, and 6′-sialyllactose were used at 1 mM.

Sialic acid measurements by DMB-HPLC

Sialic acids were derivatized and quantified by HPLC as described previously (48, 74, 75). Samples were derivatized by mixing with an equal volume of 2× DMB (1,2-diamino-4,5-methylenedioxybenzene) reagent (14 mM DMB, 44 mM sodium hydrosulfite, 1.5 M 2-mercaptoethanol, and 2.8 M acetic acid) and incubating for 2 h at 50°C.
Immediately after derivatization, samples were loaded into the temperature-controlled autoinjector of a Waters HPLC (set to keep samples at 4°C) equipped with a reverse-phase C18 column (Tosoh Bioscience) and a Waters fluorescence detector set to excite at 373 nm and detect emission at 448 nm. The area under each peak was used to quantitate sialic acid concentrations by referring to a standard curve of Neu5Ac (Sigma) derivatized in parallel. Relative HPLC retention times using this system have been well-established by our group and others for the sialic acid species present in BSM and GBS (54, 74, 75), both of which contain Neu5Ac and O-acetylated species at each site of the sialic acid side chain (carbon positions 7, 8, and 9).

(Figure 7 legend: PCR amplification of nanH2 (A) and nanH3 (B) was performed on genomic DNA isolated from 67 human vaginal specimens as described under "Experimental procedures." C and D, categorical analyses were carried out to test whether the presence of nanH2 or nanH3 was related to bacterial vaginosis status as determined by Nugent score. C, a bar graph illustrates the relationship of nanH2/nanH3 status across the three Nugent score categories. D, categorical analysis of the nanH2/nanH3 PCR data versus the Nugent score.)

Release and consumption of O-acetylated sialic acids

Growth medium containing bound O-acetylated sialic acids was prepared by adding sterile filtered BSM to NYCIII broth to a final concentration of 1.5 mg/ml. G. vaginalis strains were grown overnight in standard NYCIII medium and then diluted 100-fold into fresh NYCIII supplemented with BSM. The following day, saturated cultures were centrifuged, and the supernatants were collected for DMB-HPLC analysis. Mild acetic acid is often used to release bound sialic acids for measurement with DMB, but such treatment can also cause the migration of acetyl groups on sialic acids. Therefore, a portion of each supernatant was diluted 5-fold into 100 mM sodium acetate buffer (pH 5.5) and mixed with an excess of AUS (100 milliunits/ml) for 2 h to liberate bound sialic acids before DMB derivatization. Bound sialic acid concentrations were calculated by subtracting the free from the total sialic acid concentration in each sample. Sialic acid release was calculated by subtracting the concentration of bound sialic acid in culture supernatants from that of the uninoculated medium control. Sialic acid consumption was calculated by subtracting the concentration of total sialic acid in each culture supernatant from that of the uninoculated medium control.

Detection of nanH genes in G. vaginalis clinical isolates

Although many of the strains in our collection have draft genome sequences available, these sequences often comprise many contigs and thus remain incomplete. Therefore, PCR was used as a more stringent test of the presence or absence of a gene rather than relying on potentially incomplete draft genome sequences. In our analysis, we only included G. vaginalis strains that were available for culture, sialidase activity assay, and PCR confirmation of nanH1, nanH2, and nanH3. G. vaginalis strains were grown anaerobically on NYCIII or HBT agar until colonies reached ~1 mm in diameter (generally 36 to 48 h). Agar plates were then removed from the anaerobic chamber, and colony PCR was performed with Ex Taq polymerase (Clontech) and the following primer pairs (Table 1): G. vag sia universal F3/G. vag sia universal R1 for nanH1, G. vag nanH2 qPCR F/G. vag nanH2 qPCR R for nanH2, and G. vag nanH3 qPCR F/G. vag nanH3 qPCR R for nanH3.
Annealing temperatures were 54°C for nanH2 reactions and 51°C for nanH1 and nanH3 reactions. Extension time was 30 s for all three PCR assays. Expected amplicon sizes were 636 bp for nanH1, 348 bp for nanH2, and 322 bp for nanH3.

PCR amplification of nanH2 and nanH3 in human vaginal specimens

This study used samples from the Contraceptive CHOICE project at Washington University. CHOICE received IRB approval at the Washington University School of Medicine, and all participants gave their written informed consent at enrollment and their permission to use vaginal specimens for future studies. This subproject was also IRB-approved (ID 201108155). Both the CHOICE study and this substudy were conducted according to the principles expressed in the Declaration of Helsinki. The human vaginal specimens used in this study were from a previously published subset of CHOICE participants for whom Nugent scores were published previously and additional vaginal material was available (56). Genomic DNA was isolated from vaginal swabs eluted in 0.1 M sodium acetate (pH 5.5). Insoluble material was pelleted by centrifugation and processed using the Wizard Genomic DNA Purification Kit (Promega). Amplification of nanH2 and nanH3 was performed in 96-well PCR plates (Phenix Research) with Intact Genomics Taq polymerase (catalog no. 3249) and the primer pairs G. vag nanH2 MP F1/G. vag nanH2 MP R1 or G. vag nanH3 qPCR F/G. vag nanH3 qPCR R. Genomic DNA was diluted 10-fold into a PCR plate. 2 µl of each diluted DNA sample was then transferred to a fresh PCR plate on ice, and 18 µl of ice-cold PCR master mix was added to each well. All primers were used at a final concentration of 200 nM. Genomic DNA isolated from GED7760B (nanH2 only) or JCP8066 (nanH3 only) served as positive controls. The annealing temperature was 51°C for both primer sets, and the extension time was 30 s. Amplification was performed for 35 cycles. The expected amplicon sizes were 460 bp for nanH2 and 322 bp for nanH3. PCR products were separated on 1% agarose gels and visualized under UV light with ethidium bromide staining. Bands at the expected sizes were categorized by an observer blinded to the Nugent status of the samples. Very faint bands were categorized as negative because in all cases we observed, the replicate reaction did not yield a visible band. After all nanH2 and nanH3 reactions were separately categorized as positive or negative, we created two summary categories: "either nanH2 or nanH3" and "neither nanH2 nor nanH3". We then performed the analyses described in Fig. 7. The Wilson-Brown method was used to compute confidence intervals. One sample gave discordant results between the two replicate reactions performed and could not be categorized based on the above summary variables; this (Nugent intermediate) sample was excluded from the final analysis.
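The diagnostic performance reported in the Results (17 of 21 BV samples positive, 5 of 23 no-BV samples positive) can be turned into sensitivity and specificity estimates with Wilson score confidence intervals, the same family of interval that the Wilson-Brown method computes. The sketch below is a plain re-derivation from those published counts, not the authors' analysis code.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Counts from the text: BV (Nugent 7-10) and no BV (Nugent 0-3)
tp, bv_n = 17, 21          # nanH2/nanH3-positive among BV samples
fp, no_bv_n = 5, 23        # nanH2/nanH3-positive among no-BV samples

sensitivity = tp / bv_n                  # ~0.81
specificity = (no_bv_n - fp) / no_bv_n   # ~0.78

lo_se, hi_se = wilson_ci(tp, bv_n)
lo_sp, hi_sp = wilson_ci(no_bv_n - fp, no_bv_n)
print(f"sensitivity = {sensitivity:.1%}, 95% CI [{lo_se:.1%}, {hi_se:.1%}]")
print(f"specificity = {specificity:.1%}, 95% CI [{lo_sp:.1%}, {hi_sp:.1%}]")
# Both point estimates land near the ~80% figure quoted in the Discussion.
```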
2019-03-08T14:08:07.200Z
2019-02-05T00:00:00.000
{ "year": 2019, "sha1": "13577b9a71aba72704d7c530d27713f8105af1c1", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/294/14/5230.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "0f7998a8647135581183157c51199cd4d436f718", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
9823230
pes2o/s2orc
v3-fos-license
The 6th international conference on envenomation by Snakebites and Scorpion Stings in Africa: a crucial step for the management of envenomation

During the 6th International Conference on Envenomation by Snakebites and Scorpion Stings in Africa, held in Abidjan from 1 to 5 June 2015, measures for the management of envenomation were discussed and new recommendations were adopted by the participants. The high incidence and severity of this affliction were confirmed by several studies conducted in African countries. The poor availability of antivenom, particularly because of its cost, was also highlighted. Some experiences were reported, mainly the financial support of antivenom in Burkina Faso (more than 90%) and Togo (up to 60%) and the mandatory reporting of cases in Cameroon. Key recommendations concerned: improvement of epidemiological information based on case collection; training of health workers in the management of envenomation; policies to promote the use of effective and safe antivenom; and antivenom funding through cost-sharing with stakeholders in order to improve antivenom accessibility for low-income patients.

Introduction

Scorpion stings in North Africa and snakebites in sub-Saharan Africa are responsible, respectively, for 750,000 cases of envenomation with 1,700 deaths, and 320,000 envenomations with tens of thousands of deaths and many cases of debilitating sequelae [1, 2]. Envenomation usually affects rural populations, generally young farmers whose income is low. Household surveys found that almost all victims of stings or bites by venomous animals were initially assisted by a traditional healer; more than half were treated only by the healer, causing a consultation delay that was detrimental to the clinical course. Organized jointly by the Société Africaine de Venimologie (SAV, the African Society of Venomology) and the Pasteur Institute of Cote d'Ivoire, the 6th International Conference on Envenomation by Snakebites and Scorpion Stings in Africa (6ème Conférence Internationale sur les Envenimations par Morsures de Serpent et Piqûres de Scorpions en Afrique) was held from 1 to 5 June 2015 at the Faculty of Medical Sciences, University Félix Houphouët-Boigny, in Abidjan, Côte d'Ivoire. As in previous meetings, this conference consisted of three different sessions: a two-day workshop on the management of envenomation; the scientific conference, which also extended over two days; and a stakeholders meeting to discuss the availability of antivenoms (www.sav-asv.com/).

Workshop on the management of envenomation

The first day of training involved forty trainers, mainly Ivoirians, to whom the methodological basis of venomology was presented (herpetology, epidemiology, biochemistry and toxicology of venoms, clinical manifestations and treatment of envenomation) in order to clarify the causes and consequences of the encounter between a human and a venomous animal. The second day was dedicated to trainees and people who deal with envenomation: physicians, pharmacists, nurses, firefighters, rescue workers, paramedics and traditional healers. Over 200 people attended the course, which explained the circumstances, symptoms and treatment of snakebite in Côte d'Ivoire. The different clinical presentations, as well as the therapeutic approach, were described. A simple diagnostic and treatment algorithm was presented.
Scientific conference

This session brought together about 200 participants from 18 countries on all continents (Germany, Belgium, Benin, Burkina Faso, Cameroon, Côte d'Ivoire, France, Ghana, Guinea, India, Kenya, Mali, Mexico, Morocco, Nigeria, Senegal, Switzerland, and Togo). Representatives of several other countries that had registered (Algeria, Angola, Brazil, Congo, Democratic Congo, USA, Great Britain, Italy, Mauritania, Niger, Chad, Tunisia) could not attend because they were unavailable or, most often, for financial reasons. In his inaugural lecture, Prof Abdulrazaq Habib presented an economic model showing the particularly favorable cost-benefit relation of antivenom use. Despite its high cost, the burden on public finances and society is greater when antivenom is not used, taking into account the reduction of life expectancy, disability-adjusted life years (DALY) and quality-adjusted life years (QALY). The costs per DALY and per averted death vary, respectively, from US$ 2,000 to US$ 6,000 and from US$ 100 to US$ 300, depending on the country. Most studies confirmed the high incidence and severity of envenomation, and serious management deficiencies related in particular to late consultation, inaccessibility of antivenom and lack of training of medical personnel. Various communications dealt with the diagnosis of envenomation and its treatment either by antivenom or by herbal medicine. Overall, the availability of antivenom is inadequate, which was observed in most countries. In this mini-collection, "Strategies for Management of Snakebites in Africa", four studies selected among those presented at the conference are to be published in the Journal of Venomous Animals and Toxins including Tropical Diseases (JVATiTD). One study indicates that although the number of reported victims of scorpion stings is elevated in Morocco, snakebites also comprise a public health problem in the country, affecting several hundred people, including some severe envenomation cases [3]. Another study reports that in the region of Kedougou (eastern Senegal), the annual incidence is about 315 snakebites per 100,000 population. Mortality exceeds eight deaths per 100,000 inhabitants according to household surveys, whereas official health statistics report less than a third of those [4]. In Burkina Faso, more than 35,000 envenomation cases are notified annually, with an average of 275 deaths. Despite these high numbers, a study reveals that only 1,150 doses of antivenom are administered every year in the region [5]. Therefore, although antivenom is imported from neighboring countries, the therapeutic coverage is far from sufficient. The other work revealed that in Benin, ultrasonography comprises a valuable tool that helps in the diagnosis and management of hemorrhagic disorders provoked by Echis ocellatus bites, which represent more than 70% of envenomation cases in the savannas of sub-Saharan Africa [6]. During the general assembly of the African Society of Venomology, the creation of national subsidiaries, designed to relay the recommendations of SAV and facilitate their implementation while favoring management autonomy, was unanimously approved.

Stakeholders meeting

On the last day, an open discussion was held among stakeholders who were willing to participate. Once again, with the notable exception of the 4th Conference in Dakar, at which the World Health Organization (WHO) was represented, international agencies, albeit invited, did not attend. The experiences of several represented countries were presented.
Burkina Faso subsidizes the price of antivenom (up to 90%), bringing the retail price to 2,500 FCFA (about US$ 5). Since the beginning of 2015, Cameroon has required mandatory reporting of envenomation, as recommended by WHO. Senegal requires every pharmacy in the country to permanently stock at least one vial of antivenom. For five years, Togo has subsidized the price of antivenom by 60% in the public drug distribution system. Finally, Côte d'Ivoire has introduced the treatment of envenomation into the National Program of Universal Health Coverage, which would be active by the end of 2015. Following the debates, four major recommendations were unanimously adopted.

1. Epidemiological studies should be performed to assess the therapeutic needs, particularly the amount of antivenom required and where it should be available. Health authorities in each country were encouraged to establish, as soon as possible, the mandatory reporting of envenomation.

2. Training in the management of envenomation should be restored rapidly in medical, pharmacy, and nursing schools. Meanwhile, training of health personnel in the diagnosis of envenomation and the use of antivenoms should be organized within each country.

3. Drug policy for antivenoms should be adapted to the national context. Antivenom selection and registration require rigorous criteria. Antivenoms are complex biological products (antibodies produced by horses) which cannot be manufactured as generics. They require the use of venoms from local species, whose traceability should be guaranteed. Immunoglobulin purification and fragmentation should be performed using delicate processes, complying with standards set by WHO [7], with quality control applied at every stage. The safety of antivenom should be favored as well as its effectiveness, especially since it is used in peripheral health centers that are often poorly equipped and supplied. These characteristics explain the high price of antivenoms.

4. The accessibility of antivenoms should be ensured through appropriate funding, defined after anthropological investigations on the acceptability of the price by the affected population. An equalization of antivenom costs will involve the state budget, support from local governments, companies employing workers at risk (such as agribusiness corporations), and health insurance groups that are beginning to operate in Africa.

Representatives from each country made a commitment to convey these recommendations to national health authorities and to put in place the measures needed to achieve them before the next international conference on envenomation in Africa, which should take place in 2018.
2017-07-08T12:58:58.255Z
2016-03-16T00:00:00.000
{ "year": 2016, "sha1": "794168341a17070bf1523af34d7fab3467041b35", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s40409-016-0062-y", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4c05eb104c7699f092f07128c766203c2989bfc3", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233821453
pes2o/s2orc
v3-fos-license
Power supply of the center for development of gifted children in Kaliningrad region based on renewable energy sources

One of the modern vectors of world energy development is the transition to an energy self-sufficient principle of functioning of geographically limited local facilities. The paper presents an analysis of the possibility of this transition and the concept of using renewable energy sources in the transition to an autonomous mode of electricity and heat supply of the Center for the Development of Gifted Children in Kaliningrad Region.

Introduction

In Kaliningrad Region, there is a Center for the Development of Gifted Children (hereinafter referred to as the Center). At present, power supply of the Center is provided by the electric grid company Yantarenergo JSC, with an emergency diesel generator for redundant power supply, and heat is supplied from a gas boiler house. One of the current trends in global energy development is the transition to an autonomous principle of electricity and heat supply using renewable energy sources. Therefore, the transfer of the Center's power supply system to solar, wind and thermal energy of the earth (heat pump) is of high relevance. Implementation of this concept is particularly important for the formation of an adequate attitude of the younger generation toward one of the major trends in global energy development.

Analysis of the structure and electrical loads of the Center

The Center is located on the shore of the bay in Kaliningrad region (Figure 1a). It occupies a territory with a total area of 84,700 m² and is designed to receive 150 children per session. The Center is supplied with power from the transformer substation TP No. 09-25 with 15/0.4 kV voltage (Figure 1b), which is powered by a 15 kV power transmission line from the grids of Yantarenergo JSC. The installed capacity of the electrical equipment of 160 kW provides electricity for the Center facilities, including dormitory blocks, a dining room, a boiler house, treatment facilities, outdoor illumination, water intake and an administrative building. The highest power consumption and the maximum power demand of 65 kW are recorded in August (Figures 2 and 3). The analysis of the existing power supply system of the Center shows that it is necessary to develop technical solutions for autonomous operation of the Center using renewable energy sources, given the annual electricity consumption of 190 thousand kWh, the highest monthly load in August (20 thousand kWh), and the maximum one-time load of 65 kW.

Providing heat load of the Center by means of a heat pump

Heating and hot water supply to the Center (Table 1) is currently provided from a gasified boiler house. The heat supply network maintains a pressure of 3 kgf/cm² and a temperature in the range from 70 ℃ to 90 ℃. The analysis of the structure of installed heat capacities and the prediction of the calculated heat loads took into account the independence of the heat load of hot water supply (HWS) from climatic conditions and the probabilistic pattern of the dependence of the heating load on the ambient temperature (Figure 4). A heat pump (HP) is among the most promising heating, hot water supply and air conditioning systems worldwide. The analysis of various heat pump-based systems has shown that the most suitable way to cover the heat load of the Center is a ground-water heat pump with vertical arrangement of heat exchangers (ground probes).
In Kaliningrad region, a geothermal source of hot water is located at a depth of 1-2 km from the earth surface; therefore, the thermal energy stored in the upper layers of the ground (60-110 m) should be used [1]. A comparative analysis of the operating modes of heat pumps relative to the temperature profile of the projected heat supply network determined the bivalent HP operating mode as optimal, in which the heat pump functions as the main heat generator down to a certain ambient air temperature; at lower temperatures, a second heat generator is activated in parallel or alternative mode [2-4]. Based on calculations, the coverage of the heat load of the Center with a geothermal heat pump is 156 kW, i.e. the design heating load is provided by three heat pump units (HPU) Nibe F1345-60 with a nominal unit heating capacity of 57.7 kW.

The calculation of the characteristics of the primary circuit equipment revealed the following:

- the length of the vertical probe ensuring the required power of the heat flow from the ground to the primary circuit coolant:

L = (P_h − P_GHP) / q_s, (1)

where q_s is the ground heat transfer of the soil, W/m; P_h is the power required for water heating, W; P_GHP is the geothermal heat pump power, W;

- the number of probes for the maximum permissible depth of 100 m:

N = L / L_max, (2)

where L_max is the maximum allowable probe insertion depth, m;

- the hydraulic losses in the probe pipes:

Δh = λ (L / D) (v² / 2g), (3)

where λ is the hydraulic friction factor; L is the pipeline length, m; D is the inner diameter of the pipe; v is the flow velocity, m/s; g is the gravitational acceleration, m/s².

For the boreholes, a site was selected in the territory of the Center with sandy-clay soil and water-saturated sedimentary rocks. The average value of the heat flow during heat removal in vertical boreholes for this type of soil is q_s = 70 W/m. The probe package should include 24 earth probes in the form of a U-tube with a burial depth of 100 m and a pipe diameter of Du = 50×4.5 mm. The total length of the pipes is 4800 m. The geometrical characteristics of the earth probe and vertical borehole are shown in Figure 6. Figure 6. Design of the borehole with geothermal probe. The boreholes are supposed to be located in accordance with a corridor scheme with longitudinal and transverse spacing equal to 8 m. The total area of the site allocated for the boreholes is 900 m². The average power consumption by heat pumps for heating and hot water supply, and by the heating element of the storage tank, in thousand kWh/month by months of the year is graphically presented for the bivalent mode (Figure 7). The estimated power consumption for the bivalent heat supply mode is 144 thousand kWh/year. The heat pump unit of the considered configuration uses geothermal heat to satisfy the heat load in the entire probable range of changes in the ambient air temperature within a year. The HWS heat load is covered by the peak fuel-consuming heat generator. Full coverage of the entire heat load through ground geothermal heat is possible by means of a monovalent cascade scheme or a combined HPU scheme, for example, using vacuum solar collectors in the HWS system.

Analysis of the potential of solar energy in the territory of the Center and design of the solar plant

In the territory of the Center, the average level of insolation (the amount of energy received from the Sun per unit area) is 1073 kWh/m² per year [5]. The change in global insolation against the angle of inclination and geographic orientation of the modules is presented in Table 2 and Figure 8 [6].
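As a numerical cross-check of the probe-field sizing above, the short sketch below evaluates equations (1)-(3). The electrical input of the heat pumps is an assumed value chosen for illustration (it is not stated in the text), and the friction factor and flow velocity in the Darcy-Weisbach estimate are likewise placeholders.

```python
import math

Q_S = 70.0          # ground heat flow during removal, W/m (from the text)
P_H = 173_100.0     # required heating power, W (3 x 57.7 kW Nibe F1345-60)
P_GHP = 35_000.0    # assumed electrical input of the heat pumps, W (placeholder)
L_MAX = 100.0       # maximum allowable probe depth, m

# Eq. (1): total probe length from the ground-side heat balance
L_total = (P_H - P_GHP) / Q_S
# Eq. (2): number of 100-m probes, rounded up
n_probes = math.ceil(L_total / L_MAX)

# Eq. (3): Darcy-Weisbach head loss per probe loop (placeholder hydraulics)
lam, D, v, g = 0.03, 0.041, 0.6, 9.81  # friction factor, pipe ID (m), m/s, m/s^2
L_pipe = 2 * L_MAX                     # down-and-up legs of one U-tube
dh = lam * (L_pipe / D) * v**2 / (2 * g)

print(f"total probe length ~ {L_total:.0f} m -> {n_probes} probes")
print(f"head loss per U-tube loop ~ {dh:.2f} m of water column")
```

With the placeholder electrical input the sketch yields about 20 probes; the paper's 24 probes leave a design margin on top of this kind of estimate.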
Based on the analysis of the data shown in Table 2 and Figure 8, it can be concluded that the maximum average annual insolation of 1230 kWh/m² per year is attained on a surface oriented to the south and inclined at an angle of 37° to the horizon. The solar energy potential is realized most efficiently by installing solar panels on the roofs of the Center buildings (Table 3). Analysis of the available solar panels showed that the maximum total capacity of a solar power plant can be attained using the JA Solar JAM72D10/MB 400W monocrystalline solar panel (Table 4, Figure 9). Figure 9. Current-voltage characteristic of the solar panel. With regard to the construction features of the roofs of the Center buildings, the calculation was carried out for a 600 m² roof area accommodating 283 solar panels of the given model. In this case, the total nominal installed capacity of all solar panels is 113.2 kW. Thus, with a solar panel efficiency of 19.9% and a proposed inverter efficiency of 97%, the overall system efficiency is 19.3%. The maximum expected total annual power generation (WY) is calculated using equation (4): WY = GHR·S·η·k, (4) where GHR is the maximum average annual insolation per square meter of the surface, S is the used roof area of the Center buildings, η is the system efficiency (solar panels and inverter), and k is the correction factor applied to additional losses and external factors (k = 0.75). Thus, the estimated value of electric power generation is 106,825.5 kWh/year. The data in Table 5 show that solar panels with the given parameters do not cover the Center's power consumption (190 MWh/year), especially in winter. In this regard, the additional use of wind power plants should be considered. It should be noted that using all the south-facing roof areas for solar panels could provide 200.3 MWh/year. Choosing wind turbines When choosing wind turbines for the Center, the following factors were taken into account: the number and capacity of wind turbines; the distance from the residential sector; and the wind rose (Figure 10). Figure 10. Repeatability of wind parameters in the Center area at a height of 10 m: a) wind frequency, b) wind power, c) wind speed. According to [8], the lower part of the wind turbine blades should be 8 meters higher than the highest obstacle observed within 150 m. In the Center area, the height of the obstacle is 10 m. Based on the analysis of the characteristics of wind turbines produced for such objects, the Eocycle EO25 class III wind turbine was chosen (Figure 11, Table 6). Figure 11. Location of the Eocycle EO25 class III wind turbines in the Center area (a) and dependence of the wind turbine power on the wind speed (b). Given the wind rose and the requirement to space wind turbines at a distance of 10 rotor diameters, two wind turbines are taken for installation (Figure 11a). The wind speed (v2) at the height of the wind turbine hub is calculated using equation (5) with regard to the data provided in Figures 11 and 12. Power generation (Table 7) was calculated using equation (6) [11] with regard to the data presented in Figures 11b and 13: W = PR·U·M·n·T, (6) where PR is the rated power of one wind turbine; U is the coefficient of the installed power (capacity factor) of the wind turbine (Figure 13); M is the coefficient of the mutual impact of wind turbines; n is the number of wind turbines; T is the time of the estimated power generation. The power generated by the wind turbines can be calculated more accurately by the Rayleigh method [12].
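As a quick plausibility check, the Python sketch below reproduces equation (4) with the figures quoted above, and also illustrates one common way relations like equation (5) are formulated — a power-law (Hellmann) wind profile. The PV inputs are taken from the text; the reference wind speed, hub height and exponent in the second part are assumptions, not values from the paper.

# Check of the annual PV yield from equation (4); inputs are from the text.
GHR = 1230.0   # maximum average annual insolation, kWh/m2 (south, 37 deg tilt)
S = 600.0      # roof area used for the panels, m2
eta = 0.193    # overall system efficiency (panels x inverter)
k = 0.75       # correction factor for additional losses and external factors

W_Y = GHR * S * eta * k
print(f"expected annual PV generation: {W_Y:,.1f} kWh")   # 106,825.5 kWh

# One common form of the hub-height wind-speed relation (equation (5) is
# not reproduced in the text): a Hellmann power-law profile. v1, h2 and
# alpha below are assumptions, not data from the paper.
v1, h1, h2, alpha = 5.0, 10.0, 24.0, 0.14   # m/s, m, m, dimensionless
v2 = v1 * (h2 / h1) ** alpha
print(f"assumed hub-height wind speed: {v2:.2f} m/s")

Running the first part returns exactly the 106,825.5 kWh/year quoted above, which confirms that the stated inputs are internally consistent.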
However, the above estimates are sufficiently accurate for engineering calculations. If it is necessary to provide the Center with additional energy, the following options should be considered: installation of three wind turbines, based on the analysis of the wind rose, the permissible spacing between wind turbines and other factors; or installation of a wind turbine on a hill, where the wind speed at the level of the wind wheel hub is higher, which ensures higher power generation. Choosing a storage device based on a storage battery Due to the sharply variable operating mode of the wind turbines and the dependence of solar panel generation on solar activity, and in order to ensure the reliability of power supply to the Center, a storage device based on storage batteries (SB) should be used (Figure 14). During periods of low load at the Center (Figure 3) and excess generated power, the batteries are charged via a controlled rectifier (CR) (Figure 14, WCH). The power accumulated in the batteries is supplied to the power-supply system of the Center via a controlled inverter (CI) (Figure 14, WP) [13]. Figure 14. Scheme for connecting the battery to the Center grid. Here WCH is the energy received by the SB from the Center grid; P is the power supplied by the SB to the Center grid; P(t) is the current power value; η is the efficiency of the rectifier and inverter; Pav is the basic (average) load power. Taking into account the data presented in Figures 3 and 15 and Table 8, the storage device must supply up to 40 kW of power to the grid. Tesla Inc. is currently the world leader in the production of storage devices based on lithium-ion batteries, and the Russian leader is the Liotech plant [14]. Among the products manufactured by the Liotech plant, the LT-LFP 300 battery with improved energy characteristics is of interest (Table 9, Figures 16 and 17). Figure 16. Dependence of the LT-LFP 300 battery voltage on the discharge rate at different discharge currents. Based on the accumulators produced by the Liotech plant, batteries with an output voltage of UB = 12, 24, 36 or 48 V are currently being developed. The battery voltage fits the output parameters of the controlled rectifier and inverter. The number of batteries connected in series (NSER) is determined using the equation: NSER = Ud0 / (KR·UB), (9) where KR is the coefficient of the decrease in battery voltage during the discharge period (Figure 16), and Ud0 is the constant voltage at the inverter input. In this case, the power and capacity of the storage device are calculated using equations (10)-(11): P = NSER·NPAR·UB·Id, (10) C = NPAR·Crated, (11) where NPAR is the number of parallel branches in the storage battery and Id is the permissible discharge current of one battery. The calculation results for the power and capacity of the battery obtained using equations (10)-(11), with regard to the recommendation that the battery must not be discharged below 0.8 Crated, are shown in Table 10. Power-supply system of the Center The developed concept of the RES-based power-supply system for the Center is shown in Figure 18. To control, monitor and manage the autonomous RES-based energy system of the Center, a network control room (NCR) should be created at the Center. The NCR core is a server for remote control of the system with Supervisory Control and Data Acquisition (SCADA) software. Thus, it should be possible to transfer the necessary data on the state of the equipment and the network parameters (including power consumption and generation) to the server. The principle of managing the RES-based power-supply system of the Center is shown in Figure 19.
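A compact sizing sketch for this storage scheme is given below; the series/parallel relations follow the reconstructed equations (9)-(11). The 40 kW discharge requirement, the 48 V module voltage and the 300 Ah rating come from the text, while the DC-link voltage, the voltage-sag coefficient, the autonomy time and the 80% usable-capacity limit are assumptions used only for illustration.

import math

# Sizing sketch for the LT-LFP 300 based storage (reconstructed relations;
# U_d0, K_R and the autonomy time below are assumptions, not design data).
U_d0 = 530.0       # assumed DC-link voltage at the inverter input, V
K_R = 0.9          # voltage-sag coefficient during discharge (Figure 16)
U_B = 48.0         # battery-module voltage, V (from the 12/24/36/48 V series)
C_rated_ah = 300.0 # rated capacity of one module, Ah (from the text)
P_required = 40e3  # power the storage must feed into the grid, W (from text)
t_supply_h = 2.0   # assumed autonomy requirement, h

n_ser = math.ceil(U_d0 / (K_R * U_B))          # modules in series, eq. (9)
E_required = P_required * t_supply_h           # Wh to be delivered
E_branch = 0.8 * C_rated_ah * n_ser * U_B      # usable energy per series branch
n_par = math.ceil(E_required / E_branch)       # parallel branches needed

print(f"modules in series : {n_ser}")
print(f"parallel branches : {n_par}")
print(f"installed energy  : {n_ser * n_par * C_rated_ah * U_B / 1000:.1f} kWh")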
2021-05-07T00:04:20.208Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "99462e1dbf1f78f4f944b1d5f3015c0bdcaa8447", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/689/1/012023", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "3bd49e3d106d536e47587045b5917357cabeb873", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Physics", "Business" ] }
10534919
pes2o/s2orc
v3-fos-license
STUDY ON THE APPLICABILITY OF THE MODIFIED TOKUHASHI SCORE IN PATIENTS WITH SURGICALLY TREATED VERTEBRAL METASTASIS Objective: To present the results obtained from surgical treatment of patients with vertebral metastases, comparing them with the modified Tokuhashi score in order to validate the applicability of this score for prognostic predictions and for choosing surgical treatments. Methods: This was a retrospective study on 157 patients treated surgically for spinal metastasis at Erasto Gaertner Hospital in Curitiba. The Tokuhashi score was applied retrospectively to all the patients. The patients' actual survival time was compared with the expected survival time using the Tokuhashi score. Results: There were 82 females and 75 males. The most frequent location of the primary tumor was the breast. The thoracic region was involved in 66.2%, the lumbar region in 65.6%, the cervical region in 15.9% and the sacral region in 12.7%. All the patients underwent surgical treatment. The most frequent indication for treatment was intractable pain (89.2%). There was partial or complete improvement in a majority of the cases (52.2%). Out of the 157 cases studied, 86.6% died. The maximum survival time was 13.6 years, the minimum was 3 days and the mean was 13.2 months. The following frequencies of Tokuhashi scores were found among the operated cases: up to 8 points, 111 cases; 9-11 points, 43 cases; and 12-15 points, three cases. The mean survival time in months for all 157 patients according to the Tokuhashi score was: 0-8 points, 15.4 months; 9-11 points, 11.4 months; and 12-15 points, 12 months. Conclusion: Unlike the nonsurgical approach recommended by Tokuhashi for patients with lower scores, this group in our study was sent for surgery, with better results than those of non-operated patients reported by Tokuhashi. Keywords - Neoplasm Metastasis; Spine; Survivorship (Public Health) INTRODUCTION With the progressive increase in overall survival among patients with bone metastasis, oncologists are increasingly faced with cases of secondary bone lesions. Knowledge of the principles for treating such lesions therefore becomes necessary, so that patients can be provided with improved quality of life (1). The aim in treating metastatic bone lesions is to provide patients with improved quality of life through controlling pain and achieving partial or full functional recovery (2,3). Vertebral metastatic disease is difficult to treat, and the prognosis for the patient's life is the main factor in making therapeutic decisions (3).
The treatment options include conservative measures without surgery, such as hormone therapy, chemotherapy/radiotherapy and palliative treatment for pain; and surgical measures, which include complete excision of the lesion, replacement of the vertebra with an endoprosthesis, and decompression and stabilization by means of posterior and/or anterior routes. There is a consensus that surgical intervention is indicated in the following cases of vertebral metastatic lesion: spinal cord compression with myelopathy; vertebral instability with intractable mechanical pain; dislocated fractures of the spine; radiculopathy with progressive or uncontrollable symptoms; tumor growth even after radiotherapy; and direct extension of the primary tumor in the spine, for example in cases of Pancoast tumor (4,5). Strategies for surgical treatment of vertebral metastases have been determined according to the prognosis for each patient's life. One way to determine these patients' prognosis is to use the Tokuhashi score, which has recently been revised. This takes into account six parameters that measure the severity of the clinical picture: 1) general condition; 2) number of bone metastases outside of the spine; 3) number of metastases in vertebral bodies; 4) metastases to important internal organs; 5) primary site of the cancer; and 6) paralysis according to the Frankel scale (Box 1). Each parameter is graded with a score of between 0 and 2 points, except the primary site of the cancer, which in the modified version is graded from 0 to 5. Zero signifies a poor prognosis (6,7). From the scores obtained, the patients are allocated to different survival prognosis groups: from 0 to 8 points, less than six months of survival; from 9 to 11 points, six to twelve months of survival; and from 12 to 15 points, more than twelve months of survival. With regard to treatment, choosing the ideal therapy continues to be a challenge, but this can be facilitated through standardizing the approach based on scales like the Tokuhashi score (8,9). Keeping patients in bed for several days when they are affected by metastatic lesions in the spine cannot be accepted. In such situations, there would be a high likelihood of evolution involving deep vein thrombosis, pulmonary thromboembolism, pneumonia, atelectasis and other clinical complications that put the patient's life into immediate risk. Physicians have the duty to intervene at the right moment and with the necessary degree of aggressiveness, because the evolution of the condition often will not allow future interventions, and such patients end up progressing to death under very poor conditions in terms of quality of life. Box 1 - Classification of the patients based on the initial neurological condition, according to Frankel. FRANKEL CLASSIFICATION: A - Absence of motor or sensory function below the level of the lesion; B - Absence of motor function, but with some degree of sensitivity preserved below the level of the lesion; C - Some degree of motor function, but without practical usefulness; D - Useful motor function below the level of the lesion; E - Normal sensory and motor function, although there may be some abnormality of reflexes. The aim of this study was to present the results obtained from treating patients with vertebral metastases surgically, comparing their real length of survival with the length of survival expected according to the modified Tokuhashi scale, for each score group, in order to validate the applicability of this scale for prognostic predictions and for choosing the surgical therapy. MATERIAL AND METHODS Between the years 1993 and 2008, 157 patients with metastatic disease in the spine were assessed and treated surgically at our service. In conformity with the routine protocol, they underwent computed tomography scans of the cervical, thoracic and lumbar-sacral spine, in order to analyze all segments of the spine, as well as examinations to investigate other metastatic sites and to investigate primary sites if the primary site was unknown. The following data were tabulated: age, primary tumor, sex, location of the metastasis, preoperative neurological state according to the Frankel scale (Box 1), improvement in pain after the operation using a visual analogue pain scale (Figure 1), and the final evolution of the case (survival or death). The indications for surgical treatment were the following: spinal cord compression with myelopathy; spinal instability manifested as fractures, progressive deformation, progressive neurological deficit and intractable pain; radiculopathy with progressive or uncontrollable symptoms; and tumor growth even after radiotherapy or chemotherapy. The treatment methods used were: stabilization of the spine with fixation between segments (using pedicle screws or sublaminar wires and a Hartshill rectangle); spinal cord decompression together with stabilization; and curative surgery with resection of the tumor lesion. A retrospective cohort study was conducted, in which the Tokuhashi scores (Boxes 2, 3 and 4; and Figure 2) were applied retrospectively to all the patients, in order to evaluate the applicability of the scale to the cases studied. The patients' real length of survival was compared with the length of survival that was expected according to the modified Tokuhashi score. Boxes 2-4 - Parameters of the modified Tokuhashi score (extract): Metastases outside the vertebrae: three or more, 0; one or two, 1; none, 2. Metastases in the vertebrae: three or more, 0; one or two, 1; none, 2. Metastases in viscera: not removable, 0. Primary site of the cancer: kidney and uterus, 3; rectum, thyroid and breast, 4; prostate and carcinoid, 5. Neurological state: Frankel A and B, 0; Frankel C and D, 1. Figure 2 - Strategy for treating metastases in the spine according to the total score (conservative treatment or palliative surgery; palliative surgery for a single lesion without metastases in important internal organs). RESULTS Out of the total of 157 patients, 82 (52.2%) were female and 75 (47.8%) were male. The mean age was 53.9 years, with a minimum of 15 and a maximum of 84. Regarding the primary site of the neoplasm, the most frequent sites were the breasts (25.5%) and prostate (21%). The other cases are specified in Box 5. Regarding the location of the lesion in the spine, the thoracic region was affected in 66.2%, the lumbar region in 65.6%, the cervical region in 15.9% and the sacral region in 12.7% (Box 6). In 50.3% of the cases, the tumor involved only one segment of the spine; in 39.5%, two segments; in 9.5%, three segments; and in 0.7%, four segments (Box 7). All the patients underwent surgical treatment, and the most common form of treatment was spinal cord decompression and fixation with pedicle screws. The most frequent indication for surgical treatment was intractable pain, which accounted for 89.2% of the cases. In 39.5%, the indication was paraplegia and in 35.7%, paresthesia. The other symptoms are shown in Box 8. Box 9 shows the number of symptoms presented per patient before the operation. It can be seen that 68.1% presented two or more symptoms together. The preoperative Frankel classification is shown in Box 10. In relation to the improvement of the symptoms after the operation, it was observed that there was partial or full alleviation of the pain in the majority of the cases (52.2%) (Box 11).
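Since the scoring rules and prognosis groups described above are purely additive, they can be captured in a few lines of code. The Python sketch below is a minimal illustration: the per-parameter scores are passed in directly, since the full scoring tables in Boxes 2-4 are only partially reproduced here, and the example values are hypothetical.

# Minimal sketch of the modified Tokuhashi total score and its
# prognostic groups, as described in the text above.
def tokuhashi_group(general_condition: int, extravertebral_mets: int,
                    vertebral_mets: int, visceral_mets: int,
                    primary_site: int, frankel: int) -> tuple:
    total = (general_condition + extravertebral_mets + vertebral_mets
             + visceral_mets + primary_site + frankel)
    if total <= 8:
        prognosis = "expected survival < 6 months"
    elif total <= 11:
        prognosis = "expected survival 6-12 months"
    else:  # 12-15 points
        prognosis = "expected survival > 12 months"
    return total, prognosis

# Hypothetical example: good general condition (2), no extravertebral
# bone metastases (2), a single vertebral lesion (1), no visceral
# metastases (2), breast primary (5), Frankel E (2) -> 14 points.
print(tokuhashi_group(2, 2, 1, 2, 5, 2))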
The criterion for assessing the postoperative symptoms was the visual analogue pain scale alone (pain was the main symptom mentioned). No postoperative functional assessment on the Frankel scale was available to us in the patients' medical files; instead, only the pain assessment was available. Because of the retrospective nature of this study, it was thus not possible to make a more detailed analysis of the clinical improvement. Out of the 157 cases studied, 86.6% progressed to death. The maximum length of survival was 13.6 years, the minimum was three days and the mean was 13.2 months. Currently, 5.1% of the patients are still alive, for a maximum time of six years, a minimum of 18 months and a mean of 3.6 years. The remaining 13 of the 157 patients were lost to follow-up after a maximum of 6.08 years, a minimum of 15 days and a mean of 7.4 months (Boxes 12 and 13). The scores among the operated cases, according to the Tokuhashi scale, presented the following frequencies: up to 8 points, 111 cases; from 9 to 11 points, 43 cases; and from 12 to 15 points, three cases (Box 14). The mean length of survival in months for all the 157 patients is specified for each of the Tokuhashi groups in Box 15. Box 13 - Length of survival in months (mean, maximum and minimum) by outcome group; for the patients who progressed to death, the mean was 13.2 months. Box 15 - Length of survival in months according to the Tokuhashi scores; the means of the score groups were tested for differences at the 5% significance level (p = 0.599). DISCUSSION Primary tumors in the spine are rare and represent less than 10% of bone tumors. The great majority of vertebral tumor lesions consist of secondary lesions, especially in adult patients (2). Around 50% to 70% of the patients who die because of malignant neoplasia present bone metastases in the spine (2,10). Over recent years, there have been major advances in the staging and treatment of tumor lesions. This has made longer survival possible for individuals with tumors in the spine, and especially for those who present metastatic lesions, who have noticeably increased in numbers over the last few years (4,11). The literature shows that applying the Tokuhashi score is valid for determining the prognosis for the lives of patients affected by spinal metastases, which subsequently assists in determining whether surgical or conservative treatment is indicated. However, other factors that are not evaluated or included in this and other similar scales, such as the degree of maturity of the neoplasia, the stage of the disease at the time of diagnosis and the degree of impairment of renal function, are also determinants of the prognosis (1). Evaluations of surgical treatments for spinal tumors present major limitations because of the heterogeneity of the patient samples studied with regard to the etiology of the tumor lesion. This creates difficulty in the final assessment of the results, since it cannot be forgotten that tumors of differing etiologies present different biological behavior. It is very difficult to accumulate a significant number of patients with vertebral tumors of the same etiology, and hence the studies that have been carried out present major limitations. With regard to the primary tumor, the data gathered from the patients in the present study were concordant with the literature, given that breast and prostate neoplasia were the most frequent origins for metastases.
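The group comparison summarized in Box 15 can be reproduced in outline with standard tools. In the Python sketch below, the individual survival times are synthetic draws centred on the published group means, since patient-level data are not given; the resulting p-value is therefore only qualitatively comparable to the reported p = 0.599.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic survival times (months) centred on the published group means
# (15.4, 11.4 and 12 months); real patient-level data are not available.
g1 = rng.exponential(15.4, size=111)   # 0-8 points, n = 111
g2 = rng.exponential(11.4, size=43)    # 9-11 points, n = 43
g3 = rng.exponential(12.0, size=3)     # 12-15 points, n = 3

f_stat, p_value = stats.f_oneway(g1, g2, g3)   # one-way ANOVA across groups
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # paper reports p = 0.599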
Overall, other studies found that 70% of the cases had metastases located in the thoracic spine, 20% in the lumbar-sacral region, 10% in the cervical segment and 17-30% in more than one segment (2,3) . In our sample, the thoracic region was affected in 66.2%, the lumbar-sacral region in 78.3% and the cervical region in 15.9%, and in 49.7% of the cases, more than one segment of the spine was compromised. Contrary to the literature, we observed that the metastases were predominantly in the lumbar-sacral region, with a greater number of multiple metastases. With regard to the treatment, given that the main determining factor for the type of treatment for vertebral metastases is the prognosis, scores such as Tokuhashi help to determine the prognosis for the disease and to choose the therapy. However, they are not the only tool for indicating the type of treatment. Other, separate variables need to be taken into consideration, such as: intractable pain; expansive lesions that do not respond to oncological treatments like radiotherapy, chemotherapy or hormone therapy; and spinal instability and spinal cord compression manifested clinically (1) . Of these symptoms, pain is the main manifestation in patients with metastatic bone lesions. Paraplegia may be another early sign caused by vertebral metastasis. Patients with significant neurological deficits but without paraplegia may present considerable improvement of their deficiencies through surgical treatment that aims to achieve decompression and stabilization. However, there is insufficient evidence to recommend routine surgical intervention for paraplegic patients (2,12,13) . In our study, the main symptom reported was also pain, which was present in 89.2% of the cases. Paraplegia was present in 39.5%, and paresthesia in 35.7%; 68.1% of the patients presented two or more symptoms in association. Regarding the improvement in symptoms after the operation among our sample, we observed that there was partial or full alleviation of the pain in the majority of the cases (52.2%). Surgical intervention is beneficial in the presence of metastatic disease in the spine, particularly with regard to alleviation of the pain symptoms caused by the metastasis, and to improvement of quality of life. However, there is insufficient evidence to ascertain whether survival is longer after surgery. Thus, surgery in such situations is palliative but improves patients' quality of life, provided that the criteria for its indication are respected (4) . Regarding the indications for treatment in the present study, they were not based primarily on the prognosis for patients' lives, given that in the cases presented here, the scores were applied retrospectively. The parameters for indicating treatment were: neurological deficit, instability due to fractures, or intractable pain that did not respond to conservative analgesia or radiotherapy measures. This approach was shown to be valid after retrospective analysis, given that the patients with poor prognoses achieved better quality of life. The distribution of the Tokuhashi scores for the cases operated at Hospital Erasto Gaertner, in Curitiba, was as follows: 111 cases with up to eight points; 43 cases with 9 to 11 points and three cases with 12 to 15 points, such that scores of 0-8 points had the worst prognosis and 12-15 had the best prognosis. 
The finding that most of our cases had scores between 1 and 8 represents the reality of an oncological reference hospital, to which cases of greater complexity are referred for evaluation by specialists in this field. However, even though the greatest number of cases belonged to the group with the worst prognosis according to the Tokuhashi scale, we observed that the mean survival in this group (15.4 months) was greater than the expected mean survival for the same category (less than six months). In the other groups, the expected survival according to the Tokuhashi score was longer: 6-12 months for the group with 9-11 points and more than 12 months for the group with 12-15 points, and we observed that for our sample, the mean survival for these groups was within the expectations: 11.4 and 12 months respectively. We believe that although surgical treatment for spinal tumors is of large magnitude and not free from complications, it should not be postponed in the presence of the indications mentioned earlier, since accomplishing it may satisfactorily change the clinical evolution of this group of patients. However, surgical indications based on determining the patient's prognosis and expected length of survival using scores like Tokuhashi were not shown to be applicable in this study, considering that patients who, according to the modified Tokuhashi score, would not be indicated for surgery because of the poor prognosis (score from 0 to 8) underwent operations within our service and achieved longer survival than would have been expected using the score. Thus, our study showed that there was also a benefit from surgically treating the patients whose prognoses were worse. In this manner, in metastatic disease, each patient should be considered individually, and the decision regarding the best treatment method should be made after complete staging. Although this treatment is palliative, it is important to avoid an attitude of "disbelief and disregard". The consequences of such an attitude could be disastrous and irreversible, and might lead patients and their families into inhuman situations. Despite the existence of many scales for assisting in making therapeutic indications and their validity for determining prognoses, good sense should be used in making decisions regarding the treatment objectives, methods and approaches, and this always requires a multidisciplinary team. Maintaining an active and proactive attitude regarding treatments for these metastases may help to improve the patients' quality of life, sometimes for a period of years. CONCLUSION Although this study was a retrospective analysis and the therapeutic indications were not applied based on the modified Tokuhashi scores, we formed an impression that the survival was longer in the group with worst prognosis than would be expected from the scores. Thus, differing from the non-surgical approach recommended for patients with lower scores, this group in our study was sent for surgery with better results than those found for non-operated patients reported by Tokuhashi .
2016-05-12T22:15:10.714Z
2011-07-01T00:00:00.000
{ "year": 2015, "sha1": "66943f9b6f68d355def45ca4ff222bec91fdfef7", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/s2255-4971(15)30257-3", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "cd54babb709b07461c4cd3367f13a08732592fe8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16670446
pes2o/s2orc
v3-fos-license
Management and follow up of extra-adrenal phaeochromocytoma Introduction The prevalence of phaeochromocytoma (PCC) in patients with hypertension is 0.1-0.6%, and about 10% of PCCs are detected in extra-adrenal tissue. The diagnosis and therapy of this rare disease, detected as a retroperitoneal tumor mass, can be difficult for clinicians. Material and methods The PubMed database was searched for peer-reviewed articles; articles listed until December 2012 were included. The following key words were used: "extra-adrenal phaeochromocytoma", "paraganglioma", "diagnosis", "therapy", "surgery", "genetic analysis", and "SDH mutation". Results Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) are the first-choice imaging tools for PCC (sensitivity 90-100%). For validation of the diagnosis or follow up, functional imaging with 123I-metaiodobenzylguanidine (MIBG) or fluorine-18-L-dihydroxyphenylalanine (18F-DOPA) positron emission tomography (excellent specificity, and sensitivity of 90-100% in the detection of small tumors >1-2 cm) is used. Laparoscopic surgery with complete resection is a safe, first-choice approach. Conversion to direct open operation (about 5%) was needed for large lesions (>8 cm) with suspected malignancy. Currently, there are no histological criteria for distinguishing benign and malignant tumors. Genetic testing (Sanger DNA sequencing) for hereditary syndromes (von Hippel-Lindau, neurofibromatosis, etc.) is used for the prediction of malignancy and recurrence. All patients should receive individual, risk-adapted genetic analysis and consultation, including family members. The rate of malignancy in ePCC is about 30% (in PCC, about 5-10%). In patients with proven SDHB germline mutations, a higher malignancy rate, multiple PCCs and recurrences are likely. A stringent lifelong clinical follow-up is recommended in these cases. Patients with syndromic hereditary forms should be screened for other, often associated neoplasms. Conclusions New imaging tools and genetic analysis are crucial to improve the diagnosis and prognosis of phaeochromocytoma. INTRODUCTION The classification by the World Health Organization, shaped largely by pathologists, applies the term phaeochromocytoma only to adrenal medullary tumors and uses the term paraganglioma for all other locations. For practical reasons, the definition by Neumann is applied in the present review [1]. Here, phaeochromocytoma (PCC) is used for adrenal and extra-adrenal (15%) abdominal, thoracic, and pelvic tumors; these tumors are mostly hormone-active. Paraganglioma (PGL) is used exclusively for head and neck tumors. Patients with apparently sporadic phaeochromocytoma may be carriers of germline mutations [3,4]. About 10% of the inherited tumors are malignant, depending on the location of the gene mutation [5]. MATERIAL AND METHODS The PubMed database was searched for peer-reviewed articles; the articles listed until December 2012 were included. RESULTS Retroperitoneal phaeochromocytoma is a rare differential diagnosis for malignant renal tumors; it is a rare disease with an incidence of 2-8 cases per million per year, found in about 0.1-0.6% of patients with hypertension. The tumors can occur from early childhood until late in life, with a mean age of 40 years at the time of diagnosis. The once-used "rule of tens", stating the frequencies of inherited, malignant, bilateral and extra-adrenal tumors, is out of date; the new diagnostic and genetic methods discount this rule.
Higher frequencies of inherited, malignant, bilateral and extra-adrenal tumors were detected, depending on the affected gene. PCC can now be referred to as a "10-gene tumor", based on the number of susceptibility genes identified to date [6]. About 25% of patients have an inherited condition associated with different mutations and other tumor entities (VHL, RET, NF1 and SDH genes) [4]. The symptoms are very variable: episodes of palpitation, hypertension, headaches, and profuse sweating are most typical, reflecting episodic hormone release. Biochemical diagnostics The diagnosis is based on testing for catecholamine excess in plasma and urine and on localization of the tumor by imaging. The sensitivity and specificity of the available biochemical tests differ considerably, with the highest sensitivity for plasma-free and urinary-fractionated metanephrines (metanephrine and normetanephrine). Metabolism of catecholamines to metanephrines occurs continuously within tumor cells, by a process independent of catecholamine release. If metanephrines are not elevated, a phaeochromocytoma is unlikely; if metanephrines are strongly elevated (>3-4x), the diagnosis is likely and further imaging diagnostics should be performed. Borderline elevations are likely to be false positives, and a clonidine suppression test should be performed [1,7]. Imaging Imaging offers no clear criteria for distinguishing renal cell carcinoma from phaeochromocytoma. The first choice is MRI (gadolinium contrast), showing a hyperintense mass in T2-weighted images. The infiltration of local organs and vessels can be evaluated better with MRI than with CT. Another benefit is that no iodinated contrast is needed. Alternatively, a CT scan with contrast can be performed with nearly the same sensitivity (90-100%). The benefits of a CT scan are low cost, good availability and high sensitivity (detection of lesions of 0.5-1 cm). CT has a low specificity for PCCs, since morphological imaging cannot distinguish these tumors from other types of adrenal masses. Small extra-adrenal tumors can be missed with MRI or CT. It is necessary, especially in hereditary PCCs and PGLs, to apply nuclear medicine procedures for tumor localization, validation and follow up. Radioactive tracers such as 131I- or 123I-metaiodobenzylguanidine (MIBG), octreotide (somatostatin analogue), or fluorine-18-L-dihydroxyphenylalanine (18F-DOPA) positron emission tomography are commonly used [1,2,5,8,9]. MIBG is a traditional choice of imaging for neuroendocrine tumors, since it is more available and can be used for planning MIBG therapy in metastatic disease, but its resolution and sensitivity (SPECT) are inferior to DOPA-PET/CT. Excellent specificity, and a sensitivity of 90-100%, has been published for 18F-DOPA in the detection of small tumors >1-2 cm. A study by Hoegerle et al. showed that 18F-DOPA PET/CT had a higher spatial resolution and a more selective, clearer radiotracer accumulation in PCCs than did 123I-MIBG SPECT [10]. The problem with PET/CT is its limited availability and high cost, which currently is not reimbursed by medical insurance for this indication. The differential diagnoses of a retroperitoneal mass include, amongst others, hemangioblastoma, sarcoma, renal tumors, non-Hodgkin's lymphoma and adrenal adenoma. Most frequently, adrenal masses are represented by benign cortical adenomas, which can cause a mild hypercortisolism. The criteria suggesting malignancy are the presence of excessive hormone production and a tumor size >4-6 cm.
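The decision rule described above (normal metanephrines make PCC unlikely, a >3-4x elevation makes the diagnosis likely, and borderline results call for a clonidine suppression test) lends itself to a simple illustration. In the Python sketch below, the upper reference limit and the example values are placeholders; actual cut-offs are assay- and laboratory-specific.

# Illustrative triage rule for plasma-free metanephrine results, following
# the interpretation given in the text; the URL value is an assumption.
def interpret_metanephrines(value: float, url: float = 0.5) -> str:
    """Return a suggested next step for a plasma-free metanephrine result."""
    if value <= url:
        return "normal: phaeochromocytoma unlikely"
    if value > 3 * url:
        return "strongly elevated (>3-4x URL): proceed to imaging"
    return "borderline: possible false positive, clonidine suppression test"

for v in (0.3, 0.9, 2.4):   # hypothetical results, nmol/L
    print(f"{v:.1f} nmol/L -> {interpret_metanephrines(v)}")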
Fine-needle aspiration biopsy (FNA) is not useful for distinguishing between benign and malignant lesions [1]. Treatment The treatment options for phaeochromocytoma and the approach of the surgeon have to be discussed and determined. Preoperative patient preparation is essential for safe surgery. The problem is potential perioperative hemodynamic instability, with tachycardia, arrhythmia, and hypertensive crisis due to catecholamine secretion. Before the operation, a cardiologic checkup should be performed, with antihypertensive medication if needed. Premedication with an α-blocker is generally not required. During the operation, a rise in blood pressure can be controlled by, e.g., short-acting calcium antagonists, and tachyarrhythmia can be treated with infusion of a short-acting β-blocker. However, perioperative cardiologic complications are rare, and patients do not need intensive surveillance. The major postoperative complications are hypotension and hypoglycemia due to the fall in circulating catecholamines. Postoperatively, the antihypertensive medication can be reduced slowly, and the blood pressure will regularly normalize a few days after the operation. The perioperative mortality has been reduced over the years to less than 3%, mainly due to improved anesthesiological and operative management [1,7,11,12]. The intraoperative aim is complete surgical resection, even if this is often challenging because of the strong vascularization of the tumor and its location near multiple vital blood vessels. The treatment should be performed in specialized centers; otherwise, a second opinion is necessary. All patients with phaeochromocytoma, including those at extra-adrenal abdominal, pelvic, and thoracic sites, should initially be considered for endoscopic operation. The results suggest that the endoscopic approach, compared with the open one, leads to a shorter hospital stay and less blood loss. Moreover, faster recovery and better cosmetic results were detected after endoscopic surgery [11][12][13][14]. The conversion rate from endoscopic to open surgery is about 5%, the reasons being large tumor size, malignancy, and bleeding [11,15]. The open procedure should be reserved for large extra-adrenal tumors with suspected malignancy [15,16]. The retroperitoneal approach with a "no touch" technique seems to be better than the transperitoneal one. Multiple tumors should be removed in a single operation [12]. Walz and colleagues reported the largest trial (a retrospective, non-randomized study), with 144 retroperitoneoscopic or laparoscopic operations for PCCs. The mean tumor size was only 3.5 cm, and conversion to open surgery occurred only once. The authors also reported excellent results with 11 ePCCs located mostly below the renal vein. Contraindications for the laparoscopic approach include tumors bigger than 8 cm, malignancy, and extreme obesity (BMI >45) [12]. Adrenal-sparing surgery is routine for extra-adrenal tumors, especially in bilateral familial phaeochromocytoma (von Hippel-Lindau, MEN 2), and can be managed endoscopically. Bilateral adrenalectomy was performed in the past and led to a life-long dependency on steroid and mineralocorticoid replacement. Postoperatively, catecholamine normalization should be documented, and a cortisol deficiency should be excluded if bilateral adrenal cortex-sparing surgery was performed [17]. The prognosis for a completely resected sporadic phaeochromocytoma is excellent. If the tumor is completely removed, the relapse and malignancy risks are low [18,19].
However, about one-third of patients with hereditary extra-adrenal disease have a recurrence [13]. Surgery for recurrent tumor lesions is still controversial; only data with small numbers of patients (n <10) are available. Recurrent lesions potentially require more intensive and longer surgical preparation. Another problem is probably the increased pCO2 and its effects on blood pressure during laparoscopy [16]. Other studies showed good results of minimally invasive operations for small-sized tumor relapses, without a higher risk of complications [12]. There are no histological criteria for determining malignant disease. The most common metastatic sites are the skeleton, lungs, liver, and lymph nodes. The treatment is symptomatic or based on palliative radio-chemotherapy (cyclophosphamide, vincristine, dacarbazine), with a 5-year survival of 30-60% [1,11]. Patients with hereditary disease mutations present higher rates of malignant disease, depending on the location of the mutation. Genetics Given the relatively high prevalence of familial syndromes (about 25%) among patients who present with phaeochromocytoma or paraganglioma, it is useful to identify germline mutations, even in patients without a known family history. Two-thirds of extra-adrenal tumors are associated with one of the hereditary syndromes and have a higher risk of multifocal locations. Other family members should be screened if a germline mutation is detected [1,5,7,20,21]. The most frequent germline mutations responsible for familial PCCs are: the von Hippel-Lindau gene (VHL), which causes von Hippel-Lindau syndrome; the RET gene, leading to multiple endocrine neoplasia type 2; the neurofibromatosis type 1 gene (NF1), which is associated with von Recklinghausen's disease; and the genes encoding the B, C and D subunits of mitochondrial succinate dehydrogenase (SDHB, SDHC and SDHD), which are associated with familial paragangliomas and phaeochromocytomas [7,11] (Table 1). About 20% of patients with VHL syndrome present with phaeochromocytomas, which are often bilateral and occur at multifocal abdominal or thoracic locations. Associated tumors are renal clear-cell carcinomas and cysts, primitive neuroectodermal tumor (PNET), central nervous system and retinal hemangioblastomas, pancreatic tumors and cysts, endolymphatic tumors, and epididymal cystadenomas. Malignant disease is rare, but RCC and PNET should be excluded. In multiple endocrine neoplasia type 2 (5-10%, autosomal dominant), bilateral tumors occur often (50-80%), while extra-adrenal tumors or malignant disease are very rare. The clinical presentation is not always evident, because the penetrance of the disease is age-dependent. An associated tumor is medullary thyroid carcinoma, which should be operated on early [7]. The prevalence of phaeochromocytoma in neurofibromatosis type 1 is relatively low (1-3%); because of this, routine screening for the tumor is not generally recommended. Succinate dehydrogenase, or succinate-ubiquinone reductase, is complex II of the mitochondrial respiratory chain, located in the mitochondrial matrix. SDH is an enzyme complex composed of four subunits encoded by four nuclear genes (SDHA, SDHB, SDHC and SDHD) [20]. Mutations in the four SDH complex subunits and SDHAF2 have been detected in PCC, but their frequency, site, and malignancy vary. SDHA mutations give rise to severe neurodegeneration and myopathy, and to rare cases of malignancy [22]. An associated protein, SDHAF2, is implicated in the flavination of SDHA and is essential for SDH function.
No metastases were found in carriers of mutations associated with multifocal paraganglioma [24]. SDHC mutations are rare and are mostly associated with PGL [21]. Mutations of the SDHB and SDHD genes have been seen in about 5-10% of patients with non-syndromic phaeochromocytoma [23]. SDHB mutations are often associated with extra-adrenal PCC and an increased rate of malignant disease (up to 50%). Rare associated tumors are renal-cell carcinomas; this is not clear for thyroid papillary carcinoma. Carriers of SDHD mutations inherited from the father develop the disease (often paraganglioma), while those who inherit the mutation from the mother remain disease-free [7,11,25]. The lifetime tumor risk for SDH mutations seems higher than 70%, with variable clinical manifestations depending on the mutated gene [20]. The largest web-based, gene-specific DNA variant database, the Leiden Open Variation Database (LOVD), reported 347 indexed cases as carriers of SDHB and 253 indexed cases as carriers of SDHD germline mutations in August 2011. Over a hundred unique DNA variants were described for each of the genes. The mechanism whereby SDH mutations (mostly SDHB) predispose to malignancy is unclear. In some instances, the SDH subunits apparently behave as tumor suppressor genes, with somatic loss of heterozygosity occurring in neoplastic transformation [21]. Pasini and Stratakis [20] present results that strongly suggest activation of the hypoxia/angiogenesis pathway as a possible mechanism underlying tumor development. In malignant phaeochromocytoma with somatic terminal deletion of 1p (SDHB mutation), SDH activity was abolished, with increased expression of the vascular endothelial growth factor receptors VEGF-R1 and VEGF-R2 in endothelial cells. TMEM127 is a recently detected tumor suppressor gene, which encodes a protein linked to mTOR signaling. Affected patients typically have adrenal phaeochromocytomas (often bilateral), and malignancy is infrequent. The frequency of TMEM127 mutations in PCC is low (about 2%), and the need for testing is not clearly defined [26,27]. New studies have identified germline inactivating MAX mutations in PCC, with an association with malignant outcome and preferential paternal transmission. MAX is a key component of the MYC-MAX-MXD1 network that regulates cell proliferation and differentiation [28]. Genetic information could potentially be useful for the surgeon: in cases at high risk of postsurgical complications, especially mediastinal tumors or those at the base of the skull, it could help in deciding between watchful waiting and surgical removal [5]. For analysis, genetic testing and immunohistochemistry should be performed. The exact sensitivity and specificity of these methods vary and have to be determined. Follow up Generally, all patients should be followed up every year for at least 10 years after surgery, and patients with extra-adrenal or familial phaeochromocytoma lifelong. If genetic testing is negative in a patient with phaeochromocytoma, recurrence is very unlikely [11]. Pasini and Stratakis [20] consider patients with SDH mutations a high-risk group and postulate a minimum follow-up program (a careful history and physical examination, annual measurement of the blood pressure and urinary catecholamines, in addition to biannual imaging with CT and/or MRI), starting in the second decade of life (first decade in SDHB mutation carriers).
CONCLUSIONS The management of patients with phaeochromocytoma should be performed by teams of experienced anesthesiologists and surgeons in order to prevent perioperative complications and reduce morbidity. Endoscopic, organ-sparing approaches should be favored. Currently, there are no histological criteria for distinguishing between benign and malignant tumors. Genetic diagnostics is a crucial tool for follow-up and for counselling patients and their families. Testing of all genes is expensive and time-consuming; for a unilateral tumor, age >40 years, and no family history, testing is probably not necessary. The monitoring of patients is based on clinical and biological examination (measurement of urinary catecholamines). Extra-adrenal phaeochromocytoma is a rare differential diagnosis in patients presenting with a retroperitoneal tumor mass and hypertension. Because of the described pitfalls, it is important to consider it and to consult experts early on.
2016-05-14T10:39:34.075Z
2014-06-23T00:00:00.000
{ "year": 2014, "sha1": "0e764dcaa71ffd30392f6f3dc8e73a4d62ba66fc", "oa_license": "CCBYNCND", "oa_url": "https://europepmc.org/articles/pmc4132600?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "0e764dcaa71ffd30392f6f3dc8e73a4d62ba66fc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248190857
pes2o/s2orc
v3-fos-license
Population Pharmacokinetics of Intravenous Acyclovir in Oncologic Pediatric Patients Background: Acyclovir represents the first-line prophylaxis and therapy for herpes virus infections. However, its pharmacokinetics in children exposes them to the risk of ineffective or toxic concentrations. The study was aimed at investigating the population pharmacokinetics (POP/PK) of intravenous (IV) acyclovir in oncologic children. Methods: Patients (age, 8.6 ± 5.0 years; 73 males and 47 females) received IV acyclovir for prophylaxis (n = 94) or therapy (n = 26) under therapeutic drug monitoring (target minimum and maximum plasma concentrations >0.5 and <25 mg/L, respectively). Plasma concentrations were fitted by nonlinear mixed effect modeling, and a simulation of dosing regimens was performed. Findings were stratified according to an estimated glomerular filtration rate (eGFR) threshold of 250 ml/min/1.73 m2. Results: The final 1-compartment POP/PK model showed that eGFR had a significant effect on drug clearance, while allometric body weight influenced both clearance and volume of distribution. The population clearance (14.0 ± 5.5 L/h) was consistent across occasions. Simulation of a standard 1-h IV infusion showed that a 10-mg/kg dose every 6 h achieved target concentrations in children with normal eGFR (i.e., ≤250 ml/min/1.73 m2). Increased eGFR values required higher doses, which led to an augmented risk of toxic peak concentrations. On the contrary, simulated prolonged (i.e., 2- and 3-h) or continuous IV infusions at lower doses increased the probability of target attainment while reducing the risk of toxicities. Conclusion: Due to the variable pharmacokinetics of acyclovir, standard dosing regimens may not be effective in some patients. Prospective trials should confirm the therapeutic advantage of prolonged and continuous IV infusions. INTRODUCTION Herpesvirus infections in immunocompromised patients, particularly in hematopoietic stem cell transplant (HSCT) recipients, lead to severe disease with high dissemination rates, complications, and mortality (Beyar-Katz et al., 2020). Herpes simplex virus (HSV) is a ubiquitous virus that results in lifelong infections due to its ability to alternate between lytic replication and latency (Ly et al., 2021). The worldwide prevalence of HSV-1 increases consistently with age, reaching 40% by age 15 and increasing to 60%-90% in older adults (Chayavichitsilp et al., 2009). Up to 80% of adult leukemia patients are HSV seropositive, and up to 80% of HSV-seropositive allogeneic HSCT recipients have post-transplant HSV reactivation (Styczynski et al., 2009; Flowers et al., 2013). In the first post-transplant year, symptomatic varicella-zoster virus (VZV) reactivation in adult recipients is described at rates of 13%-55% (Beyar-Katz et al., 2020). Similarly, 30%-33% of pediatric HSCT recipients had VZV reactivation, and 11% of these were disseminated (Fisher et al., 2008). In the current era of antiviral prophylaxis in seropositive HSCT recipients, the infection rate has decreased significantly, along with a significant reduction in mortality (Dadwal, 2019). In pediatric patients undergoing allogeneic HSCT, surveillance algorithms, antiviral prophylaxis, or pre-emptive treatment are well established for many viruses, due to the high incidence of severe systemic complications in this population (Czyzewski et al., 2019; Jaing et al., 2019).
In contrast to HSCT recipients, data on systemic viral infections in children receiving chemotherapy for hematological malignancies are very limited (Buus-Gehrig et al., 2020). However, a few reports confirm that children with malignant diseases who experience prolonged periods of myelosuppression due to cytotoxic chemotherapy are highly susceptible to invasive viral infections (Feldman and Lott, 1987). Most viral reactivation in adult cancer patients during neutropenia after myelotoxic chemotherapy is due to HSV (Saral et al., 1984). However, despite the high infection rate, there is not enough evidence from randomized trials on acyclovir prophylaxis in patients with acute leukemia to establish a strong recommendation in adult and pediatric patients undergoing intensive chemotherapy (Styczynski et al., 2009). Acyclovir (ACV) effectively prevents and treats HSV and VZV infections but demonstrates high interindividual variability in its treatment response. Indeed, ACV prophylaxis is recommended for all HSV-seropositive HSCT recipients from conditioning until engraftment, or until mucositis resolves, to prevent HSV reactivation during the early post-transplant period. For VZV-seropositive HSCT recipients, antiviral prophylaxis is recommended for at least one year, while for VZV-seronegative HSCT recipients, passive immunization is preferred (Tomblyn et al., 2009; Carreras et al., 2018). For instance, the prophylactic regimen of ACV of the European Blood and Marrow Transplant Group consists of 250 mg/m2 (or 5 mg/kg) IV every 12 h, while the treatment of infections needs an intensified regimen (250 mg/m2 or 5 mg/kg IV every 8 h for 7-10 days) (Carreras et al., 2018). Due to the time-dependent killing of ACV, plasma concentrations should be higher than 1 mg/L for at least 50% of the time interval between two consecutive doses (Saiag et al., 1999), so that a minimum plasma concentration >0.5 mg/L could be considered an appropriate target. Moreover, high peak concentrations in plasma (i.e., >50 mg/L) are associated with an increased risk of neurotoxicity (Wade and Monk, 2015), even if the correlation between high plasma concentrations and the risk of nephrotoxicity and bone marrow adverse events remains to be fully investigated. ACV is eliminated by renal glomerular filtration and tubular secretion, has a low oral bioavailability of approximately 10% in children (Carcao et al., 1998), and a short half-life; hence, it requires high and repeated doses to exceed the value of the inhibitory concentration (Abdalla et al., 2020). Furthermore, ACV shows high interindividual variability (Blum et al., 1982; O'Brien and Campoli-Richards, 1989), which is particularly evident in the youngest patients and is related to changes in renal function during the first months after birth and to body weight across the ages (De Miranda and Blum, 1983; Zeng et al., 2009; Abdalla et al., 2020). In addition to this, the genetic status of the patient may be considered a further cause of variability in pharmacokinetics and clinical outcome, as recently demonstrated for the NUDT15 polymorphism (Nishii et al., 2021). Therefore, information regarding the optimal use of ACV in children with malignancies is restricted, because pharmacokinetic data are limited in this population (Zeng et al., 2009; Abdalla et al., 2020).
This study aims to characterize the pharmacokinetics of ACV following intravenous (IV) administration and to evaluate the adequacy of current dose regimens for children with malignancy undergoing myelosuppressive chemotherapy or HSCT. Additionally, the study aims to explore alternative dosing regimens that could be more effective and tolerable. Study Design and Population This prospective, single-center, observational study was carried out at the pediatric Onco-Hematology Department and pediatric Bone Marrow Transplant Center of the Institute for Maternal and Child Health-IRCCS "Burlo Garofolo," Trieste, Italy, from 2011 to 2020. The Institutional Review Board of the IRCCS Burlo Garofolo (reference no. IRB RC 10/20) approved the protocol, and the study was conducted following the Declaration of Helsinki (Clinicaltrials.gov code: NCT05198570). The patients' parents gave their written consent to the collection and use of personal data for research purposes. From January 2011 to December 2020, consecutive patients aged 0-18 years, affected by hematological malignancies and undergoing ACV prophylaxis or treatment for HSV-VZV infection during allogeneic HSCT, or ACV treatment during high-intensity chemotherapy, were included in this study. All patients underwent ACV therapeutic drug monitoring (TDM), and consequent dose adjustment was applied to maintain minimum (Cmin) and maximum (Cmax) plasma concentrations >0.5 mg/L and <25 mg/L, respectively, as per laboratory practice. Data collection included patient demographic characteristics such as gender, age, weight, height, body mass index, body surface area and serum creatinine, in addition to the primary diagnosis, treatment for the primary diagnosis, ACV dose, administration interval, concomitant medications, duration of ACV treatment, and cause of treatment interruption. The Schwartz formula was used to determine the estimated glomerular filtration rate (eGFR) for each patient (Schwartz et al., 1976). ACV Administration Regimens and Blood Sampling ACV was administered intravenously every 6-8 h (Acyclovir Recordati®, Biologici Italia Laboratories S.r.l., Masate, Milan, Italy) over a 60-min infusion, with median (range) starting daily doses of 40.7 (15.6-136.7) mg/kg/day. The prescribed doses followed the local protocols and depended on age, treatment indication, and renal clearance. These differences in dosage reflect systematic changes in institutional practice over the ten years covered by the study. The first pharmacokinetic assessment (two samples) was performed after at least four days of ACV administration, when the steady state had been achieved. In particular, blood samples were withdrawn 10 min before (trough level, Cmin) or 30 min after the IV infusion (maximum concentration, Cmax). For some patients, blood samples were collected on several occasions. Blood samples were centrifuged, and plasma concentrations of ACV were measured by liquid chromatography-tandem mass spectrometry (LC-MS/MS) performed in multiple reaction monitoring mode (Kanneti et al., 2009). Briefly, after deproteinization, calibration standards, quality controls (both prepared in human blank plasma) and patients' plasma samples (using fluconazole as internal standard, IS) were prepared by solid phase extraction (Oasis HLB Vac Cartridge, Waters, Milford, CT, United States). Acyclovir and the IS were eluted through a C18 column with a mobile phase consisting of 0.1% formic acid solution and methanol (30:70 v/v), at a flow rate of 0.8 ml/min in isocratic conditions.
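The Schwartz estimate used above reduces to a one-line relation, eGFR = k × height / SCr; a minimal Python sketch follows. The k value shown corresponds to the commonly cited constant for children of 2-12 years, and the height and creatinine inputs are purely illustrative.

# Bedside Schwartz estimate of GFR (Schwartz et al., 1976):
# eGFR = k * height / serum creatinine. The example inputs are illustrative.
def schwartz_egfr(height_cm: float, scr_mg_dl: float, k: float = 0.55) -> float:
    """Return eGFR in ml/min/1.73 m2 (k = 0.55 for children aged 2-12 years)."""
    return k * height_cm / scr_mg_dl

print(f"{schwartz_egfr(130.0, 0.35):.0f} ml/min/1.73 m2")  # ~204, near the study median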
The LC-MS apparatus worked in positive ion mode, and quantification was performed in multiple reaction monitoring (MRM) of the transition ions m/z 226.3 > 152.1 and m/z 306.9 > 219.9 for ACV and the IS, respectively.

Pharmacokinetic Modeling and Simulation
The population pharmacokinetic (POP/PK) analysis was performed on the available plasma concentrations using a nonlinear mixed-effect modeling approach with the NONMEM software vers. 7.4 (ICON, Dublin, Ireland) and the packages PsN and Xpose (Jonsson and Karlsson, 1999; Lindbom et al., 2004). One- and two-compartment models with first-order elimination were tested, while the residual error was modeled as additive, proportional, and mixed. Interoccasion variability (IOV) was evaluated for the pharmacokinetic parameters. The introduction of covariates within the model was guided by their range of values in the dataset and their possible mechanistic involvement in ACV pharmacokinetics. Overall, decreases in the objective function value (OFV) greater than 3.84 points (p < 0.05) and 6.63 points (p < 0.01) in the forward inclusion and backward exclusion steps, respectively, were required during model development. The conditional weighted residuals (CWRES) were calculated (Hooker et al., 2007), and the goodness-of-fit (GOF) plots, the precision of parameter estimates, and the η- and ε-shrinkage values were evaluated for each model. A bootstrap analysis and a prediction-corrected visual predictive check (pcVPC) were used to judge the final model (Bergstrand et al., 2011). The terminal elimination half-life (t1/2) was calculated as t1/2 = ln(2)/kel, where kel is the individual empirical Bayes estimate of the ACV elimination rate constant. The NONMEM software was used to simulate different drug administration schedules based on the final POP/PK model. The simulation included dosages in the range 15-30 mg/kg administered as standard (i.e., 1 h), prolonged (i.e., 2 and 3 h) and continuous IV infusions every 6 and 8 h in 1,000 individuals for each dose level. Cmin values higher than 0.56 or 1.156 mg/L in at least 50% of patients, or Cmax values >25 mg/L in less than one-fourth of patients, were considered the desired pharmacokinetic targets in patients grouped according to the presence or absence of augmented renal clearance (ARC) (i.e., eGFR >250 or ≤250 ml/min/1.73 m², respectively) (Abdalla et al., 2020).

Statistical Analysis
Data are presented as mean ± standard deviation (SD), median and minimum-maximum range, or 95% confidence interval (95% CI), according to the parameter described. Statistical computations (i.e., unpaired Student's t-test with Welch's correction, Mann-Whitney test, ANOVA, Fisher exact test) were performed using Prism 5.0 (GraphPad Software Inc., La Jolla, CA, United States) after checking for normal distribution of values (when appropriate) by the Kolmogorov-Smirnov test, and the significance level was set at p < 0.05.

Patients and Acyclovir Monitoring
The current database included 73 boys and 47 girls (age, mean ± SD, 8.0 ± 5.2 and 9.5 ± 4.6 years, respectively; Table 1). Most patients were affected by hematological malignancies and were referred for allogeneic HSCT (Figure 1). Twenty-six patients received ACV to treat a viral infection caused by HSV or VZV, whereas in the remaining 94 children the drug was administered for prophylaxis. Along with the treatment, blood samples were withdrawn to monitor ACV plasma concentrations in 120, 54, and 13 patients on a first, second, and third occasion, respectively.
On the first occasion, the measured Cmax was 7.6 ± 5.4 mg/L, while Cmin values were 1.0 ± 1.1 and 0.85 ± 1.3 mg/L for the 6-h and 8-h administration schedules of ACV, respectively, without significant differences across occasions (Figure 2). The dose was changed in 47 children (increased in 39) and 8 patients (increased in 7) at the second and third occasions, respectively. In the 26 patients (21.7%) who received ACV to treat herpetic infections, doses were 381.3 ± 199.2 mg (median, 400 mg) on the first occasion. In 11 and 4 patients, a second and a third occasion were available, with doses of 432.5 ± 150.9 mg (median, 400 mg) and 500.0 ± 163.3 mg (median, 500 mg), respectively. Among the measured ACV plasma concentrations, 15 and 11 patients had at least one Cmin value >0.56 and >1.156 mg/L, respectively. ACV administration was followed by a complete recovery, improvement, or infection control in approximately 75% of patients. In the remaining individuals, the records showed a worsening of symptoms and signs (14.3% of patients) or the emergence of a further viral infection (i.e., cytomegalovirus, 9.5% of individuals). Measured Cmin values in 3 out of 4 patients with a poor response to therapy were greater than 1 mg/L, while in the fourth child the Cmin value was 0.78 mg/L. .7 kg, respectively at the first, second, and third occasion) and it was not considered in the following modeling.

POP/PK Modeling
Among the covariates with a possible effect on acyclovir pharmacokinetics, only eGFR on CL (−12.386 points) and body weight (with allometric scaling) on both CL and V (−22.811 and −12.664 points, respectively) significantly decreased the OFV; hence, they were retained in the final model (Table 2). Among the GOF plots (Figure 3), the CWRES versus time-after-dose graph revealed an overprediction during the first few hours after the dose, which was likely dependent on the schedule of blood withdrawal, while the pcVPC plot did not entirely fit the lowest Cmin and highest Cmax values (Figure 4). However, the bootstrap analysis (with 1,000 resampled databases) showed good performance of the final model (Table 2), with nearly 90% of runs ending successfully. The analysis of the PK parameters of the final model (shown in Table 3) did not demonstrate significant gender-based differences. The terminal t1/2 was 1.364 ± 0.614 h, while the IIVCL and IOVCL values were 46.4% and 20.0%, respectively. The present CL and V values for a typical individual match those previously obtained (Abdalla et al., 2020), despite the higher eGFR values calculated in the present patients (median, 209.4 ml/min/1.73 m²) with respect to previous ones (164 ml/min/1.73 m²). More interestingly, the analysis of PK parameters between the first and the second occasion did not reveal significant differences for any pharmacokinetic parameter considered (Table 4). In agreement with these results, an additional analysis of ACV PK in 13 patients did not show significant differences across the three occasions (Table 5).

POP/PK Simulation
The present simulation was based on the median values of the covariates included in the final model. In particular, individual values of eGFR were randomly drawn from a distribution similar to that of the present population of patients (i.e., mean and standard deviation of 228.1 and 79.7 ml/min/1.73 m², respectively), while body weight was fixed at 27.8 kg.
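The simulation setup just described can be sketched as a Monte Carlo exercise: draw eGFR values from the stated distribution, derive individual clearances through the covariate model, and compute steady-state troughs and peaks from the standard one-compartment intermittent-infusion equations. In the sketch below, the typical clearance and volume (0.54 L/h/kg and 0.97 L/kg) are the per-kg means reported in the discussion, while the eGFR exponent, the 46% lognormal variability, and the overall parameterization are illustrative assumptions, not the model's actual estimates from Table 2.

```python
import numpy as np

rng = np.random.default_rng(42)

def steady_state_cmin_cmax(dose_mg, t_inf_h, tau_h, cl_l_h, v_l):
    """Steady-state trough/peak for an intermittent IV infusion (one compartment)."""
    k = cl_l_h / v_l                         # elimination rate constant (1/h)
    r0 = dose_mg / t_inf_h                   # infusion rate (mg/h)
    cmax = (r0 / cl_l_h) * (1 - np.exp(-k * t_inf_h)) / (1 - np.exp(-k * tau_h))
    cmin = cmax * np.exp(-k * (tau_h - t_inf_h))
    return cmin, cmax

n, wt = 1000, 27.8                                      # subjects, fixed body weight (kg)
egfr = np.clip(rng.normal(228.1, 79.7, n), 30, None)    # ml/min/1.73 m2, low end truncated

# Covariate model for clearance: the eGFR exponent (0.75) and 46% lognormal
# interindividual variability are assumptions for illustration only.
cl_typ = 0.54 * wt                                      # L/h for a typical 27.8-kg child
cl = cl_typ * (egfr / 209.4) ** 0.75 * np.exp(rng.normal(0, 0.46, n))
v = 0.97 * wt                                           # L

cmin, cmax = steady_state_cmin_cmax(20 * wt, 1.0, 6.0, cl, v)  # 20 mg/kg, 1-h infusion q6h
print(f"P(Cmin > 0.56 mg/L) = {np.mean(cmin > 0.56):.2f}")
print(f"P(Cmax > 25 mg/L)  = {np.mean(cmax > 25):.2f}")
```

Repeating the last call with t_inf_h set to 2 or 3, or splitting the summary by egfr above or below 250, reproduces in outline the kind of target-attainment comparison reported in the next paragraphs.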
In agreement with a previous study (Abdalla et al., 2020), an ACV dose of 20 mg/kg every 6 h may ensure a Cmin value >0.56 and >1.125 mg/L in approximately 62.5% and 50.9% of patients, respectively, while 28.6% of patients may experience a Cmax value >25 mg/L when the eGFR is ≤250 ml/min/1.73 m² (Table 6). In contrast, at eGFR values >250 ml/min/1.73 m², standard dosing regimens had a lower probability of target attainment, especially when the time interval between doses was 8 h (Table 6). Increasing the dose to improve target attainment may expose patients to an augmented risk of high peak plasma concentrations. The subsequent simulation of prolonged 2- and 3-h IV infusions resulted in an increased probability of ACV Cmin values >0.56 and >1.125 mg/L (Figure 5; Table 7). Of note, the reduced rate of infusion resulted in a lower probability of Cmax values >25 mg/L. In line with these results, simulated continuous infusions at doses of 10 mg/kg every 8 h were associated with Cmin values >0.56 mg/L in all patients, regardless of the eGFR value. When the desired Cmin threshold was set at 1.125 mg/L, 96.1% and 90.5% of patients achieved the target when the eGFR was ≤250 (Cmin = 3.18 ± 1.69 mg/L) and >250 ml/min/1.73 m² (Cmin = 2.51 ± 1.26 mg/L), respectively.

DISCUSSION
The present study was performed in a homogeneous population of oncologic pediatric patients receiving IV ACV for prophylaxis or to treat HSV and VZV infections. The final findings of the POP/PK analysis demonstrate a high variability between and within individuals that warrants the adoption of therapeutic drug monitoring. Furthermore, the simulation suggested that prolonged IV infusions could increase the proportion of concentrations within the therapeutic range while reducing the risk of toxic peak concentrations. ACV is a fundamental agent for the prophylaxis and treatment of herpes virus infections, especially in HSCT patients or those who received high-dose antineoplastic chemotherapy, like those enrolled in the present study. However, the appropriate use of ACV depends on maintaining effectiveness through plasma concentrations above the sensitivity threshold of known viral strains while reducing the risk of toxic concentrations, identified as plasma levels >25 mg/L (Abdalla et al., 2020). The achievement of those goals may be strongly influenced by patients' characteristics, especially in children. Indeed, previous pediatric studies have clearly demonstrated that the variability in ACV pharmacokinetics is better explained by renal function than by dose (De Miranda and Blum, 1983), and that including plasma creatinine or eGFR within the model reduced IIVCL and improved the prediction of individual values with respect to observations (Zeng et al., 2009). In addition, the variability in both CL and Vd was significantly associated with body weight (Zeng et al., 2009; Abdalla et al., 2020). The present findings confirm those conclusions in the largest population of enrolled patients published so far, showing a large interpatient and intrapatient variability that requires the adoption of TDM protocols. In the present study, the unexplained IIVCL (46.3%) was accompanied by an IOVCL of 20.0%. Together, these values strengthen the case for including IOV in the estimation of individual PK parameters (Karlsson and Sheiner, 1993) and support the monitoring of ACV concentrations during chemotherapy, because time-varying covariates help to explain the variability across different occasions.
Since it is possible that "the magnitude of IOV increases with the time between study occasions" (Karlsson and Sheiner, 1993), the relatively short mean/median interval between two consecutive occasions in the present study (16.5/8 days; minimum-maximum range, 2-165 days) likely explains why IOVCL was lower than IIVCL. Some characteristics of the present study should be pointed out. First of all, the number of enrolled patients allowed the analysis to be restricted to patients who received IV ACV only, thus avoiding the variability associated with oral administration of ACV or its prodrug valacyclovir. Second, the database used to develop the POP/PK model included plasma concentrations obtained at fixed time points corresponding to those adopted in TDM protocols (Di Paolo et al., 2013), instead of dense blood sampling (Zeng et al., 2009) or random time points (Abdalla et al., 2020). The choice of the sampling scheme may depend on several factors, but ultimately the present POP/PK model returned PK estimates very similar to those published by other studies (Zeng et al., 2009; Abdalla et al., 2020). For example, the present mean values of CL (0.54 L/h/kg) and V (0.97 L/kg) were comparable to those previously reported in children with a mean body weight of 20 kg (Eksborg et al., 2002; Nadal et al., 2002). Analogous conclusions can be drawn for the terminal t1/2 (De Miranda and Blum, 1983; Zeng et al., 2009) and the IOVCL value (20.0%), which was in agreement with the value (19.2%) found in a previous study (Zeng et al., 2009). Even the simulation part of the present study showed concordance with previous findings. For example, ACV doses of 10-20 mg/kg administered as a conventional 1-h IV infusion every 6 h showed almost identical percentages of toxic concentrations (Abdalla et al., 2020). Moreover, the present findings suggest that a 20-mg/kg dose every 6 h may achieve effective concentrations in approximately 50% of patients, while the dose should be increased in children with ARC. The study's novelty resides in the simulation of alternative regimens of acyclovir administration. With a short terminal half-life, the maintenance of effective Cmin values is guaranteed by higher doses of acyclovir (i.e., 20 mg/kg) or more frequent dosing (i.e., every 6 h instead of 8 h). However, those high-dose-intensity regimens may result in high Cmax values that could expose patients to the risk of toxicities, while ineffective trough concentrations may be measured especially in patients with ARC (Abdalla et al., 2020). As observed for other antimicrobial drugs with a short plasma half-life, a prolonged (i.e., 2 or 3 h) or continuous infusion, together with an appropriate dose increase, may allow the achievement of effective target plasma concentrations. Indeed, the simulation of a prolonged infusion of a 20 mg/kg dose resulted in an increased percentage of patients achieving the PK targets predefined for the standard 1-h IV infusion. The percentage of patients achieving effective Cmin values sharply increased when the simulation considered a continuous infusion, even with a low dose-intensity regimen consisting of 10-mg/kg doses every 8 h. Of note, fragile patients affected by severe HSV infections have been successfully cured with continuous IV infusions of acyclovir at higher doses (up to 30 mg/kg) (Engel et al., 1990; Kim et al., 2011).
Interestingly, in most individuals "clinical response was seen within 72 h of continuous ACV administration," which "was well tolerated, even in patients with renal insufficiency." Indeed, the authors did not observe any sign of toxicity, including hematological adverse reactions or deterioration in renal function. Further studies have demonstrated that continuous infusions prolong the time during which ACV concentrations exceed the IC50 value for HSV and VZV and may therefore be considered a valid alternative to intermittent IV dosing (Spector et al., 1982; O'Leary et al., 2020). It is worth noting that in some cases continuous ACV infusion was adopted to treat neonatal HSV encephalitis (Kakisaka et al., 2009; Cies et al., 2015; O'Leary et al., 2020). In particular, plasma concentrations were maintained above 3 mg/L (Cies et al., 2015) or even higher (5.5-8 mg/L) (O'Leary et al., 2020) to ensure cerebrospinal fluid concentrations of at least 1 mg/L. Interestingly, a former study that enrolled 13 patients included 4 children who received continuous IV infusions of acyclovir at doses of 6.1-9.7 mg/kg/h for 7 up to 13 days (Fletcher et al., 1989). Plasma concentrations were ≥4.5 mg/L (up to 22.1 mg/L in some patients), and one child developed neutropenia, whereas none of the patients experienced renal insufficiency. In agreement with those observations, another study did not report signs or laboratory findings of systemic toxicity in 3 adolescents who received continuous IV infusions of ACV at doses of 7.2-28.8 mg/kg/day (Spector et al., 1982). Finally, the present simulation showed that a 10 mg/kg dose of ACV administered as a continuous infusion every 8 h allowed the achievement of therapeutic plasma concentrations in more than 90% of patients, regardless of the eGFR value. Those results support the adoption of prolonged (or continuous) infusions, because the reduced rate of drug infusion may be advantageous in decreasing the risk of toxic Cmax values. In particular, the percentage of patients at risk of high peak plasma concentrations decreases when moving from a standard 1-h infusion to prolonged or continuous infusions. Additionally, a loading dose consisting of a 30-min infusion may precede the continuous infusion to ensure the achievement of therapeutic Cmin values. Despite these premises of greater efficacy and good tolerability, the nephrotoxic effects of acyclovir in children have been associated with a variety of causes, such as the concomitant administration of nephrotoxic drugs, a reduced eGFR at baseline, hypertension, older age, and obesity (Schreiber et al., 2008; Richelsen et al., 2018; Yalçınkaya et al., 2021). Therefore, these risk factors should be carefully evaluated, especially when dose-intense regimens are adopted (Fletcher et al., 1989). In conclusion, the present findings confirm the high variability of ACV pharmacokinetics in immunocompromised children undergoing HSCT or myelotoxic chemotherapies; hence, TDM protocols are recommended to adjust drug dosing. Indeed, standard dosing regimens seem adequate to achieve effective plasma concentrations of ACV, although the drug could be ineffective in a variable percentage of patients, and a further increase in dosage may expose patients to an augmented risk of toxicities.
Of note, alternative regimens based on prolonged IV infusions (i.e., 20 mg/kg as a 3-h infusion every 6 h) or even continuous infusions (i.e., 10 mg/kg every 8 h) may increase the efficacy of ACV while reducing the risk of high plasma peaks and sparing patients from severe toxicities. Prospective trials adopting TDM protocols are required to confirm the present findings.

DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Institutional Review Board of the IRCCS Burlo Garofolo (reference no. IRB RC 10/20). Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

AUTHOR CONTRIBUTIONS
NM, DN, GL, RS, EP, LS, EB and AP developed the concept and designed the study. NM, DN, EP and LS provided study material or participants. GL, RS and AP verified the data and performed the statistical analyses. NM, EB and AP wrote the initial draft of the manuscript. All authors provided critical comments and editing, contributed to the data interpretation, reviewed the analyses of this manuscript, and approved its final version.

ACKNOWLEDGMENTS
The authors thank the children and their parents for participating in the present study. Thanks also go to the nursing staff and all other caregivers who participated in the study.
2022-04-16T15:10:46.443Z
2022-04-14T00:00:00.000
{ "year": 2022, "sha1": "c5efd49b373ac45278c2d58604eb992c1eb1b500", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2022.865871/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a104889dba1905b24bfd66e621d83f2ac75acc4b", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
56027778
pes2o/s2orc
v3-fos-license
POTASSIUM FERTILIZATION FOR OPTIMIZATION OF ONION PRODUCTION
Information about the response of onion to different potassium doses may contribute to optimizing the use of fertilizers and, consequently, make the activity more profitable and environmentally sustainable. In this context, the aim of this study was to evaluate the effects of doses of potassium on onion yields. Two field experiments were carried out in the periods of September to December 2012 and April to July 2013. The experimental design used was completely randomized blocks with four replications. The treatments corresponded to the potassium doses (0, 36, 72, 108, 144 and 180 kg ha⁻¹ of K2O). Potassium fertilization promoted an increase in the leaf K content and in commercial and total yields, with the maximum obtained at the dose of 180 kg ha⁻¹ of K2O. The maximum overall and commercial yields were, respectively, 54.69 and 54.12 t ha⁻¹ in the experiment of September to December 2012 and 47.39 and 46.39 t ha⁻¹ in that of April to July 2013.

INTRODUCTION
The onion (Allium cepa L.) is one of the most consumed vegetables worldwide and the third most cultivated species in Brazil, with a planted area of 59,830 hectares and a production of about 1.65 million tons in 2014 (IBGE, 2016). The average yield was 27.8 t ha⁻¹, which is considered low compared with other onion-producing countries. Despite its importance, research relating to the nutrition and fertilization of this crop is incipient, which in part makes it difficult to increase productivity in the producing regions. Potassium (K) plays an important role in plant metabolic processes, such as the synthesis, transport and storage of photoassimilates (HAWKESFORD et al., 2012). Potassium is considered one of the most essential nutrients for onions. In studies on nutrient accumulation in onions, Vidigal, Moreira and Pereira (2010) obtained maximum K accumulations of 228 mg plant⁻¹ (transplanting of seedlings) and 242 mg plant⁻¹ (no-till). Aguiar Neto et al. (2014) observed K accumulations varying from 218.21 to 790.50 mg plant⁻¹, according to the cultivar and the planting site. Responses of onion crops to potassium fertilization are not always significant, or are divergent, despite the large amount of this nutrient extracted. Doses vary considerably as a function of cultivars and the soil and weather conditions of the cultivated region. In Brazil, Resende, Costa and Pinto (2009) found diverse behaviors of the Alfa Tropical cultivar according to the dose of K applied. Without N addition, combined with 180 kg ha⁻¹ of K2O, yields of 42.1 t ha⁻¹ of commercial onion bulbs were possible. However, with the application of 97 kg ha⁻¹ of N without K supply, estimated yields were of the order of 36.5 t ha⁻¹ of onion bulbs, and an application of 90 kg ha⁻¹ of K2O with 180 kg ha⁻¹ of N yielded 36.2 t ha⁻¹ of bulbs, which indicates that the responses to potassium supply may vary considerably with or without nitrogen addition. Cecílio Filho et al. (2010) obtained higher yields of Superex onion (89.5 t ha⁻¹) with 150 kg ha⁻¹ of N and 150 kg ha⁻¹ of K2O. The main onion-producing regions of Brazil have official recommendations adapted to their soil conditions, which vary among the producing states. In the state of Rio Grande do Norte there is no official fertilization recommendation bulletin, and the amount of fertilizer applied is based on information from private companies, recommendations from other regions and the experience acquired by producers.
The average amount of potassium used varies from 200 to 300 kg ha⁻¹ of K2O. The form and the management differ from other regions because a drip irrigation system with fertirrigation is employed. Information on the response of onion to different potassium doses may contribute to the optimal use of fertilizers and, consequently, make onion crops more profitable and environmentally sustainable. In this context, the aim of this study was to assess the effects of different doses of potassium on onion yields.

MATERIAL AND METHODS
Two experiments were carried out in Mossoró-RN, from September to December 2012 (Experiment 1 - E1) and from April to July 2013 (Experiment 2 - E2). The soil in the experimental area was classified as a Red-Yellow Argisol. The chemical analyses of the soil, conducted on samples taken at depths of 0-20 cm before the experiments began, produced the following results for E1 and E2, respectively: pH (H2O) 7.0 and 7.7; organic matter 3.5 and 3.8 g kg⁻¹; P (Mehlich) 9.90 and 10.60 mg dm⁻³; K 0.25 and 0.28 cmolc dm⁻³; Ca 1.50 and 1.60 cmolc dm⁻³; Mg 0.10 and 0.12 cmolc dm⁻³. During E1 and E2, rainfall was 44 and 450 mm, average temperatures were 26.9 and 25.4 °C, and the average relative humidity was 46.0 and 64.6%, respectively. The treatments corresponded to the potassium doses (0, 36, 72, 108, 144 and 180 kg ha⁻¹ of K2O). The experimental design used was completely randomized blocks with four replications. The experimental unit consisted of 3.0 x 1.0 m planting beds with eight plant rows at a spacing of 0.10 x 0.10 m, comprising a total area of 3.0 m². The useful area was 1.68 m², corresponding to the six central rows, excluding one plant from each end of the bed. Soil preparation consisted of ploughing and harrowing followed by the construction of the planting beds. Planting fertilization was done according to the soil analysis and as recommended by Cavalcanti (2008), using half of the recommended phosphorus dose (67.5 kg ha⁻¹) as triple superphosphate. Drip irrigation with three hoses per bed at a spacing of 0.30 m was used, with self-compensating drippers spaced 0.20 m apart and a flow rate of 1.40 L h⁻¹. Irrigation was performed on a daily basis, according to the crop requirements. Transplanting was carried out 53 and 57 days after sowing in E1 and E2, respectively, when the seedlings were 15-20 cm tall. The cultivar used was Vale Ouro IPA 11. Fertirrigation was initiated 10 days after transplanting (DAT) and maintained until 70 DAT, and its distribution throughout the cycle was established based on the rate of nutrient absorption by the onion crop, according to Aguiar Neto et al. (2014). Potassium doses, according to each treatment, were applied using potassium chloride as the source. In total, 135.0 kg ha⁻¹ of N, 45.0 kg ha⁻¹ of S and 67.5 kg ha⁻¹ of P2O5, in the form of urea, ammonium sulfate and monoammonium phosphate (MAP), were applied. The foliar nitrogen, phosphorus and potassium contents were determined 45 days after transplanting, using the tallest leaf of the plant in accordance with the instructions of Trani and Raij (1997). When nearly 70% of the plants had toppled over, irrigation was interrupted so that natural ripening could occur in the field. The plants were then pulled out, and the above-ground parts and roots were discarded.
The characteristics assessed were the following. The bulbs were classified by the largest transverse diameter, based on the classification of CEAGESP (2001), into Class 1 (noncommercial) bulbs, with a diameter of <35 mm; Class 2 bulbs, with a diameter of 35-50 mm; Class 3 bulbs, with a diameter of 50-75 mm; Class 4 bulbs, with a diameter of 75-90 mm; and Class 5 bulbs, with a diameter of >90 mm. Commercial bulb yield was the total weight of bulbs with a diameter of >35 mm. Noncommercial yield was the total weight of bulbs with a diameter of <35 mm (Class 1). Overall bulb yield was the total weight of bulbs harvested in the useful area of the plot. Mean bulb weight was obtained by dividing the commercial bulb weight by the number of commercial bulbs in the useful area of the plot. Analysis of variance of the data for the characteristics evaluated was performed separately for each experiment using the F test, followed by a joint analysis of the experiments using SAS software. Response curves for each characteristic were fitted as a function of the potassium doses using the Table Curve package (SCIENTIFIC, 1991).

Foliar NPK contents
A significant interaction between potassium (K) doses and planting time (PT) occurred only for the foliar potassium content, while the phosphorus and nitrogen contents were not influenced by any of the factors. The mean foliar N and P contents found in both cultivation periods were 28.3 and 7.7 g kg⁻¹, respectively. Comparing these contents with the ranges considered appropriate for onions, according to Trani and Raij (1997), the N content was within the suitable range (25-40 g kg⁻¹) and the P content was above it (2-5 g kg⁻¹). In other vegetables, such as lettuce and pumpkin, no effects on N and P contents were found following the addition of potassium to the soil (KANO; CARDOSO; BÔAS, 2012; ARAÚJO et al., 2012). The foliar potassium content increased linearly with the doses of K in both cultivation periods. In E1, the leaf potassium content rose from 41.4 g kg⁻¹ (without the application of K) to 64.7 g kg⁻¹ with 180 kg ha⁻¹ of K2O, while in E2 it rose from 43.6 g kg⁻¹ to 47.2 g kg⁻¹ (Figure 1). This increase was probably due to the increased K level in the soil as a result of the applied doses of this nutrient, which caused greater K uptake by the plant. In the two planting times (E1 and E2), the conditions of the experiments (genetic material, location, initial soil, fertilizers) were similar, yet different foliar K levels were obtained. The increases observed in K levels were 56% and 8%, respectively, in E1 and E2, comparing the treatment without K application with the maximum dose (180 kg ha⁻¹ of K2O). The higher rainfall in E2 (450 mm) favored greater leaching of K in the soil and, consequently, lower absorption by plants. The potassium contents found in the leaves were in line with those described by Trani and Raij (1997) as adequate for onion. Even in the treatment without the addition of potassium, leaf contents were within the adequate range, and no potassium deficiency was observed in the plants in this experiment.

Mean bulb weight and yield
Mean bulb weight (MBW) was significantly influenced by the potassium doses and planting time, considered separately. There was a linear increase of MBW as a function of K doses, with a maximum of 106.7 g obtained with 180 kg ha⁻¹ of K2O. Relative to the treatment without K addition, the increase was 22.81% (Figure 2).
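For reference, the CEAGESP size classes described above map directly onto diameter intervals. The helper below is a minimal sketch of that rule; the source does not say how bulbs falling exactly on a boundary (50, 75, or 90 mm) are assigned, so placing boundary values in the smaller class here is an assumption.

```python
def ceagesp_class(diameter_mm: float) -> int:
    """Onion bulb size class by largest transverse diameter (CEAGESP, 2001)."""
    if diameter_mm < 35:
        return 1          # noncommercial
    if diameter_mm <= 50:
        return 2
    if diameter_mm <= 75:
        return 3
    if diameter_mm <= 90:
        return 4
    return 5

print([ceagesp_class(d) for d in (30, 42, 60, 80, 95)])  # -> [1, 2, 3, 4, 5]
```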
These mean bulb weight results corroborate those obtained by Verma and Singh (2012), who evaluated K doses between 0 and 90 kg ha⁻¹ of K2O and observed increasing mean bulb weight, with a maximum of 61 g at the dose of 90 kg ha⁻¹. The application of potassium to onion crops raises its concentration in the tissues and reduces water potential, resulting in more water storage in the tissues and, consequently, an increase in the mass of sink organs such as the bulb (BUGARÍN-MONTOYA et al., 2002). In this regard, the addition of potassium probably increased the translocation of photoassimilates to the bulb, thereby increasing its average weight. The mean bulb weight was approximately 4% higher in E1 than in E2 (Table 1). For the bulb percentages of Class 2 (C2) and Class 3 (C3) onions, only the planting time had an influence, and Class 4 onions were not influenced by any factor. The percentage of Class 2 bulbs was higher in E2, while that of Class 3 was higher in E1. The Class 4 bulb percentage was not influenced by doses or planting times. The lower rainfall in E1 reduced K leaching in the soil and consequently increased its availability to and absorption by the onion plants, and hence the production of photoassimilates. The difference between the Class 2 and Class 3 bulb percentages across planting times was probably due to the higher rainfall observed in E2, which caused greater nutrient leaching and the production of smaller bulbs. In addition, the rains favored the occurrence of purple blotch (Alternaria porri); this disease probably contributed to the lower percentage of Class 3 bulbs found in E2 than in E1. The maximum estimated overall yield (OY) was 54.69 and 47.39 t ha⁻¹ for E1 and E2, respectively; in E1 it was 15.40% higher than in E2. Similar behavior was observed for commercial yield (CY), where the E1 yield was 16.66% higher than that of E2, with maximum estimates of 54.12 and 46.39 t ha⁻¹ for E1 and E2, respectively (Figure 3). May et al. (2007) assessed the effect of nitrogen and potassium fertilization on onion cultivars and found that treatments without the application of potassium resulted in maximum yields of 68.4 t ha⁻¹ and 71 t ha⁻¹ in the Optima and Superex cultivars, respectively. The authors found that potassium had little effect on bulb production, because the K levels present in the soil supplied this nutrient to the crop, so the addition of potassium fertilizers was not required. The dose that resulted in maximum yields in both experiments (180 kg ha⁻¹ of K2O) was higher than that recommended by Cavalcanti (2008), i.e., 135 kg ha⁻¹ of K2O for irrigated onion crops (IPA 11) in soils with potassium concentrations between 0.16 and 0.30 cmolc dm⁻³. In the treatment without the addition of potassium, estimated overall yields were 44.32 and 43.31 t ha⁻¹ for E1 and E2, respectively, which are considered high and are above the Brazilian average yield (27.8 t ha⁻¹) (IBGE, 2016). The potassium concentrations in the soil (0.25 and 0.28 cmolc dm⁻³), considered high according to Cavalcanti (2008), associated with the amount of this nutrient in the irrigation water (0.54 mmolc L⁻¹; data not shown), were sufficient to fulfill the needs of the onion crop. The noncommercial bulb yields were not changed by the factors analyzed, with productions of 0.64 t ha⁻¹ and 0.62 t ha⁻¹ in E1 and E2, representing 1.28% and 1.42% of the number of bulbs, respectively.
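The response-curve fitting used for the yield variables can be reproduced in outline with an ordinary least-squares polynomial fit of yield on K dose. The sketch below uses the six study doses but entirely made-up yield values (they are not the experimental data) and reports where the fitted quadratic reaches its maximum; when the vertex lies beyond the tested range, the estimated maximum occurs at the highest dose, as observed in this study for 180 kg ha⁻¹ of K2O.

```python
import numpy as np

doses = np.array([0, 36, 72, 108, 144, 180], dtype=float)  # kg/ha of K2O (study treatments)
# Hypothetical yields for illustration only -- not the experimental data
yields = np.array([44.3, 47.6, 50.3, 52.3, 53.8, 54.7])    # t/ha

a, b, c = np.polyfit(doses, yields, 2)                     # quadratic response curve
d_vertex = -b / (2 * a)                                    # dose at the fitted maximum

d_opt = min(max(d_vertex, doses.min()), doses.max())       # clamp to the tested range
print(f"fitted curve: y = {a:.5f}x^2 + {b:.4f}x + {c:.2f}")
print(f"dose at estimated maximum within the range: {d_opt:.0f} kg/ha")
```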
CONCLUSIONS
Potassium fertilization promoted an increase in the leaf K content and in commercial and total yields, with the maximum obtained at the dose of 180 kg ha⁻¹ of K2O. The maximum overall and commercial yields were, respectively, 54.69 and 54.12 t ha⁻¹ in the experiment of September to December 2012, and 47.39 and 46.39 t ha⁻¹ in that of April to July 2013.
2018-12-10T19:00:39.317Z
2018-04-06T00:00:00.000
{ "year": 2018, "sha1": "079c52d652fee8e65f1f1f861cf5a3b850c94e41", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/rcaat/v31n2/1983-2125-rcaat-31-02-379.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e9ae63176584b35368e54687038cb73002c0d069", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
235755991
pes2o/s2orc
v3-fos-license
Choice of LECS Procedure for Benign and Malignant Gastric Tumors
Laparoscopic endoscopic cooperative surgery (LECS) refers to the endoscopic dissection of the mucosal or submucosal layers with laparoscopic seromuscular resection. We recommend a treatment algorithm for the LECS procedure for benign gastric tumors according to the protrusion type. In the exophytic type, endoscope-assisted wedge resection can be performed. In the endophytic type, endoscope-assisted wedge resection is relatively easy to perform on the anterior wall, while endoscope-assisted transgastric resection, laparoscopy-assisted intragastric surgery, or single-incision intragastric resection can be attempted on the posterior wall and at the esophagogastric junction (EG Jx). We also propose an algorithm for the LECS procedure for early gastric cancer according to tumor location. The endoscopic submucosal dissection (ESD) procedure can be adapted for all areas of the stomach, and single-incision ESD can be performed in the mid to high body and at the EG Jx. In full-thickness gastric resection, laparoscopy-assisted endoscopic full-thickness resection can be adapted for the entire area of the stomach, whereas the non-exposure techniques cannot be applied to the pyloric and EG Jx areas. In conclusion, surgeons need to select the LECS procedure according to tumor type, tumor location, the surgeon's individual experience, and the situation of the institution, while also considering the advantages and disadvantages of each procedure.

INTRODUCTION
Laparoscopic endoscopic cooperative surgery (LECS) refers to the endoscopic dissection of the mucosal or submucosal layers with laparoscopic seromuscular resection. The LECS procedure was first performed in 2008 by Hiki et al. [1]. Initially, LECS aimed to preserve as much of the normal stomach as possible by efficiently resecting benign tumors. In the early days of LECS, intraoperative endoscopy was also used to evaluate tumor localization [2,3]. As time passed, endoscopic techniques developed, and collaborative surgery (endoscopic mucosal resection and endoscopic submucosal dissection [ESD]) was gradually adopted [4]. In addition, the procedure moved from resecting only benign gastric tumors, such as submucosal tumors (SMTs), to resecting malignant tumors, such as early gastric cancer (EGC) [5,6]. Endoscopic full-thickness resection (EFTR) has been performed for the curative resection of EGC [4,7,8]. Non-exposure techniques such as non-exposed endoscopic wall-inversion surgery (NEWS) and the clean no-exposure technique (clean-NET) have been developed to minimize contamination of the intraperitoneal cavity and cancer cell spillage [9,10]. The aim of this review was to help surgeons select a method suitable for their institutions by examining and comparing the advantages and disadvantages of each procedure. In this paper, we summarize each of the LECS procedures. The term "resection" denotes the modality (endoscopy or laparoscopy) that performs the main resection, whereas "assisted" denotes the modality that does not perform the main resection but observes and intervenes in the procedure if necessary. For example, in laparoscopy-assisted endoscopic resection (LAER), resection is performed mainly by endoscopy and assisted synchronously by laparoscopy. Regarding procedure times, endoscopic procedures (ESD, EFTR) are assumed to take longer than laparoscopic procedures.

LECS PROCEDURES FOR BENIGN GASTRIC TUMORS
LAER
LAER is an ESD procedure performed with laparoscopic assistance [2,11,12].
ESD or endoscopic muscular dissection is performed for gastric tumors that require removal of the muscle layer, such as SMTs, which carry a high perforation risk. If perforation occurs, laparoscopic suture closure can be performed after the endoscopic resection. Since LAER is mainly an endoscopic procedure assisted by laparoscopy, it takes longer to remove the tumor than laparoscopic resection. This procedure is recommended for benign endophytic tumors of the stomach (Fig. 1A).

LECS
LECS is the classic method that was first reported in Japan [1,13,14]. LECS involves precutting around the tumor with an endoscope and artificial perforation of the gastric wall, followed by excision of the tumor with laparoscopy and repair of the gastric wall with a stapler. LECS takes some time because it includes an endoscopic step, but not excessively so, because only the endoscopic precut is performed. The advantage is that there are no limitations on tumor location. However, there is a risk of spillage into the abdominal cavity, and collaboration with a skilled endoscopist is required (Fig. 1B and C).

Endoscope-assisted laparoscopic wedge resection (EAWR)
The concept of EAWR is the opposite of that of LAER [2,12,15-17]. EAWR removes tumors with a laparoscope after intraoperative localization by an endoscope. Since tumors are removed mainly by laparoscopy, EAWR can be performed faster than endoscopic resection. As normal gastric wall tissue can be lost, EAWR is difficult to implement at sites where strictures may occur, such as the esophagogastric junction (EG Jx) and pylorus. The advantage of EAWR is that, unlike ESD, it does not require professional endoscopic skills (Fig. 1D and E).

Endoscope-assisted laparoscopic transgastric resection (EATR)
The EATR procedure involves opening the gastric wall under the direct view of an endoscope, tagging the tumor with a laparoscopic suture, and performing wedge resection with a laparoscopic stapler [2,18-22]. The purpose of endoscopy is to help the surgeon find the proper position at which to incise the stomach wall. In addition, endoscopy can assist in monitoring for spillage at the repair site after laparoscopic repair of the resected gastric wall. This procedure does not take much time, as laparoscopy is mainly used. For gastric tumors located on the posterior wall of the stomach, at the EG Jx, or at the pylorus, damage to the vagus nerve or loss of normal gastric wall tissue can be minimized if wedge resection with endostaplers is used appropriately. However, the disadvantage of EATR is that spillage of stomach contents can occur; minimizing this limitation is therefore key (Fig. 1F and G).

Laparoscopic intragastric surgery (LIGS)
Whereas EATR applies an open-surgery concept in which the stomach wall is opened, LIGS applies a laparoscopic-surgery concept within the stomach [2,15,23-25]. The incision in the gastric wall is minimized, and laparoscopic trocars are inserted into the gastric lumen. After tagging the gastric tumor with a laparoscopic suture, laparoscopic wedge resection is performed with a stapler. Because the endoscope acts like a laparoscope, the operator can see both the endoscopic and the laparoscopic fields of view simultaneously. An advantage of the endoscope is that it can clean its own lens, which saves time, whereas a laparoscope cannot.
The disadvantage is that it is difficult and time consuming to insert the trocar by piercing the stomach wall again after establishing artificial pneumoperitoneum. In addition, currently commercialized trocars make LIGS difficult to implement. The use of a balloon trocar is recommended because a trocar inserted into the stomach can easily fall out during gastric tumor resection. However, a balloon trocar is difficult to insert into the stomach because of its blunt tip, and there is no commercialized 12-mm-diameter balloon trocar, so the insertion of an additional 12-mm non-balloon trocar is required to introduce a laparoscopic stapler. To overcome these limitations, the procedure time may be longer than expected, along with the time required to overcome the learning curve (Fig. 1H).

Single-incision intragastric resection (SI-IGR)
If the gastric wall is regarded as the abdominal wall, EATR corresponds to open surgery, LIGS to laparoscopic surgery, and SI-IGR to single-port laparoscopic surgery [2,26,27]. In the case of SI-IGR, because there are no obstacles to trocar insertion and fixation (in contrast to LIGS), it can be performed more quickly and conveniently. After establishing the pneumoperitoneum through the umbilicus, an incision is made in the anterior wall of the stomach, and a wound protector is placed so that the procedure can be performed comfortably. The procedure is more comfortable if the incision is made in the left upper abdomen instead of at the umbilicus; however, performing the procedure through the umbilicus is recommended for the best cosmetic result. The disadvantage is that the devices clash in a narrow space, as in single-port laparoscopic surgery, so a single-port-dedicated device with curvature is useful. Tumor location is more favorable when closer to the posterior wall and the EG Jx than to the anterior wall. In addition, tumors located in the lower stomach, such as the antrum, lie too close to the wound protector and are difficult to treat, because access with laparoscopic instruments and the stapler is hindered by angulation. Similar to LIGS, wedge resection is performed with a stapler after tagging the tumor with a laparoscopic suture. Therefore, less nerve damage is expected when an endophytic gastric SMT is removed. An endoscope is not strictly required during the removal procedure; endoscopy is useful for checking for leakage at the repair site and for stricture of the EG Jx after removal (Fig. 1I).

LECS PROCEDURES FOR MALIGNANT GASTRIC TUMORS
All procedures performed to remove malignant gastric tumors include laparoscopic perigastric lymph node dissection (LND). In this article, we focus only on gastric tumor resection and do not describe lymph node resection.

ESD with laparoscopic LND (ESD+LLND)
This procedure is the same as LAER with LLND; the concept is to preserve the stomach by performing submucosal dissection with endoscopy and LND with laparoscopy [28-33]. The advantage is that the stomach can be preserved; however, the main procedure is ESD, which requires a skilled endoscopist and is time consuming (Fig. 2A).

Single-incision endoscopic submucosal dissection (SI-ESD) with LLND
SI-ESD with LLND is similar to SI-IGR: sentinel node navigation surgery with unilateral perigastric LLND is performed through a single port, and then ESD is performed through the single port. Jeong et al.
[34-36] reported that ESD performed on the anterior or posterior wall of the stomach required less time than ESD performed on the lesser curvature. In addition, since two-basin LND induces delayed perforation in 30% of gastric ulcers due to ischemic injury, performing only one-basin LND is recommended. The advantage is that it is a quick procedure, as laparoscopic instruments introduced through the single port can assist the dissection, reducing the ESD time by 29%-44%. The disadvantage is that there is a risk of cancer cell spillage because the stomach wall is approached directly (Fig. 2B).

Laparoscopy-assisted EFTR (LAEFR)
LAEFR achieves EFTR more safely than endoscopy alone [3,7,8,37,38]. If the tumor invades deeper than the muscle layer of the gastric wall, full-thickness resection is performed with an endoscope, and a laparoscope is used for repair. In 2012, a case of EGC was reported by Nunobe et al. [39]. This technique minimizes cancer spillage by fixing the stomach to the abdominal wall (crown method). When perforation of the gastric wall occurs during endoscopic resection, it becomes difficult to complete the resection because of air leakage. After gastric perforation, the endoscopic field of view is not well secured, and endoscopic resection can take a long time even with the help of a laparoscope; there are also concerns regarding spillage after perforation. The advantage is that LAEFR can be applied to the whole stomach, even at the EG Jx and pylorus, because of exact tumor localization. The disadvantages of this procedure are the risk of cancer cell leakage and the long procedure time. Cho et al. [40] reported 14 cases in which resection was performed following the same concept as hybrid natural orifice transluminal endoscopic surgery (NOTES); however, 5 of the 14 cases were converted to conventional gastrectomy due to an abnormal anatomical shape, ischemia, or leakage (Fig. 2C).

NEWS
NEWS was developed so that EFTR could be performed without spillage [41-45]. First, cancer marking with saline injection into the submucosal layer is performed using an endoscope. Subsequently, seromuscular cutting and suturing are performed by laparoscopy to invert the EGC site into the stomach. Finally, removal of the EGC with ESD and repair of the mucosal layer with endoscopic clips or nets are performed. The advantage of this non-exposure technique is that cancer spillage does not occur. The disadvantages are that the procedure time is long, as it involves ESD and endoscopic closure, and that it is difficult to apply to the EG Jx and pyloric areas (Fig. 2D and E).

Clean-NET
Similar to NEWS, clean-NET was developed to prevent cancer cell spillage [10,46,47]. First, localization of the EGC is performed with an endoscope, and saline is injected into the submucosal layer. Next, the mucosa is fixed to the muscle layer using a laparoscopic suture. Then, seromuscular dissection is performed with a laparoscope, and resection of the externally protruding tumor is performed with a laparoscopic stapler. Clean-NET can be applied to EGCs in most locations, except for the EG Jx and pyloric areas. The advantage is that clean-NET is performed mainly by laparoscopy, so the operation time is shorter than that of NEWS. However, gastric perforation can occur, causing leakage, depending on the tumor location or the skill of the surgeon (Fig. 2F and G).

Faster procedure
Compared with endoscopic procedures, laparoscopic procedures are usually faster (Table 1).
These techniques include EAWR for benign tumors, and SI-ESD with LLND and clean-NET for malignant gastric tumors (Figs. 1 and 2).

Cleaner and oncologically safer procedure (less spillage)
Methods for removing benign tumors include LAER, EAWR, and SI-IGR. For the removal of malignant tumors, ESD with LLND, SI-ESD with LLND, NEWS, and clean-NET can be implemented without spillage (Table 1, Figs. 1 and 2).

Proper procedure for EG Jx tumors
ESD for a tumor located in the EG Jx area is difficult because endoscopic manipulation is more difficult there than in the gastric low body. The techniques for benign tumor removal at the EG Jx include EATR, LIGS, and SI-IGR (Table 1). For malignant tumors, LAEFR is possible if the EG Jx is not invaded by the EGC.

Proposed algorithm for the LECS procedure for benign gastric tumors
EAWR is relatively easy to perform on the anterior wall side, but not on the posterior wall side or near the upper body, especially at the EG Jx. Therefore, if EAWR is difficult to implement, surgeons can try EATR, LIGS, or SI-IGR.

Proposed algorithm for the LECS procedure for EGC
We propose an algorithm for the LECS procedure for EGC according to tumor location (Fig. 4). The ESD procedure can be adapted for all areas of the stomach, and SI-ESD can be performed in the mid to high body and EG Jx areas. In full-thickness gastric resection, LAEFR can be adapted for the whole stomach, whereas NEWS and clean-NET cannot be applied to the pyloric and EG Jx areas. Additional sentinel node biopsy or LLND is recommended for all procedures.

DISCUSSION
The sheer number of reported procedures shows that no single procedure is yet definitive. For example, there was much interest in NOTES several years ago [9,48], but the interest of surgeons has shifted to single-port laparoscopy because of the slow development of instruments and the limitations of the technology. Considering cosmesis with few wounds, reduced or single-port robotic surgery seems to be preferred over NOTES or LECS, which take more time and effort [49-51].

Fig. 4. Proposed algorithm for the LECS procedure for EGC according to tumor location. LECS = laparoscopic endoscopic cooperative surgery; EGC = early gastric cancer; ESD = endoscopic submucosal dissection; LAEFR = laparoscopy-assisted endoscopic full-thickness resection; NEWS = non-exposed endoscopic wall-inversion surgery; Clean-NET = clean no-exposure technique; SI-ESD = single-incision endoscopic submucosal dissection; EG Jx = esophagogastric junction.

Before starting the LECS procedure, it is necessary for beginners to become familiar with the technique by participating in programs such as animal laboratories, where it can be practiced frequently, rather than immediately applying an immature technique to patients [34-36,48]. Before implementing a new procedure, an internal medicine endoscopist should be consulted to clarify the simulation of the method and, if necessary, experience should be gained through visits to other hospitals. It is especially important to check for stricture and leakage with an intraoperative endoscope after the procedure. If a passage disturbance is suspected on endoscopy, conversion to conventional gastrectomy is necessary [40,52]. It is also advisable to remove benign tumors first and then, after accumulating experience, attempt the removal of malignant gastric tumors. Additionally, the removal of EGC with the LECS procedure is still at the clinical study stage, and its oncological safety has not been verified [2,14].
Therefore, it is necessary to inform patients with EGC about the feasibility and safety of LECS. The authors hope that this review will contribute to the understanding and selection of LECS techniques for individual patients on a case-by-case basis. In conclusion, surgeons need to select the LECS procedure according to tumor type, tumor location, their own experience, and the situation of the institution, while also considering the advantages and disadvantages of each procedure.
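As a compact restatement of the two algorithms proposed above, the following sketch encodes the review's recommendations as a simple lookup. It is a simplification for illustration only; the category names are ad hoc, and this is of course not a clinical decision tool.

```python
def suggest_lecs_procedure(tumor: str, growth: str = "", location: str = "") -> list:
    """Candidate LECS procedures per the algorithms proposed in this review.

    tumor: 'benign' or 'egc'; growth (benign only): 'exophytic'/'endophytic';
    location: 'anterior', 'posterior', 'eg_jx', 'pylorus', 'mid_high_body', ...
    """
    if tumor == "benign":
        if growth == "exophytic" or location == "anterior":
            return ["EAWR"]
        return ["EATR", "LIGS", "SI-IGR"]          # posterior wall or EG Jx
    # Early gastric cancer: ESD-based resection is adaptable everywhere,
    # and LAEFR can be applied to the whole stomach including EG Jx/pylorus
    options = ["ESD+LLND", "LAEFR"]
    if location in ("mid_high_body", "eg_jx"):
        options.append("SI-ESD+LLND")
    if location not in ("eg_jx", "pylorus"):       # non-exposure techniques excluded there
        options += ["NEWS", "Clean-NET"]
    return options

print(suggest_lecs_procedure("benign", "endophytic", "eg_jx"))  # ['EATR', 'LIGS', 'SI-IGR']
print(suggest_lecs_procedure("egc", location="pylorus"))        # ['ESD+LLND', 'LAEFR']
```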
2021-07-08T05:24:31.310Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "8d26855626b45a6171868d497e2ed9be0f789ba0", "oa_license": "CCBYNC", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8255300", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "8d26855626b45a6171868d497e2ed9be0f789ba0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
118663225
pes2o/s2orc
v3-fos-license
Absolute neutrino masses
We discuss the possibility of using experiments timing the propagation of neutrino beams over large distances to help determine the absolute masses of the three neutrinos.

I. INTRODUCTION
Determining the absolute values of the three neutrino masses is a subject of great interest. Historically, Pauli introduced the concept of the neutrino in order to explain why the energy of the emitted electron in beta decay did not have a single value as expected for a two-body final state, but had a spectrum of energies (for different observations of the process). Fermi then suggested experimentally determining the mass of the neutrino by precisely measuring the electron's energy spectrum in the endpoint region. This procedure has yielded steadily decreasing upper bounds on neutrino mass for more than half a century. It has, however, turned out to be difficult to reach the range suggested by the cosmology bound [1] for the three different neutrinos now known to exist: This bound is based on the gravitational force exerted by neutrinos in the universe rather than directly on neutrino kinematics. However, heroic efforts [2]-[7] are continuing to make steady progress on finding the neutrino masses using the kinematics of the tritium beta decay reaction. One of course has the associated problem of finding the masses of all three neutrino mass eigenstates. The neutrino oscillation experiments [8]-[15] have determined these mass differences up to an ambiguity of either a "normal" or an "inverted" hierarchy. Specifically, where the numbers have been taken from the recent review [16]. Of course, the two hierarchies correspond to the two possible signs of (m_3)² − (m_1)². Evidently, measuring any combination of the three neutrino masses which is independent of A and C should determine, for each hierarchy choice, the values of all three neutrino masses.

II. TOWARD FINDING THE ABSOLUTE NEUTRINO MASSES
Here we would like to discuss some aspects related to the possibility of finding the separate (or "absolute") neutrino masses with the help of experiments measuring neutrino velocities by timing laboratory-made neutrino beams traveling from one point to another. At the moment, and for the near future, these are "thought experiments". However, thinking about them raises a number of interesting questions. If there were just a single neutrino, it is clear that we could "trivially" find its mass m by measuring its velocity v as well as its energy E and using Einstein's formula, E = m/√(1 − v²), where we are working in units where the velocity of light c = 1 and v = 1 − ǫ, appropriate to neutrinos traveling just slightly less than the speed of light. For definiteness, in the realistic 3-neutrino world, we assume a neutrino beam initiated in association with muons and detected, after traveling a measured time and distance, by produced muons. We designate this as a mu-neutrino type beam. Now we must take account of the fact that the laboratory neutrino beam in this setup corresponds to working with a neutrino "flavor" eigenstate (of muonic type) rather than a mass eigenstate. A prescription is therefore required to define what might be called the "averaged neutrino mass" m̄ which enters into Eq. (3) when we want to use it in the 3-flavor world. In quantum mechanics the mu-type neutrino corresponds to a linear combination of the three mass eigenstates, ν_µ = Σ_a K_2a ν_a, where the K_2a's are approximately the elements of the leptonic mixing matrix.
In models, say for the case of Majorana neutrinos, the mixing results from bringing an "original" mass matrix M to diagonal form via Kᵀ M K = M_diag, where K is unitary. The "to-be-measured" mass m̄ will thus be interpreted [17] as the "averaged" quantity m̄ = (K_2i)² m_i, where the repeated index i should be summed over. We also have assumed, for simplicity, that the K_ij's are real. As a check of this formula, one notes that if the neutrinos didn't mix we would have K_2i = δ_2i and m̄ = m_2. Present experimental information on the "angles" which parameterize the matrix K is summarized in ref. [18]. Translating these angles to the needed matrix elements K_2i yields the numerical values with estimated errors. Eq. (6) then explicitly gives the to-be-measured quantity m̄ as a different combination of the masses than those in Eq. (2). Thus, combining the three equations, we can find the absolute neutrino masses for each hierarchy choice. For example, we may eliminate m_2 and m_3 in terms of m_1 by using Eqs. (2); the plus sign of the resulting expression corresponds to the normal hierarchy choice and the minus sign to the inverted hierarchy choice. For the normal hierarchy we can, using this equation, obtain the curve in Fig. 1, while for the inverted hierarchy we obtain the curve in Fig. 2. Thus, if an average mass m̄ is measured by a long-baseline time-of-flight experiment, we can read m_1 from this curve and also find (from the known differences above) both m_2 and m_3. This should be done for both hierarchy assumptions; the present method does not determine which hierarchy is the one nature chooses. Of course, there are some important experimental errors introduced in this determination. It is expected that further experimental measurements of neutrino oscillations will substantially improve the experimental accuracy. This procedure thus appears to be able to reasonably solve the problem of relating the "averaged" mass m̄ measured by using Eq. (6) to the three neutrino mass eigenvalues m_i. However, we are not done, since we should pay some attention to the accuracy to which m̄ itself can be determined. It should be remarked that the need to take mixtures of mass eigenstates into account is also evident in the electron-endpoint beta decay method for absolute neutrino mass determination. In that case the K_1i's appear instead of the K_2i's; see for example Eq. (21) of [19]. Next, we discuss the needed determination of the averaged mass m̄ by a long-baseline time-of-flight measurement. First the mu-neutrino ν_µ must be produced. We would like to select the ν_µ to have as small an energy as possible in order that it travel at as low a speed v as possible. An at least conceptually convenient source in this context might use the decay process K⁺ → µ⁺ + ν_µ. By selecting the K⁺ and µ⁺ momenta one can reduce the energy of the ν_µ and hence its speed v. That would make a precise time measurement easier. From Eq. (3), the deviation of the neutrino's speed from that of light is given by ǫ ≈ m̄²/(2E²). If one were to select the neutrino energy to be 1 MeV, this would result in ǫ of the order of 10⁻¹⁴. That corresponds to a deviation from the velocity of light in the fourteenth decimal place, which does not seem practical. However, if one were able to get enough data with a neutrino energy selected to be 1 keV, ǫ would be of the order of 10⁻⁸, which should be measurable.
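The quoted orders of magnitude follow directly from ǫ ≈ m̄²/(2E²); a short numerical check (with m̄ = 0.1 eV as an assumed illustrative value for the averaged mass, since the measured value is unknown) reproduces them.

```python
# eps = 1 - v ~ m^2 / (2 E^2) in natural units (c = 1); masses and energies in eV
m_bar = 0.1                                 # assumed illustrative averaged mass (eV)

for e, label in ((1e6, "1 MeV"), (1e3, "1 keV")):
    eps = m_bar**2 / (2 * e**2)
    print(f"E = {label}: eps ~ {eps:.0e}")  # ~5e-15 and ~5e-09
```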
III. SUMMARY

We have shown how measurements of, for example, a mu-neutrino beam's velocity and energy can be used to help determine the absolute masses of the presently known three neutrinos. This could be a supplement to the determinations obtained using the endpoint of the tritium beta decay method. Of course, an important question is the accuracy which can be achieved. In addition to the accuracy of the velocity and energy measurements of the beam, it is important to increase the accuracy of the neutrino mass differences and the lepton mixing matrix elements obtained via neutrino oscillation experiments. A recent review of these is given in [18].
A Sensitive Method for the Determination of 1,25-Dihydroxyvitamin D3 in Human Brain using Ultra-Pressure Liquid Chromatography Tandem Mass Spectrometry

The hormonally active form of vitamin D, 1,25-dihydroxyvitamin D3 [1,25(OH)2D3], has been associated with neuroprotective effects in the brain, but has been difficult to measure in human brain tissue because of its low concentration. The aim of this study was to develop and validate a sensitive method to quantify 1,25(OH)2D3 in the human brain. Prior to analysis by LC-MS/MS, the samples were derivatized with 4-phenyl-1,2,4-triazoline-3,5-dione (PTAD). The method showed good linearity for 1,25(OH)2D3 over the physiological range (R2 = 0.9998). The limit of detection was 2.5 pg/g, >10 times lower than the previously reported limit of detection. The average 1,25(OH)2D3 concentrations in 3 regions of human brain tissue samples were: anterior watershed, 30.7 pg/g; mid-temporal cortex, 19.2 pg/g; and cerebellum, 18.5 pg/g. This validated method to quantify 1,25(OH)2D3 in human brain tissue can be applied to obtain information about its presence in various regions of the human brain associated with neurodegenerative diseases.

Introduction

Vitamin D is an essential fat-soluble nutrient that is classically known for its role in maintaining calcium homeostasis and skeletal health [1]. Vitamin D can be obtained in the diet or synthesized cutaneously in the presence of UVB light. It is hydroxylated by the liver to produce 25-hydroxyvitamin D3 [25(OH)D3], which is the primary circulating form of vitamin D. Through a tightly regulated series of feedback loops, 25(OH)D3 is further hydroxylated to produce 1,25-dihydroxyvitamin D3 [1,25(OH)2D3], which is the hormonally active form that can bind to the nuclear vitamin D receptor (VDR) to activate gene expression [2]. Several tissues in the body are capable of hydroxylating 25(OH)D3 to 1,25(OH)2D3 and also express the VDR, including brain tissue [3]. It has been suggested that vitamin D is mechanistically linked to neurodegenerative diseases [4]. Most of the studies that have associated vitamin D with neurodegenerative diseases and cognitive decline have relied on circulating 25(OH)D3 as the indicator of vitamin D status [5]. However, it is not clear whether the 25(OH)D3 measured in circulation reflects vitamin D in the brain. We recently reported that 25(OH)D3 in human brain tissue was associated with postmortem cognitive status but was not associated with Alzheimer's disease pathology [6]. Unfortunately, 1,25(OH)2D3, using our original assay [7], was below the assay's lower limit of detection (LOD) in 58% of the samples, and was therefore excluded from our data analysis. Because 1,25(OH)2D3 is the active form of vitamin D, and the exact mechanisms underlying the role of vitamin D in the brain remain unknown, it is critical to be able to measure this active form. Indeed, we posit that the role of vitamin D in neurodegenerative diseases can only be elucidated by quantifying 1,25(OH)2D3, in addition to 25(OH)D3, in the human brain, which requires greater assay sensitivity. Here we report on the development and validation of a mass spectrometry assay with improved sensitivity, with an LOD 10 times lower than that of the previous assay, to quantify 1,25(OH)2D3 in postmortem human brain tissue. This assay will enable future research into the role of 1,25(OH)2D3 in the human brain and its potential relevance to neurodegenerative diseases.
Samples and clinical application

One human cadaver brain, obtained from a 54-y-old woman donor through the National Development and Research Institutes, was utilized for method validation, as described previously [7]. The cortex was homogenized and aliquoted for use as brain controls (blank), then stored at −80 °C. Quality controls (low and high) were freshly prepared by taking 0.1 g of brain control and spiking it with a specific amount of 1,25(OH)2D3.

Postmortem brain tissue samples, stored at −80 °C for no longer than 6 y, were obtained from 153 participants in the RUSH Memory and Aging Project [6,8]. During autopsy and brain dissection, the tissues were promptly frozen and maintained in a frozen state throughout the dissection process without thawing [9]. The 1,25(OH)2D3 concentrations were measured in the following brain regions: anterior watershed (AWS), middle temporal cortex (MT), and cerebellum (CR).

Preparation of brain samples and analyses

Brain sample preparation of 0.1 g and liquid-liquid extraction with methylene chloride:methanol (1:1) were used, as described previously [7]. Silica solid phase extraction (SPE) columns (Agilent) and a Vac-Elute SPS 24 manifold rack were used for SPE, in which each sample was reconstituted in 1 mL of 4% isopropanol in hexane and transferred onto the SPE columns, then washed with 9 mL of 4% isopropanol in hexane and 6 mL of 6% isopropanol in hexane, and eluted with 4.5 mL of 25% isopropanol in hexane, to be dried under N2 gas at 60 °C. For the derivatization, 200 μL of PTAD solution (0.25 mg/mL) were added to each dried sample, vortexed, and left in a dark place for 1 h at room temperature. Derivatized samples were dried and reconstituted in 100 μL of 20 mM methylamine, vortexed for 2-3 min, then centrifuged at 16,300 × g for 5 min at 4 °C. The supernatant was pipetted into vials with glass inserts to be analyzed with the LC-MS/MS system.

Validation experiments

Linearity was established for 1,25(OH)2D3 using serial dilutions of the calibration standard to concentrations ranging from 1.25 to 400 pg/mL. The LOD for 1,25(OH)2D3 was determined by spiking human brain with serially diluted vitamin D standards. Further validation of intra-assay and inter-assay precision was characterized with relative standard deviations (RSD) for targeted concentrations of 10 and 80 pg/g in human brain. The inter-assay variability was determined by repeating the same procedure on 4 consecutive days. The accuracy was calculated as the percentage of the nominal concentration [(measured concentration − blank brain concentration)/(nominal concentration) × 100%]. d6-1,25(OH)2D3 was added as internal standard (IS) to spiked brain samples to evaluate the IS-normalized extraction recovery.

Statistical analysis

Linearity, slope, and regression coefficients were determined by linear regression using Microsoft Excel. Spearman's correlations of 1,25(OH)2D3 concentrations in the different brain regions were calculated using SPSS (IBM, version 29).
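The validation arithmetic described above (accuracy as a percentage of the nominal concentration, and precision as a relative standard deviation) reduces to a few lines of Python. This is a sketch for illustration only; the function names and example numbers are ours, not values from the assay.

```python
import statistics

def accuracy_percent(measured, blank, nominal):
    """Accuracy as % of nominal: (measured - blank) / nominal x 100."""
    return (measured - blank) / nominal * 100.0

def rsd_percent(replicates):
    """Relative standard deviation (precision) of replicate measurements."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100.0

# Example: brain aliquots spiked to a nominal 10 pg/g over a blank pool
print(accuracy_percent(measured=12.1, blank=1.8, nominal=10.0))   # 103.0 %
print(rsd_percent([9.8, 10.4, 10.1, 9.6]))                        # intra-assay RSD
```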
Results

This method for measuring 1,25(OH)2D3 in human brain demonstrates linearity between 1.25 and 400 pg/mL with an R2 value of 0.9998. The LOD was 2.5 pg/g, which is more sensitive than the previous LOD of 25 pg/g [7]. The precision and accuracy of 1,25(OH)2D3 in spiked human brain are shown in Table 1. The precision of these measurements is characterized by intra-assay variability for the targeted brain concentrations of 10 and 80 pg/g, with RSDs of 4.0% and 6.7%, respectively. As for the inter-assay variability, the RSDs for the targeted brain concentrations of 10 and 80 pg/g were 13.6% and 6.0%, respectively. The accuracy of spiked brain samples was slightly over 100%, ranging from 101.1% to 108.5%. The extraction recovery, shown in Table 1, indicates that this method yields high recovery rates for brain samples. The resulting multiple reaction monitoring chromatograms of 1,25(OH)2D3 derivatized with PTAD are shown in Figure 1. Studies have shown that the double peak on the chromatogram is the result of the Diels-Alder reaction of the derivatization reagent PTAD with the s-cis-diene moiety from both the α and β sides, forming 6S and 6R epimers [11].

According to the previously published method, there are certain challenges in isolating one vitamin D metabolite, 1,25(OH)2D3, from brain samples because of the differences in polarity among the vitamin D metabolites analyzed [12,13]. The polarity difference makes it impossible to extract all of the vitamin D3 metabolites using the same solvent. Of all the vitamin D metabolites, 1,25(OH)2D3 has the greatest polarity because of its additional hydroxyl group. This difference in polarity allows for its extraction through SPE, with mobile phases of varying polarity. The specific silica column used in the SPE is polar, allowing the 1,25(OH)2D3 to bind to silica particles while other, less polar vitamin D metabolites wash through. SPE has been successfully applied for measuring 1,25(OH)2D3 in plasma [14]. Using sequential washes of 4% and then 6% isopropanol in hexane allows for selective isolation of the 1,25(OH)2D3 bound to the column. The last eluant is collected with the most polar mobile phase, 25% isopropanol in hexane. This specific sequence of polar mobile phases was found to yield the greatest recovery of 1,25(OH)2D3.

The method was used to analyze 459 human brain samples from 153 postmortem brains obtained from participants in the RUSH Memory and Aging Project study. All 3 analyzed brain regions (n = 153), AWS, MT, and CR, had measurable 1,25(OH)2D3, with respective concentrations of 30.7 ± 15.2 pg/g, 19.2 ± 9.3 pg/g, and 18.5 ± 8.6 pg/g (Figure 2). The correlations of 1,25(OH)2D3 concentrations among the 3 regions were the following: MT and CR, Spearman r = 0.70; MT and AWS, Spearman r = 0.55; CR and AWS, Spearman r = 0.45; all P < 0.005. The method's LOD of 2.5 pg/g enabled the quantification of 1,25(OH)2D3 in all brain samples analyzed (Figure 2). Compared with the previous method, which was unable to detect 1,25(OH)2D3 in 58% of brain samples (below its LOD of 25 pg/g), this assay could detect 1,25(OH)2D3 in all brain samples and with greater linearity over the concentration range evaluated [7].

Discussion

Here we present a successful modification of a prior validated method of measuring vitamin D metabolites in human brain tissue that now allows us to quantify 1,25(OH)2D3, the hormonally active form of vitamin D.
This assay can detect 1,25(OH)2D3 in multiple human brain regions. Accurate measurement of 1,25(OH)2D3 in human brain tissue will allow us to better elucidate the mechanisms underlying associations between vitamin D status and cognitive function in older adults. Because of the small sample size available for assay development, we were not powered to further analyze the associations between 1,25(OH)2D3 and cognitive or neuropathological outcomes. However, now that we have a validated assay, this will be an important future direction for this research. Future research is also needed to elucidate the factors that contribute to the variability in human brain 1,25(OH)2D3 concentrations. For example, brain 1,25(OH)2D3 concentrations may be attributable to variations in VDR and/or 1-alpha-hydroxylase expression, because the VDR and 1-alpha-hydroxylase [crucial for synthesizing 1,25(OH)2D3] have been identified within the adult cadaver human brain, in both neurons and glial cells [15].

There are few studies using similar methods that can measure 1,25(OH)2D3 in mammalian brain tissues, because brain tissue is ~60% lipids, and the composition of these lipids is very complex. One recent method used LC-MS/MS with online extraction to measure 1,25(OH)2D3 in the rodent brain, with a limit of quantification of 12.5 pg/g [16]. However, the majority of methodologies that use LC-MS/MS to measure 1,25(OH)2D3 in human plasma samples have higher LODs, ranging from 20 to 210 pg/mL [5,17]. The SPE step provided additional purification of the brain samples, enhancing the selectivity of the assay. Approximately 500 samples were analyzed, with a quality-control inter-assay coefficient of variation of 7%. This indicates that the assay is robust and suitable for large-scale studies. Overall, our method was successfully used for the determination of 1,25(OH)2D3 in the human brain, with detection of 1,25(OH)2D3 in all samples because of the improved LOD. With this method, the role of 1,25(OH)2D3 in the human brain can be further elucidated. In conclusion, this validated method can be applied and coupled with other methods that measure vitamin D metabolites to bring further insight into the role of vitamin D and its metabolites in the brain and neurodegenerative diseases.
Factors affecting the accuracy of nurse triage in tertiary care emergency departments

OBJECTIVES: The accuracy and duration of triage are vital in emergency departments. However, patient density, diversity of cases, and time pressure make triage difficult. Triage performed properly and at the right time prevents patients from experiencing untoward incidents that may occur because of waiting. Therefore, the study aimed to share data obtained from the Hospital Information Management System (HIMS) regarding the accuracy and duration of nurse triage in an adult emergency department. METHODS: This descriptive, cross-sectional study evaluated the accuracy and duration of triage decisions made by nurses for patients admitted to an adult emergency department between June 15 and July 15, 2019. Statistical analysis was performed using SPSS software version 23.0. RESULTS: The study included the data of 7705 adult patients. The accuracy rate of nurse triage was 59.3% (n = 4566), and the average duration of triage was 1.52 ± 2.10 min. The average duration of accurate triage decisions was longer in patients with triage category 3. A statistically significant relationship was determined between the accuracy of nurse triage and the duration of triage, the years of seniority of the nurse, and the shift (P < 0.05). CONCLUSIONS: The accuracy and duration of nurse triage in the hospital where the study was conducted can be evaluated via the HIMS. In order to increase the accuracy of nurse triage in the emergency department, it is necessary to employ experienced and trained nurses, develop computer-based support systems, and increase the number of nurses working in shifts that provide care to a large number of patients.

Introduction

In emergency departments, every second is important for patients. The time spent in an emergency department can determine the patient's death, disability, or return to life. [1] Another factor affecting this process is the accuracy of the triage decisions made. [2] It is the most important role of triage nurses in the initial assessment to ensure that the patient is in the right place at the right time in the emergency department and that no one is overlooked. [3] Approximately half of the triage assessments in emergency departments are estimated to be erroneous. [4,5] Inaccurate or inconsistent triage in emergency departments can lead to poor clinical results, such as prolonged diagnosis and treatment time for patients, improper use of hospital resources, decreased patient and employee satisfaction, and even increased mortality rates. [2,5] Therefore, triage needs to be fast and should be completed in 2 to 5 min. [6]

Triage practice in our country is a fairly new field and has its challenges. Legal arrangements for triage were announced in 2009; however, the accuracy and error rates of the triage performed are unknown. [7] The number of studies in the field of triage at the national level is quite low, and the few existing studies mostly focused on the creation, practicability, and validity of triage algorithms. Moreover, another noteworthy issue is that studies on nurse triage are inadequate. [8] Therefore, it is important to conduct studies on triage practices at the national level. Studies assessing nurse triage will be useful in the process of continuous monitoring, supervision, and quality service delivery.
The study aims to share data on the accuracy and duration of the nurse triage performed in an adult emergency department. Knowing the accuracy and duration of nurse triage and the other factors affecting them can form the basis for developing measures to provide safer health care to patients.

Setting

This study was conducted in the emergency department of a university hospital, with an average admission rate of 280 patients daily. The emergency overcrowding score according to the National Emergency Department Overcrowding Scale was calculated to be Level 4 (overcrowded). Triage in the emergency department is performed by nurses with at least 1 year of emergency experience and 6 h of triage training; 19 nurses work in the field of triage in this hospital. The accuracy and duration of the triage performed by nurses can be monitored through the Hospital Information Management System (HIMS) using a previously developed Structured Query Language (SQL) script.

Design

We obtained data on the nurse triage performed for patients in the adult emergency department of a university hospital between June 15 and July 15, 2019, from the HIMS. The SQL script developed by the researchers is considered the gold standard for assessing the accuracy and duration of triage decisions. In this system, triage categories 3, 4, and 5 were determined based on the resource requirement of the Emergency Severity Index (ESI) algorithm: patients who did not require any resources were considered category 5, those requiring one resource category 4, and those requiring multiple resources category 3. In patients with triage categories 1, 2, and 3, accuracy assessments were performed according to the clinical outcome criteria. The clinical outcome criteria include the death of the patient in the emergency department, referral to another hospital, hospitalization in the intensive care unit or a clinic, and death in the first 24 h of hospitalization. The SQL script was designed such that the triage performed is considered correct if the category assigned by the triage nurse is consistent with one of the above situations, and incorrect if it is not. The study data were obtained from the HIMS by writing an SQL script together with the information technology unit. The duration of triage in the system refers to the time period between the moment the triage nurse starts receiving the first data from the patient by clicking the new-record button and the moment all procedures are completed by clicking the save button.

A total of 8684 patients applied to the adult emergency department during the study period. Pediatric trauma patients (under the age of 18) were excluded from the study; hence, the study sample consisted of 7705 adult patients. Written approval was obtained from the faculty of medicine clinical research ethics committee (March 4, 2020/225), the university hospital, and the emergency department before starting the study. The study was designed as a descriptive, cross-sectional study.

Statistical analysis

Statistical analysis was performed using IBM SPSS Version 23.0 (NY, USA). Averages, frequencies, standard deviations, and percentages were calculated for categorical and continuous variables. The relationship between categorical variables was evaluated using the chi-square test. Bonferroni's multiple comparison test was performed to test group differences after the chi-square test. The independent-samples t-test was used to compare parametric continuous data. The level of significance was set at 0.05.
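As a rough illustration of the accuracy rule described above, the following Python sketch mirrors the resource-based ESI categorization and the clinical outcome check. The actual system is an SQL script run against the HIMS, so the names and outcome codes here are hypothetical, and the treatment of category 3 (which the text places under both rules) is our reading of the description.

```python
# Hypothetical outcome codes standing in for the clinical outcome criteria
CLINICAL_OUTCOMES = {"died_in_ed", "referred_to_other_hospital",
                     "icu_admission", "ward_admission", "died_within_24h"}

def esi_category_from_resources(n_resources):
    """ESI categories 3-5 assigned from the number of resources required."""
    if n_resources == 0:
        return 5
    if n_resources == 1:
        return 4
    return 3  # multiple resources

def triage_accurate(nurse_category, n_resources, outcome):
    """Categories 1-2 are judged on clinical outcome criteria, 4-5 on the
    resource requirement; category 3 appears in both rules."""
    outcome_ok = outcome in CLINICAL_OUTCOMES
    resource_ok = nurse_category == esi_category_from_resources(n_resources)
    if nurse_category in (1, 2):
        return outcome_ok
    if nurse_category == 3:
        return outcome_ok or resource_ok
    return resource_ok

print(triage_accurate(2, n_resources=3, outcome="icu_admission"))  # True
print(triage_accurate(4, n_resources=0, outcome="discharged"))     # False
```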
Box-ED

Accurate triage is one of the most important elements in providing safe care and treatment in emergency departments. Nurse triage decisions in the emergency department were 59.3% accurate. It may be helpful to develop electronic guiding support systems in order to increase triage accuracy and to perform triage within the recommended time period. National studies on the rates of triage errors in emergency departments are needed.

Results

In the study, the data of 7705 patients were evaluated. The mean age of the patients was 41.40 ± 17.64 years. The nurse triage accuracy rate was 59.3% (n = 4566), and the mean triage duration was 1.51 ± 2.10 min [Table 1]. In patients with triage category 2, the average duration of accurate triage decisions was longer, but the difference was not statistically significant (P = 0.53). In patients with triage category 3, accurate triage decisions took longer (P = 0.01). In patients with triage categories 4 and 5, the average duration differed by accuracy: accurate triage decisions took less time (P = 0.01) [Table 2].

The accuracy rates of nurse triage in the adult emergency department differed by shift. According to the Bonferroni test, the reason for this difference is the higher patient density in the evening shift than in the day and night shifts (P = 0.03) [Table 3]. The accuracy rates of triage in the emergency department also differed by the nurses' working time in triage (P = 0.01). According to the Bonferroni test, this difference is due to the triage accuracy rates of specialist nurses (61.8%) being higher than those of novice nurses (54.8%) (P = 0.01); novice nurses have lower triage accuracy rates [Table 4].

Discussion

This study shows that 59.3% of nurse triage decisions were accurate. Moreover, 41.7% of the patients who presented to the emergency department were assigned category 3 by nurses, and the accuracy rates for these patients were high [Table 1]. In triage category 3 patients, the average duration of correct triage decisions was longer [Table 2]. This result suggests that triage assessment times should be longer, especially for category 3 patients, and should be in line with the literature. [6] One study found that approximately half of the patients (49.1%) who presented to the emergency department were assigned category 3. [9] In a multicenter study, the accuracy rates of nurse triage decisions did not differ between regions, and the average accuracy rate was 59.2%. [10] In another study, the accuracy rate of nurse triage was 68.3%. [4] In a study by Chen et al., [11] 40% of nurses' triage decisions were inaccurate. In a study in which the accuracy of triage decisions was evaluated using case scenarios, 40.4% of the triage decisions were inaccurate. [12]

In our study, the average duration of triage assessments was 1.51 ± 2.10 min, which is below the time period recommended in the literature [Table 1].
Furthermore, the relationship between the accuracy of nurse triage decisions and the duration of triage differed across categories 3, 4, and 5 [Table 2]. In one study, the average time to complete the patients' triage assessments was 4:04 min. [13] Other studies reported average triage durations of 2.6 ± 2.5 min [14] and 5.9 min. [15] In another study, the longest time allocated for triage was for category 3 patients, at 2.8 ± 2.5 min. [14] The literature suggests that triage assessment should be performed within 2 to 5 min in order for the assessment to be fast and accurate. [6]

One of the most important factors affecting decision making in triage is the number of patients in the emergency department. [16] In our study, the error rate was lower in the night shift, which had a lower patient density (n = 1317) [Table 3]; in the day and evening shifts, with high patient density, triage accuracy rates decreased. The literature indicates that the accuracy of nurse triage is associated with the number of nurses working in each shift, patient density, [17] and workload. [5]

In our study, accuracy rates increased as the nurses' years of triage experience increased [Table 4]. Previous studies determined that the accuracy of triage decisions increased with nurses' working time in the emergency department and in triage. [11,12] In one of these studies, nurses with 3-4 years of triage experience had an accuracy rate of 67.2%, while those with 0-1 years of triage experience had an accuracy rate of 58.1%. [12] In a study examining the relationship between triage performance and experience, the triage accuracy rate of nurses with less experience was 45.76%, and that of nurses with more experience was 53.8%. [18] Hammad et al. [19] state that nurses working in triage should have at least five years of experience. These findings suggest that novice nurses should not be employed in daytime and evening shifts, where patient density is high. It may be recommended that novice nurses work together with experienced nurses during periods of high patient density to build skill and competency. The findings also show the need to increase the number of triage nurses to share the workload in busy shifts.

Limitations

The limitation of this study is that it was conducted using nurse triage data from the HIMS of a single university hospital.

Conflicts of interest

None declared.

Ethical approval

Ethical approval was granted by the Akdeniz University Faculty of Medicine Ethics Committee for Non-Interventional Clinical Research (04.03.2020/225).

Consent to participate

The data were obtained from the HIMS. The Standards for Privacy of Individually Identifiable Health Information (the Health Insurance Portability and Accountability Act, HIPAA) were followed.
Texture Analysis of Abnormal Cell Images for Predicting the Continuum of Colorectal Cancer

Abnormal cell (ABC) tissue is markedly heterogeneous and can be categorized into three main types: benign hyperplasia (BH), carcinoma (Ca), and intraepithelial neoplasia (IN), a precursor cancerous lesion. In this study, the goal is to determine and characterize the continuum of colorectal cancer by using a 3D texture approach. ABCs were segmented in a preprocessing step using an active contour segmentation technique. Cell types were analyzed based on textural features extracted from gray level cooccurrence matrices (GLCMs). Significant texture features were selected using an analysis of variance (ANOVA) of ABC with a p value cutoff of p < 0.01. The selected features were reduced with a principal component analysis (PCA), which accounted for 97% of the cumulative variance of the significant features. The simulation results identified 158 significant features based on ANOVA from a total of 624 texture features extracted from GLCMs. Performance metrics of ABC discrimination based on significant texture features showed 92.59% classification accuracy, 100% sensitivity, and 94.44% specificity. These findings suggest that texture features extracted from GLCMs are sensitive enough to discriminate between the ABC types and offer the opportunity to predict cell characteristics of colorectal cancer.

Introduction

Colorectal cancer (CRC) is one of the most frequent cancers [1]. It is characterized by abnormal and uncontrolled cellular proliferation [2]. Surgical resection of the primary tumor with curative intent is possible in only 70% of patients. Unfortunately, up to 30% of CRC patients who undergo surgical resection of the primary tumor experience a subsequent relapse within 3 years, with a median time to death of 12 months [3,4]. Colorectal cells are transformed by CRC into anomalous and heterogeneous shapes [5,6]. In this context, heterogeneity is a pronounced feature of colorectal cancer that manifests as areas of high cell density. Attempts to quantify heterogeneity have been made using multiple feature functions such as Haralick features [6]. Another instance used the link between the texture of hepatic tissue and its entropy and uniformity to predict survival from computed tomography images [7]. However, few studies have used texture features to assess the continuum of CRC from benign to malignant cells. Additionally, classical optical microscopy systems can detect ABC when combined with advanced image processing techniques [8]. Early detection of ABC by shape or heterogeneity is of high interest in order to diagnose and start therapy early [6]. Hence, automating the process allows a faster and more precise reading of microscopic biopsies and may even allow classification of samples as BH, IN, or Ca [6,9,10]. In this context, numerous studies have considered developing automated reading procedures for such biopsies [5,11-14]. The biopsies examined by these procedures can be prepared and preprocessed for automated reading using the optical microscopy system. Then, ABCs can be separated from their surrounding media using segmentation techniques [15]. In this context, the appropriate segmentation technique must be carefully established in order to process multispectral bioimages from a microscopic system that provides high-resolution grayscale images.
Moreover, identification of ABC within an image should take into consideration characteristic features that are representative of each ABC type [6]. Texture feature extraction from ABC can be a promising technique to characterize each ABC type. Discrimination between ABC types can then be performed by applying a classifier approach such as the decision tree [16]. The analysis of the textures and structures of each ABC type permits a more accurate diagnosis of malignant cells, as they are structured in various patterns and textures.

In this work, we propose to analyze each ABC type by extracting texture features from GLCMs. Texture refers to the variability in tone within a region, or the spatial relationships among the gray levels of neighboring pixels. Three-dimensional (3D) texture analyses offer more information by using two angles and multiple pixel offsets to detect the variability of pixel pairs in 3D space [16,17]. The statistical approach of image analysis based on the cooccurrence matrix is commonly applied to optical and medical images to evaluate morphology [5,11,18,19]. The texture features extracted from the GLCM describe the texture and local variation in an image. For classification, we selected twelve principal features in order to identify ABC types, while discarding those that are either redundant or confusing, thereby improving the performance of the proposed feature-based detection technique.

In summary, the purpose of this study is the derivation of quantitative multispectral texture image features from optical microscopy images that classify the continuum of CRC lesions. The novelty of this study is that it is, to our knowledge, the first attempt at automated continuum prediction of CRC. It will be the foundation of radiomic maps that associate these texture features with the various ABC types. The remainder of this paper is organized as follows. Section 2 describes the texture feature extraction from 3D GLCMs in detail, with performance metrics. Sections 3 and 4 present the experimental results and discussion. Finally, Section 5 concludes the paper.

Materials and Methods

We analyzed 3D multispectral digital whole slide images (WSI) from 27 colorectal cancer patients. An example of the spatial heterogeneity of each multispectral ABC type is seen in its histogram distribution; clearly, there are certain characteristics and features from this preliminary analysis that differentiate BH, IN, and Ca (Figure 1).

Sample Preparation and Data Acquisition

Whole tissue samples were taken from colonic glands, sectioned at a thickness of 5 μm, and stained using Haematoxylin and Eosin (H&E). Images were captured by a charge-coupled device (CCD) camera integrated with a liquid crystal tunable filter (LCTF) in the optical microscopy system [20]. The LCTF provides multispectral images of the tissue samples by changing the operating wavelength [21]. The LCTF has a bandwidth of 5 nm, and its wavelength is controllable through the visible spectrum range of 400-720 nm. Multispectral images are produced through repeated image capture in various wavelength subbands. Moreover, it has been shown that classifier accuracy increases with the number of spectral bands [22]. Note that each image band is 8-bit coded and hence has 256 possible light intensity levels. In this study, the LCTF provided 16 multispectral bands over a wavelength range of 500-650 nm.
Thus, from each original image we obtained 16 images representing the wavelength range, i.e., a volume of multispectral data (Figure 1). Hence, texture extraction from each band of the multispectral data enhances the characterization of the lesion in each abnormal cell type. Note that a colorectal pathologist views images at lower power to identify the abnormal cells, which corresponds to the low magnification (×40) of the image samples.

Patients

After excluding samples with incomplete data, a set of 27 CRC patients was gathered for a preliminary study. We selected nine volumes of data from each ABC type, where a volume of data was structured as 16 multispectral images (Figure 1). The images were filtered by an average (spatial) filter before further segmentation processing to minimize the effects of noise and other external factors. All the images were reconstructed to a 512 × 512 matrix, and the volume size of 512 × 512 × 16 was used for texture feature extraction from the GLCMs of ABC (Figure 2(a)).

Segmentation of Abnormal Cells

We employed active contour segmentation to accurately segment the anomalous shapes of cells. This technique is based on a dynamic curve that moves toward and detects the contour of the object through a number of iterations [23,24]. This approach was successfully implemented to detect ABC types from similar kinds of multispectral bioimages. The computation time was improved by limiting the number of iterations, which was set automatically based on empirical calculations [6]. Computation time was further improved by resizing the images. For instance, an image of size 512 × 512 pixels was decreased to 64 × 64 pixels, and the active contour was applied to detect cells within the image. Active contour images were then resized to 512 × 512 pixels and placed on the original image (Figure 3); in fact, this technique resized the active contour and not the original image in order to reduce the computation time. Cell images in the 16 multispectral images were then assessed by a board-certified colorectal pathologist. A volume of each cell segmented in the 16 multispectral (2D) images was created to represent the variation across the multispectral bands (Figure 2(a)).

To assess the active contour segmentation, the ground truth of each cell and the cells segmented by the active contour were compared. The evaluation of WSI segmentation considered two similarity metrics, namely the Jaccard similarity coefficient (JSC) and the Dice similarity coefficient (DSC); additionally, the false positive rate (FPR) and false negative rate (FNR) were computed. JSC and DSC measure the degree of correspondence between ground truth cell images and segmented images. JSC can be formulated as

    JSC = |G ∩ S| / |G ∪ S|,

where G and S are the areas of the ground truth cell and the segmented cell, respectively. JSC was employed to calculate the overall level of similarity between the segmented cell and the ground truth cell. DSC was also employed and can be expressed as

    DSC = 2 |G ∩ S| / (|G| + |S|).

Additionally, we employed FPR and FNR, which quantify over- and under-segmentation:

    FPR = |S \ G| / |G ∪ S|,    FNR = |G \ S| / |G ∪ S|.

A direct relation between JSC, FPR, and FNR then follows:

    JSC = 1 − FPR − FNR.

The performance metrics of the active contour technique were reported (Table 1). This volume of WSI was quantified by the texture features extracted from the GLCMs of each abnormal cell type.
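The four overlap metrics above can be computed directly from binary masks. The following is a minimal sketch (our function name, not the paper's code), including a check of the direct relation JSC = 1 − FPR − FNR.

```python
import numpy as np

def overlap_metrics(G, S):
    """Overlap metrics for binary masks G (ground truth) and S (segmentation)."""
    G, S = G.astype(bool), S.astype(bool)
    inter = np.logical_and(G, S).sum()
    union = np.logical_or(G, S).sum()
    return {
        "JSC": inter / union,
        "DSC": 2 * inter / (G.sum() + S.sum()),
        "FPR": np.logical_and(S, ~G).sum() / union,  # over-segmentation
        "FNR": np.logical_and(G, ~S).sum() / union,  # under-segmentation
    }

# Sanity check of the direct relation JSC = 1 - FPR - FNR on random masks:
rng = np.random.default_rng(0)
m = overlap_metrics(rng.random((64, 64)) > 0.5, rng.random((64, 64)) > 0.5)
assert abs(m["JSC"] - (1 - m["FPR"] - m["FNR"])) < 1e-9
```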
GLCM-Based Features

One of the best techniques used to evaluate the relationships between image pixels is texture feature extraction from the GLCM. This technique was proposed by Haralick et al. in 1973 [19]. It is one of the most popular second-order statistical approaches, based on GLCM computation and its texture features; second-order statistics estimate properties of two or more pixel values occurring at specific locations relative to each other. For these reasons, we used the GLCM-based feature technique in this work.

GLCM Computation (2D and 3D)

The GLCM represents the probabilities p_{d,θ}(i, j) of transition from a pixel with intensity i to a pixel of intensity j, separated by a translation vector defined by a direction θ and an offset d (the offset is also known as the distance) [11,16-19]. Given a two-dimensional (2D) image I of size N × M, the cooccurrence matrix p_{d,θ}(i, j) counts the pixel pairs ((x, y), (x', y')) such that I(x, y) = i, I(x', y') = j, and (x' − x, y' − y) equals the displacement defined by d and θ. GLCM computations can also be applied to 3D images. In this case, the GLCM p(i, j) counts the number of pixel pairs that have intensities i and j for the spatial relationship specified by a translation vector (dx, dy, dz), where dx, dy, and dz represent the number of pixel offsets along the x-axis, y-axis, and z-axis of the 3D image. For volumetric data, two angles (θ, φ) lead to 13 directions (Figure 2). Each segmented cell was histogram equalized to 32 levels, and then the GLCM computation was applied. The foremost advantage of GLCMs applied to volumetric data is the ability to capture intensity relationships between the pixels in a 3D volume. Further, the number of GLCMs resulting from 3D operations is typically smaller than that corresponding to numerous 2D slices. For example, in a data cube with 10 separate 2D slices, there are a total of 80 GLCMs (8 GLCMs per slice, corresponding to 2 offsets and 4 directions). In a 3D operation, on the other hand, the total number of GLCMs is 26 (13 directions and 2 offsets). Supported by this benefit of GLCMs applied to volumetric data, we computed GLCMs of multispectral ABC and quantified these cooccurrence matrices by Haralick features.

Texture Quantification

Haralick proposed 14 texture features to be extracted from GLCMs, and the value of each extracted feature provides a preliminary indicator of ABC in the texture image. Among the 14 texture features, we employed the 12 principal ones: energy (f1), entropy (f2), correlation (f3), contrast (f4), homogeneity (f5), variance (f6), sum-mean (f7), inertia (f8), cluster shade (f9), cluster tendency (f10), maximum probability (f11), and inverse difference moment (f12). These features are characterized as follows. Energy (f1) shows the scale of texture homogeneity; it is high when the GLCM consists of few entries of high amplitude and low when all the values of the GLCM are almost similar. Entropy (f2) measures the disorder or complexity of an image; the highest value of entropy is found when the values of p(i, j) are allocated quite uniformly throughout the matrix. Correlation (f3) measures the linear dependence of gray level values in the GLCM, i.e., it describes the correlations between the rows and columns of the GLCM. Contrast (f4) measures intensity contrast, or the local variations present in an image, to show the texture fineness. Homogeneity (f5) measures the closeness of the distribution of elements in the GLCM to the GLCM diagonal. Variance (f6) is expected to be large if the gray levels of the image are spread out greatly. Sum-mean (f7) measures the average of the gray levels; it can be high if the sum of the gray levels of the image is high.
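To make the 3D GLCM construction concrete, the sketch below counts cooccurring gray-level pairs for one displacement vector and evaluates a few of the twelve functions. It is a simplified illustration under our own naming, not the paper's implementation.

```python
import numpy as np
from itertools import product

# The 13 unique 3D directions: one of each +/- pair of nonzero unit steps
DIRECTIONS = [d for d in product((-1, 0, 1), repeat=3) if d > (0, 0, 0)]  # 13

def quantize(vol, levels=32):
    """Rescale a volume to integer gray levels 0..levels-1."""
    v = (vol - vol.min()) / (np.ptp(vol) + 1e-12)
    return np.minimum((v * levels).astype(int), levels - 1)

def glcm_3d(vol, direction, offset=1, levels=32):
    """Joint probabilities p(i, j) of gray-level pairs at offset*direction."""
    dz, dy, dx = (offset * c for c in direction)
    z, y, x = vol.shape
    src = vol[max(0, -dz):z - max(0, dz),
              max(0, -dy):y - max(0, dy),
              max(0, -dx):x - max(0, dx)]
    dst = vol[max(0, dz):z - max(0, -dz),
              max(0, dy):y - max(0, -dy),
              max(0, dx):x - max(0, -dx)]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (src.ravel(), dst.ravel()), 1)   # count co-occurring pairs
    return glcm / glcm.sum()

def texture_functions(p):
    """Three of the twelve functions: energy (f1), entropy (f2), contrast (f4)."""
    i, j = np.indices(p.shape)
    energy = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    contrast = ((i - j) ** 2 * p).sum()
    return energy, entropy, contrast

vol = quantize(np.random.default_rng(0).random((16, 64, 64)))
features = [texture_functions(glcm_3d(vol, d, offset=4)) for d in DIRECTIONS]
```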
Inertia (f8) measures the inhomogeneity of the image. Cluster shade (f9) measures the skewness (asymmetry) of the GLCM and is considered to gauge the perceptual concept of uniformity; when the cluster shade is high, the image is asymmetric. Cluster tendency (f10) measures the grouping of pixels that have similar gray level values. Maximum probability (f11) measures the dominant pixel pair in the GLCM; it is high if the dominant pixel pair occurs frequently. Inverse difference moment (f12) measures the smoothness of the image; it is high if the gray levels of the pixel pairs are similar.

For the ABC detection problem, the aforementioned textural features are extracted from the 3D GLCMs conforming to the 13 directions and 4 types of offset. Therefore, the length of the resulting feature vector is 12 (functions) × 13 (directions) × 1 (offset) = 156 features per offset (624 features over the 4 offsets). To analyze the effect of texture features based on GLCMs, we organized the texture features into 5 groups (G1, G2, G3, G4, and G5), reported in Table 2. Moreover, we calculated the average of each texture feature based on the 3D GLCM over the 13 directions and 4 offsets to evaluate its value for each ABC type (Table 3). Additionally, we employed feature selection techniques on each texture feature group to demonstrate the effectiveness of texture analysis in a definite direction and offset, and the performance metrics were reported (Table 4).

Statistical Analysis

Textures quantified by the twelve functions (based on those suggested by Haralick) can be compared among the BH, IN, and Ca cell samples. z-score normalization was employed on each of the feature vectors, which converted the features to zero mean and unit variance [25]:

    x' = (x − μ) / σ,

where x is the original value, x' is the new value, and μ and σ are the mean and standard deviation of the original data, respectively. ANOVA was used to assess the statistical significance between texture features and ABC types [26]. This test was used to identify the significant texture features, where a p value < 0.01 was deemed significant. An aggregate of 158 significant features was selected, which was further reduced using PCA. Five principal components (PCs), representing 97% of the variance among the 158 selected features, were used in a decision tree classifier (Tables 3 and 4).

Classifier Setting and Performance Metrics

Classification of the ABC types based on texture features was performed using the significant features as input variables in a decision tree (DT) classifier [27]. The most important aspect of a decision tree induction strategy is the split criterion; it is a method of selecting an attribute that determines the distribution of training objects into subsets upon which subtrees are consequently built. In this study, a goodness criterion based on the Gini index was used to determine how well various feature test conditions performed [28]. The reason to use the DT classifier is to find the dominant features automatically and provide the classifier metrics; however, we also considered naïve Bayes and nearest neighbors to evaluate the classifier performance metrics using the known class labels of the ABC types. Due to the limited data (27 patients), the classifier was validated using leave-one-out cross-validation [29]. We considered the following performance metrics of classification, which were used to test the reliability of the texture feature classifier: accuracy, sensitivity, specificity, F-score, and area under the curve (AUC). We used multiple metrics for better assessing the feasibility of abnormal cell type discrimination using texture features based on 3D GLCMs. Note that true positive (TP) and true negative (TN) are the numbers of positive and negative samples correctly classified, and false positive (FP) and false negative (FN) are the numbers of negative and positive samples incorrectly classified [30]. Accuracy represents the proportion of correctly classified samples:

    Accuracy = (TP + TN) / (TP + TN + FP + FN).

Sensitivity is a measure of the capability of a classifier to recognize the positive class patterns:

    Sensitivity = TP / (TP + FN).

Specificity is a measure of the capability of a classifier to recognize the negative class patterns:

    Specificity = TN / (TN + FP).

The F-score is a weighted average of precision and recall:

    F-score = 2 × (Precision × Recall) / (Precision + Recall).
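The selection and classification chain described in this section (z-score normalization, ANOVA filtering at p < 0.01, PCA retaining 97% of the variance, and a Gini decision tree with leave-one-out cross-validation) can be sketched with scikit-learn as follows. The data here are random placeholders, not the study's feature vectors.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import f_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: one 624-dim texture vector per case, 27 cases
rng = np.random.default_rng(0)
X = rng.normal(size=(27, 624))
y = np.repeat(["BH", "IN", "Ca"], 9)

F, p = f_classif(X, y)            # one-way ANOVA per feature
X_sig = X[:, p < 0.01]            # keep only significant features

clf = make_pipeline(
    StandardScaler(),                           # z-score normalization
    PCA(n_components=0.97),                     # PCs covering 97% of variance
    DecisionTreeClassifier(criterion="gini", random_state=0),
)
acc = cross_val_score(clf, X_sig, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.2%}")
```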
Experimental Results

ABC digital images were segmented using the active contour segmentation technique. Figure 3 shows ABC types segmented through several steps. Cell detection from multispectral images can be a difficult task, as the bioimages contain areas with a similar range of gray shades and irregular shapes. Morphology operators were necessary to select the required cells from the images, reviewed by a board-certified colorectal pathologist, because there were multiple cell types within the images. The snake (active contour) technique showed that ABC types were correctly detected and located (Figure 3). JSC shows a similarity range of 75.92-81.56%, with the best performance achieved for the Ca cell type. Meanwhile, DSC shows a similarity range of 86.31-88.21%, with the best performance achieved for the Ca cell type. Moreover, FPR shows a range of 5.03-7.61%, with the best performance achieved for the IN cell type, while FNR shows a range of 16.11-20.26%, with the best performance achieved for the Ca cell type (Table 1). These metrics confirm the feasibility of the active contour segmentation method for delineating the abnormal cell types, and specifically the Ca cell type (Table 1).

Figure 4 shows an example case of GLCMs for the corresponding ABC types in Figure 3. The GLCM images showed the most pronounced texture associated with Ca cells among the three ABC types. These texture values represent a high number of pixel pairs in the original image of Ca cells, followed by IN and BH cells, respectively. Additionally, BH images had a texture that was more homogeneous than that of IN and Ca; the corresponding GLCM showed that most BH textures were concentrated on the diagonal of the GLCM image. Notably, the more the GLCM data are concentrated around the diagonal, the more homogeneous the original image.

The averages of the texture functions over all offsets and directions showed the differences between the ABC groups, which were demonstrated in each of the 12 texture features extracted from the GLCMs (Table 3). The maximum values of the performance metrics were achieved using group G3, which represents a four-pixel offset and the 13 directions of the GLCMs (Table 4). Moreover, classifier accuracy for each ABC feature set exhibited ranges of 55.55-88.88%, 44.44-88.88%, and 55.55-66.66% for BH, IN, and Ca, respectively (Table 5). Furthermore, the five PC features showed the highest values of 92.59% accuracy, 100% sensitivity, and 94.44% specificity (last row of Table 4). The highest classifier accuracy obtained for IN was 88.88%, using the G1 and G2 features (Table 5).
However, BH and Ca features showed the highest values of 100% using the five PC features (Table 5), and the best AUC values for ABC discrimination were also achieved using the five PC features (Figure 5). Moreover, using the five PC features, a comparative study of the confusion matrix for abnormal cell type discrimination based on the decision tree (DT), naïve Bayes (NB), and nearest neighbors (NN) classifiers [26-28] showed that the nine BH and nine Ca samples were correctly classified by the DT and NB classifiers, respectively, while eight IN samples were correctly classified by the NB classifier (Table 6). The F-score showed the highest BH, IN, and Ca metrics, at 94.73%, 94.11%, and 100%, respectively, using the NB classifier. This demonstrates that the best classifier technique for discriminating BH from IN and Ca is NB (Table 7).

Discussion

In this study, we have shown the role of texture features extracted from GLCMs in discriminating BH, IN, and Ca. We demonstrated the use of quantitative image texture features and reported significant features with performance metrics for ABC discrimination. Additionally, we showed the power of texture quantification from GLCMs using 12 functions to indirectly associate image features with ABC types. Texture features extracted from GLCMs using four-pixel offsets and 13 directions (G3) showed a higher accuracy in discriminating between the ABC types than the other feature groups. This shows that GLCMs of fourth-pixel neighbors in 13 directions can offer the best automated ABC classification (Table 4). Similarly, the classifier features of BH showed effectiveness in identifying BH cells with texture features extracted from GLCMs using four-pixel offsets and 13 directions (G3 in Table 5). Abnormal IN cells presented the best classification using G1 and G2, and Ca showed the best classification using G2, G3, and G5. However, without the PC features, a lack of predictive accuracy for Ca demonstrated that its texture may resemble the texture of IN, which reflects the complexity of malignancy diagnosis (Table 5). According to the experiments in which different groups of texture features were applied to the ABC discrimination process, the results showed the efficiency of PC features derived from significant GLCM texture features for histopathological colorectal cancer image analysis (last row of Tables 4 and 5). These PC features demonstrated the highest AUC values for the discrimination of BH versus IN, BH versus Ca, and IN versus Ca (Figure 5). Figure 6 shows a heat map of the correlations between the ABC features, where the highest correlation represents the resemblance between the texture features. We observed that some BH and IN features (red rectangular shape) have a high correlation value; this resemblance accounts for the reduced performance metrics. This study demonstrates that texture feature extraction can serve as a map for ABC identification by using image processing techniques such as significance testing and feature selection. Previously, differentiation of human colon cancer cells was demonstrated using gene expression of β-tubulin isotypes [31]. More recently, multilabel classification of colon cancer using histopathological images was performed using several types of features; it was concluded that combined features can offer good performance for multilabel colon cancer prediction, with a precision of 73.7% [32]. Moreover, another study has shown that colon cancer prognosis can be identified by using distinct molecular subtypes and serrated precursor lesions [33].
Thus, the effort to analyze the continuum of colorectal cancer is still incomplete. To date, few studies have directly addressed the discrimination between types of ABC for the diagnosis of colon cancer. Most have focused on heterogeneity; several studies have suggested that increasing heterogeneity is associated with malignancy [34]. Additionally, it has been proposed that greater biologic heterogeneity may be associated with oxidative stress and genomic instability [35]. Also, a study based on hepatic texture in patients with CRC found that a more heterogeneous liver texture at coarse scale (textures extracted with a Laplacian of Gaussian filter) is related to the presence of occult malignancy [36]. Moreover, in this work, it was shown that a higher value of the entropy function is associated with carcinoma, which represents a higher heterogeneity among the ABC types. This study offers a simple approach based on texture feature analysis to evaluate the continuum of colorectal cancer from benign to malignant by using three abnormal cell types; these three cell types represent the transformation from benign to malignant cancer. In this context, the results showed that the radiomic texture features are significant and provide good classifier metrics, and they highlight the potential of radiomic texture feature extraction for enhanced prediction of ABC from colorectal tissues. This should trigger further research into image-based quantitative texture features in colorectal cancer. Given that colorectal cancer is highly heterogeneous between patients, texture feature analysis is a desirable approach to provide clear categorization of ABC type compared with established methods. Our study had limitations, the most important of which was the limited number of subjects (n = 27). Also, the computation time of the segmentation, 3D GLCM, and texture feature extraction was around 15 minutes for each case. However, given the nature of ABC, texture features based on 3D GLCMs are a preferable approach to categorize ABC type compared with the recognized models.

Conclusion

In this paper, a new method based on multitexture features for abnormal cell classification of colorectal cancer is proposed. Real colorectal cancer data were used to validate the discrimination between ABC features. ABC was segmented by the active contour technique, and texture features were then extracted from GLCMs. Significant texture features were selected based on an ANOVA test. The best results were obtained when all features were combined and PCA was applied to obtain five PC features, with an accuracy of 92.59% in discriminating between ABC types. This result is promising for building a bridge between image features and colorectal pathology, which would lead to efficient medical diagnosis and treatment.

Consent

Any necessary approvals, authorizations, and informed consent documents were obtained.

Disclosure

The materials are in compliance with all applicable laws, regulations, and policies for the protection of medical data.
The rate of mother-to-child transmission of antiretroviral drug-resistant HIV strains is low in the Swiss Mother and Child HIV Cohort Study

) had cART. HIV subtypes were concordant in all mother-child pairs (subtype B 13/22 [59%]). Using stored plasma (n = 66) and mononuclear cell (n = 43) samples from the children, HIV-tDRM (M184V) was identified in 1 of 22 (4.5%) mothers (1/11 treated, 9%) and was followed by HIV-sDRM at 10 months of age. HIV-sDRM (M184V 23%; K103N 4.5%; D67N 13.6%) occurred in 16/22 (73%) after 4 years, half of whom were treatment naïve. HIV-sDRM were associated with a lower CD4 T-cell nadir (p < 0.05) and tended to occur with higher viral loads and more frequent cART changes. CONCLUSIONS: Rates of HIV-tDRM were low in this Swiss MoCHIV cohort, making them a minor yet preventable complication of prenatal HIV care, whereas HIV-sDRM are a significant challenge in paediatric HIV care.

Introduction

Mother-to-child transmission (MTCT) accounts for more than 90% of HIV infections during childhood according to UNAIDS reports [1,2]. Missed maternal diagnosis and non-treatment are the main causes of MTCT. These are preventable by a combination of measures consisting of early HIV testing for all women as part of routine prenatal care, immediate combined antiretroviral treatment (cART), and regular monitoring throughout pregnancy. Complete suppression of the HIV load by the third trimester, and specifically at the time of delivery, reduces the risk of transmission to almost zero. As outlined by the WHO (Prevention of MTCT, PMTCT) and European guidelines [3,4], treatment-naïve women should start cART immediately, with an integrase strand transfer inhibitor (INSTI)-containing regimen being the preferred treatment. Pregnant women already under suppressive cART should continue, but contraindicated drugs such as didanosine, stavudine, or triple nucleoside reverse transcriptase inhibitor (NRTI) combinations should be replaced. The latest data from the IMPAACT (International Maternal Pediatric Adolescent AIDS Clinical Trials Network) study suggest that exposure of mothers to dolutegravir (DTG) at the time of conception could be associated with a higher risk of neural tube defects. Therefore, the Swiss HIV Cohort Study (SHCS) advises against prescribing DTG for women who are trying to become pregnant [5].

In case of incomplete suppression (HIV-VL >50 copies (cp)/ml) at 34-36 weeks of gestation, intravenous zidovudine administration should be considered during labour and Caesarean section. In the case of ongoing maternal viral replication, international guidelines recommend starting cART in high-risk newborns as soon as possible and advocate lifelong continuation in the case of confirmed MTCT [4,6,7]. Recent studies indicate that starting cART at birth is associated with improved long-term outcomes for HIV-infected children [8,9]. Accordingly, morbidity and mortality in HIV-infected children have significantly decreased in recent decades [10].
HIV-infected children in Western Europe and North America benefit from easy access to specialised medical care which provides laboratory monitoring and cART [11]. Moreover, novel drugs and combinations have facilitated cART for pregnant women and children alike, allowing the goal of long-term HIV suppression, with outstanding benefits for health at the individual, societal and epidemiological levels, to be reached. Nevertheless, virological failure remains a key challenge in the management of HIV-infected children. The rates of virological failure range from 12% to 32% in high- and limited-resource settings, respectively. Rates peak during infancy, again during adolescence, and then when transitioning to adult medical care [12][13][14]. Although dosing of cART in children is sometimes difficult because a suitable drug formulation (e.g. a suspension) is not available, virological failure reflects not only a lack of access to care and cART or toxicity concerns, but often also adherence issues of both mothers and children. Thus, a main challenge in HIV-infected children is the emergence of HIV variants with drug resistance mutations (DRM), either selected by suboptimal adherence and insufficient drug combinations and levels (HIV-sDRM), or originating from MTCT (HIV-tDRM). For the reasons pointed out earlier, the risk of HIV-sDRM increases during the HIV-infected child's lifetime [14]. Conversely, MTCT may involve HIV-tDRM [15], which may also impact cART efficacy and favour the emergence of HIV-sDRM. Here, we report the results of a retrospective analysis of data and blood samples collected prospectively from MTCT pairs participating in the Swiss Mother and Child HIV Cohort Study (MoCHIV) in order to evaluate the rate of transmitted and selected HIV drug resistance in Switzerland.

Study design
Data and blood samples were collected prospectively by MoCHIV, in which 277 HIV-infected children have been enrolled since 1989. These children were followed in clinics in Basel, Bern, Genève, St Gallen, Zurich and Ticino [16]. MoCHIV is approved by the Swiss Federal Office of Public Health and by the institutional review boards of the participating centres. All women gave their written informed consent to participate in the study, including the retrospective analysis of their and their children's anonymised data and stored material. In June 2017, a total of 1981 children had been registered in Switzerland as being born to HIV-infected mothers (see MoCHIV on the SHCS website). Of these, 1559 children were diagnosed as being not infected with HIV-1, 145 children had an undefined HIV status, and 277 children were HIV-1 infected and participated in the Swiss MoCHIV. Of the HIV-positive children, 72 were lost to follow-up or are being treated by a non-cohort physician, 62 have died, and 59 are currently under follow-up in MoCHIV.
For inclusion in this virological DRM study, we identified all MTCT pairs enrolled in the MoCHIV cohort from whom samples were available from the mothers at the time of pregnancy and/or within one year postnatally, and from the corresponding MoCHIV follow-up visits of the children. We analysed demographic, clinical and laboratory parameters including plasma HIV load, CD4 T-cell counts, cART, adherence and clinical conditions, as well as prospectively cryopreserved plasma and whole-blood samples. All available genotype and drug resistance profiles previously performed were included in the analysis. In addition, HIV-DRM were identified by HIV genotyping and resistance testing of the samples from mothers and children at the time of, or shortly after, birth in the Laboratory for Genotyping and Resistance Testing (formerly Division of Infection Diagnostics, Department Biomedicine, University of Basel, accreditation STS219; from 1 Jan 2019: Clinical Virology, Laboratory Medicine, University Hospital Basel, STS0568) to identify tDRM. HIV-tDRM were defined as DRM detected postnatally in RNA or proviral DNA samples that had been obtained before any antiretroviral treatment was administered and which matched the mother's viral genotype. HIV-sDRM were defined as new DRM arising in children with virological failure.

HIV genotyping and resistance testing
The standard of care of HIV genotyping and resistance testing in our institution is Sanger sequencing of amplicons generated from the HIV protease, reverse transcriptase and, since 2012, the integrase genes. This analysis was performed on plasma and/or peripheral blood mononuclear cell (PBMC) samples by generating amplicons using the Abbott ViroSeq HIV-1 Genotyping kit (Abbott, IL, USA) or in-house methods validated and accredited in the Division of Infection Diagnostics, Department Biomedicine, University of Basel (STS219). Briefly, following nested PCR and quality control by agarose gel electrophoresis, the amplicons were purified with illustra ExoProStar 1-Step (GE Healthcare, England). The sequencing was performed using the BigDye Terminator v3.1 cycle sequencing kit (Thermo Fisher Scientific, MA, USA), with purification over Sephadex G-50 (GE Healthcare, England) and capillary electrophoresis on a 3500 Genetic Analyzer (Thermo Fisher Scientific, MA, USA). The sequences were analysed using SmartGene (Lausanne, Switzerland) [17] and the results interpreted using the HIV Drug Resistance Database of Stanford University (CA, USA). As an accredited regional HIV reference laboratory, the Laboratory for Genotyping and Resistance Testing participates in an external quality assurance program of the French ANRS.

Additionally, deep sequencing using the Illumina MiniSeq platform (CA, USA) was performed on 10 samples from three mother-and-child pairs. HIV protease, reverse transcriptase and integrase were first amplified using nested PCR primers, followed by library preparation using the Illumina Nextera XT kit (CA, USA). Paired-end sequencing was performed using the Illumina MiniSeq Mid-Output Kit and a read length of 150 bp. Reports were generated by DeepChek-HIV (ABL, Luxembourg) [18]. Proviral load was determined using DNA extracted from the collected PBMCs using a quantitative real-time PCR assay, with normalisation to 150,000 cells using aspartoacylase, a human diploid housekeeping gene routinely used for this purpose [19].
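The proviral load normalisation described above (HIV DNA copies normalised to 150,000 cells via a diploid housekeeping gene) reduces to simple arithmetic. The sketch below illustrates that calculation only; the function name and example values are hypothetical and are not taken from the assay.

```python
def proviral_load_per_150k_cells(hiv_copies: float, housekeeping_copies: float) -> float:
    """Normalise HIV DNA copies to 150,000 cells.

    A diploid housekeeping gene (here, aspartoacylase) contributes two copies
    per cell, so the cell count in the reaction is housekeeping_copies / 2.
    Values are illustrative, not from the study.
    """
    cells_in_reaction = housekeeping_copies / 2.0
    return hiv_copies / cells_in_reaction * 150_000

# Example: 40 HIV DNA copies measured alongside 240,000 aspartoacylase copies
# (i.e. 120,000 cells) gives 50 proviral copies per 150,000 cells.
print(proviral_load_per_150k_cells(40, 240_000))  # -> 50.0
```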
Data analysis
Patient demographics and laboratory parameters were analysed using descriptive statistics. Data from each individual child were described longitudinally over time. Demographic characteristics of mother-child pairs are presented as proportions (percentages) or as medians (25th, 75th percentiles; range), using box-and-whisker plots where indicated. Categorical variables were compared using Fisher's exact test, whereas continuous variables were analysed using the unpaired t-test or the Wilcoxon rank-sum test. Two-sided p-values of <0.05 were considered statistically significant. Analyses were performed using the statistical software package SPSS version 24 (IBM SPSS Statistics 24.0.0.1) and Excel version 15.37 (Microsoft Excel 2016 for Mac).

Results
We identified 22 MTCT pairs which fulfilled the enrolment criteria in the Swiss MoCHIV study. The children were born between 1989 and 2009, thereby covering 20 years of different treatments and antiretroviral drug availabilities. Nineteen children are still enrolled in this national cohort and three have moved to the adult Swiss HIV Cohort Study (SHCS). Median gestational age at delivery was 38 weeks (25th percentile 28, 75th percentile 39; range 27-42 weeks), and 27% of the children were born prematurely (<37 weeks) (table 1). Eight of the 22 children (36%) were delivered vaginally. All but one of the mothers (21/22, 95%) were reported to be treatment-naïve before pregnancy. During pregnancy, only 11 (50%) mothers received any kind of antiretroviral therapy (zidovudine in five cases, table 1), and cART was administered to only 6 (27%) mothers. Plasma HIV load at the time of delivery was only available for 10 mothers, and none of them were suppressed to <50 cp/ml (median 71,249 cp/ml; table 1).

The demographics of the HIV-infected children are summarised in table 2. To the best of our knowledge, there were no siblings in this analysis, but this was not due to explicit exclusion. Five (23%) children had detectable plasma HIV loads at birth or within the first month, suggesting in utero transmission (median HIV load 13,845 cp/ml; range 6880-2,531,122). In another 14 children, the first plasma HIV loads were detected within the first nine months of life. Only one child had undetectable HIV loads at six months of age, whereas the other children had median plasma HIV loads of 200,000 cp/ml (IQR 16,274-765,085; range 748-2,531,122). During follow-up visits, the nadir CD4 T-cell count varied widely, between 14 and 1,282/μl (median 280; IQR 146-627). Peak HIV loads at the time of diagnosing virological failure ranged from 1160 to 7,400,000 cp/ml (median 424,913; IQR 64,831-958,025). At last follow-up (median age 14 years; IQR 14-18 years; range 6-23 years), 19 of the 22 children were alive and attending regular follow-up (one child died at one year of age, two children were lost to follow-up at 9 and 10 years old), and 16 of these 19 children were on cART. The CD4 cell counts had a median of 709 cells/μl (IQR 555-848; range 240-1,300). Plasma HIV loads were below the limit of detection (<20 cp/ml) in 11/19 children (median 60 cp/ml; IQR 0-77; range 0-10,000).
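A minimal sketch of the group comparisons described under "Data analysis" is given below. The replicate values are made up for illustration (loosely inspired by the reported CD4 nadir range); the real analysis was run in SPSS, not Python.

```python
# Sketch of the study's comparison strategy: Fisher's exact test for
# categorical variables, Wilcoxon rank-sum for continuous ones.
# All values below are illustrative, not the cohort data.
from scipy.stats import fisher_exact, mannwhitneyu

# CD4 T-cell nadir (cells/ul) in children with vs without HIV-sDRM.
nadir_sdrm = [14, 120, 146, 200, 280, 310, 400, 520]
nadir_no_sdrm = [350, 480, 627, 700, 900, 1282]

# mannwhitneyu is SciPy's implementation of the Wilcoxon rank-sum test.
stat, p = mannwhitneyu(nadir_sdrm, nadir_no_sdrm, alternative="two-sided")
print(f"Wilcoxon rank-sum: p = {p:.3f}")

# 2x2 table, e.g. on-treatment status by sDRM emergence (illustrative counts).
odds, p_cat = fisher_exact([[13, 3], [3, 3]])
print(f"Fisher's exact: p = {p_cat:.3f}")
```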
Antiviral treatment and virological failure
Thirteen (59%) children were born before the year 2000. Individual infant treatment history from the start of ART is summarised in figure 1. All children, except one who died within the first year of life, had a follow-up longer than two years. Among children who received cART, there was an average of five treatment changes (median 5; IQR 3-8; range 2-16) (table 2). Virological failure occurred in all children during follow-up (median 2; IQR 2-3.5; range 1-6). Adjusting the observational data for the years of effective follow-up indicated an average of 0.18 virological failure episodes per year and child, i.e., a virological failure occurred once every five years of observation time on average. Approximately half the virological failure events were among those older than eight years of age. In three children, plasma HIV loads were detectable over prolonged periods of more than 10 years.

Genotype and drug resistance profile
A total of 165 genotypic resistance profiles were analysed. One third of them (56/165) had been performed previously during a routine visit. For 33 profiles, only partial sequences were available, and these needed to be repeated. The presence of HIV-tDRM was associated with early emergence of HIV-sDRM (0.83 vs 3.5 years after birth), but the small numbers precluded a meaningful statistical analysis.

The emergence of HIV-sDRM was seen in 16/22 (73%) children, at a mean age of 4.3 years (median 2.8; IQR 0.8-5.9; range 0.25-14.8), and 10/22 (45%) had two or more mutations. Thirteen of these 16 (81%) were on treatment at the time of HIV-sDRM emergence. Children with HIV-sDRM showed a trend towards a greater number of treatment changes over the entire follow-up period compared with children without HIV-sDRM (7 vs 4 times, respectively; p = 0.104). The CD4 T-cell nadir was lower and associated with higher HIV loads in children with HIV-sDRM (p <0.05 and p = 0.06, respectively; fig. 3).

The most frequent DRM was M184V, detected in 7/22 (32%), while other major NRTI mutations (e.g. M41L, L210W) were less frequent. A major INSTI DRM was detected in one child in a proviral sequence, while minor INSTI DRM were found in a total of six patients. However, clinical evidence of associated virological failure was lacking, as none of the patients were exposed to this class of drugs. In all but two cases, HIV-DRM emerged after previous drug exposure ranging from six months to 10 years. DRM were already present at the time of the first genotypic analysis (M184V, T215Y, L210W, M230I) in only two children without a medical history of prior ART. In three MTCT pairs, 10 sample materials from the time of delivery were available in sufficient amounts to complement the conventional Sanger sequencing by next-generation sequencing (NGS). In pair 21, no additional DRM were found. In pair 18, the DRM M184I and V, 41L, 65R, 67N, 70R, 74I, 210W and 215Y were detected at frequencies of 53.7% (1,300 reads) and 22.7% (551 reads), 63.7% (557 reads), 1.2% (22 reads), 70.6% (1,282 reads), 1.2% (25 reads), 3% (64 reads), 4.1% (46 reads) and 70% (609 reads), respectively. None of the variants with frequencies of <4% were seen by amplicon-directed Sanger sequencing. In pair 20, NGS identified the PI resistance mutation 82A at a rate of 16.5% (44,670 reads) during the time when the child had persistent high HIV loads, having been exposed to this drug class. The PI resistance mutation 82A increased to a detection rate of 99.6% (147,899 reads) two years later with the same method, and at that time was also identified by Sanger sequencing (fig. 4).
Discussion
In this study of the Swiss MoCHIV cohort, we analysed HIV-1 genotype and drug resistance profiles in data and blood samples collected prospectively from 22 HIV-infected mother-child pairs. The rate of HIV-tDRM was one out of all 22 MTCT pairs (4.5%) and 1/11 (9%) of the MTCT pairs where mothers had been previously exposed to antiretrovirals. Although the overall study population was too small for more robust conclusions, other studies have reported rates between 4.9% and 17% [21][22][23]. The HIV-tDRM detected affected the HIV reverse transcriptase through the M184V mutation.

Conversely, the rate of HIV-sDRM was 16/22 (73%), emphasising that HIV-sDRM represent a significant issue in paediatric HIV care. HIV-sDRM emerged on average three years after birth and were associated with more frequent treatment changes (p = 0.104), a lower CD4 T-cell nadir (p <0.05) and higher peak plasma HIV loads (p = 0.06). In the one case of HIV-tDRM, HIV-sDRM were subsequently detected after 10 months, suggesting that transmitted drug resistance might facilitate much earlier HIV-sDRM emergence and virological failure, although further analyses are needed due to the low sample size. The first report on HIV-tDRM was published in 1994, describing a case of neonatal infection by a zidovudine-resistant HIV-1 strain [24]. In a systematic review published in 2014, HIV-tDRM data among treatment-naïve children were only available from 14 countries, and the overall number of cases analysed was low [25]. As expected, the type and prevalence of HIV-tDRM reflected the choice and availability of antiviral treatments in the respective countries. Historically, the low cost of nevirapine administered as a single dose to the mother during labour and to the child right after delivery allowed its widespread use and improved the prevention of MTCT in low-resource settings. Unfortunately, rapid selection of resistant viral mutants was soon observed [26,27]. In high-income countries, HIV-tDRM rates of 10% to 17% have been estimated [15]. A recent study investigated both selected and transmitted HIV drug resistance and indeed found a rate of 4.9% in the latter group [22]. However, the data originated from a small cross-sectional study of 19 families and did not compare HIV-tDRM and HIV-sDRM. There are few data on the role of HIV-tDRM, even though they potentially have effects on virological failure and the emergence of HIV-sDRM when only limited options for new antiretroviral drug classes exist.

Our in-depth analysis of HIV-DRM by NGS was limited to three patients with sufficient sample materials. The three cases were representative of three different scenarios. In two cases, HIV-DRM at frequencies below 20% that were not detected by Sanger sequencing were detectable by deep sequencing. However, in one case the HIV-sDRM emerged as a majority variant after two years and was then detected by Sanger sequencing. Thus, NGS could be helpful for the identification of DRM minority variants and for monitoring them over time. In the absence of DRM-informed decisions about effective cART, at least two new antiretrovirals that have not been used previously to treat either mother or child should be considered for children with virological failure [28].
Thus, the results of this study indicate that in the reported group of children, HIV-tDRM were found in only one out of 22 HIV-infected children born to mothers not receiving ART before pregnancy and receiving it rarely during pregnancy (six with cART). In contrast, the frequency of HIV-sDRM was high, and they occurred within three to five years, despite the fact that half the children were treatment naïve.

This study has several limitations. Firstly, the number of available mother-child pairs was small, limiting the statistical analysis to the descriptive level, even though a long follow-up of 15 years was available. Secondly, this is a retrospective evaluation which included patients from a range of different eras of HIV diagnosis and management spanning the years 1989-2009, including the pre-ART and pre-cART eras. Thirdly, apparently treatment-naïve women appeared overrepresented among the 22 MTCT pairs, but this result emerged only because of this detailed analysis. However, we remain cautious by not directly attributing a low pre-test risk of HIV-tDRM to a patient history of no prior ART exposure. In some cases, there was no specific ART history available, whereas in others this information may not have been reliable, given the stigma in patients with a difficult psycho-social history, including a migration background. Moreover, the transmission of drug-resistant viruses remains a possibility, especially in the early era of ART when insufficient suppression was usual. In this respect, it is notable that 60% of the MTCT pairs had the B subtype (fig. 2), but 68% were of non-white ethnicity. Fourthly, HIV load and resistance testing could not be performed at all time points of interest, even though a significant effort was made by processing 109 samples of plasma and PBMC in addition to the HIV load and resistance data that had been obtained at the time of the historic follow-up in the routine clinic. As the blood samples from these newborns and infants were limited in size, additional next-generation deep sequencing could not be performed except in 10 samples obtained from three cases. Nevertheless, the case studies were informative in principle, demonstrating that either no additional DRM were detectable (scenario 1), that DRM were detectable as minority species but did not emerge as majority species (scenario 2), or that DRM were detectable as minority species and emerged as majority species during follow-up (scenario 3). Thus, HIV genotyping and resistance testing may provide relevant guidance for treatment, but access and adherence to appropriately dosed antiretrovirals, including new, not previously prescribed drug classes such as integrase inhibitors and combinations, remain an important consideration throughout paediatric HIV care.

Conclusions
While acknowledging these limitations, we conclude that rates of HIV-tDRM were low in this Swiss study of mother-to-child HIV transmission, making them a minor and, today, preventable complication of HIV care [29,30]. However, HIV-sDRM remain an important issue in paediatric HIV care, underpinned by the fact that all options were available for the treatment-naïve children in our study. Indeed, similar to the paediatric care of other chronic medical conditions [31] such as diabetes mellitus [32] and transplantation [33], non-adherence in paediatric HIV care remains challenging despite more potent drugs, lower pill counts and fewer side effects in the current era of cART [14].
Financial disclosure
This study was supported by the Swiss HIV Cohort Study (SHCS Project 800) and by the Swiss National Science Foundation.

Potential competing interests
No potential conflict of interest relevant to this article was reported.

Figure 1: Individual antiretroviral treatment history of vertically HIV-infected children. HIV treatment history over eight years of follow-up is shown. Patients 18 and 21 were analysed by deep sequencing, as described in more detail in the results section, and patient 20 is shown in figure 4. The date of birth is indicated in front of the corresponding time line. Each coloured bar represents a type of treatment. A black bar means no treatment and a grey bar means data not available. Time is represented on the x axis, and the length of a bar corresponds to the interval of treatment, including when it was started. 3TC = lamivudine; AZT = zidovudine; FTC = emtricitabine; ABC = abacavir; DDI = didanosine; DDC = zalcitabine; D4T = stavudine; TDF = tenofovir; EFV = efavirenz; ETV = etravirine; DRV = darunavir; LPV = lopinavir; NFV = nelfinavir; ATV = atazanavir; RTV/r = ritonavir; DTG = dolutegravir; RGV = raltegravir

Figure 2: Circular phylogram of mother and child viral RNA and proviral DNA sequences. Since protease inhibitors were the first antiretrovirals to be used in cART in Switzerland in 1996 [20], we used HIV-protease sequences as a surrogate of diversity and evolution over time. In the circular phylogram, examples of viral evolution are visualised in the distance between branches.

Figure 3: HIV load and CD4 T-cell count in vertically HIV-infected children. A. CD4 T-cell count at birth in MTCT children with and without selected HIV drug resistance mutations (HIV-sDRM). B. Nadir CD4 T-cell count in MTCT children with and without HIV-sDRM. C. First HIV-1 load in MTCT children with and without HIV-sDRM. D. Peak HIV-1 load in MTCT children with and without HIV-sDRM. Box-and-whisker plots showing median and 25th and 75th percentiles.

Table 1: Clinical characteristics of mothers in MTCT pairs.

Table 2: Demographics of the HIV-infected children.
2019-04-05T03:28:47.297Z
2019-03-25T00:00:00.000
{ "year": 2019, "sha1": "15dc45235ba59489c8737c05c46891087373acc6", "oa_license": "CCBY", "oa_url": "https://smw.ch/journalfile/view/article/ezm_smw/en/smw.2019.20059/227281a6b35d954ffaa62e56ac0fea20bcd4df1d/smw_149_w20059.pdf/rsrc/jf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "a6cd60bfe287cab2d4ed45de0d8a8554915745a4", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
31093886
pes2o/s2orc
v3-fos-license
Obestatin Modulates Ghrelin's Effects on the Basal and Stimulated Testosterone Secretion by the Testis of Rat: an In Vitro Study

T. AFSAR, S. JAHAN, S. RAZAK, A. ALMAJWAL, M. ABULMEATY, H. WAZIR, A. MAJEED
Department of Biochemistry, Faculty of Biological Sciences, Quaid-i-Azam University, Islamabad, Pakistan; Department of Animal Sciences, Faculty of Biological Sciences, Quaid-i-Azam University, Islamabad, Pakistan; Department of Community Health Sciences, College of Applied Medical Sciences, King Saud University, Saudi Arabia; Department of Medical Physiology, Faculty of Medicine, Zagazig University, Egypt; Department of Biochemistry and Molecular Biology, National University of Science and Technology, Islamabad, Pakistan

Introduction
In mammals, gonadal function critically relies on a complex regulatory network of autocrine, paracrine and endocrine signals. Although it has been known that conditions of negative energy balance are frequently linked to a lack of puberty onset and to reproductive failure, the exact mechanisms involved in the coupling of reproductive function and body energy stores have not been elucidated (Fernández-Fernández et al. 2004). Central and peripheral endocrine signals that are primarily involved in the control of energy balance also control reproductive functions by acting at different levels of the hypothalamic-pituitary-gonadal axis, thus providing a basis for the link between energy homeostasis and fertility (Fernández-Fernández et al. 2006).

Ghrelin, a 28-amino-acid peptide characterized as the endogenous ligand of the growth hormone (GH) secretagogue receptor (GHS-R), is an orexigenic peptide and a long-term regulator of energy homeostasis (Yang et al. 2008, Howard et al. 1996). Obestatin, the counterpart of ghrelin, is a 23-amino-acid anorexigenic peptide. It is produced by the enzymatic cleavage of pre-pro-ghrelin (Kojima et al. 1999, Caminos et al. 2003). Ghrelin and GHSR-1a have been localized in reproductive tissues, including the placenta, ovary and testis (Tena-Sempere et al. 2002). Within the testis, expression of ghrelin has been reported in Leydig cells (Barreiro et al. 2002); in humans, however, it is expressed in Sertoli cells (Gaytan et al. 2003). Similarly, expression of obestatin has been reported in Leydig cells of the testis in rodents. Obestatin plays a functional role in the regulation of gastrointestinal and metabolic function through interaction with a member of the receptor family that includes the receptors for ghrelin and motilin (McKee et al. 1997, Nogueiras et al. 2007, Kojima et al. 1999). Obestatin and ghrelin are functional antagonists of each other, as ghrelin facilitates food intake while obestatin suppresses it (Gualillo et al. 2003).

An in vitro experiment reported that obestatin antagonized the actions of ghrelin on GH secretion (Zizzari et al. 2007). It is evident that different factors with key roles in the growth axis and body weight homeostasis are potentially involved, in part, in the regulation of reproductive function in a paracrine or autocrine manner (Caminos et al. 2003). Evidence concerning the involvement of obestatin in reproductive functions is still scarce; however, it was found that obestatin significantly increased progesterone secretion in cultured porcine ovarian granulosa cells. Moreover, in adult male rats, it was reported that obestatin could induce testosterone secretion both in vivo and in vitro (Jahan et al. 2013, Jahan et al.
2011, Hizbullah and Ahmed 2013). On the contrary, ghrelin delays balano-preputial separation, an external sign of pubertal development, and decreases circulating luteinizing hormone (LH) and testosterone concentrations (Martini et al. 2006). Therefore, this study was conducted to explore the probable effects of obestatin in modulating the inhibitory effects of ghrelin on basal and stimulated testosterone secretion in isolated strips of rat testes.

Animals
Adult (125-135 days old) male Sprague Dawley rats (250-290 g) were used in accordance with an experimental protocol approved by the ethics committee of the College of Applied Medical Sciences, King Saud University. Animals were caged under standard conditions of light (12 h light/12 h dark) and temperature (22-25 °C). These animals were acclimatized for three days with free access to food.

Tissue incubation
Assessment of the direct intra-testicular effect of obestatin and ghrelin upon basal and stimulated testosterone secretion in vitro was carried out by incubating adult rat testicular slices, as previously described (Tena-Sempere et al. 1999, Hizbullah and Ahmed 2013), with slight modifications. Based on our earlier findings that obestatin is a positive modulator of testosterone secretion and that its effect depends on nutritional status, testicular tissues were obtained from normally fed adult rats (n = 9 per treatment group) in the morning (8-9 AM) after overnight fasting (Jahan et al. 2011, Hizbullah and Ahmed 2013). The animals were decapitated, and the testes were immediately removed from the scrotal sac and decapsulated. The testes were then rapidly sliced into small pieces (approx. 100 mg) on an ice-cold glass plate. They were weighed and finally placed into 10 ml culture tubes containing DMEM/HAM F12 (1:1 ratio) medium (HyClone, Thermo Scientific Inc., USA) supplemented with 50 IU/ml penicillin and 50 µg/ml streptomycin. After 30 min of pre-incubation, the culture media in each tube were replaced with fresh media containing obestatin (mouse/rat, PGH-3891-PI, Peptides International, USA) or ghrelin (mouse/rat, AnaSpec, USA) (supplemented with aprotinin 500,000 KIU/l and disodium EDTA 1 g/l), or combinations of both peptides, at doses of 10 ng/ml and 100 ng/ml (Ob10, Ob100, Gh10, Gh100, Ob10+Gh10 and Ob100+Gh100 groups, respectively). In the control group, the medium was replaced with fresh media only. Tissue cultures were then maintained in 10 ml culture tubes under 5% CO2 and 95% air at 34 °C. In order to evaluate the ability of obestatin and ghrelin to modulate stimulated testosterone secretion, testicular tissues were incubated with 10 IU human chorionic gonadotropin (hCG) (Gonachore) alone in the medium (hCG control group), in addition to incubations with the different doses of obestatin, ghrelin, and obestatin plus ghrelin at 10 ng/ml and 100 ng/ml (Ob10+hCG, Ob100+hCG, Gh10+hCG, Gh100+hCG, Ob10+Gh10+hCG and Ob100+Gh100+hCG groups, respectively). At the end of the incubation period, culture tubes were placed in a vortex mixer and aliquots of 100 µl were collected for testosterone measurement. The aliquots were stored at -20 °C until the assay. The levels of testosterone in the samples were expressed as normalized values per milligram of incubated tissue.

Hormone analysis
Testosterone concentrations were determined using specific EIA kits (Abcam plc, USA) according to the manufacturer's instructions.
Statistical analysis
Values are expressed as means ± SEM. The results from testicular incubations were analyzed for statistically significant differences among study groups using one-way ANOVA with post-hoc Tukey's test in GraphPad Prism 5 software.

Stimulation of basal and hCG-induced T secretion by obestatin
In previous laboratory work, obestatin at 10^-8 M produced a significant increase in testosterone secretion in vitro (Jahan et al. 2013, Jahan et al. 2011, Hizbullah and Ahmed 2013). In the present investigation, as an experimental internal reference, the effect of obestatin on testosterone secretion at 10 ng/ml and 100 ng/ml was tested under both basal and hCG-stimulated conditions. hCG (10 IU) induced a significant increase in T concentration from testicular slices after 4 h of incubation compared with the untreated control group (14.00 ± 0.50 vs 9.43 ± 0.57 ng/ml per 100 mg of tissue, P<0.05). This indicates that testicular tissues under in vitro culture conditions were responsive to hCG. Obestatin further enhanced hCG-stimulated T secretion in a dose-dependent manner, and significant increases in testosterone secretion were measured after treatment of hCG-exposed testicular tissues with 10 ng/ml and 100 ng/ml obestatin (P<0.05 and P<0.001, respectively). On the other hand, obestatin at 10 ng/ml failed to modify basal T secretion, whereas at the higher tested dose (100 ng/ml) it significantly induced basal testosterone secretion (P<0.05; Table 1). These results show that obestatin modifies the basal level of T release in vitro in a dose-dependent manner; however, it stimulates hCG-induced T secretion at both tested doses with approximately equal potency.

Inhibition of basal and hCG-stimulated T secretion by ghrelin
In a dose-dependent manner, ghrelin significantly inhibited basal T secretion at doses of 10 ng/ml and 100 ng/ml (P<0.05 and P<0.0001, respectively). The addition of ghrelin to the hCG-stimulated culture media at concentrations of 10 ng/ml and 100 ng/ml significantly inhibited hCG-stimulated T release by testicular slices (10.75 ± 0.192 ng/ml and 8.67 ± 0.556 ng/ml, respectively, vs 14.00 ± 0.50 ng/ml in the control group). This shows that ghrelin significantly decreased both basal and hCG-induced T secretion compared with the corresponding control groups (Table 1).

Obestatin counteracts the suppressive effect of ghrelin on both basal and hCG-induced testosterone secretion
Treatment of the testicular tissue cultures with obestatin (at doses of 10 ng/ml and 100 ng/ml) reversed the suppressive effect of ghrelin on testosterone secretion under both basal and hCG-stimulated conditions. The mean testosterone concentrations measured in both the 10 ng/ml and 100 ng/ml treated groups were more or less similar to those of the control group, indicating that obestatin modulated the suppressive effect of ghrelin under basal conditions. When the combined doses of both peptides were then administered to cultures treated with 10 IU hCG, obestatin at both tested doses significantly increased the testosterone concentration compared with the ghrelin-alone treated groups, and the mean testosterone concentration in the combination-treated groups rose to the level of the hCG control group.
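The one-way ANOVA with post-hoc Tukey comparison described under "Statistical analysis" can be sketched as follows. The group means loosely mirror the reported pattern, but the individual replicate values are invented for illustration; the original analysis was run in GraphPad Prism, not Python.

```python
# Sketch of the study's statistics: one-way ANOVA followed by Tukey's
# post-hoc test. Replicate values are illustrative, not the measured data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
group_means = {"control": 9.4, "Gh100": 6.0, "Ob100": 11.0, "Ob100+Gh100": 9.5}

values, labels = [], []
for name, mean in group_means.items():
    values.extend(rng.normal(mean, 0.8, size=9))  # n = 9 per group, as in the study
    labels.extend([name] * 9)
values, labels = np.array(values), np.array(labels)

# Overall test for any difference among the groups.
groups = [values[labels == g] for g in group_means]
F, p = f_oneway(*groups)
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4f}")

# Pairwise comparisons with Tukey's HSD.
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```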
Discussion
The previous findings, together with the colocalization of obestatin receptors in various testicular cells along with ghrelin, prompted us to gain insight into the opposing effects of the two peptides on reproduction. We therefore designed an in vitro study to demonstrate the effect of co-administration of obestatin and ghrelin on both basal and stimulated testosterone production.

The role of obestatin in the male reproductive system is still not well studied despite the presence of obestatin expression in various testicular cells. Within the testis, obestatin immunoreactivity (irOBS) is detected in the Leydig and Sertoli cells, and mild signals of obestatin have been observed in the rat testis, with the efferent ductules being the most immunoreactive region for the peptide. The vas deferens and seminal vesicles showed intense obestatin labeling; in addition, obestatin expression was observed in prostate tissue. Ejaculated and selected spermatozoa were positive for obestatin in different head and tail regions (Dun et al. 2006, Moretti et al. 2013).

Previous laboratory investigations showed that a single intravenous injection of obestatin increased testosterone secretion in adult male rats, whereas chronic infusion of obestatin into rats at the onset of puberty led to a significant increase in testosterone production and spermatogenesis. Furthermore, a study of the direct effect of obestatin at the testicular level in vitro revealed that obestatin is a positive modulator of testosterone secretion and that its effect depends on the nutritional status of the body (Jahan et al. 2013, Jahan et al. 2011, Hizbullah and Ahmed 2013).

Our hypothesis states that obestatin acts as a physiological antagonist of ghrelin with regard to basal and stimulated T secretion. In order to evaluate whether obestatin can modulate ghrelin's suppression of basal and hCG-induced T secretion from adult male rats in vitro, we co-administered obestatin and ghrelin in the culture medium. Notably, the addition of obestatin to the culture medium reversed the inhibitory effect of ghrelin on basal and hCG-induced T secretion in a dose-dependent manner: the testosterone concentration was significantly higher in the 100 ng/ml obestatin plus ghrelin treatment group than in the ghrelin-alone treated group, and the mean concentration in the co-administered group was more or less similar to that of the untreated control group. In order to evaluate the effect of obestatin on ghrelin-induced suppression of hCG-stimulated testosterone secretion, testicular tissues in the culture medium were exposed to 10 ng/ml and 100 ng/ml obestatin and ghrelin along with 10 IU hCG, with an hCG-alone treatment group serving as control. Observations similar to the basal effects were recorded under hCG-stimulated conditions, wherein the effect of obestatin seemed more pronounced in reversing ghrelin's inhibitory effect on hCG-stimulated testosterone secretion. The results of this study indicate that the effect of obestatin appears to be hCG-dependent, as more pronounced effects seem to occur under stimulated conditions, suggesting involvement of the hypothalamic-pituitary-gonadal axis in controlling obestatin actions. However, it is not clear whether the effect of obestatin is exerted at a local gonadal level or is regulated by upstream targets. In the present experiments, we used obestatin-alone and ghrelin-alone treatment groups under both basal and hCG-stimulated conditions as experimental internal references in order to clarify the effect of combined signal peptide administration. This study
extends the previous findings in that, in addition to the opposite effects of obestatin and ghrelin on food intake, body weight, body composition and energy expenditure, obestatin also antagonizes the actions of ghrelin on testosterone secretion from adult rat testicular slices when both peptides are co-administered. Ghrelin negatively modulates testicular functions under low-energy states, while the opposite effect of obestatin on the gonads has been hypothesized (Dun et al. 2006, Moretti et al. 2013). Data concerning the physiological functions of obestatin are limited and mainly regard its role in controlling feeding behavior, the functions of the gastrointestinal tract and energy homeostasis at the hypothalamic level (Zhang et al. 2005), while its role in the regulation of reproduction remains less well characterized. We analyzed the involvement of this metabolic hormone in the direct control of testicular functions. Compelling evidence indicates that common regulatory signals are implicated in the integrated control of energy balance and reproduction (Tena-Sempere et al. 2002). The suggestion of a direct effect of obestatin on testicular tissue is supported by the findings of Luque et al. (2014), which showed that obestatin had no effect on prolactin, LH, FSH, or TSH expression/release from pituitary cell cultures of rats and baboons.

In conclusion, obestatin, as a peripheral signal of energy abundance, may play an important role in reproduction; conversely, ghrelin, as a peripheral signal of energy insufficiency, might play an opposite role. However, the analysis of the reproductive actions of ghrelin and obestatin remains largely incomplete, and further studies are required to clarify the effects at the pituitary level and of the combined administration of both peptides in vivo.

Table 1: Mean and SEM of testosterone concentration in in vitro testicular culture after 4 h of incubation (n = 9) in different treatment groups.
2017-08-15T03:26:48.531Z
2017-03-31T00:00:00.000
{ "year": 2017, "sha1": "fc952f9e1a470f39fe8a04a01760017ee42055c8", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.33549/physiolres.933345", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "540ca3d122faab5919068df2151b1120a441e2da", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
88488092
pes2o/s2orc
v3-fos-license
Selection of Artificial Muscle Actuators for a Continuum Manipulator

Artificial muscle actuators have become a popular choice as actuation units for robotic applications, particularly in the growing area of soft robotics. The precise specification of an artificial muscle actuator for a particular application requires the consideration of several parameters that work together to achieve the performance characteristics of the actuator. This paper explores the specification of artificial muscle actuator parameters by presenting and applying an analytical description of the actuator, simulation by the finite element method for investigating material stresses under a wide variety of configurations, and a specific parameter selection process. This is followed by an experimental validation using an example actuator to compare against the predicted actuator performance. Some discussion of the appropriateness of this type of actuator as a candidate solution for use in the example application of a dexterous continuum manipulator is included.

Although hydraulic actuation offers high power density, mechanical rigidity and high dynamic response, the force capability, F, for a given pressure, P, of a conventional linear hydraulic actuator is limited by the piston area, A, given by F = P·A. In view of the stringent diametric requirement of the present application, the use of hydraulic artificial muscle actuators (AMA) (see Fig. 1), which will be shown to have better peak force capability than conventional linear actuators for the same pressure and diameter, is of interest (Schulte, 1961). While the concept of artificial muscle actuators and their use in robotics is not new, the majority of applications use pneumatic power to drive them (Cardona, 2012; Trivedi et al., 2008a; Klute and Hannaford, 2000; Tsagarakis and Caldwell, 2000). However, through the use of a hydraulic medium it may be possible to mitigate some of the issues with responsiveness and rigidity that have been encountered previously. There are two important advantages to a hydraulic approach: 1) a higher pressure (by a factor of 10 compared with pneumatics) can be applied, so that force and power density can be further increased and the actuator diameter can be decreased; 2) liquid has a much lower compressibility, and therefore better rigidity, than compressed gas.

Artificial muscle actuators consist of a contained internal bladder surrounded by a flexible, braided outer sheath. It is the geometry of this outer sheath that transmits the radial expansion of the internal bladder under applied pressure into contractile force along the longitudinal axis of the muscle actuator (Davis et al., 2003). As the radius of the bladder, and thus of the outer mesh, increases, the individual strands of the mesh, which are woven in an over-under crossing pattern, rotate relative to each other and to the long axis of the actuator, shortening the longitudinal distance from one end of each strand to the other. The load capacity of the artificial muscle actuator is then a function of the geometry and orientation of the outer sheath and the pressure applied to the internal bladder (Chou and Hannaford, 1996).

ANALYTICAL DESCRIPTION OF ARTIFICIAL MUSCLE ACTUATORS
There are two methods presented in the literature for modeling the transmission of internal pressure to contractile force of an AMA.
The first is a theoretical approach based upon energy conservation (Schulte, 1961), while the second is an examination of the force profile of the surface pressure (Tondu et al., 1996). The first approach is based on the principle that energy supplied to the actuator by the pressurized fluid must leave the actuator through the application of a load over some distance. The second approach is based on an examination of the distortion of the internal bladder under isobaric conditions. Ultimately, however, each of these methods arrives at the same base model (Tsagarakis and Caldwell, 2000). A summary of the first method is presented here. It should be noted that this model does not account for possible effects of compressibility. The braid is described by the strand length, b, the number of turns each strand makes about the actuator, n, and the angle between the strands and the longitudinal axis of the actuator, γ(t), which is assumed to be uniform for all strands within the braided sheath.

The overall length of the actuator, L_a(t), and the actuator diameter, D(t), can then be represented in terms of the constants n and b and as functions of the variable γ(t), as seen in Eq. (1) and Eq. (2):

$$L_a(t) = b\cos\gamma(t) \quad (1)$$

$$D(t) = \frac{b\sin\gamma(t)}{n\pi} \quad (2)$$

Then, calculating the volume of a cylinder and substituting in the functions for L_a(t) and D(t),

$$V(t) = \frac{\pi}{4}D(t)^2 L_a(t) = \frac{b^3}{4\pi n^2}\sin^2\gamma(t)\cos\gamma(t) \quad (3)$$

The first derivatives of L_a(t) and V(t) with respect to γ(t) are calculated as

$$\frac{dL_a}{d\gamma} = -b\sin\gamma(t) \quad (4)$$

$$\frac{dV}{d\gamma} = \frac{b^3}{4\pi n^2}\left(2\sin\gamma(t)\cos^2\gamma(t) - \sin^3\gamma(t)\right) \quad (5)$$

From Eq. (4) and Eq. (5), the first derivative of V(t) with respect to L_a(t) is given as

$$\frac{dV}{dL_a} = -\frac{b^2}{4\pi n^2}\left(3\cos^2\gamma(t) - 1\right) \quad (6)$$

From the principle of virtual work we have

$$F_a(t)\,dL_a + P_a(t)\,dV = 0 \quad (7)$$

$$F_a(t) = -P_a(t)\,\frac{dV}{dL_a} \quad (8)$$

and solving Eq. (7) for the force output with Eqs. (1) and (6) results in

$$F_a(t) = \frac{\pi D_o^2\,P_a(t)}{4}\left(3\cos^2\gamma(t) - 1\right), \qquad D_o = \frac{b}{n\pi} \quad (9)$$

where F_a(t) is the contractile force and P_a(t) is the pressure differential across the bladder wall. Because the term within the brackets in Eq. (9) can be greater than 1 (lim γ(t)→0 = 2), the force capability of an artificial muscle can be greater than that of a hydraulic piston actuator for the same area and pressure. It is assumed here that the force from the hydraulic piston actuator is F = P·A, where the diameter is equivalent to the theoretical maximum diameter of the AMA. In reality the available area may be reduced by the piston rod cross-sectional area. Notice, however, that this advantage comes at the cost of having the force/pressure relationship vary with γ(t), or equivalently with the actuator length L_a(t), as illustrated in Fig. 3. When the contraction angle is 35.3°, the two actuators are equivalent. Further, it can be seen (Fig. 3) that the force falls to zero as the braid angle approaches the fully contracted value of 54.7°. (A numerical illustration of Eq. (9) is given below.)

Figure 3: Ratio of the force capacity of the artificial muscle actuator to the force capacity of a piston and cylinder when the maximum cross-sectional area of each actuator is equivalent (F = P·A_max), as a function of braid angle.

BRAID PARAMETER SELECTION
When designing an artificial muscle actuator for a given application, it is likely that the available supply pressure is known, as well as the desired length, L_max, and the maximum diameter, D_max, of the actuator, which occurs when the braid angle γ = 54.7°. Therefore, the optimal design is the one that maximizes the achievable stroke, which is obtained by minimizing the attainable braid angle, γ_min, where D_o = b/(πn) (obtained by evaluating Eq. (2) at γ = 90°) is the theoretical maximum muscle diameter. The stress in an individual strand is formulated by analyzing the fractional component of the hoop stress realized within that strand of the braided mesh; each of the N strands encircles the bladder n times. This result can then be compared against the tensile limit of the mesh material, which is typically a nylon polymer or a metal wire.

The use of the artificial muscle actuator as a means of power input to the system carries with it several advantages previously discussed.
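The reconstructed force model of Eq. (9) is straightforward to evaluate numerically. The sketch below tabulates the contractile force and its ratio to an equivalent piston over a range of braid angles; the supply pressure and braid geometry are assumed values, not parameters taken from the paper.

```python
# Contractile force of a braided artificial muscle actuator (Eq. 9):
#   F = (pi * Do^2 * P / 4) * (3*cos(gamma)^2 - 1),  Do = b / (n * pi)
# Geometry and pressure below are illustrative, not from the paper.
import math

def ama_force(pressure_pa: float, b_m: float, n_turns: float, gamma_rad: float) -> float:
    d_o = b_m / (n_turns * math.pi)  # theoretical maximum diameter, Do
    return math.pi * d_o**2 * pressure_pa / 4 * (3 * math.cos(gamma_rad) ** 2 - 1)

P = 1.0e6          # 1 MPa hydraulic supply (assumed)
b, n = 0.12, 4.0   # strand length 120 mm, 4 turns per strand (assumed)
d_o = b / (n * math.pi)
piston_force = P * math.pi * d_o**2 / 4  # piston with the same theoretical max diameter

for deg in (10, 20, 35.3, 45, 54.7):
    F = ama_force(P, b, n, math.radians(deg))
    print(f"gamma = {deg:5.1f} deg: F = {F:7.2f} N, F/piston = {F / piston_force:5.2f}")
# The ratio approaches 2 as gamma -> 0, equals 1 at 35.3 deg, and falls to 0
# at 54.7 deg, matching the limits discussed in the text.
```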
However, it is also necessary to understand the failure limits of each component. An expression for determining the stress in the braided mesh was discussed in the previous section; however, it is necessary to understand the failure limits of the internal bladder under applied pressure as well, since the bladder is likely to be the weaker of the two components that make up the actuator. The bladder is unsupported in the openings between crossing strands, and the size of such an opening is characterized by its edge length, EL. The value of the edge length can be calculated from b, n, and N, which are the strand length, the number of turns per strand, and the strand number, respectively, as before. The braid angle is a measure of AMA contraction and thus changes as the AMA is pressurized; this relationship is described in Section 2. In addition to the parameters EL and γ, it is also necessary to consider the wall thickness, t, of the bladder material when determining a proper design to prevent failure. Further, the pressure within the bladder, P_a, is an important consideration for evaluating the stress within the bladder wall. A finite element analysis was used to model the deformation of a bladder segment for a given set of parameters. As expected, the greatest deformation occurs at the center of the segment, while the deformation at the edges is zero, as specified by the boundary conditions. Here it is shown that the maximum calculated deformation is on the order of 0.5 mm for this example. The bladder stress grows where the spacing between strands becomes large. Thus, for this application, the bladder stress can be used to set the lower limit for the strand number, and it is desirable to approach this lower limit by using as few strands as practical. Finally, the effect of bladder wall thickness was examined. Figure 11 shows that the wall stress increases quickly when the wall thickness is smaller than approximately 0.15 mm and changes more gradually above this value.

Figure 10: Plot of the bladder stress calculated using FEA versus strand number. Braid angle is set to 45° for all cases.

Figure 11: Plot of the bladder stress calculated using FEA versus bladder wall thickness. Braid angle is set to 45° for all cases.

The previous text provided a means of designing the AMA by analyzing the constraints in terms of strand stress and bladder stress for a given set of input conditions. Equation (12), when combined with the results presented in Section 3.1, allows for a determination of the optimal combination of strand diameter, D_s, and strand number, N, within the braid for which the strand stress does not exceed the tensile strength of the braid material and the stress in the bladder wall does not exceed the limits of the bladder material. The thickness of the bladder wall is also an independent design parameter that can be minimized in order to allow the actuator to reach full elongation, and therefore the wall thickness can be included as a design variable.

The strand diameter and number can then be used in the manufacture of an appropriate braided mesh. For the braiding machines that manufacture this sort of braided mesh, the necessary input parameter is the pick count, or the number of times the strands cross the center line per unit length (Omeroglu, 2006). This input setting can be calculated from the optimized strand number, N (Eq. 14), where the length, L_c, is the actuator length at the diameter of the core, D_c (an independent parameter as long as it is smaller than D_o), onto which the mesh is being braided, such that Eqs.
(1) and (2) are evaluated at the core diameter. Here the strand length, b, and the number of turns, n, are constants that can be calculated from the input parameters L_max and D_max and the calculated value for γ_min (Eq. 11) as

$$b = \frac{L_{max}}{\cos\gamma_{min}}, \qquad n = \frac{b\sin(54.7°)}{\pi D_{max}}$$

which follow from evaluating Eq. (1) at full extension and Eq. (2) at the braid angle of 54.7° at which the maximum diameter occurs. Therefore, for a given combination of wire diameter, D_s, and strand number, N, the braiding machine can be configured using the appropriate pick-per-unit-length setting calculated from Eq. (14).

TESTING OF AMA LOAD CAPACITY
An evaluation of the accuracy of the predicted AMA load capacity as a function of AMA extension and pressure, as formulated in Section 2, was carried out. It was not possible to produce a muscle actuator at a scale appropriate for this application, as an inner bladder material with the correct diameter was not found to be available. Thus a larger version of the muscle actuator, with an 8.8 mm maximum outer diameter, was produced (Fig. 12) using latex surgical tubing (OD 3.2 mm, ID 1.6 mm) as the internal bladder and nylon expandable mesh (OD 4.4 mm, ID 3.2 mm) as the outer sleeve. The actuator from Fig. 12 was connected to a rigid support at one end and to a calibrated spring scale (OHAUS, 4 kg capacity) at the other end (Fig. 13). The internal bladder was inflated using an instrumented syringe (BARD Caliber Inflation Device), which provided a measure of the inflation pressure. The measured behaviour was broadly consistent with previous characterizations of braided muscle actuators (Tondu et al., 1996; Davis and Caldwell, 2006). Deviations between the experimental and theoretical actuator forces are likely due to frictional effects, which become more observable at higher braid angles.

Figure 14: Plot of force output from the artificial muscle actuator predicted analytically (solid line) and determined experimentally (dots).

DISCUSSION
The use of artificial muscle actuators for robotic applications has been expanding, as they provide several advantages including dexterous mobility and compliance when coming into contact with other surfaces or objects (Trivedi et al., 2008b). This is particularly important in the application of minimally invasive surgery, where the robot maneuvers in an unpredictable and sensitive environment. Further, the use of a hydraulic artificial muscle actuator for this purpose provides the opportunity for greater force output under the given size constraints, as shown in Fig. 3, where it is seen that the theoretical load capacity of the AMA is twice that of a conventional hydraulic actuator of the same diameter. In Section 2, a method for modeling the AMA and its load capacity was given, showing that actuator force is a function of internal pressure and actuator length. The procedure for defining this method includes several simplifying assumptions; however, it was demonstrated that, at the prototype scale for this application, the predicted output compared well with the experimental results. As was shown, the maximum contraction of the AMA is set by the braid geometry at a braid angle of 54.7°. However, the maximum elongation of the AMA is something that can be designed, and this allows for an optimization of the AMA characteristics in order to achieve the greatest stroke length while avoiding failure.

The methods presented here make it possible to identify the appropriate braid characteristics to achieve maximum AMA performance. Maximum performance is obtained by minimizing the achievable braid angle. The minimum braid angle is dependent on the number of strands within the braided mesh and the diameter of those strands (Davis and Caldwell, 2006); the sizing computation sketched below illustrates the geometric side of this selection.
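Following from the sizing relations above, the braid constants b and n can be computed directly from the design inputs. The values of L_max, D_max and γ_min below are assumed for illustration; the pick-count setting of Eq. (14) is omitted because that equation is not reproduced here.

```python
# Braid constants from the design inputs, using Eqs. (1)-(2):
#   b = L_max / cos(gamma_min)             (full extension at gamma_min)
#   n = b * sin(54.7 deg) / (pi * D_max)   (maximum diameter at 54.7 deg)
# Input values are illustrative only.
import math

L_max = 0.100                    # desired length at full extension, m (assumed)
D_max = 0.0088                   # maximum outer diameter, m (prototype-scale value)
gamma_min = math.radians(20.0)   # assumed minimum achievable braid angle

b = L_max / math.cos(gamma_min)
n = b * math.sin(math.radians(54.7)) / (math.pi * D_max)

print(f"strand length b = {b * 1000:.1f} mm")
print(f"turns per strand n = {n:.2f}")
# Consistency check: D(54.7 deg) from Eq. (2) should recover D_max.
print(f"check D(54.7 deg) = {b / (n * math.pi) * math.sin(math.radians(54.7)) * 1000:.2f} mm")
```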
The limiting conditions placed on these two quantities are the yield stress of the strands and the stress limit of the inner bladder. In the experimental evaluation, the actuator was not in its fully extended condition when the limit of the spring scale was reached. The theoretical prediction was found to be accurate at small braid angles. If the theoretical calculation for AMA load capacity is extended towards smaller braid angles, the load capacity of the prototype AMA would approach 80 N as the braid angle approached 10°. If we then extend this model to the design scale for the example application (Berg, 2013c), the predicted load capacity of the AMA would be 25.9 N as the braid angle approached 10°, for the same supply pressure.

CONFLICT OF INTEREST STATEMENT
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
2019-03-31T13:13:57.330Z
2018-01-23T00:00:00.000
{ "year": 2018, "sha1": "3de8ee8b951608ab22b1d1a1dc17d1edf8ad8f2c", "oa_license": "CCBY", "oa_url": "https://engrxiv.org/preprint/download/141/335", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "a96e884d21311e583a73621c3e66c36d97d19180", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
259389521
pes2o/s2orc
v3-fos-license
Speech Acts in Text Dialogues: An Analysis of English Textbook Merdeka Belajar for Junior High School

This study aims to investigate the types of speech acts and their functions contained in the dialogues of the English textbook titled "English in Mind Second Edition Students' Book Starter Grade VII," which was composed based on the implementation of the Kurikulum Merdeka Belajar. The study employed a descriptive qualitative research approach, utilizing text dialogues from the English textbook as the primary data source. The speech acts in the dialogues were analyzed based on Searle's (1976) speech act framework. The results indicate a total of 161 speech acts (utterances) across all dialogues in the textbook. The distribution of speech acts is as follows: assertive accounted for 29.2% (47 instances), directive for 24.8% (40 instances), commissive for 9.9% (16 instances), expressive for 35.4% (57 instances), and declarative for 0.6% (1 instance). While the textbook demonstrates good representation of speech acts within Searle's framework, it should be noted that certain functions of speech act types, specifically commissive and declarative speech acts, were not consistently present in all dialogues. Based on these findings, this study suggests that textbook writers give more consistent attention to the underrepresented commissive and declarative speech acts.

A. INTRODUCTION
Communication serves as a means to connect with others, playing a vital role in various settings, including teaching and learning (Lunenburg, 2010; Vakilifard et al., 2015). In the realm of English education, several factors contribute to the effectiveness of the teaching and learning process, with instructional materials playing a crucial role (Maknun, 2019). English as a Foreign Language (EFL) students rely not only on verbal instruction but also on diverse resources, including textbooks and educational media, to enhance their learning experience (Namaziandost et al., 2019). However, understanding written learning materials, such as dialogues in English textbooks, can be challenging due to the absence of external cues like facial expressions, gestures, tone, or word stress. Liu (2011) emphasizes the importance of speech acts in using and comprehending language effectively within specific situations and contexts.

Speech acts are characterized as utterances that convey the speaker's intentions and have an impact on the listener (Alemi et al., 2015). They play a significant role in interpreting language and eliciting appropriate responses. Searle (1976) categorizes speech acts into five fundamental types. The first is the assertive speech act, which involves stating, concluding, recommending, bragging, claiming, and presuming (Basra & Thoyyibah, 2017). The second is the directive speech act, encompassing suggestions, permissions, requests, commands, orders, inquiries, and recommendations (Sumedi & Rovino, 2020). The third is the commissive speech act, which involves commitments, plans, refusals, threats, promises, and volunteering. Expressive speech acts, the fourth type, convey the speaker's feelings towards something or someone and include expressions of gratitude, congratulations, praise, and compliments (Ilma, 2016). The fifth type is the declarative speech act, which brings about a change in the world or a situation through statements of nomination, removal, declaration, or punishment.
In the context of the Merdeka Belajar curriculum, various text-based approaches, such as Building Knowledge of the Field (BKOF), Modelling of the Text (MOT), Joint Construction of the Text (JCOT), and Independent Construction of the Text (ICOT), are employed in teaching English to junior high school students. These approaches encompass different modes, including spoken, written, visual, audio, and multimodal formats. Consequently, English textbooks, as an integral component of the curriculum, play a pivotal role in facilitating students' understanding of diverse texts presented in written and spoken forms, particularly dialogues. Hence, it is essential for textbooks to include accurate and appropriate pragmatic materials that aid students in developing their pragmatic competence in the target language. This necessitates the inclusion of realistic speech act models, such as assertive acts, directive acts, commissive acts, expressive acts, and declarative acts, complemented by a comprehensive explanation of language usage in the target language. However, the reality often falls short of the ideal. Many students struggle to comprehend the intended meanings conveyed through specific dialogues due to their limited proficiency in translating them into their native language (Moradi et al., 2013;Refualu et al., 2021). Maknun (2019) further highlights the challenges faced by Indonesian EFL learners in accurately conveying messages from various dialogues in English textbooks within the given context. Furthermore, research has consistently demonstrated that performing speech acts in a second language (L2) presents significant challenges for learners due to inherent differences between their first language (L1) and culture and the target language (TL) and culture (Kasper & Rose, 2002;Moradi et al., 2013). Consequently, these challenges lead to misunderstandings and ambiguities, as learners struggle to comprehend and convey the intended meaning and context of dialogues. Given the profound impact of textbooks on learning outcomes (Alemi et al., 2013), it is crucial to pay careful attention to the inclusion of appropriate speech act materials, particularly in dialogues. The language used in textbooks should be easily understandable and aligned with the writer's objectives, emphasizing clarity to facilitate effective communication between teachers and students (Murniasih, 2022;Swandewi et al., 2017). As an essential medium in the teaching and learning process, textbooks should provide comprehensive coverage of speech acts. Thus, it is imperative to investigate the utilization of speech acts in the written dialogues of English textbooks. Existing speech act analyses predominantly focus on frequently used types within English textbooks designed for teaching English as a foreign language, primarily those aligned with the 2013 curriculum. Consequently, the researchers have undertaken a novel research study that examines the role of speech acts in written learning materials, particularly dialogues, within an English textbook developed under the Kurikulum Merdeka. By teaching speech acts, students can develop the ability to comprehend the intended meaning of speakers in various situations, conditions, and contexts. Therefore, this study aims to determine the presence and adequacy of speech acts employed in the utterances of each dialogue within the English textbook "English in Mind Second Edition-Students' Book Starter Grade VII."
The speech acts were analyzed based on Searle's (1976) classification, encompassing assertive, directive, commissive, expressive, and declarative speech acts. The chosen textbook aligns with the implementation of the Kurikulum Merdeka Belajar.
B. METHODOLOGY
This study is a descriptive study with a qualitative approach. The descriptive design was chosen because the researchers intended to identify and analyze the speech acts of each utterance presented in every conversation in the dialogues of the junior high school English textbook chosen by the researchers (Creswell, 2009). The data for analysis were collected from the written teaching materials, specifically the dialogues, in the junior high school students' book titled "English in Mind Second Edition-Students' Book Starter Grade VII," which is published by the Ministry of Education and Culture and based on the implementation of the Kurikulum Merdeka Belajar. To facilitate the analysis, supplementary instruments in the form of data sheets were used. These data sheets served as guides for the categorization and analysis processes. The data obtained from the textbook dialogues, including words, phrases, clauses, and expressions, were compiled and evaluated using these data sheets. Two types of data sheets were employed: one for data listings and the other for categorizing speech acts. The focus of this study was solely on the dialogues presented in the textbook, as the aim was to identify and analyze the usage of different speech act types in an English textbook. Meanwhile, the researchers dismissed the incomplete text dialogues, since the speakers' intents in them were too unclear to meet the criteria of speech acts. As a result, the researchers focused on textbook content analysis, examining the contexts in which speech acts were utilized in the textbook. The speech act types were analyzed according to Searle's (1976) speech act classification. Furthermore, the researchers used the interactive model by Miles et al. (2014) for analyzing data; the steps were data reduction, data display, and conclusion drawing and verification.
C. FINDINGS AND DISCUSSION
This section presents the results of the data analysis of the speech acts from the dialogues. It was found that there are fourteen dialogues in the seventh-grade junior high school textbook. The frequency and percentage of occurrence of each type are presented in the following table:
Speech act type | Frequency | Percentage
Assertive | 47 | 29.2%
Directive | 40 | 24.8%
Commissive | 16 | 9.9%
Expressive | 57 | 35.4%
Declaration | 1 | 0.6%
Total | 161 | 100%
Discussion
The results of this study fulfill and reflect the theory of speech acts based on Searle's (1976) classification. The findings show that the five types of speech acts were employed across the dialogues of the English textbook chosen by the researchers: assertive, directive, commissive, expressive, and declarative. More precisely, the results revealed that the utterances in the dialogues of the scrutinized English textbook featured mostly expressive, assertive/representative, and directive speech acts. As stated by Refualu et al. (2021) and Maknun (2019), speech acts such as representative, expressive, and directive were the most frequent of the five types presented in English textbooks for Indonesian junior high schools. This is because the speech acts produced in Indonesian textbooks reflect first-language cultures.
Several other studies (Fahik, 2020;Fitri et al., 2018;Syahbana & Pratama, 2017;Inawati, 2016;Ilma, 2016) have reported similar patterns. In addition, assertive acts were among the most prevalent illocutionary acts in the textbook. Speakers use assertive speech acts to assert the truthfulness of what is said. Budiasih et al. (2018) noted that performances of assertive speech, such as informing, stating, affirming, describing, and explaining, should elevate learners' competence in the target language and help them understand specific ideas or beliefs about the actual state of affairs. Additionally, functions like stating and informing accounted for the highest number of utterances; as speakers, we tend to produce assertive utterances to express a point of view or state a fact. Similarly, Milal and Kusumajanti (2020) also pointed out that it is important for textbooks to present straightforward examples of how people interact using assertive speech acts. The assertive speech acts contained in the text dialogues were adequate. For instance, the telling function performed by Izzie is presented accurately on page 104: Izzie tells her friends the truth about a minor accident she had that morning. By performing assertive speech acts, speakers elicit strong responses from their interlocutors. Meanwhile, the commissive and declarative types have largely been neglected (Maknun, 2019;Refualu et al., 2021), and not all of their functions were presented in all dialogues. Based on the results for the commissive type, only three commissive functions appear across the dialogues, namely offering, refusing, and promising, while functions such as vowing, swearing, and pledging could not be found. Commissive speech acts are needed in communication because such utterances make listeners trust the speaker's intention and aim. Through commissive speech acts, learners commit to future actions carried out to satisfy the interests of others. This signifies that learners need to know how to express the intention to take an action they have not yet taken. The textbook needs to present the other commissive functions that are not yet included, such as vowing, pledging, and swearing, for instance, "I will take care of her" (pledging) or "I will buy you a new car if you pass the test" (vowing). The absence of commissive and declarative speech acts in the seventh-grade English textbook can be considered an important deficiency, since these are used frequently in everyday communication (Namaziandost et al., 2019). The scarcity of declarative speech acts in the textbook may be because they are more commonly found in movies, novels, and speeches. Similarly, a study conducted by Refualu et al. (2021) found that commissive and declarative speech acts had the lowest percentages of all types. The declarative is the illocutionary act that appears most seldom, since its defining characteristic is that it must change the perceived world or reality (Aquatama & Damanhuri, 2016). Additionally, some essential information is missing from the textbook. Most of the materials were composed based on the writers' intuition, and the conversations in the dialogues were built from short or simple phrases of speech acts.
For example, in the speech act of "offering" in the conversation on page 48, a student says, "Try one, here you go." The sentence does not fully represent the function of offering in a specific situation, although it is implicitly understood that the student is offering food to another student (Castillo, 2015;Nordquist, 2019).
Conclusion
In conclusion, this study has examined the presence of the five types of speech acts in the dialogues of a seventh-grade English textbook: assertive (representative), directive, commissive, expressive, and declarative. The analysis indicates that the textbook adequately incorporates speech acts in accordance with Searle's framework. However, it is worth noting that not all dialogues include the full range of functions for each type of speech act, particularly in the case of commissive and declarative speech acts. The findings also suggest that the majority of the materials were created based on the writers' intuition, resulting in conversations within the dialogues consisting of short and simple phrases. Therefore, it is essential to provide learners with expressions of speech acts that can be used in authentic conversations across various situations, conditions, and contexts. This underscores the importance of ensuring that materials used for teaching English as a foreign language (EFL) are not only grammatically accurate but also reflect the way the target language is spoken in real-life scenarios.
Suggestion
This study suggests that textbook designers and material developers produce a variety of functions for each speech act type. Textbooks should be rich and communicative so as to familiarize students with the relevant sentences and functions and to develop their verbal communication skills from an early stage, especially at the junior high school level. The results also imply that textbook designers and material developers should present these functions authentically, in terms of real interactions and real language use, to improve EFL learners' performance in using speech acts.
2023-07-11T00:18:13.053Z
2023-06-07T00:00:00.000
{ "year": 2023, "sha1": "f09e916d4c73189407dc4afc928f329bed36f899", "oa_license": "CCBYSA", "oa_url": "http://ejournal.radenintan.ac.id/index.php/ENGEDU/article/download/15473/6230", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1348b75b83ba53a9ab895680af5f9412a874de73", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [] }
269181536
pes2o/s2orc
v3-fos-license
Automated detection of otosclerosis with interpretable deep learning using temporal bone computed tomography images
Objective This study aimed to develop an automated detection schema for otosclerosis with interpretable deep learning using temporal bone computed tomography images. Methods With approval from the institutional review board, we retrospectively analyzed high-resolution computed tomography scans of the temporal bone of 182 participants with otosclerosis (67 male subjects and 115 female subjects; average age, 36.42 years) and 157 participants without otosclerosis (52 male subjects and 102 female subjects; average age, 30.61 years) using deep learning. Transfer learning with the pretrained VGG19, Mask RCNN, and EfficientNet models was used. In addition, 3 clinical experts compared the system's performance by reading the same computed tomography images for a subset of 35 unseen subjects. An area under the receiver operating characteristic curve and a saliency map were used to further evaluate the diagnostic performance. Results In prospective unseen test data, the diagnostic performance of the automatically interpretable otosclerosis detection system at the optimal threshold was 0.97 and 0.98 for sensitivity and specificity, respectively. In comparison with the clinical acumen of otolaryngologists at P < 0.05, the proposed system was not significantly different. Moreover, the area under the receiver operating characteristic curve for the proposed system was 0.99, indicating satisfactory diagnostic accuracy. Conclusion Our research develops and evaluates a deep learning system that detects otosclerosis at a level comparable with clinical otolaryngologists. Our system is an effective schema for the differential diagnosis of otosclerosis in computed tomography examinations.
Introduction
Otosclerosis or otospongiosis is a multifactorial disorder of the temporal bone and stapes that presents with progressive conductive, sensorineural, or mixed hearing loss in humans [1,2]. It results in sclerotic bone with abnormal osteons and is associated with genetic and environmental factors [3,4]. Otosclerosis is categorized into two subtypes: fenestral (stapedial) and retrofenestral (cochlear). Retrofenestral otosclerosis always occurs with fenestral involvement and is considered to be on a continuum with fenestral otosclerosis [5]. Typically, otosclerosis presents in the second to fourth decade of life and most frequently causes stapes footplate fixation through foci located anterior to the vestibular window [6,7]. High-resolution computed tomography (CT) of the temporal bone serves as a useful aid to clinicians and remains an effective imaging modality in the diagnosis of otosclerosis [8,9]. Otosclerosis mostly manifests in the common area of the fissula ante fenestram (FaFA), identified by localizing subtle demineralization and evaluating the oval window, stapes footplate, and round window niche [2]. Otosclerosis findings on CT scans are often subtle and indistinct, and the margins between normal and abnormal bone are difficult to delineate [10]. Therefore, an automated, interpretable, and accurate detection of otosclerosis lesions may help otolaryngologists improve diagnostic efficiency and prevent untreated hearing loss and unnecessary costs.
In recent years, deep learning-based diagnostic techniques have been introduced as an aid for radiologists in various fields to improve detection performance [11][12][13][14][15][16], such as outcome prediction for non-small-cell lung cancer in multi-institutional CT image datasets [17], organ-at-risk delineation in CT images [18], and abnormality classification from chest radiographs with major thoracic diseases [19]. A fine-tuned deep neural network can be applied for the recognition and classification of CT images. Fujima et al. [20] used 140 temporal bone CT images to train and assess the utility of deep learning analysis in diagnosing otosclerosis on temporal bone CT images and achieved 0.915 for the best area under the receiver operating characteristic curve (AUC). Chen et al. [21] introduced W-Net with Adaptive Cross Entropy to detect ultrasmall medical objects on an otosclerosis dataset and achieved 0.954 as the best AUC. Despite their strong predictive power, deep learning models have been criticized for their poor interpretability, and we recognize that these factors represent challenges in the development of useful tools for clinicians [22]. In this study, we present an automatically interpretable deep learning approach (OtoModel) to boost otosclerosis detection on CT images and demonstrate how prediction activation maps learn the relevant features as a complementary means to understand a diagnosis, with the downstream goal of providing reliable and interpretable measures based on the location of otosclerosis. In addition, two experienced otolaryngologists and a fellowship-trained radiologist compared the diagnostic performance of the proposed system. The contributions of this study are as follows: • The deep neural network models automatically recognized and extracted the region of interest (ROI) by detecting contour abnormalities for otosclerosis diagnosis on CT images. • The diagnostic performance of the proposed system demonstrated good accuracy and was comparable to that of clinical otolaryngologists. The saliency map improves the interpretability of the diagnosis. • The interpretable otosclerosis detection system is potentially useful in clinical practice for identifying the presence or absence of otosclerosis on CT images.
Ethics and consent
This study's ethics and consent were approved by the Institutional Review Board (IRB) of Xiangya Hospital, Central South University, Changsha, China (IRB #2019121188 in 2019). All procedures performed in the study adhered to the appropriate guidelines and regulations. Written informed consent for participation and for the use of personally identifiable data was obtained from all participants. The study protocol, including the use of computed tomography scans, received IRB approval from Xiangya Hospital, Central South University.
Data source and patient selection
Fig. 1 depicts the following inclusion criteria: 1) subjects who underwent a CT scan examination; 2) only healthy subjects and subjects with otosclerosis were included; 3) modality with a noncontrast-enhanced CT; and 4) bilateral CT. Subjects were excluded if: 1) the quality of the CT images was poor; 2) the subjects could not be retrospectively identified; or 3) any treatment was performed before the CT scan. Finally, the acquired subjects were randomly divided into training, validation, and test subsets at a ratio of 8:1:1, split by the number of subjects. The subjects in the three subsets were disjoint. CT image datasets (as outlined in Table 1) were acquired from 182 subjects with confirmed otosclerosis, comprising 67 male and 115 female subjects. The average age within this group was 36.4 years, with an age range of 10-58 years. In addition, the dataset encompassed 157 surgically confirmed healthy subjects without otosclerosis, including 52 male and 102 female subjects; the average age in this latter group was 30.6 years. All subjects underwent CT examination of the temporal bones at Xiangya Hospital between January 01, 2017 and December 31, 2020. The final diagnosis was determined by a board-certified senior otolaryngologist with more than 20 years of experience using temporal bone CT images, audiology examinations, relevant medical history, and intraoperative confirmation of otosclerosis.
Data protocol
The temporal bone CT examination was performed using Philips Healthcare CT scanners, which obtained CT images with the following scanning parameters: 1 s, 90-200 mA, 110-130 kV, a matrix of 512 × 512, and a bone window with a high-resolution bone algorithm, using a window width between 3500 and 4000 Hounsfield units centered at 350-650 Hounsfield units, in the absence of intravenous contrast. The screening image set was acquired in the axial plane with a slice thickness of 0.5 mm at 0.5-mm intervals and was reconstructed at 0.3-mm intervals to obtain overlapping slices. A board-certified clinician who specializes in otorhinolaryngology with more than 20 years of clinical experience confirmed the presence or absence of otosclerosis in all subjects. CT images were loaded into the LabelImg software tool (version 1.8.6) and annotated with rectangular bounding boxes to extract the ROIs containing the entire ear structure by an otolaryngologist and a fellowship-trained radiologist with more than 10 years and 5 years of clinical experience, respectively.
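As a rough illustration of how such a bone window can be applied before feeding slices to a network, the sketch below clips Hounsfield-unit values to a window and rescales to [0, 1]. This is a minimal sketch rather than the authors' code: the window width and center (3800 and 500 HU) are illustrative picks from the ranges quoted above, and the paper itself normalizes with respect to the maximum CT signal.

```python
import numpy as np

def apply_bone_window(hu_slice: np.ndarray,
                      window_width: float = 3800.0,
                      window_center: float = 500.0) -> np.ndarray:
    """Clip a CT slice (in Hounsfield units) to a bone window
    and rescale the result to the range [0, 1]."""
    lo = window_center - window_width / 2.0
    hi = window_center + window_width / 2.0
    windowed = np.clip(hu_slice, lo, hi)
    return (windowed - lo) / (hi - lo)

# Example with a random 512 x 512 slice standing in for real CT data.
slice_hu = np.random.uniform(-1000, 3000, size=(512, 512))
normalized = apply_bone_window(slice_hu)
assert 0.0 <= normalized.min() and normalized.max() <= 1.0
```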
System development and training
The proposed system was executed on a Dell XPS 8930 server (hexa-core 3.20 GHz processor, 16 GB RAM, and one NVIDIA GeForce GTX 2080 video card), implemented in Python (version 2.7; Python Software Foundation, Wilmington, Del), and coded using the Keras [23] framework with the TensorFlow-GPU 1.15 [24] backend. For a fair comparison, we used 3 × 3 convolutions activated by rectified linear units and trained them using the Adam optimizer. We initially set Adam's learning rate to 1e-3 and then decayed the learning rate by a factor of 5 whenever the validation loss plateaued after an epoch. The optimization process was run for 100 epochs, and a batch size of 16 was selected on the basis of the experiments. To avoid overfitting, we implemented the EarlyStopping function in the Keras framework, which stops training when the monitored quantity stops improving. The proposed system takes 512 × 512 CT slices as input, which are normalized to a range of 0 to 1 with respect to the maximum CT signal. The OtoModel system is decomposed into three stages, as illustrated in Fig. 2. The initial stage (shown in Fig. 2a) is dedicated to the precise filtering of CT slices that depict the middle ear structures in a patient during inference. This step is crucial for isolating the ROI pertinent to our analysis. Following this initial phase, a second convolutional neural network (CNN) is applied, as demonstrated in Fig. 2b. Its primary function is to refine the range of information extracted in the first stage. By focusing on the delineated ROI, this network effectively streamlines the data before it is introduced into the disease classification process. Finally, the extracted ROI patch is classified as showing the presence or absence of otosclerosis, as depicted in Fig. 2c. The per-slice results are then heuristically fused across all CT slices to generate the final diagnostic result (normal or otosclerosis) for a patient at inference. Fig. 2d showcases a flow from batch normalization through dropout and pooling, culminating in a densely connected layer for feature integration. Explainable artificial intelligence (XAI) addresses the black-box nature of artificial intelligence. One systematic review [25] assessed the state of XAI in healthcare, noting limited research, diverse stakeholder perspectives, and the need for standardized evaluation methods. Zhang et al. [26] explored the growing potential of XAI in medical diagnosis and surgery by examining recent trends, conducting a survey, presenting a breast cancer case study, and highlighting its promising prospects. To assess the potential impact on CT slices [27,28], we used an open-source implementation of guided Gradient-weighted Class Activation Mapping (Grad-CAM) [29] in conjunction with our models. This approach allows us to evaluate the features acquired by the model and generate saliency maps that accentuate the crucial representations related to the target class.
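For readers unfamiliar with the technique, a minimal sketch of the plain (non-guided) Grad-CAM computation follows, written for modern tf.keras rather than the TF 1.15 setup the authors used; the layer name and class index are placeholders that depend on the actual model, not values taken from the paper.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=0):
    """Compute a Grad-CAM heatmap for one input image of shape (H, W, C)."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)              # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))        # global-average-pool the grads
    cam = tf.einsum("bijc,bc->bij", conv_out, weights)  # weighted sum of feature maps
    cam = tf.nn.relu(cam)[0]                            # keep only positive evidence
    cam = cam / (tf.reduce_max(cam) + 1e-8)             # rescale to [0, 1]
    return cam.numpy()
```

The resulting low-resolution heatmap is typically upsampled to the input size and overlaid on the CT patch, which is how saliency maps like those in Figs. 5 to 7 are usually rendered.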
Pretrained VGG16 [30,31], Mask RCNN [32,33], and EfficientNet [34,35] models were used to perform transfer learning for the slice selection CNN, the fissula ante fenestram extraction CNN, and the detection CNN, respectively. Slice selection begins with a radiologist carefully identifying CT slices showing the middle ear through manual screening. These initially selected slices were then used to fine-tune a pretrained VGG16 neural network model. Our proposed system reused the parameters learned on ImageNet (a large-scale dataset of natural images [36,37]), enabling a reduction in the number of trainable parameters without degrading the performance of the networks. In this study, we froze the top layers of the networks and fine-tuned them on the CT images. The proposed system was evaluated with 10-fold cross-validation. Data augmentation was used to increase the data size and consistency for robustness; the techniques comprised a rotation range of 45°, width- and height-shift ranges of 0.2, a zoom range of 0.2, and horizontal flips. For the first two pretrained networks, we froze particular layers to preserve the learned representations extracted by these networks. A fully connected (FC) layer with a sigmoid activation function was added to obtain the probability. For the last pretrained network, we added a batch normalization layer, a global average pooling layer, a dropout layer with a threshold of 0.5, and an FC layer to enable very efficient information sharing across the layers. For the outcome, a final probability score for the presence or absence of otosclerosis on the extracted images was predicted from the FC layer. The detection result was calculated using the maximum classification probability of otosclerosis over all selected slices within the ROI bounding box. We used a gradient backpropagation approach [38] to compute a pixel-by-pixel probability map of the otosclerosis present. To compare with other state-of-the-art image classification CNNs, two additional classification CNNs (ResNet-18 [39] and InceptionV3 [40]) were also evaluated.
Evaluation metrics
We used well-established measurement criteria, namely sensitivity, specificity, accuracy, precision, and recall, which are defined in Eq. (1):

$$\mathrm{Sensitivity\ (Recall)} = \frac{f_{TP}}{f_{TP}+f_{FN}}, \qquad \mathrm{Specificity} = \frac{f_{TN}}{f_{TN}+f_{FP}},$$
$$\mathrm{Precision} = \frac{f_{TP}}{f_{TP}+f_{FP}}, \qquad \mathrm{Accuracy} = \frac{f_{TP}+f_{TN}}{f_{TP}+f_{FP}+f_{FN}+f_{TN}} \tag{1}$$

Here, $f_{TP}$ represents the count of true positives, $f_{FP}$ signifies the count of false positives, $f_{FN}$ denotes the count of false negatives, and $f_{TN}$ stands for the count of true negatives. All screening CT images of the hold-out test subset were visually evaluated by a clinical expert group comprising a senior otolaryngologist with 20 years of experience, an otolaryngologist with more than 5 years of fellowship training, and a radiologist with 10 years of experience. To facilitate a meaningful comparison of our proposed system's performance, each clinical expert independently reviewed the CT scans within the hold-out test subset, which consisted of 35 subjects, using the same experimental settings. The evaluation used the AUC, which quantifies the entire two-dimensional area beneath the complete receiver operating characteristic (ROC) curve spanning from (0,0) to (1,1). Clinical experts recorded the diagnosis as the presence or absence of otosclerosis. All the clinical experts were blinded to the ground truth and to the diagnosis from the proposed system, but not to the clinical characteristics.
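Returning to the detection CNN described above, the following sketch shows what a frozen EfficientNet backbone with the stated head (batch normalization, global average pooling, dropout of 0.5, and a sigmoid FC layer), the quoted augmentation settings, and the training callbacks from the previous section could look like. It uses modern tf.keras (the authors used Keras on TensorFlow 1.15, where EfficientNet came from a separate package), so it is an approximation under stated assumptions, not the authors' implementation; the ReduceLROnPlateau factor of 0.2 encodes the "decay by a factor of 5" rule.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_detection_cnn(input_shape=(112, 112, 3)):
    """Frozen EfficientNet-B4 backbone plus the head described in the text."""
    backbone = tf.keras.applications.EfficientNetB4(
        include_top=False, weights="imagenet", input_shape=input_shape)
    backbone.trainable = False                      # freeze the pretrained layers
    x = layers.BatchNormalization()(backbone.output)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # P(otosclerosis)
    model = models.Model(backbone.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Augmentation settings quoted in the text.
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=45, width_shift_range=0.2, height_shift_range=0.2,
    zoom_range=0.2, horizontal_flip=True)

# Training callbacks mirroring the described schedule (patience values assumed).
callbacks = [
    tf.keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(factor=0.2, patience=1),
]
```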
Statistical analysis
Statistical analysis was performed using Python (version 2.7, Python Software Foundation, Wilmington, Del), with P values < 0.05 indicating a statistically significant difference. The identification of the presence or absence of otosclerosis on the hold-out test subset was evaluated for sensitivity, specificity, and accuracy. These diagnostic results were obtained for the OtoModel system using the classification CNN from EfficientNet and for alternative otosclerosis detection systems using classification CNNs from ResNet-18 and InceptionV3. Confidence intervals (CIs) of accuracy were calculated for the OtoModel systems. Contingency tables were constructed for the otolaryngologist with more than 20 years of experience, the otolaryngologist with 5 years of fellowship training, the radiologist with 10 years of experience, and the OtoModel system with the classification CNN from EfficientNet. In addition, the ROC curve was used to further analyze the diagnostic probability scores of the OtoModel system and the clinical experts, with the AUCs compared using the Youden index, which identifies the optimal sensitivity and specificity.
Results
The training times for the slice selection, ROI extraction, and detection CNNs were 0.5, 1.6, and 1.8 h, respectively. The testing experiment was conducted on an independent subset. Table 2 presents the performance metrics of the CNN developed for extracting the fissula ante fenestram. Table 3 shows the comparison of sensitivity, specificity, and accuracy for identifying a confirmed diagnosis of the presence or absence of otosclerosis for the proposed system using the classification CNN from EfficientNet and for the alternative otosclerosis detection systems using the classification CNNs from ResNet-18 and InceptionV3. All classification CNNs exhibited strong performance, with accuracy estimates ranging from 0.90 to 0.99. Notably, the proposed system, which uses the classification CNN derived from EfficientNet, demonstrated the highest overall diagnostic efficacy in detecting otosclerosis. Fig. 3 illustrates the ROC curve for the training set and the confusion matrix for the test set, elucidating the diagnostic capabilities of the OtoModel system in identifying the presence or absence of otosclerosis. The AUC for the proposed system was 0.98 (95% CI: [0.97, 1.00]; P < 0.005). In addition, we plotted the sensitivity and 100 − specificity for otolaryngologists with varying levels of experience and for the radiologist with 10 years of experience. For comparison, we depicted point assessments of sensitivity and specificity for the otolaryngologist with more than 20 years of experience, the otolaryngologist with 5 years of fellowship training, and the radiologist with 10 years of experience alongside the ROC curve of the proposed system in Fig. 3a. Notably, the sensitivity and specificity point assessments of the clinical experts fell within the 95% CIs of the AUC for the OtoModel system. Specifically, the OtoModel exhibited CIs for the presence and absence of otosclerosis between 0.95 and 1.00 (Fig. 3b). In contrast, the clinical experts' diagnoses of the presence of otosclerosis ranged from 0.77 to 0.95, whereas those of the absence of otosclerosis ranged from 0.85 to 1.00. Remarkably, the OtoModel demonstrated no statistically significant differences in diagnostic performance compared with the clinical experts.
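For readers who want to reproduce the threshold selection, a small sketch of how an ROC curve, its AUC, and the Youden-optimal operating point can be computed with scikit-learn; the labels and scores below are made-up toy values, not data from the study.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# y_true: ground-truth labels; y_score: model probabilities (hypothetical data).
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.8, 0.9, 0.6, 0.2, 0.7, 0.3])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
youden_j = tpr - fpr                  # J = sensitivity + specificity - 1
best = np.argmax(youden_j)            # index of the Youden-optimal threshold
print("AUC:", roc_auc_score(y_true, y_score))
print("Optimal threshold:", thresholds[best],
      "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```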
To investigate interpretability, the proposed system was visualized by applying Grad-CAM, which produces a coarse localization map highlighting the target. The last convolutional layer of the final residual block was used to generate the localization maps for predicting the presence or absence of otosclerosis. In our study, we applied the "pointing game" method [41,42] to assess how well our explainability map aligns with radiologist-drawn contours. Here, an overlap is counted as a "hit," and no overlap is counted as a "miss." The effectiveness of the explainability map was then measured using the "hit rate" [43], as illustrated in Fig. 4. The proposed system could render dense probability maps that demonstrate the pixel-by-pixel probability of the presence of otosclerosis in the left ear (shown in Fig. 5a and c), which the OtoModel system explained as positive because of the strong localized activation maps, as illustrated in Fig. 5b and d, at the boundary of the fissula ante fenestram (white arrow). On an independent test set, the system produced explanations of the presence of otosclerosis in the right ear (shown in Fig. 6a and c), and the OtoModel system interpreted these as positive, as illustrated in Fig. 6b and d. Two further cases on the left and right ears showed an absence of otosclerosis (shown in Fig. 7a and c), and the OtoModel system explained these as negative, as illustrated in Fig. 7b and d. This explains to a certain extent why the model could accurately diagnose otosclerosis. The proposed detection can concentrate on the target area and thus extract discriminative representations for better detection of otosclerosis in the fissula ante fenestram. To this end, we roughly calculated the diagnosis time spent by the three clinical experts and by our OtoModel system. Specifically, the 3 clinical experts typically spent an average of approximately 3-5 min identifying otosclerosis. In contrast, the OtoModel system dramatically reduced the identification time to an average of 0.5 s, resulting in a significant reduction in diagnosis time of 97.2%. In addition, it is worth noting that the clinical experts accepted most of the fully automated identification results produced by our OtoModel system without any modification, except for 2 out of 100 CT scans that required extra human intervention; for example, the localized slice of otosclerosis may not be the best representation. Additional refinements can further improve the reliability of otosclerosis diagnosis and treatment. Our clinical partners have confirmed that such performance is fully acceptable for many clinical and industrial applications, indicating the high clinical utility of the OtoModel system.
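A minimal sketch of the pointing-game hit rate described above, assuming each saliency map is paired with a binary lesion mask derived from the radiologist-drawn contours; the function name and data layout are our own, not the authors'.

```python
import numpy as np

def hit_rate(saliency_maps, lesion_masks):
    """Pointing game: a 'hit' when the most salient pixel of each map
    falls inside the radiologist-drawn lesion contour (binary mask)."""
    hits = 0
    for cam, mask in zip(saliency_maps, lesion_masks):
        y, x = np.unravel_index(np.argmax(cam), cam.shape)
        hits += int(mask[y, x] > 0)
    return hits / len(saliency_maps)
```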
Discussion
Our study demonstrated the feasibility of using a deep learning system for the automatic, interpretable detection of otosclerosis in high-resolution temporal bone CT scans. The OtoModel system achieved a high diagnostic performance for identifying the presence or absence of otosclerosis, with an AUC of 0.98. Furthermore, there was no statistically significant difference from the clinical experts, who had varied levels of experience in identifying otosclerosis. To the best of our knowledge, only two previous studies [20,21] have reported the utility of deep learning for the diagnosis of otosclerosis on temporal bone CT. In contrast, our proposed system provided automatically interpretable otosclerosis detection in CT volumes to simulate the decision-making of clinicians and outperformed previous studies with a 0.98 accuracy. In addition, our study demonstrated a precise explanation of the presence of otosclerosis. Our study investigated three different types of pretrained CNNs, including EfficientNet, ResNet-18, and InceptionV3, and found that EfficientNet B4 provided the best diagnostic performance for detecting otosclerosis. EfficientNet [44] uses a simple and highly effective compound scaling approach, which easily scales a baseline CNN up to any target resource budget while maintaining the model's efficiency. EfficientNet consists of eight models from B0 to B7, with higher variants having more parameters and higher accuracies. Owing to the clever scaling of depth, width, and resolution, EfficientNet demonstrated higher accuracy values than the alternative otosclerosis systems. The proposed system used the EfficientNet B4 model, as it contains 19 M parameters, which is feasible for our experimental setup, whereas B5, B6, and B7 include 30 M, 43 M, and 66 M parameters, respectively. Our interpretability analysis using Grad-CAM confirms the effectiveness of the model by focusing on the key areas. Dense probability maps and targeted representations facilitate robust otosclerosis detection, particularly at the fissula ante fenestram. As shown in the probability map for the left ear in Fig. 5, there is strong activation (interpreted as positive) at the boundary of the fissula ante fenestram. The independent test set provides an explanation for the presence of otosclerosis in the right ear (Fig. 6) and for the absence of otosclerosis in the other two cases (Fig. 7).
This targeted detection system increases robustness by focusing on specific areas, such as the fissula ante fenestram, for better detection of otosclerosis. There were certain limitations and challenges encountered during the implementation of our deep learning system. First, our OtoModel system comprises three individual CNNs connected sequentially rather than an end-to-end architecture. This approach may increase the training workload due to the individual training phases. In addition, our study faced limitations in terms of data availability, leading us to rely on transfer learning from pretrained models to optimize training efficiency. Future research with larger datasets could enhance the diagnostic performance of the proposed system. Furthermore, in our study, all subjects with otosclerosis were evaluated at a single institution using a uniform imaging protocol, primarily focusing on fenestral otosclerosis, with only a few cases of cochlear otosclerosis. Detecting different subtypes of otosclerosis in screening CT examinations is more challenging but holds the potential to aid in developing precise surgical treatment plans and saving valuable diagnosis time for otolaryngologists or radiologists. In summary, our study has demonstrated the feasibility of implementing a deep learning system to detect otosclerosis on high-resolution temporal bone CT scans and its potential to improve the quality and efficiency of clinical practice. There were no statistically significant differences between the OtoModel system and clinical experts with various levels of experience in identifying the presence or absence of otosclerosis. However, before its implementation in clinical practice, the technical advancements of the OtoModel system must be further evaluated in large prospective studies with multiple institutions that use different CT units and imaging protocols.
Fig. 2. Overview of the OtoModel system. The system mainly consists of three stages: a) Identify the slices with the middle ear structure and output the final detection result (absence or presence) by heuristically fusing the diagnostic results of a patient. b) Extract useful image features from the selected image slices and predict the coordinates of a rectangular bounding box, which defines the region of the fissula ante fenestram. The extracted patches are downsized to 112 × 112 and are used as input images to the detection CNN. c) Compute the classification probability of otosclerosis on all selected slices containing the fissula ante fenestram. The last three layers apply a gradient backpropagation method to calculate a pixel-by-pixel probability map of the otosclerosis present. d) A simplified diagram of the neural network architecture. CNN: convolutional neural network.
Fig. 3. Comparisons of the diagnostic performance. a) The AUC of the OtoModel was 0.98, indicating a high overall diagnostic accuracy in the test set. Note that the sensitivity and specificity of the clinical experts are closely proximate to the ROC curve of the OtoModel system. b) Confusion matrix of the diagnostic accuracy in the test set.
Fig. 4. Comparison with radiologists under the hit-rate scheme.
Fig. 5. Demonstration of the confirmed presence of otosclerosis in the left ear. (a) Extracted fissula ante fenestram image patch of a 28-year-old woman and (b) extracted fissula ante fenestram image patch of an 18-year-old woman, analyzed by the proposed system as positive with abnormal signals. (c)-(d) Pixel-by-pixel probability maps for the extracted image patches showing the high-probability areas in the lesions on which the schema based its explanation of otosclerosis (arrows).
Fig. 6. Demonstration of the confirmed presence of otosclerosis in the right ear. (a) Extracted fissula ante fenestram image patch of a 23-year-old man and (b) extracted fissula ante fenestram image patch of a 27-year-old woman, analyzed by the proposed system as positive with abnormal signals. (c)-(d) Pixel-by-pixel probability maps for the extracted image patches showing the high-probability areas in the lesions on which the schema based its explanation of otosclerosis (arrows).
Table 1. Demographic distribution and clinical characteristics of the enrolled subjects.
Table 2. Performance of the fissula ante fenestram extraction CNN. CNN: convolutional neural network.
Table 3. Sensitivity, specificity and accuracy for the proposed system and the alternative otosclerosis detection systems.
2024-04-17T15:23:18.305Z
2024-04-01T00:00:00.000
{ "year": 2024, "sha1": "eb09f84d3d40120f584c1ad180b79e076edb85f5", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S2405844024057013/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9cacffac8d08d07b2336c1254d9da8b561085bf9", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
7052944
pes2o/s2orc
v3-fos-license
Effect of Stay-Green Wheat, a Novel Variety of Wheat in China, on Glucose and Lipid Metabolism in High-Fat Diet Induced Type 2 Diabetic Rats
The use of natural hypoglycemic compounds is important in preventing and managing Type 2 diabetes mellitus (T2DM). Forty male Sprague-Dawley rats weighing 150-180 g were divided into four groups to investigate the effects of the compounds in stay-green wheat (SGW), a novel variety of wheat in China, on T2DM rats. The control group (NDC) was fed with a standard diet, while T2DM was induced in the rats belonging to the other three groups by a high-fat diet followed by a streptozotocin (STZ) injection. The T2DM rats were further divided into a T2DM control group (DC), which was fed with the normal diet containing 50% common wheat flour, a high dose SGW group (HGW) fed with a diet containing 50% SGW flour, and a low dose SGW group (LGW) fed with a diet containing 25% SGW flour and 25% common wheat flour. Our results showed that SGW contained cereal antioxidants, particularly high in flavonoids and anthocyanins (46.14 ± 1.80 mg GAE/100 g DW and 1.73 ± 0.14 mg CGE/100 g DW, respectively). Furthermore, SGW exhibited a strong antioxidant activity in vitro (30.33 ± 2.66 μg TE/g DW, p < 0.01). Administration of the SGW at a high and low dose showed significant down-regulatory effects on fasting blood glucose (decreasing by 11.3% and 7.0%, respectively), insulin levels (decreasing by 12.3% and 9.7%, respectively), and lipid status (decreasing by 9.1% and 7.5%, respectively) in T2DM rats (p < 0.01). In addition, the T2DM groups treated with SGW at a high and low dose showed a significant increase in the blood superoxide dismutase (1.17 fold and 1.15 fold, respectively) and glutathione peroxidase activities (1.37 fold and 1.30 fold, respectively) compared with the DC group (p < 0.01). The normalized impaired antioxidant status of the pancreatic islet and of the liver compared with the DC group was also significantly increased. Our results indicated that SGW components exerting a glycemic control and a serum lipid regulation effect may be due to their free radical scavenging capacities to reduce the risk of T2DM in experimental diabetic rats.
Introduction
Diabetes mellitus is a chronic metabolic disorder that represents a major and growing public health problem all over the world. Diabetes mellitus incidence increased from approximately 100-135 million affected adults worldwide in 1995 to an estimated 285 million in 2010 [1]. Type 2 diabetes mellitus (T2DM), the predominant type of diabetes mellitus, is characterized by chronic hyperglycemia and a marked lipid metabolic disorder [2]. It may result from genetic factors or several environmental factors, such as a high-fat diet and physical inactivity. Several factors are associated with the pathology of diabetes, leading to the dysregulation of both glucose and lipid metabolism. Among these, oxidative stress due to free radical production, such as increased superoxide (O2−) release from mitochondria, affects and impairs cell membranes and results in insulin resistance, β-cell dysfunction, impaired glucose tolerance and finally T2DM [3]. Antioxidants possess the ability to scavenge free radicals and reduce the deleterious consequences that affect the lipids, regulating the oxidative stress-sensitive signaling pathways. Thus, their use in T2DM patients is widely considered an efficient method to alleviate diabetes and its complications [3].
A diet containing a high amount of natural antioxidants may represent a safe and effective method for the control and prevention of T2DM [4]. Many in vitro and in vivo studies have demonstrated that frequent consumption of whole grain foods improves metabolic homeostasis and reduces the risk of T2DM and its complications [4,5]. Some whole grain foods, such as wheat [6,7], barley [8,9] and oat [10], contain relatively high amounts of phytochemicals such as phenolic compounds, phytosterols and folate, widely known as powerful antioxidants able to control and prevent T2DM by reducing oxidative stress. In addition, phenolic compounds represent the most diverse and complex class of phytochemicals and are considered the major contributors to the total antioxidant activity of cereal grains [6,11]. Stay-green wheat (Triticum aestivum L.) (SGW), which is called "Lvfeng 1", is a mutant type of the common wheat, with delayed leaf senescence because of an extended duration of active photosynthesis during the grain-filling period [12][13][14]. SGW maintains a relatively high antioxidant enzyme activity and resistance to photo-oxidative stress [12,14], but little is known about the type and concentration of the active phytochemical components of SGW grains and their antioxidant properties. Additionally, the relevant medicinal and therapeutic properties of SGW, as well as its potential role in health improvement, have not yet been reported. Hence, the aim of the present study was to analyze the active component content of SGW and its antioxidant activities in vivo and in vitro to evaluate its alleviative effects on glucose and lipid metabolic disorders in high-fat diet and streptozotocin induced T2DM rats.
Materials
SGW was kindly provided by Shaanxi Houji Featured Agriculture R&D Centre. All chemicals used were purchased from Sigma-Aldrich (St. Louis, MO, USA) unless otherwise indicated.
Determination of the Phytochemical Content and Total Antioxidant Activity in Vitro
The oxygen radical absorbance capacity (ORAC) assay was performed according to the method previously reported [15] and the results were expressed as μmol Trolox equivalent (TE) per g dry weight. Total phenolic contents in the wheat grain were measured using Folin-Ciocalteu reagent as reported [16] and expressed as milligrams of gallic acid equivalent (GAE) per 100 g dry weight. Total flavonoid contents were determined by a colorimetric method as previously described [15] and expressed as milligrams of catechin equivalents (CE) per 100 g dry weight. Total anthocyanins were determined as described [17] and expressed as milligrams of cyanidin-3-glucoside equivalent (CGE) per 100 g dry weight. Phenolic compositions were identified on the basis of their characteristic UV-Vis spectra and retention times by the HPLC method [18]. Results were confirmed against the peaks of synthetic syringic acid, vanillic acid, caffeic acid and ferulic acid, and quantified with an external calibration curve with the corresponding standards.
Animal Treatment
The animal experiment was performed within the jurisdictional framework of the Animal Management Rules of the Ministry of Health of China and the guidelines for the Care and Use of Laboratory Animals of Xi'an Jiaotong University (approval number XJTULAC2012-012; 10 February 2012). Forty male Sprague-Dawley (SD) rats weighing 150-180 g were provided by the Experimental Animal Center of Xi'an Jiaotong University and housed in individual cages exposed to a 12 h light/dark cycle.
They were allowed free access to water and a hand-made standard pelleted diet (common flour 50%, bran 13%, whey powder 1%, fish oil 7%, bean pulp 26%, salt 0.5%, calcium bicarbonate 1.4%, minerals 0.5% and vitamins 0.6%). After five days of adaptation, glucose and lipid levels in the blood taken from the tail vein were measured. The non-diabetic control group (NDC), consisting of 9-10 rats, was fed the standard pelleted diet. To induce T2DM, the remaining rats were fed a high-fat diet (10% lard, 20% sucrose, 10% yolk powder, 0.2% sodium deoxycholate, and 1% cholesterol combined with 59% standard diet) for seven weeks. Subsequently, the rats were fasted overnight and the following morning they received a single intraperitoneal injection of streptozotocin (STZ) diluted in citrate buffer (pH 4.0) at a dose of 30 mg/kg bodyweight. After 72 h, blood glucose levels were determined by collecting blood from the rats' tail vein. Rats with glycemia >16.8 mmol/L were considered T2DM. The T2DM rats were randomly divided into three groups, each consisting of 10 animals. One group of rats served as the T2DM control (DC) and was fed the standard diet, and the other two experimental groups were fed, respectively, the diet containing 50% SGW flour (HGW, 116.67 g/kg·day) and the diet containing 25% SGW flour plus 25% common wheat flour (LGW, 58.33 g/kg·day). Food and water intake were determined once a day by estimating the amount consumed. After eight weeks of administration, blood samples were collected from the inferior vena cava of the anesthetized rats. Then, cardiac puncture was carried out, and the pancreas and liver were harvested and stored in 10% formaldehyde solution for two weeks and then transferred to and kept in 80% ethyl alcohol until histopathological analyses.
Oral Glucose Tolerance Test (OGTT)
Rats were fasted for 12 h before the OGTT. A blood sample was taken at 0 min from the tip of the tail. Next, D-glucose (2 g D-glucose/kg body weight dissolved in 0.9% saline) was orally administered by gavage.
Insulin and Homeostatic Model Assessment-Insulin Resistance (HOMA-IR) Determination
Insulin levels were quantified using rat insulin enzyme linked immunosorbent assay (ELISA) kits (Crystal Chem, Downers Grove, IL, USA). HOMA-IR was calculated using the formula HOMA-IR = (glucose × insulin)/22.5, where the concentration of glucose is expressed in mmol/L and that of insulin in mIU/L [5].
Blood Biochemical Analysis
According to standard methods, serum glucose and lipid components such as total cholesterol (TC), triglyceride (TG), high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C) and non-esterified fatty acids (NEFA), as well as superoxide dismutase (SOD) and glutathione peroxidase (GSH-px), were measured by colorimetric enzyme kits according to the manufacturer's protocols (Sigma-Aldrich). The SOD and GSH-px activities were defined as the amount of enzymatic reaction of 1 mL serum per minute.
Histopathological Examination
The pancreas and liver tissues were fixed in 10% buffered formalin and embedded in paraffin. The paraffin-fixed tissue specimens were sliced into 4 μm-thick sections. The sections were mounted on glass slides, stained with Hematoxylin and Eosin (HE staining) and examined with a light microscope.
Statistical Analysis
Quantitative data are expressed as mean ± SEM. Student's t-test was used to compare the phytochemical contents and total antioxidant activity between the two wheat varieties.
One-way analysis of variance (ANOVA) followed by Tukey's post hoc test was used to compare changes in water intake, blood insulin and lipid status, and oxidative stress parameter levels among the four experimental groups. Two-way repeated measures ANOVA followed by the Bonferroni multiple comparisons test was used to assess the factors "treatment" and "time" and to analyze their interaction (treatment × time) on body weight, food intake, and blood glucose levels (SPSS 19.0 for Windows, IBM, Chicago, IL, USA). p < 0.05 was considered statistically significant. Non-quantitative results were derived from at least three independent experiments.
SGW Phytochemical Content and Total Antioxidant Activity in Vitro
SGW total antioxidant activity in vitro was measured by the ORAC method (Table 1). SGW exhibited a stronger and significant antioxidant activity (30.33 ± 2.66 μg TE/g DW) compared with the common wheat (24.12 ± 1.03 μg TE/g DW) (p < 0.05). Moreover, the SGW analysis showed that the total flavonoid content (46.14 ± 1.80 mg GAE/100 g DW) was more than twice the content in the common wheat (20.25 ± 0.78 mg GAE/100 g DW), while the total phenolic content (88.82 ± 5.91 mg GAE/100 g DW) was similar to the content in the common wheat (85.84 ± 3.02 mg GAE/100 g DW). Total anthocyanin content in the SGW was 1.73 ± 0.14 mg CGE/100 g DW, while anthocyanins were not detected in the common wheat. The analysis of the phenolic composition showed that SGW was rich in vanillic acid (74.54 ± 6.41 μg/g) compared with the common wheat (22.23 ± 0.93 μg/g).
Rats Body Weight, Food and Water Intakes
The body parameters, such as body weight and food and water intakes, are shown in Figure 1. Figure 1A shows the changes in body weight during the entire feeding period. Two-way repeated measures ANOVA results indicated a significant treatment × time interaction associated with body weight (p < 0.01). The rats belonging to the three groups fed the high-fat diet significantly increased their body weights compared with the rats in the NDC group (p < 0.01) before the SGW intervention. However, the rats in the T2DM groups showed a remarkable weight decrease soon after the start of the SGW intervention, while the weight of the rats in the NDC group continued to increase (p < 0.01). Additionally, the weight of the rats in the HGW group was decreased compared with the weight of the rats in the DC group (p < 0.01), while the weight difference between the rats in the LGW and HGW groups was not significant. As shown in Figure 1B, two-way ANOVA also demonstrated a significant interaction between treatment and time on food intake (p < 0.01). Both the food and water intake results showed that the rats in the T2DM groups ate and drank more than the rats in the NDC group (p < 0.01) (Figure 1B,C). However, the food and water intake of the rats in the intervention groups significantly decreased to the normal level in the last two weeks of the intervention period when compared with the intake of the rats in the DC group (p < 0.01).
Figure 1. (A) Body weight and (B) food intake in the NDC, DC, HGW, and LGW groups. According to two-way repeated measures ANOVA, there was a significant interaction between the effects of treatment and time on body weight (p < 0.01) and food intake (p < 0.01), respectively. (C) Water intake was measured before and after the intervention. Results were expressed as means ± SEM (n = 9-10). * p < 0.01 vs. NDC group, # p < 0.01 vs. DC group.
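Since the HOMA-IR values reported in the next subsection follow the formula given in the Methods, a one-line worked example may help; the glucose and insulin values below are hypothetical illustrations, not measurements from the study.

```python
def homa_ir(glucose_mmol_per_l: float, insulin_miu_per_l: float) -> float:
    """HOMA-IR = (fasting glucose [mmol/L] x fasting insulin [mIU/L]) / 22.5."""
    return glucose_mmol_per_l * insulin_miu_per_l / 22.5

# Hypothetical example: glucose 18.0 mmol/L and insulin 10.0 mIU/L give 8.0,
# a markedly insulin-resistant value.
print(homa_ir(18.0, 10.0))
```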
Blood Glucose and Insulin Level Changes
According to the two-way repeated measures ANOVA results, a significant treatment × time interaction associated with blood glucose was observed (p < 0.01). Figure 2 shows that the fasting glucose level in the T2DM rats was higher than the level in the rats of the NDC group. The areas under the OGTT curves of the diabetic rat groups (87.32 ± 2.01 for the DC group, 73.55 ± 1.58 for the HGW group and 76.71 ± 1.32 for the LGW group) were significantly larger than the area for the NDC group (20.64 ± 1.03) (p < 0.01). The fasting glucose level in the HGW and LGW groups significantly decreased when compared with the level in the DC group (p < 0.01), but no difference was found between the two treatment groups. The fasting insulin level and the homeostasis model assessment-insulin resistance index (HOMA-IR) showed the same trend as the OGTT curves (Table 2).
Figure 2. Blood glucose levels. According to two-way repeated measures ANOVA, there was a significant interaction between the effects of treatment and time on blood glucose (p < 0.01). Results were expressed as means ± SEM (n = 9-10). # p < 0.01 vs. DC group.
Blood Lipid Changes
The serum lipid parameters in the fasting condition, such as TC, TG, LDL-C, and NEFA, were significantly increased in the T2DM rats, while HDL-C was significantly decreased compared with the NDC group (p < 0.01) (Table 3). Both the HGW and LGW groups improved their lipid status, except for HDL-C, after the SGW feeding, showing a statistically significant difference in the levels of the serum lipid parameters compared with the DC group (p < 0.01). However, no significant difference was observed between the HGW and LGW groups. Table 4 shows the effects of SGW against oxidative stress in diabetic rats. Both SOD and GSH-Px activities in the DC group were decreased compared with the NDC group (p < 0.01). After the administration of high and low doses of SGW, the SOD activities increased from 148.78 U/mL in the DC group to 174.62 U/mL (p < 0.01) and 171.00 U/mL (p < 0.05) in the HGW and LGW groups, respectively. Moreover, increased GSH-Px activities were also observed in the HGW and LGW groups compared with the DC group (1.37 and 1.30 fold higher than the DC group, respectively, p < 0.01). However, no changes were found between the HGW and LGW groups.
Table 4. Oxidative stress parameter levels in the rats of different groups after stay-green wheat (SGW) intervention.
Histological Changes
Examination of the HE-stained sections of the liver (Figure 3A) in the NDC group showed the typical architecture of the hepatic lobules: the hepatocytes form branching and anastomosing cords radiating from the central vein. In contrast, the DC group showed degenerative changes in the hepatocytes; the cells of the hepatic lobules possessed many vacuoles, giving them a foamy appearance. The LGW group showed reduced hepatic degenerative changes compared with the DC group, while in the HGW group the degenerative changes were further reduced, presenting just a few vacuoles and approaching the liver architecture of the NDC group. The HE staining of the pancreas (Figure 3B) in the NDC group displayed a uniform arrangement of the pancreatic structure, with the pancreatic islets containing numerous pancreatic β-cells. On the other hand, the pancreatic islets in the DC group showed remarkable damage, with compromised and deformed shapes and a reduction in pancreatic β-cell content. The two SGW treatment groups partially recovered from the damaged status and showed an increased number of pancreatic β-cells.
Discussion
Based on this T2DM model of high-fat diet intake and treatment with STZ, our results showed for the first time that SGW consumption improved the glucose and lipid profiles of the experimental T2DM rats by scavenging free radicals. Oxidative stress plays a crucial role in the pathogenesis of T2DM and cardiovascular diseases. Persistent hyperglycemia increases the production of ROS through glucose auto-oxidation, inducing oxidative stress [19]. Consumption of grain cereals is strongly associated with a reduced incidence of these diseases [20]. This may be due to a wide range of bioactive components with antioxidant effects, such as dietary fibers and phytochemical components in the grains. The novel wheat variety we used in our experiment, SGW, did not present any significant difference in protein, starch, fiber, or fat content compared with the common wheat (data not shown). However, our results revealed a higher content of total flavonoids and anthocyanins compared with the common wheat; these compounds have several biological functions, such as the ability to decrease LDL-C and to remove ROS, that can prevent and/or efficiently treat oxidative stress-related diseases [7]. Indeed, our data showed a strong correlation between the SGW diet and antioxidant activity through the improvement of SOD and GSH-Px activities, which are considered markers of the organism's antioxidant system [11]. The SGW diet thus exerted its beneficial effects by scavenging free radicals thanks to its higher content of cereal antioxidants. Moreover, diet supplementation with SGW exhibited a clear hypoglycemic effect, which is in agreement with the effects of many cereal grains, such as barley and whole wheat [15,21]. These results indicate that the natural antioxidants in the grain can chelate metals as well as inhibit free radicals by limiting the action of the lipoxygenase enzyme [22]. Additionally, SGW is functionally called "stay-green" wheat because it maintains its photosynthetic competence for a longer time than the common wheat; this peculiarity confers on this wheat a remarkably increased antioxidant power [13]. This phenotype delays the senescence of the plants during the grain filling stage and shows a better redox state due to a higher activity of the antioxidant enzymes [12]. These differences may also affect the composition of the wheat grain. However, some epidemiological studies [23] showed no significant effects of whole grain intake on insulin sensitivity and lipid metabolism, probably because of the short intervention time of only 6 weeks. Moreover, the different subjects and food types are also important factors that may affect the results. T2DM leads to glucose and fatty acid metabolic disorders. The typical symptoms of T2DM include an increased need for food and water intake associated with loss of body weight [24,25]. In the present study, the control rats constantly increased their body weight, whereas the diabetic rats increased their body weight only during high-fat diet intake; subsequently, their body weights decreased due to decreased glucose metabolism and increased fat metabolism [5]. SGW ameliorated these parameters at the end of the intervention period, indicating a regulatory effect on these metabolic disorders. In addition to body weight, SGW was also able to regulate food and water intake back to the values shown by the NDC group.
The loss of β-cell function is a key event in the pathogenesis of diabetes. Reduced insulin levels and insulin resistance are main causes of hyperglycemia, arising from damage to the structure of the pancreas and to β-cell function [26,27]. β-cells are extremely sensitive to oxidative stress due to their low antioxidant capacity and their increased susceptibility to apoptosis [28]. Increased free radical production triggers β-cell injury through the KATP pathway and through up-regulation of the activity of antioxidant enzymes such as GSH-Px, SOD, and catalase. Our results showed that SGW was able to restore the normal status of the damaged islets, probably thanks to its high antioxidant content, especially flavonoids and anthocyanins. Therefore, an SGW-rich diet might be a promising strategy to prevent oxidative damage to pancreatic islets intended for transplantation [29]. Furthermore, diabetes leads to a progressive accumulation of lipid metabolites. TG, TC, LDL-C, HDL-C, and NEFA are considered important biomarkers of hyperlipidemia [4]. Our results also confirmed that SGW showed a strong hypolipidemic effect as well as a hypoglycemic effect, reducing TG, TC, LDL-C, and NEFA but not HDL-C. However, increasing the SGW feeding dose did not result in an enhanced effect, probably because the low dose of SGW was already sufficient to ameliorate T2DM in the rat model. Abnormal blood lipid content may also be a main factor in the pathogenesis of liver damage [30]. SGW consumption reversed most of the histological changes in the liver of the diabetic group. According to some reports, diabetic animals exhibit a reduced antioxidant capacity, thereby compromising lipid metabolism [31,32]. Since SGW contains many antioxidants, especially a high content of total flavonoids and total anthocyanins, it could mediate lipid metabolism in diabetic rats by supporting antioxidant capacity and thus efficiently counteract hyperlipidemia. However, the specific molecular mechanism of SGW action in diabetes needs further investigation.

Conclusions
The present results showed that SGW consumption reversed most of the pathological changes in T2DM rats induced by high-fat diet feeding and STZ injection. The beneficial effects of SGW may be due to its strong antioxidant capacity, owing to its high content of total flavonoids and anthocyanins. Therefore, SGW exerted a significant anti-hyperglycemic effect and may represent a valid in vivo anti-oxidative dietary supplement for type 2 diabetes patients.
Alteration of object recognition memory after chronic exposure to dichlorophenoxyacetic acid (2,4-D) in adult rats
The described neurological symptoms associated with pesticide exposure include memory and concentration problems. Most experimental studies of the association between dichlorophenoxyacetic acid (2,4-D) and neurotoxicity have focused on brain development, and few have been conducted in adult animals. The aim of this study was to assess whether chronic oral or inhalation exposure to 2,4-D affects object recognition memory in adult rats. Forty albino Wistar rats were used and distributed into 4 groups (n = 10): I: animals nebulized with distilled water; O: animals receiving feed treated with nebulized distilled water; DI: animals nebulized with 9.28 × 10⁻³ grams of active ingredient per hectare (g.a.i./ha) of 2,4-D; and DO: animals receiving feed treated with 9.28 × 10⁻³ g.a.i./ha of nebulized 2,4-D. The animals were exposed for 6 months. To assess recognition memory, the object recognition test was used. Compared to control animals, animals exposed to 2,4-D spent less time exploring objects (p < 0.05) and obtained an object recognition index score of −1. The route of exposure to 2,4-D had an effect only on the time spent exploring objects, which was shorter in animals exposed orally. Chronic exposure to a high concentration of 2,4-D alters the ability of adult animals to recognize objects.

Introduction
The herbicide dichlorophenoxyacetic acid (2,4-D), belonging to the class of phenoxyacetic acids, was developed in 1941 during the Second World War and has been commercially used in the United States since 1947. It was the first selective herbicide used in the cultivation of soybean, corn, wheat, sugarcane, pastures, and rice to prevent and combat certain pests (Song, 2014). Exposure to 2,4-D can occur directly during spraying or handling in professional or domestic applications and indirectly via residues in soil, air, water reserves, or food (Carneiro et al., 2015; Raina-Fulton, 2014). Exposure to 2,4-D can cause several symptoms depending on the route of exposure. Symptoms of respiratory contamination include loss of appetite and weight, a burning sensation in the throat, and effects on the central nervous system (RED, 2005). In cases of oral exposure, depending on the formulation, symptoms are similar to those of exposure to some central nervous system depressants, such as aromatic chlorinated hydrocarbons, sedative drugs, or alcohol (RED, 2005). Some studies have investigated the effects of 2,4-D exposure and the implications for the central nervous system. Studies with rats exposed subcutaneously or orally in an acute or subchronic manner have revealed degenerative changes in the central nervous system (Elo & Ylitalo, 1979), depression of the operant response and ataxia (Schulze & Dougherty, 1988), behavioral changes, depression, and lethargy. Furthermore, exposure to 2,4-D has been reported to affect the activities of serotonergic and dopaminergic substances in the brain and cerebrospinal fluid at high doses (Elo & MacDonald, 1989) and to result in dysfunction of the neurotransmitters/neurohormones dopamine and serotonin in the brain (Bortolozzi et al., 2004). Epidemiological studies in humans suggest a relationship between the use of 2,4-D and the development of Parkinson's disease (Tanner et al., 2009). The herbicide 2,4-D is widely used worldwide for the control of broadleaf weeds.
There is evidence linking exposure to 2,4-D to changes in the central nervous system, behavior, and the concentrations of neurotransmitters in the brain (Bortolozzi et al., 2002). The exposure of developing animals to 2,4-D decreases myelination, resulting in behavioral changes and oxidative stress (Ferri et al., 2007). However, studies evaluating the association between 2,4-D and neurotoxicity in adult animals, with fully developed brains, are scarce. The objective of this study was to evaluate whether chronic exposure to 2,4-D, orally or via inhalation, at a concentration corresponding to a common environmental exposure for humans, affects object recognition memory in rats.

Methodology
Animals used in this research were treated according to institutional guidelines and internationally accepted principles for laboratory animal use and care, with due consideration to the alleviation of distress and discomfort. This study was approved by the Ethics Committee on Animal Use of the Universidade do Oeste Paulista (Protocol No. 4485). This is a prospective, quantitative, interventional and experimental study (Pereira et al., 2018), based on the studies of Mello et al. (2018) and Antunes & Biala (2012). Forty adult male Wistar rats (200-250 g) were used. The rats were housed in large plastic cages in an air-conditioned vivarium (temperature of 22 ± 2 °C) under a 12-h light/12-h dark cycle and were randomly distributed into 4 groups (n = 10): I: animals nebulized with distilled water; O: animals receiving feed nebulized with distilled water; DI: animals nebulized with 9.28 × 10⁻³ g of active ingredient per hectare (g.a.i./ha) of 2,4-D; and DO: animals receiving feed nebulized with 9.28 × 10⁻³ g.a.i./ha of 2,4-D (Mello et al., 2018). The nebulization protocol for the animals and feed was performed as described by Mello et al. (2018) using two boxes (32 × 24 × 32 cm), each connected to an ultrasonic nebulizer (Soniclear Ind. Com. Imp. E Exp. Ltda., São Paulo, Brazil). A dose adjustment was performed according to the box area to simulate environmental exposure. Animals exposed by inhalation were nebulized for five consecutive days a week, simulating occupational exposure. The feed of the animals exposed orally was changed every two days, and each ration was treated the day before being offered. All animals were exposed for 6 months. The object recognition test was carried out in a medium-density fiberboard (MDF) box measuring 100 × 100 cm between 08:00 and 17:00 (Antunes & Biala, 2012), one day after the last exposure to the herbicide 2,4-D. The box was cleaned with 10% alcohol after each test. The test room had a 15-W red light that provided 3 lux of lighting over the center of the apparatus. The experiments were recorded on video, for five minutes, using an 8 MP camera (1080p video resolution, 1920 × 1080 px) oriented toward the apparatus. Throughout each trial, the same observer stayed inside the room to perform the recording from a position higher than the box. The test was divided into three sessions: habituation, training (one hour after the first session), and testing (24 h after training) (Figure 1). The habituation session (for habituation to the apparatus) occurred only once. Each animal was placed individually in the center of the apparatus without objects and allowed to explore for five minutes.
After 60 min, the animal was again placed in the center of the apparatus and exposed to two identical objects (identical in size, shape and color), defined as familiar objects F1 and F2 (Lego® square toys, São Paulo, Brazil), for five minutes; this was the training session. The animal was placed in front of the objects facing the wall. Exploration behavior was considered to have occurred when the animal touched the object with its nose or front legs or approached within 2 cm of the objects. After a retention interval of 24 h, the test session took place. The animal was placed in the center of the apparatus and exposed to two objects in the same position as in the training session; however, whereas one of the objects was the same as one from the training session (the familiar object), the second object was new (the unfamiliar object) (Lego® round toy, São Paulo, Brazil). The exposure time in the test session was five minutes. The exploration time of the objects was measured manually using a digital timer and used to calculate a recognition index using the following formula (Antunes & Biala, 2012):

Recognition index = (exploration time for the unfamiliar object − exploration time for the familiar object) / (total exploration time for both objects)

Figure 1 shows the box used to carry out the tests and their sequence. In the first stage (habituation session), the box contains no objects and serves only to let the animal get used to the environment. After an hour, the animal is placed back in the box, which now contains two identical objects (training session). Twenty-four hours later, the animal is placed in the box where one of the previous objects has been exchanged for another (testing session) to assess the animal's recognition memory. The data were compared among groups by one-way analysis of variance with contrasts performed by the Tukey method. Validation of data normality and homoscedasticity assumptions was conducted using the Shapiro-Wilk and Levene tests, respectively. All analyses were conducted using the free software R with a 5% significance level (R Development Core Team, 2019).

Results
In the testing session of the object recognition test, animals exposed to 2,4-D spent less time exploring the objects than the control animals (p < 0.05). Furthermore, among the herbicide-exposed animals, there was a difference between exposure routes (p = 0.012), with orally exposed animals exploring objects for less time than animals exposed via inhalation (Fig. 2A). Regarding the animals' cognitive ability to discriminate the new object from the familiar object, the animals exposed to 2,4-D showed a negative index, indicating a lesser ability to recognize the new object than the control animals; however, among 2,4-D-exposed individuals, there was no significant difference between exposure routes (p = 0.542) (Fig. 2B). Figure 2A shows that animals in the control group spent more time exploring objects than animals exposed to 2,4-D; among the animals exposed to 2,4-D, those exposed orally explored the objects for less time than those exposed by inhalation. In Figure 2B, animals exposed to 2,4-D showed a negative recognition index, that is, they did not recognize the objects.
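Assuming the normalized discrimination index of Antunes & Biala (2012), as reconstructed above, the recognition index can be computed with a few lines of Python (the function name and example times are ours, not the study's):

```python
def recognition_index(t_unfamiliar: float, t_familiar: float) -> float:
    """Normalized discrimination index: -1 (familiar only) to +1 (novel only)."""
    total = t_unfamiliar + t_familiar
    if total == 0:
        raise ValueError("no exploration recorded for either object")
    return (t_unfamiliar - t_familiar) / total

# Invented exploration times (s): a control-like vs. an exposed-like animal
print(recognition_index(22.0, 10.0))  # positive: the novel object is preferred
print(recognition_index(0.0, 8.0))    # -1.0: only the familiar object explored
```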
In this analysis, the animals exposed orally to 2,4-D also showed a numerically lower index than those exposed by inhalation, suggesting that the oral route may have a greater impact on the ability to recognize objects than the inhalation route.

Discussion
In this study, animals exposed to a high concentration of 2,4-D showed impairment of exploration and of recognition memory for new objects in their environment. The impairment in exploration ability was greater for animals exposed orally than for those exposed via inhalation. Memory is the acquisition, formation, conservation and evocation of information (Izquierdo, 2006). The process of memory formation comprises four stages: encoding, storage, consolidation and retrieval (Kandel et al., 2014). Long-term memory is a form of memory in which information is stored for a long period. The establishment of long-term memory takes an average of three to eight hours. Before the completion of this process, the information to be consolidated may change due to the actions of drugs or to declines or increases in neurotransmitters, such as acetylcholine, dopamine and norepinephrine. The consolidation process occurs in the hippocampus, an area of the medial temporal lobe (Camina & Guell, 2017). Declarative memory refers to the ability to archive and consciously retrieve information related to the experiences lived by the individual (Ullman, 2004). Declarative memory is susceptible to impairment by neuronal dysfunctions, which may be related to verbal, visual and recognition deficits (Tulving, 2002). In the present study, animals exposed to 2,4-D had impaired long-term memory, since 24 h after the training session they did not recognize the objects. Recognition memory is a category of declarative memory with two components: remembrance (wherein events are recalled in a conscious and contextualized way) and familiarity (referring to the expression of memory without contextualization). Remembrance is mediated by the hippocampus, and familiarity by the entorhinal and perirhinal regions (Vann et al., 2009). Injuries to the perirhinal cortex can interfere with object recognition memory (Aggleton et al., 2010). In general, recognition memory is evaluated in animals by tests such as the object recognition test, which involves the presentation of a familiar object and a new object (Antunes & Biala, 2012). The object recognition memory model proposed initially by Ennaceur and Delacour (1988) and refined by Ennaceur (2010) has been used in rats and mice to assess cognitive deficits (Grayson et al., 2015) and to evaluate the effects of pharmacological and herbicide interventions on recognition memory (Ait-Bali et al., 2020; Antunes & Biala, 2012). The object recognition test allows the assessment of hippocampal function as well as the functioning of other cortical regions involved in object recognition (Antunes & Biala, 2012). Rodents have been used in object recognition tests to assess the neurotoxic effects of many drugs (Antunes & Biala, 2012). Studies using the object recognition test in rodents indicate that rodents explore new objects longer than familiar ones over test retention intervals of 1.5, 4 and 24 h (Mazumder et al., 2017). Our results showed that when the animals were subjected to chronic exposure to a high concentration of 2,4-D, they did not discriminate between the two types of objects, regardless of exposure type, and, as measured by exploration time, did not preferentially explore the new object.
Several experimental studies have addressed the association between dichlorophenoxyacetic acid (2,4-D) and neurotoxicity during brain development (Evangelista et al., 1995; Duffard et al., 1996; Rosso et al., 1997; Bortolozzi et al., 1999; Rosso et al., 2000; Bortolozzi et al., 2004; Ferri et al., 2007; Carneiro et al., 2015). However, chronic exposure to 2,4-D occurs mainly among occupationally exposed workers engaged in agricultural activities (Zhang et al., 2011), that is, in the context of the adult (developed) brain. There is no previous description in the literature of object recognition impairment in animals exposed to 2,4-D. Therefore, in this study, we chose to evaluate adult animals and their ability to recognize objects following chronic exposure to 2,4-D. The mechanism of action of chlorophenoxyacetic herbicides remains unclear. Chlorophenoxyacetic herbicides act on oxidative phosphorylation and on metabolic routes involving acetyl coenzyme A, causing toxicity to the central nervous system (Bradberry et al., 2004). Some studies have shown that 2,4-D can cause oxidative stress in specific regions of the brain of newborn rats, such as the midbrain, striate cortex and prefrontal cortex (Ferri et al., 2007). The impairment in object recognition memory observed in our study may have been due to the action of reactive oxygen species induced by 2,4-D in the perirhinal cortex and/or hippocampus.

Conclusion
In conclusion, chronic exposure to a high concentration of 2,4-D impairs the ability of adult animals to recognize objects. These data show that the application of 2,4-D to crops must be done carefully, as its use can cause cognitive damage. Complementary studies in adult animals exposed to various concentrations of 2,4-D, evaluating histological lesions and oxidative stress and demarcating the areas affected by 2,4-D in the adult brain, may clarify the pathogenesis of the object recognition memory impairment caused by exposure to this herbicide.
The Computational Patient has Diabetes and a COVID
Medicine is moving from a curative discipline to a preventative discipline relying on personalised and precise treatment plans. The complex and multi-level pathophysiological patterns of most diseases require a systemic medicine approach and are challenging current medical therapies. On the other hand, computational medicine is a vibrant interdisciplinary field that could help move from an organ-centered approach to a process-oriented approach. The ideal computational patient would require an international interdisciplinary effort of greater scientific and technological scope than the Human Genome Project. When deployed, such a patient would have a profound impact on how healthcare is delivered to patients. Here we present a computational patient model that integrates, refines and extends recent mechanistic and phenomenological models of cardiovascular, RAS and diabetic processes. Our aim is twofold: to analyse the modularity and composability of the model-building blocks of the computational patient, and to study the dynamical properties of well-being and disease states in a broader functional context. We present results from a number of experiments, among which we characterise the dynamic impact of COVID-19 and type-2 diabetes (T2D) on cardiovascular and inflammation conditions. We ran these experiments under different exercise, meal and drug regimens. We report results showing the striking importance of transient dynamical responses to acute state conditions, and we provide guidelines on system design principles for the inter-relationships between modules and components in systemic medicine. Finally, this initial computational patient can be used as a toolbox for further modifications and extensions.
Introduction
Computational medicine is increasingly effective at understanding and predicting complex physiological and pathological conditions, in scenarios ranging from single-organ disease to comorbidities. Both mechanistic and phenomenological models are important aspects of computational medicine. When we formulate hypotheses on the mechanisms (usually involving molecules) underlying the behaviour of the various endpoints of a process, we can build a mechanistic model; when we formulate hypotheses based on empirical observations of a phenomenon, we can build a phenomenological model. Most models are actually a combination of the two, and there are certainly overlaps between phenomenological modeling, statistics and machine learning. Mechanistic and phenomenological modeling aim at reproducing the main features of a real system with the minimum number of parameters, while still providing explainability, interpretability and often causality. The objective is to gain a better understanding of how each of the different components of a biomedical system contributes to the overall process, its emerging properties and the causal relations among the events that occur. A mechanistic or phenomenological model could be formulated using ordinary or partial differential equations [1], stochastic processes [2], logic [3], or a tailor-made syntax that facilitates formal analysis and verification [4,5]. The dedicated modeler may introduce a series of models of a process at different scales, from the molecular level to the whole-body level, or describing processes occurring in different organs under the same disease conditions. Although there is growing awareness of long-range communication in the body (for instance, the communicome [6] or the gut-brain axis [7]), the integration of various models to capture behavior at the systems medicine level has not been pursued as much. Examples of such multi-level communication are given by the extensive network of comorbidities. Comorbidity is the term used to describe diseases, often chronic ones, co-occurring in the same individual. An important challenge is the homogenisation of models across multiple spatial and temporal scales, which requires cell-level models to be systematically scaled up to the tissue/organ level, and related asymptotic techniques for the analysis of multiple-timescale problems, such as those arising in inter-process communication. The cardiovascular system is usually described from a cardio-centric view. As an example, the heart is considered the only pump in the system. Other pumps include the skeletal muscles, which return blood from the periphery to the central circulation, and the elastic arteries, which use their elastic properties to propel the blood forward. This system is subtly coupled with the cardiovascular-associated nervous system and with blood pressure control, which includes regulated inputs from many other organs, most notably the lungs, kidneys and pancreas [8]. Therefore, the concept of cardiovascular disease could be reformulated as a more complexly connected system and disease landscape, perhaps inclusive of comorbidities, which could allow better patient stratification and prognosis and consequently better drug discovery.
In particular, infectious diseases are good examples of the need for inter-organ and inter-process modeling approaches, as pathogen fitness may require colonising different body environments. A current example is given by the COVID-19 pandemic. Diabetes is a frequent comorbidity; the Coronado study has shown that 29% of people with T2D infected with COVID-19 were intubated and 10.6% died within one week [9]. The mortality statistics show that fighting the COVID-19 pandemic requires a focus on comorbidities. Many of the older patients who become severely ill have evidence of underlying illness such as cardiovascular disease, kidney disease, T2D or tumours [10]. They make up the largest percentage of patients who cannot breathe on their own because of severe pneumonia and acute respiratory distress syndrome and require intubation: about a quarter of intubated coronavirus patients die within the first few weeks of treatment [11].

Objectives
Objective 1: Introduce modular and composable paradigms for the design of computational patients. In Sec. 3 we propose a modular approach for the design of personalised computational physiology systems. The complexity underlying multifactorial diseases requires the introduction of multi-scale, extensible and adaptable models, where modular principles are used to break down organism complexity and composable criteria are used to select, link and combine different components in a hierarchical fashion.

Objective 2: Show how our approach may help in disclosing cascade effects of comorbidities. In Sec. 6 we illustrate a concrete example where the dynamics of personalised comorbid conditions can be modeled and analysed using our framework. We focus on developing an integrated computational system modeling the ripple effects of comorbidities on blood pressure regulation. To this end, in Sec. 4 we revise the physiological background required to understand the main biological processes involved in this mechanism. Building upon previous studies, we devise a customisable computational patient in the form of a computational tool composed of extended versions of three publicly available mathematical models describing the circulatory system [12], type-2 diabetes [13], and the renin-angiotensin system (RAS) [14], one of the main pathways regulating inflammatory response and blood pressure. Respiratory failure is a key feature of severe COVID-19 and a critical driver of mortality; 10.6% of all hospitalised diabetic patients die within one week. Hence, in Sec. 5 we propose a set of equations modeling the impact of type-2 diabetes on blood vessels' stiffness and the influence of additional external factors, which can be personalised according to the patient's characteristics and lifestyle habits. We introduce a variety of such elements describing the repercussions on blood pressure caused by ageing, type-2 diabetes, viral infections like COVID-19, ACE inhibitor treatments, meals, and physical exercise.
3 Computational tool

From integration to modularity and composability
In recent decades, the interest and scientific effort in developing integrated quantitative and descriptive computational systems modeling physiological dynamics has rapidly grown. Since 1997, the Physiome Project [15] and the EuroPhysiome Initiative [16] have actively devised and organised rich collections of mathematical models describing the functional behavior of components of living organisms, such as organs, cell systems, biochemical reactions, or endocrine systems. Such a modular approach has been primarily used to reduce complexity by deconvolving the human physiome into elementary subunits. Indeed, each computational module can be seen as a standalone biological entity describing one of the structures, processes, or pathways of the whole organism. Yet, modeling physiological interactions, multi-scale signalling, and comorbidities requires the combination of multiple components to build more sophisticated computational systems. Several approaches have been proposed in which different mathematical models have been integrated into a single system in order to describe synergistic effects and emerging phenomena [12]. Despite being widely used and accepted, such a system design paradigm often requires an overwhelming amount of work in merging multiple systems together and in tuning and validating the integrated model. Besides, technological advances in computer science in the last twenty years have dramatically changed coding languages and paradigms. Hence, different research groups have developed their computational systems on many different coding platforms, frameworks, and libraries, including general-purpose languages like MATLAB, Java, Python, and C, but also special-purpose ones like JSim. The variety of implementation platforms, combined with the mathematical effort required to merge many different systems, is in conflict with the urgent need for user-friendly, extensible, and adaptable system design paradigms where components can be selected and assembled in various combinations to satisfy specific requirements. Personalised medicine requires the introduction of novel system design paradigms where modules break down organism complexity and composable criteria are used to select and combine different components. Instead of merging, tuning, and validating the whole integrated system, each module could be tuned and validated independently. Composable criteria may allow researchers to focus primarily on multi-scale signalling between modules. Tuning and validation may apply just to inter-module signals, which makes the overall system independent of module-specific implementation characteristics.

Module design and personalisation
In order to move towards this modern system design, each module can be seen as a black box processing signals coming from other modules and combining them with external subject-specific parameters in order to provide a set of responses (see Fig.
2). Subject-specific parameters may be derived from online clinically relevant measures, such as heart pressure or insulin levels, or from the Electronic Health Record [18,19], such as morbidities, treatments, or clinical examinations. Such elements can be used to personalise the module, taking into account unique subject characteristics. Incoming signals from other components may affect some of the variables and parameters of the module, but cannot change its architecture. Finally, the outputs provided by each module can be simultaneously used as inputs for other components or tracked as clinically relevant latent variables.

Usage guidelines
The computational system has been designed to allow for three levels of user interaction. Computational scientists and coders may take advantage of the publicly available code by improving or forking the GitHub repository [21]. The repository structure has a modular design so that new packages can be included independently. Each new package should correspond to a new mathematical model. Multiple packages can be combined together in order to generate more complex computational systems. Medical practitioners and biologists with some Python experience may just download the repository, reproduce the simulations on their computers, or modify some parameters. In order to make the computational tool available to clinicians and practitioners without coding skills, the whole computational system has been incorporated into a website with a graphical user interface. Users may profit from this user-friendly interaction, as the system can be customised in many different ways, creating multiple scenarios by modifying several parameters, including patient-specific characteristics and constants related to the models' interactions.

[Figure caption: In modular systems, several modules can be used independently to model physiological processes, disregarding their mutual relationships (A). The selection and combination of different components in a hierarchical fashion by means of composable criteria allows a better exploration of the parameter space (B). The actual interpenetration of multiple systems can be achieved by modeling the dynamics of their mutual relationships, providing further information on the underlying phenomena (C). Such deeper exploration of the parameter space enhances the evaluation of initial conditions and trajectories in the phase space (right).]

Numerical methods
All the necessary code for the experiments has been implemented in Python 3, relying upon open-source libraries. The mathematical equations described in Sec. 5 form a set of ODE systems and algebraic equations that have been sequentially solved using the LSODA integration method [22,23] provided by the function solve_ivp included in the SciPy Python package [24] (a minimal sketch of this call is given below). All the experiments have been run on the same machine: an Intel® Core™ i7-8750H six-core processor at 2.20 GHz equipped with 8 GiB RAM.

4 Physiological background
The objective of our model is to show how the combined effects of comorbidities may lead to severe cardiovascular and pulmonary conditions. To this aim, we include in our model some of the main factors, pathways, and morbidities affecting blood pressure, with a focus on pulmonary vessels: oxygenation, arterial stiffness, diabetes, the RAS, and COVID-19. In this section we revise the physiological background of the elements involved in our computational system.
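As referenced in the Numerical methods paragraph above, a minimal, self-contained sketch of the solve_ivp/LSODA call is given here; the toy two-state system stands in for the actual ODE modules, whose right-hand sides are defined in Sec. 5:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Toy damped oscillator standing in for the coupled physiological ODEs
    x, v = y
    return [v, -0.5 * v - 4.0 * x]

t_span = (0.0, 20.0)
t_eval = np.linspace(*t_span, 500)

# LSODA switches automatically between stiff and non-stiff integrators
sol = solve_ivp(rhs, t_span, [1.0, 0.0], method="LSODA", t_eval=t_eval)
print(sol.status, sol.y.shape)  # 0 on success; trajectory of shape (2, 500)
```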
The link between hypertension, oxygenation and blood pressure variability
Exposure to chronic hypoxia causes pulmonary hypertension and pulmonary vascular remodelling [25]. COVID-19 results in decreased oxygenation, which can impair the functioning of the heart and brain and cause difficulty with breathing (a PaO2 reading below 80 mm Hg or a pulse oximetry (SpO2) reading below 95 percent is considered low). When the left side of the heart cannot pump blood out to the body normally, blood backs up in the lungs and increases blood pressure there. The COVID-19 virus can also activate the blood clotting pathway. Studies have reported that 30% of COVID-19 patients showed signs of blood clots in their lungs, that is, clots that have traveled to the lung. One of the recommendations is to give a low dose of heparin, which prevents clot formation, or tissue plasminogen activator (tPA), which helps to dissolve blood clots [26,27]. High blood pressure can damage the arteries by making them less elastic, which decreases the flow of blood and oxygen and leads to heart disease. The relationship between blood pressure and stroke recurrence is controversial. Recent research stresses that both a high mean value of blood pressure and blood pressure variability (particularly long-term variability) are important. Although some variation in blood pressure throughout the day is normal, higher variation in blood pressure is associated with a higher risk of cardiovascular disease and all-cause mortality [28,29]. In young people there is an increased blood supply response to hypoxia, which could vanish in elderly people with high blood pressure. This compromised response may be caused by high blood pressure-induced impairment of blood vessel function [30].

Arterial stiffness
Arterial stiffness is a broad term used to describe loss of arterial compliance and changes in vessel wall properties. Both arterial stiffness and high blood pressure variability can be indicators of cardiovascular risk [31,32,33,34,35]. Ageing increases arterial stiffness, and increased arterial stiffness gives rise to increased blood pressure variability [36]. Although arterial stiffness can be assessed using a variety of techniques, carotid-femoral pulse wave velocity is the preferred measure. It has been shown that increased arterial stiffness is an early risk marker for developing type-2 diabetes [37], and a causal association between T2D and increased arterial stiffness has been demonstrated in a large cohort of patients [38,39]: a 1 standard deviation increase in T2D is associated with a 6% higher risk of increased arterial stiffness; see also [40]. Arterial stiffness is also related to inflammageing, a chronic low-grade inflammation that develops with advanced age. It is believed to accelerate the process of biological ageing and to worsen many age-related diseases [41,42]. In particular, inflammatory cytokines (which may be activated by angiotensin II) result in increased arterial stiffness; conversely, reductions in inflammation (for example due to anti-inflammatory cytokines) and exercise reduce arterial stiffness [43,44].

The renin-angiotensin system and SARS-CoV-2
The renin-angiotensin system (RAS) is a hormone system regulating vasoconstriction and inflammatory response [45].
The key regulator of the RAS is the peptide hormone angiotensin II (ANG-II), generated by the angiotensin-converting enzyme (ACE), which cleaves the decapeptide angiotensin I (ANG-I), or proangiotensin. ANG-II exerts its biological functions through two G-protein-coupled receptors, the ANG-II type-1 receptor (AT1R) and the ANG-II type-2 receptor (AT2R), and through the heptapeptide angiotensin-(1-7) (ANG-(1-7)), which binds and activates the G-protein-coupled Mas receptor (MAS). ANG-(1-7) can be generated either by the angiotensin-converting enzyme 2 (ACE2) from ANG-II, or by the neutral endopeptidase enzyme (NEP) from ANG-I. The three G-protein-coupled receptors (AT1R, AT2R, and MAS) are the main factors helping the body to carry out the role of ANG-II in regulating blood pressure over the course of the day [46,47]. On one side, AT1R stimulates vasoconstriction, hypertension, and inflammatory response. The effect of AT1R is counterbalanced by MAS, which promotes vasodilation, hypotension, and vasoprotection. The role of AT2R is currently debated [48]. Under normal physiologic conditions, AT2R counteracts most effects of AT1R. However, recent developments have shown that its vasodilatory effects are not associated with a significant reduction in blood pressure [49]. In the kidney, AT2R stimulation produces natriuresis, increases renal blood flow, and reduces tissue inflammation [50,51,52]. External factors impacting the RAS include glucose concentration, ACE inhibitor treatments, and viral infections binding to ACE2, such as SARS-CoV-2. Glucose concentration has a direct impact on both AT1R and ACE activity. A high glucose concentration may lead to chronic hypertensive conditions. Therefore, hypertensive treatments usually include ACE inhibitor drugs, which are used to compensate for the overproduction of ANG-II and AT1R [53]. Viral infections such as COVID-19 may also have a negative impact on the RAS, as the virus binds to ACE2 in order to gain entry into the host cell, impairing the activity of ACE2 in generating ANG-(1-7) by hydrolyzing ANG-II [54].

[Figure illustrations adapted from The Sourcebook of medical illustration [20].]

5 Mathematical model of diabetic computational patients
In this section we present a concrete example describing a set of mathematical models that can be used to build a computational patient. We focus on modeling a diabetic computational patient by combining four modules: the RAS (Sec. 5.1), diabetic (Sec. 5.2), circulatory (Sec. 5.3), and stiffness (Sec. 5.4) models. Fig. 3 shows a schematic representation of the computational system. The computational patient can be customised in two different ways. First, the system has been designed to be personalised using patient-specific values for some parameters, such as age, glucose levels, arterial blood pressure, and the presence of comorbidities or treatments (see Table 1). Second, should the physiological analysis require the inclusion of additional conditions, new modules can be included and composed according to the patient's needs, as sketched below.
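Purely as an illustrative sketch of this composable design (the class names and the signal dictionary below are ours and do not mirror the repository's API), each module can expose a uniform interface so that the outputs of one component feed the inputs of another:

```python
from typing import Dict

class Module:
    """Black box mapping incoming signals and patient parameters to outputs."""
    def __init__(self, patient_params: Dict[str, float]):
        self.params = patient_params

    def step(self, t: float, signals: Dict[str, float]) -> Dict[str, float]:
        raise NotImplementedError

class DiabeticModule(Module):
    def step(self, t, signals):
        # Placeholder dynamics: glucose relaxes toward a patient baseline
        g = signals.get("glucose", self.params["glucose_baseline"])
        return {"glucose": g + 0.1 * (self.params["glucose_baseline"] - g)}

class RASModule(Module):
    def step(self, t, signals):
        # Placeholder coupling: AT1R activity scales with incoming glucose
        return {"at1r_activity": 0.01 * signals["glucose"]}

# Composition: route the diabetic module's output into the RAS module's input
patient = {"glucose_baseline": 7.0}
diabetic, ras = DiabeticModule(patient), RASModule(patient)
signals = {"glucose": 10.0}
for t in range(3):
    signals.update(diabetic.step(t, signals))
    signals.update(ras.step(t, signals))
print(signals)
```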
Pharmacokinetic model
Pharmacokinetic (PK) models are used to describe drug absorption and excretion dynamics. Equation 1 gives the analytical solution of a single-compartment pharmacokinetic model with first-order absorption and first-order elimination rates after oral administration [55]. The equation has been used to model ACE inhibitors' dynamics and their effects on the RAS. A uniform dose size d administered at constant time intervals τ has been assumed [56]:

[Drug]_n(t') = (F d k_a) / (V (k_a − k_e)) [ ((1 − e^(−n k_e τ)) / (1 − e^(−k_e τ))) e^(−k_e t') − ((1 − e^(−n k_a τ)) / (1 − e^(−k_a τ))) e^(−k_a t') ]    (1)

where [Drug]_n(t') is the drug concentration after the n-th dose, t' = t − (n − 1)τ is the time elapsed since the n-th dose, k_a and k_e are the absorption and elimination rates respectively, F is the absorbed fraction of the drug, and V the volume of distribution. Pharmacokinetic parameters are reported in Table 3; a code sketch of this solution is given below.

Pharmacodynamic model
Pharmacodynamic models are used to illustrate the effects of drug treatments on the body. The pharmacodynamic model used to describe local RAS dynamics has been derived from [57,14] (see Eqs. 16-20). The original model has been extended with four additional equations (Eqs. 2-5). The variations of [ANG-(1-7)], [AT1R] and [AT2R] have been included as their dynamics can be useful in understanding how the RAS regulates blood pressure [58]. The concentration of ANG-(1-7) depends on the activity of two enzymes, NEP and ACE2, cleaving ANG-I and ANG-II respectively. [AT1R] and [AT2R] rather depend on [ANG-II] and on the glucose concentration G. The dynamics of the ACE2 activity (k_ACE2) has been introduced as an indicator of SARS-CoV-2 infectivity [54]; in the corresponding equation (Eq. 5), s_V represents the severity of the viral infection and e_AI the efficiency of anti-inflammatory pathways. A higher concentration of [ANG-II] may also induce cells to produce more ACE2, thus increasing its activity [54] and enhancing viral entry. Hence, ACE-inhibitor treatments may have a protective role, as they reduce ACE activity, lowering ANG-II levels (see Fig. 3). Pharmacodynamic parameters and initial conditions are reported in Table 4.

Adding comorbidities: type-2 diabetes
Type-2 diabetes is a metabolic disease whose progression and severity are caused by the increasing failure of insulin production due to beta cell death.
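Returning to the pharmacokinetic model, a direct implementation of the multiple-dose solution in Eq. 1 might look like the following sketch; the parameter values are placeholders, not those reported in Table 3:

```python
import numpy as np

def drug_conc(t_prime, n, d, F, V, ka, ke, tau):
    """Concentration t_prime hours after the n-th dose (Eq. 1); ka != ke."""
    acc_e = (1 - np.exp(-n * ke * tau)) / (1 - np.exp(-ke * tau))
    acc_a = (1 - np.exp(-n * ka * tau)) / (1 - np.exp(-ka * tau))
    return (F * d * ka) / (V * (ka - ke)) * (
        acc_e * np.exp(-ke * t_prime) - acc_a * np.exp(-ka * t_prime)
    )

# Concentration profile over five daily doses (all values are placeholders)
tau = 24.0                                   # dosing interval (h)
t = np.linspace(0.0, 5 * tau, 600)
n = np.minimum(t // tau + 1, 5).astype(int)  # index of the current dose
c = drug_conc(t - (n - 1) * tau, n,
              d=10.0, F=0.4, V=40.0, ka=1.0, ke=0.2, tau=tau)
print(f"peak concentration: {c.max():.3f}")
```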
There are complex multifactorial links between diabetes and cardiovascular disease [59,60,61,62]. The main pathophysiological cornerstone is a state of chronic, low-level inflammation. This immune activation may facilitate both the insurgence and the progression of insulin resistance in diabetic and pre-diabetic states and increases their cardiovascular risk. An extension of the model by Topp and collaborators (Eqs. 6-9) combines insulin resistance and functional β-cell mass dynamics with glucose and insulin dynamics [13]. The insulin and glucose dynamics are faster than the beta cell dynamics. Mild hyperglycaemia leads to increasing beta cell numbers, but above a threshold of 250 mg/dL blood glucose, beta cell death is greater than cell division. Additional terms (not shown) include functional and non-functional beta cells (β_f and β_nf), activated macrophages, pathogenic T cells, insulin resistance, mTOR levels and beta cell antigenic protein concentrations [63]. The distinction between functioning and non-functioning beta cells makes it possible to account for the reduction and exhaustion of the insulin produced by the beta cells. Although the preliminary outcomes of the DIRECT study suggest that beta cells can be restored to normal function through the removal of excess fat in the cells [64,65], we have not taken into account the recovery of pancreatic function. Inflammation is key in diabetes, and the interaction between inflammation and metabolism can be considered a key homeostatic mechanism [66]. The model considers the effects of both exercise and diet [67]. This model was analysed using sensitivity analysis to determine its properties (not shown). Sensitivity analyses are commonly used in inverse modelling to determine how significant each parameter is to the output variables of the system. A local analysis describes the sensitivity relative to point estimates of the parameters, whereas a global analysis examines the entire parameter distribution. In Eqs. 6-9, G is the glucose concentration, I the insulin concentration, β_f the functioning β-cells, I_R the insulin resistance, and Cyt the concentration of pro-inflammatory cytokines [68,69,70]. H_m and H_w are two step functions describing glucose intake during meals and glucose consumption during workouts, respectively; in these expressions, g_m is the glycemic load, s_m the carbohydrate serving, and t_m the meal starting time; z_w is the number of burned calories and t_w the workout starting time; I_{t_m,i(1+Δ_m)}(t) and I_{t_w,i(1+Δ_w)}(t) are indicator functions. A core version of this glucose-insulin-β-cell system is sketched in code below. Here we consider the progressive alteration of arterial stiffness and hypertension in diabetic patients. It is noteworthy that the low-grade chronic inflammation related to metabolically active abdominal obesity (abnormal secretion of adipokines and cytokines like TNF-alpha and interferon) and the impaired immune response to infection (abnormal cytokine profile and T-cell and macrophage activation) cause an increased risk of severe COVID-19. Diabetic patients are frailer than the normal population when considering COVID-19 multi-organ and multi-process disruption.
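As referenced above, a runnable sketch of the classic Topp glucose-insulin-β-cell core is given below; it omits the insulin-resistance, macrophage, and cytokine extensions of Eqs. 6-9, uses the parameter values commonly quoted for the original model, and adds a simple indicator-style meal input standing in for H_m, all of which should be treated as illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Commonly quoted parameters of the original Topp model (per-day units)
R0, EG0, SI = 864.0, 1.44, 0.72          # glucose production and clearance
sigma, alpha, k = 43.2, 20000.0, 432.0   # insulin secretion and clearance
d0, r1, r2 = 0.06, 8.4e-4, 2.4e-6        # beta-cell turnover

def H_m(t, t_m=0.3, dur=0.05, g_m=500.0):
    # Indicator-style meal input: extra glucose flux during [t_m, t_m + dur]
    return g_m if t_m <= t % 1.0 <= t_m + dur else 0.0

def topp(t, y):
    G, I, beta = y
    dG = R0 + H_m(t) - (EG0 + SI * I) * G
    dI = beta * sigma * G**2 / (alpha + G**2) - k * I
    dbeta = (-d0 + r1 * G - r2 * G**2) * beta
    return [dG, dI, dbeta]

sol = solve_ivp(topp, (0.0, 30.0), [100.0, 10.0, 300.0], method="LSODA",
                t_eval=np.linspace(0.0, 30.0, 3000))
print(sol.y[:, -1])  # G (mg/dl), I (uU/ml), beta-cell mass after 30 days
```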
Circulatory system model
Circulatory system models are used to describe blood flow, volume, and pressure dynamics. The present model derives from the open-loop circulatory model proposed in [12]. The heart model is composed of four sections (chambers) corresponding to the right atrium, right ventricle, left atrium, and left ventricle. Each chamber is modeled as a bellows pump comprising a one-way valve (pulmonary, tricuspid, mitral, and aortic) and a time-varying elastance (Eq. 28) controlling blood outflow [71,72]. Blood inflow is passive. The systemic circulation has been modeled with seven vascular segments: proximal aorta, distal aorta, arteries, arterioles, capillaries, veins, and the vena cava. Each vessel has been designed using a resistance element reflecting the impact on blood flow reduction and a compliance element indicating the tendency of arteries and veins to stretch in response to pressure. High-frequency effects caused by wave reflections at great arterial bifurcations (distal and proximal aorta) are modeled with inertance elements. Arterioles, veins and the vena cava have unique nonlinear PV relationships, as described in [73] (see Eqs. 50-52, 54, and 42). The pulmonary circulation is composed of five vascular segments: proximal and distal pulmonary artery, small arteries, capillaries, and veins. Wave reflections in the proximal and distal pulmonary arteries are modeled with inertance elements. The coronary circulation model consists of four segments: epicardial and intramyocardial arteries, coronary capillaries, and coronary veins. Following [12], the large and small artery and vein segments proposed in [74] have been condensed into intramyocardial arteries and coronary veins, respectively. Baroreceptors are special sensory neurons that are excited by stretch in the carotid sinus and aortic arch vessels. Their feedback is processed by the brain in order to maintain proper blood pressure. The baroreceptors' firing frequency to the brain has been modeled as a second-order response to the aortic pressure change [75,73]. The second-order differential equation has been rewritten into two first-order equations in order to make it compatible with common Python solvers (Eqs. 109 and 110); a generic sketch of this state-space rewriting is shown below. Circulatory system parameters and initial conditions are reported in Table 7.

Stiffness model
The complexity underlying multifactorial diseases requires the introduction of computational systems representing multi-organ and inter-process communication. To this aim, we propose a mathematical model describing the impact of comorbidities on the circulatory system. Several factors influencing blood pressure and arterial stiffness have been modeled, including diabetes, renal impairment, viral infections, lifestyle and ageing. Ageing affects the circulatory system in multiple ways. Baroreceptors' feedback and the pathways to the heart's pacemaker system decrease their efficiency over time. Heart muscle cells tend to degenerate, and the heart's walls get thicker, slowing the time the heart takes to fill with blood and increasing pressure on the vessels. Additionally, blood vessels show a decrease in performance, since arteries tend to narrow and become more rigid.
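The rewriting of the second-order baroreceptor response into two first-order equations (Eqs. 109-110) follows a standard state-space reduction; a generic sketch with illustrative damping and frequency values, not the paper's coefficients, is:

```python
import numpy as np
from scipy.integrate import solve_ivp

zeta, omega = 0.7, 2.0 * np.pi   # illustrative damping ratio and frequency

def aortic_pressure(t):
    # Toy pulsatile input standing in for the aortic pressure signal (mmHg)
    return 100.0 + 20.0 * np.sin(2.0 * np.pi * 1.2 * t)

def baro(t, y):
    # x'' + 2*zeta*omega*x' + omega**2*x = omega**2*u(t) as two 1st-order ODEs
    x, v = y
    dx = v
    dv = omega**2 * (aortic_pressure(t) - x) - 2.0 * zeta * omega * v
    return [dx, dv]

sol = solve_ivp(baro, (0.0, 10.0), [100.0, 0.0], method="LSODA",
                t_eval=np.linspace(0.0, 10.0, 2000))
print(sol.y[0, -5:])  # filtered second-order response tracking the input
```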
Glucose concentration affects the renin-angiotensin-aldosterone pathway, as it controls the concentration and activity of renin, ACE, and AT1R. AT1R activity is strongly related to vasoconstriction, hypertension, and inflammatory response. Hence, arterial stiffness gets even worse, increasing the risk of clogged arteries and strokes. Besides, SARS-CoV-2 strongly binds to ACE2, decreasing its availability and impacting the downstream RAS pathways regulating blood pressure. Lower levels of available ACE2 reduce the concentration of ANG-(1-7), the endogenous ligand for the G-protein-coupled receptor MAS, a receptor associated with cardiac, renal, and cerebral protective responses. Hence, the vasoprotection and hypotension feedbacks deteriorate, increasing the inflammatory response and the pressure on blood vessels.

The combined effect of comorbidities and ageing factors on arterial stiffness and inflammation may lead to critical circulatory conditions and fibrosis. High glucose concentrations strengthen the RAS hypertension feedbacks and reduce blood vessels' lumen, especially in capillaries, arterioles, and venules. By affecting blood pressure regulation pathways, SARS-CoV-2 infections may impair vasoprotection regulation by the RAS, endangering the whole circulatory system, with disruptive repercussions among the elderly. The combination of all such factors may lead to acute diseases such as thrombophlebitis, cardiomyopathy, myocardial infarction, pulmonary embolism, and heart failure, and eventually to the patient's death.

The diabetic model (Sec. 5.2) accounts for both hyperglycemic conditions and lifestyle habits. After lunch and dinner, the glucose concentration in blood vessels peaks, while it is scaled down by insulin or physical exercise. The RAS model (Sec. 5.1) has been used to simulate peptide and drug concentration dynamics taking into account glucose concentration, ACE inhibitor treatments, renal conditions, and viral infections binding to ACE2 (such as COVID-19). Abnormal ACE2 activity (k_ACE2 − k_ACE2,0) has been assumed to be proportional to SARS-CoV-2 infectivity (see Eq. 5). ACEi or ARB treatments could also increase ACE2 abundance and thus enhance viral entry [54]. In case of severe renal conditions, only a fraction of the drug diacid is expelled before the subsequent administration (see Eq. 1 and Fig. 4). The drug surplus left inside the body may reinforce inflammation. Overall, the inflammatory response has been modeled as a function of all such contributions (Eq. 12), where k_SARS represents SARS-CoV-2 affinity with ACE2, k_D the inflammation rate due to the ACEi surplus, k_G the inflammation rate due to the glucose surplus, and k_eff the anti-inflammatory response rate.

One of the main processes associated with arterial stiffness during ageing is DNA methylation, consisting of the addition of methyl groups to the DNA molecule, which may modify the activity of a DNA segment without changing its sequence. DNA methylation has been modeled as a linear function of the age A (Eq. 13) [31]. As a result, blood vessels' compliance parameters have been reduced by a factor accounting for the combined effect of inflammation (Eq. 12) and ageing (Eq. 13); in the resulting expression (Eq. 14), C_i is the compliance of the blood vessel i for a young healthy individual and C̃_i is the reduced compliance. A sketch of this structure in code follows below. The circulatory model (Sec. 5.3) has been used to simulate blood pressure dynamics in critical vessels, where blood pressure spikes may lead to acute diseases.
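Since the display equations of the stiffness model (Eqs. 12-14) are not reproduced in the text above, the following sketch only illustrates their described structure, under the explicit assumptions of linear inflammation and methylation terms and a multiplicative compliance reduction; all constants are invented:

```python
def inflammation(k_sars, ace2_shift, k_d, drug_surplus,
                 k_g, glucose_surplus, k_eff):
    # Assumed Eq. 12 structure: pro-inflammatory contributions minus the
    # anti-inflammatory response rate (all coefficients invented)
    return (k_sars * ace2_shift + k_d * drug_surplus
            + k_g * glucose_surplus - k_eff)

def methylation(age, m0=0.1, m1=0.01):
    # Eq. 13: linear function of age A (slope and intercept invented)
    return m0 + m1 * age

def reduced_compliance(c_young, infl, age, k_i=0.5, k_a=1.0):
    # Assumed Eq. 14 structure: compliance shrunk by inflammation and ageing
    return c_young / (1.0 + k_i * max(infl, 0.0) + k_a * methylation(age))

infl = inflammation(0.4, 0.5, 0.2, 0.3, 0.1, 0.6, 0.05)
print(f"reduced compliance: {reduced_compliance(1.6, infl, age=70):.3f}")
```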
Extending the model to COVID-19 treatments
One of the main issues related to COVID-19 is blood clotting. Studies have reported that 30% of COVID-19 patients showed signs of blood clots in their lungs. One of the recommendations is to give a low dose of heparin, which prevents clot formation, or tissue plasminogen activator (tPA), which helps to dissolve blood clots [26,27]. Besides, several observational studies and clinical trials have reported that vitamin D supplementation reduced the risk of influenza and inflammation by raising its blood concentration above 40-60 ng/mL (100-150 nmol/L) [76,77,78,79]. Hence, we extended our model by taking into account such preliminary COVID-19 treatments. In fact, both heparin and vitamin D have an indirect impact on blood pressure by making blood less dense, reducing clot formation, and lowering inflammation. We modeled the impact of such treatments by including additional terms on blood pressure inside the cardiovascular model, where [heparin] is the heparin dosage and [D] the vitamin D concentration.

6 Experiments
The models presented in Section 5 have been solved to analyze the effects of comorbidities like diabetes, renal impairment, and viral infections on the circulatory system. Table 2 reports the set of experimental conditions that have been analysed. Five computational patients have been created, corresponding to different physiological states. These scenarios have been further stratified by the age of the computational patient, given that arterial stiffness has been modeled as a function of the increased DNA methylation during ageing. Drug concentrations (Fig. 4), inflammation levels, and blood pressure dynamics in lung vessels (Fig. 5) in comorbid conditions have been compared to the dynamics obtained in healthy states or under ACE inhibitor treatments.

Table 2: computational patients' conditions used for the simulations (columns: label, age, description).

The diabetic and RAS models do not depend on the patient's age. Lifestyle habits have been set as three meals and one light workout session in the afternoon for all patients. The RAS model has been simulated for constant-glucose cases using the daily glucose peak predicted by the diabetic model right after the main meals. Glucose concentration ranged between the extremes of normal glucose at 6-7 mmol/L (corresponding to 108-125 mg/dL) and high glucose at 10-11 mmol/L (corresponding to 180-200 mg/dL), based on experimental studies [80,81,82]. The time window of the RAS simulations has been set to five days, corresponding to five daily ACEi administrations [53]. The simulation results have been used to compute arterial stiffness and to reduce the compliance parameters of blood vessels in the open-loop circulatory model. In the following simulations, the arterial blood pressure (ABP) signal used in [12] has been used instead of personalised clinical measurements. Fig. 4 shows the dynamics of the ACEi concentration and the glucose-insulin dynamics over the first five days of treatment. Due to renal impairment, the computational patient was not able to expel the drug dose before the next administration. The inflammatory response and the corresponding blood pressure dynamics in the lungs' vessels are shown in Fig. 5.
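The treatment terms themselves are likewise elided above; purely as an illustration, assuming a simple linear damping of pressure with heparin dosage and vitamin D concentration, the adjustment could be sketched as:

```python
def treated_pressure(p_mmHg, heparin_dose=0.0, vit_d_ngml=0.0,
                     k_hep=0.002, k_d=0.001):
    # Assumed linear damping of pressure with treatment levels (coefficients
    # invented); the scaling factor is clipped at zero for safety
    factor = 1.0 - k_hep * heparin_dose - k_d * vit_d_ngml
    return p_mmHg * max(factor, 0.0)

print(treated_pressure(35.0))                                      # untreated
print(treated_pressure(35.0, heparin_dose=20.0, vit_d_ngml=50.0))  # treated
```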
Comorbid conditions tend to increase blood pressure variability in all scenarios. However, as arterial stiffness grows with the age of the computational patient, the variability increases as well, possibly leading to irreversible deterioration of the blood vessels' walls. ACEi treatments may help in reducing inflammation levels, but may not be sufficient to recover healthy blood pressures. One of the most serious effects illustrated by the simulations is an increased mean value of blood pressure and an increased blood pressure variability, especially in small pulmonary vessels and capillaries (see Fig. 5), raising the risk of clogged arteries, fibrosis, and strokes. The experimental results also show how the fluctuations of the variables over time may change and present different shapes, especially in small vessels. In computational patients with comorbidities, the blood pressure dynamics in pulmonary capillaries exhibit higher mean values and variability, and beat frequencies can be observed as well. [Figure 5, caption fragment: the corresponding lungs' pressure phase space (top-right) and dynamics over time (bottom).]

COVID-19
The COVID-19 mortality statistics underline the relevance of a deeper analysis of multi-factorial diseases in fighting the pandemic [9]. Underlying morbidities such as cardiovascular disease, kidney disease, T2D, or tumours have been observed in patients with severe infection, especially among the elderly [10]. By affecting blood pressure regulation pathways, SARS-CoV-2 infections may impair vasoprotection regulation, endangering the whole circulatory system with severe repercussions. By taking advantage of our composable framework, the experimental results offer an overview of how the combination of multiple diseases with SARS-CoV-2 may lead to acute conditions. Fig. 5 clearly shows that the computational patient with comorbidities and SARS-CoV-2 has a higher risk of pulmonary vessel deterioration.

The combined effect of heparin and vitamin D can help reduce the mean blood pressure by making the blood more fluid. However, these treatments do not affect the blood pressure variability determined by the vessels' rigidity. Hence, the risk of developing cardiovascular diseases related to blood pressure variability may still be high despite treatment. Notably, the results of our experiments agree with hypotheses suggesting that healthy blood vessels protect children from serious effects of COVID-19. It is noteworthy that autopsy-based findings have demonstrated a variety of damages caused by COVID-19 infection, including extensive coagulopathy, acquired thrombophilia, and endothelial cell death [83]. Here we consider only the effect on blood pressure.

Discussion
The modularity and composability of the different available mechanistic and phenomenological models present the challenge of defining a mathematical framework connecting different system descriptions, their dynamics, and their constraints. Consider, for instance, putting together a model based on ODEs and a model expressed as a discrete-space, discrete-time Markov chain. This has to be done in the light of behavioral properties that can be sets of trajectories or measures on the trajectory space (typically those learned from data with statistical methods). Cell-level models (using ODEs, delay differential equations (DDEs), or agents) need to be systematically scaled up to the tissue level; for multiple-timescale problems, the challenge is to obtain a model order reduction, i.e.
to abandon high-dimensional bioengineering systems in favour of simpler effective mathematical models. The tissue level could be modeled using PDEs or a cellular Potts model, which may provide a better representation for detailed and heterogeneous cell-cell, cell-tissue, and cell-matrix interaction cases. Integrative models could be built from single-scale models, describing the biological process at different characteristic space-time scales, and scale-bridging models, which define how the component models are coupled to each other. At the tissue level, physical quantities usually vary across space and time in a continuous fashion, and can thus be represented using systems of PDEs [84].

Emerging properties of variances from model composability
Many physiological variables have a circadian trend, and sometimes a seasonal one. For example, blood pressure decreases during sleep and shows a sharp rise at the time of awakening. This early-morning variation is often concurrent with an increase in acute myocardial infarction, sudden cardiac death, and stroke [85]. Common clinical parameters such as diastolic and systolic blood pressure, heart patterns, and blood cell counts are usually evaluated as averages. Little importance is given to higher moments, such as variances during the day or over a longer interval of time. The lack of continuous measures for most of these quantities has generated a medical practice that disregards unobserved or partially observed data. Some authors have identified a disease- and age-related loss of complex variability in multiple physiologic processes, including cardiovascular control, pulsatile hormone release, and ECG data [86,87]. Our composable model reveals interesting patterns, in particular fluctuations in blood pressure during acute COVID infection when the diabetic model is coupled with the RAS and cardiovascular models. We believe that the use of extensive models could help in understanding concurrent patterns of alteration in different districts.

How such a computational patient model could be deployed and further developed
The computational patient will benefit from machine learning and from the analysis of large amounts of data, such as, for instance, that obtained from UK Biobank, as modeling will have a truly catalytic effect in synergy with machine learning. The computational patient model requires adequate artificial intelligence support to generate a diagnosis and validate its correctness. A decision-making process could be based on the development of personalised statistics of changes in health, end-stage disease, signs and symptoms (CHESS, [88]). This would ideally develop through monitoring of the individualised response to therapeutic interventions, in addition to changes in the risk profile. One aspect is a dedicated CHESS scale based on all the variables and observables considered by the model(s) [88,89]. It will act as a personalised patient simulator and will draw temporal trajectories of disease and comorbidity progression. The trajectories will change with drug regimes, medical interventions, and lifestyle changes.
Any data used will be anonymised or de-identified using ad hoc software (see, for instance, [90]), and we will follow the FAIR principles (Findable, Accessible, Interoperable, Reusable) and the GDPR regulation. One meaningful approach to extracting useful indications is to use a clinical decision support system that incorporates medical experience, research results, and personal judgement [91]. We consider the computational patient models to be in a research-only state and therefore we do not make further integrations.

The foreseen future is that AI will assist our health and disease conditions more effectively than today: a medical check-up will be supported by well-tuned artificial intelligence and patient-based modeling. At the clinical level, computer-aided therapies and treatments will develop into intervention strategies undertaken under acute disease conditions, or due to external factors (infections), to counteract cascade effects. In non-acute states, predictive inference will propose prevention plans for comorbidity management, particularly in the presence of multiple therapies. This approach is therefore meaningful in the perspective of a computational medicine characterised by a close coupling between bioinformatics, clinical measures, modeling predictions, and perhaps remote patient monitoring.

Conclusion
Computational scientists' and bioengineers' vision is a framework of methods and technologies that, once established, will make it possible to investigate the human body as a whole. It calls for a total transformation in the way healthcare currently works and is delivered to patients. Underpinning this transformation is substantial technological innovation, with a requirement for deeper trans-disciplinary research, improved IT infrastructure, better communication, large volumes of high-quality data, and machine learning and modeling tools. Machine learning could be automatised (i.e. autoML), and models should be modular so that they can be organised to answer specific and personalised medical questions. Simulations are increasingly regarded as valuable tools in a number of aspects of medical practice, including lifestyle changes, surgical planning, and medical interventions. The idea is that cross-modality data are obtained for the patient and machine learning techniques estimate the parameters to be input into the modeling framework. We believe that a deeper understanding and practice of modeling in medicine will produce better investigation of complex biological processes, new ideas, and better feedback into medicine. Finally, computational models are cheap, and this will make it possible to predict drug interactions and to make better use of generic drugs. In this sense, the personalised model will become a product associated with the drug.

Disclaimer
The computational tool has not been validated and should not be used for clinical purposes. To enable code reuse, the Python code for the mathematical models, including parameter values and documentation, is freely available under the Apache 2.0 Public License from a GitHub repository [21]. Unless required by applicable law or agreed to in writing, software is distributed on an "as is" basis, without warranties or conditions of any kind, either express or implied.
Figure 1: In modular systems, several modules can be used independently to model physiological processes, disregarding their mutual relationships (A). The selection and combination of different components in a hierarchical fashion by means of composable criteria allows a better exploration of the parameter space (B). The actual interpenetration of multiple systems can be achieved by modeling the dynamics of their mutual relationships, providing further information on the underlying phenomena (C). Such a deeper exploration of the parameter space enhances the evaluation of initial conditions and trajectories in the phase space (right). Figure design inspired by [17].

Figure 2: Modular paradigms are used to break organism complexity into simpler components, which can be analysed and modeled independently (left). Integrating different modules requires an overwhelming amount of work in merging multiple systems together one after the other, and in tuning and validating the final model (top right). Composable criteria favour a dynamic and adaptable selection of different components, allowing researchers to focus primarily on modeling the relationships between modules (bottom right).

Figure 3: Schematic representation of the circulatory system composed of heart, pulmonary circulation, systemic circulation, and baroreceptors (left). External factors affecting the renin-angiotensin system (ACEi and SARS-CoV-2) are shown in violet (right).

The biochemical reaction network used to model the renin-angiotensin system is shown in Fig. 3. External factors include hypertension treatments and viral infections binding to ACE2, such as SARS-CoV-2. Hypertension drugs usually target ACE, inhibiting ANG II production. ANG II promotes vasoconstriction, hypertension, inflammation, and fibrosis by activating AT1R. Therefore, reducing ANG II production with ACE inhibitors increases the vasodilation and vasoprotection effects stimulated by AT2R and ANG-(1-7). On the other hand, SARS-CoV-2 infections reduce the ANG-(1-7) and ANG-(1-9) production rates by binding to ACE2 in order to gain entry into the host cell. Hence, the vasoprotection effects promoted by ANG-(1-7) decline, possibly leading to hypertension and an inflammatory response.

Figure 4: Drug concentration for healthy individuals and patients with renal impairments (left). Glucose-insulin phase space for healthy and diabetic individuals (right).
Cellular Cytotoxicity and Oxidative Potential of Recurrent Molds of the Genus Aspergillus Series Versicolores

Molds are ubiquitous biological pollutants in bioaerosols. Among these molds, the genus Aspergillus is found in the majority of indoor air samples, and includes several species with pathogenic and toxigenic properties. Aspergillus species in the series Versicolores remain little known despite their recurrence in bioaerosols. In order to investigate their toxicity, we studied 22 isolates of clinical and environmental origin, corresponding to seven different species of the series Versicolores. Spore suspensions and ethyl acetate extracts prepared from the fungal isolates were subjected to oxidative potential measurement using the dithiothreitol (DTT) test and to cell survival measurement. The DTT tests showed that all species of the series Versicolores had an oxidative potential, either through their spores (especially for Aspergillus jensenii) or through the extracts (especially from Aspergillus amoenus). Measurements of cell survival of the A549 and HaCaT cell lines showed that only the spore suspension containing 10⁵ spores/mL of Aspergillus jensenii caused a significant decrease in survival after 72 h of exposure. The same tests performed with mixtures of 10⁵ spores/mL showed a potentiation of the cytotoxic effect, with a significant decrease in cell survival for mixtures containing spores of two species (on A549 cells, p < 0.05, and HaCaT cells, p < 0.001) or three different species (on HaCaT cells, p < 0.05). Cell survival assays after 72 h of exposure to the fungal extracts showed that the Aspergillus puulaauensis extract was the most cytotoxic (IC50 < 25 µg/mL), while Aspergillus fructus caused no significant decrease in cell survival.

Introduction
Air pollution is a complex and dynamic phenomenon involving the exposure of living organisms to harmful airborne substances. The health effects of indoor and outdoor air pollutants are a major public health issue due to the duration of exposure, even to relatively low concentrations of air pollutants [1][2][3]. According to the World Health Organization, three million deaths per year worldwide are due to stroke (36%), ischemic heart disease (36%), lung cancer (14%), chronic obstructive pulmonary disease (8%), or acute lower respiratory disease (6%) caused by exposure to airborne particles [4]. Airborne physical, chemical, and biological particles form a heterogeneous mixture whose composition varies continuously in space and time [2,3,5]. One of the mechanisms explaining part of the health effects of airborne particles is their ability to synthesize or catalyze the formation of reactive oxygen species (ROS) when they reach the lung cells. These molecules then cause oxidative stress and inflammation of the airways. This ability to synthesize and/or catalyze the formation of ROS is determined by the particles' oxidative potential (OP), which can be easily measured by biochemical and cell-free assays. Although the physical and chemical …

Molecular Identification
Isolates belonging to the genus Aspergillus series Versicolores are phenotypically similar, which makes identification by culture and microscopy difficult, even from selective media. We therefore used molecular biology to identify the 21 isolates [15,17]. DNA extraction was performed as described in our previous study [20], using a modified protocol of the Nucleospin™ Plant II kit (Macherey-Nagel, Duren, Germany). Fungal material was transferred to a 2 mL microcentrifuge tube with glass beads.
The microcentrifuge tube was incubated twice, for 15 min at 80 °C and then for 15 min at −80 °C. It was then placed in a swing-mill with 400 µL of lysis buffer PL1 for 15 min at 20 Hz, and incubated with 10 µL of RNAse and 20 µL of proteinase K (Promega, Madison, WI, USA) at 10 mg/mL at 65 °C for 15 min. Then, 400 µL of chloroform (MP Biomedicals-Thermo Fisher Scientific, Waltham, MA, USA) was added. The clear supernatant was recovered and extracted as described by the supplier. DNA was purified using the NucleoSpin gDNA Clean-up kit (Macherey-Nagel, Duren, Germany) following the manufacturer's instructions. Quantification and quality assessment of the fungal DNA were performed using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) [39]. All Aspergillus isolates belonging to the series Versicolores (n = 22) were grown on MEA+ and stored on slant agar at 4 °C, and in a cryoprotective agent composed of sterile water and 10% glycerol (Carlo Erba, Val-de-Reuil, France) at −80 °C, before any further testing. We also included a reference strain (CBS 245.65) to validate our identification technique by amplification and sequencing of the BenA gene.

Preparation of Calibrated Spore Suspensions
Spore suspensions were made from isolates grown for 10 days on MEA+. The surface of the colonies was covered with collection liquid. The resulting crude suspensions were filtered through sterile cotton to remove hyphae and fungal debris, and then through sterile polytetrafluoroethylene (PTFE) filters of 5 µm porosity (Sartorius-Thermo Fisher Scientific, Waltham, MA, USA) to remove clusters of spores stuck together. The spores were then counted on a KOVA® Glasstic slide (KOVA-Thermo Fisher Scientific, Waltham, MA, USA) before performing decimal dilutions ranging from 10⁵ to 10 spores/mL, which are the concentrations most commonly encountered in indoor environments [13]. The final concentrations and the quality of the spore suspensions were confirmed by flow cytometry on a Cytoflex S hemocytometer (Beckman Coulter, Brea, CA, USA).

Preparation of Fungal Extracts
Fungal extracts were made from isolates grown for 21 days on MEA+. For each isolate, 12 agar plugs were taken and introduced into 5 mL glass tubes. To each of these tubes, 2 mL of ethyl acetate (Sigma-Aldrich-Merck, Darmstadt, Germany) acidified with 1% acetic acid (Sigma-Aldrich-Merck, Darmstadt, Germany) was added. After vortexing each tube for 30 s, the tubes were centrifuged at 1500 rpm for 15 min. The supernatant (1.5 mL) was collected and filtered through a 0.22 µm pore size syringe tip filter (Thermo Fisher Scientific, Waltham, MA, USA) to remove spores. The extracts were then evaporated in a SpeedVac Plus concentrator (Savant-Thermo Fisher Scientific, Waltham, MA, USA) at room temperature. The dry extracts were stored in the dark at room temperature until use, and were then solubilized in a mixture of culture medium and dimethyl sulfoxide (DMSO) (Pan Biotech-Dominique Dutscher, Tourgéville, France) (5%). Each extract was then diluted to obtain four concentrations: 250 µg/mL, 125 µg/mL, 50 µg/mL, and 25 µg/mL.

Oxidative Potential Measurement
Measurement of the oxidative potential of the spore suspensions and fungal extracts was performed with the dithiothreitol (DTT) assay. This test is a commonly used cell-free method for assessing the oxidative potential of redox-active chemicals in air.
The consumption of DTT (in excess) upon contact with the tested suspensions or solutions is monitored, and the depletion of DTT is proportional to the concentration of ROS in the reaction mixture. Existing protocols [8,40] were adapted to suit our tests. The reaction mixture consisted of 550 µL of Dulbecco's Phosphate Buffer Solution (DPBS) (Gibco-Thermo Fisher Scientific, Waltham, MA, USA) incubated in 2 mL microtubes at 37 °C, to which 50 µL of 0.5 mM DTT (Thermo Fisher Scientific, Waltham, MA, USA) and 50 µL of spore suspension or fungal extract were added. Then, 100 µL of 5,5'-dithiobis-(2-nitrobenzoic acid) (DTNB) (Thermo Fisher Scientific, Waltham, MA, USA) was added after 0, 15, and 30 min of contact, staining the reaction medium a yellowish color with an optical density (OD) proportional to the amount of DTT remaining. For each incubation time, 150 µL of the mixture was transferred in triplicate to a 96-well microplate. The OD was then measured at 412 nm by spectrophotometry (BioTek-Agilent Technologies, Santa Clara, CA, USA). For each assay, a blank (DPBS) was run in triplicate. For the measurement of the oxidative potential of the extracts, a solvent control (DMSO) was added.

Cell Culture
In order to study the activity of the spores and fungal extracts on the skin and in the airways, we chose two cell lines: the A549 (adenocarcinomic human alveolar basal epithelial cells) (ATCC® CRM-CCL-185™, USA) cell line and the HaCaT (aneuploid immortal keratinocytes) (AddexBio T0020001, USA) cell line. Cells were grown in 25 mL flasks at 37 °C in an environment containing 5% CO₂, with 5 mL of culture medium consisting of Dulbecco's Modified Eagle Medium (DMEM) (Gibco-Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 10% Fetal Bovine Serum (FBS) (Pan Biotech-Dominique Dutscher, Tourgéville, France), 1% penicillin-streptomycin 10,000 U/mL (Gibco-Thermo Fisher Scientific, Waltham, MA, USA), and 0.01% gentamicin 10 mg/mL (Sigma-Aldrich-Merck, Darmstadt, Germany). Cells were resuspended by adding 1 mL of trypsin, 0.05% (Gibco-Thermo Fisher Scientific, Waltham, MA, USA) in the case of A549 cells or 0.25% (Pan Biotech GmbH, Aidenbach, Germany) in the case of HaCaT cells. Cells were counted using a KOVA™ Glasstic™ slide (KOVA-Thermo Fisher Scientific, Waltham, MA, USA) and then suspended in DMEM at a concentration of 75,000 cells/mL. In each well of a 96-well microplate, 200 µL of the cell suspension was introduced. The microplate was placed in the incubator for 24 h. The culture medium was then replaced with medium containing the spore suspensions or fungal extracts before being incubated again for 24 to 72 h. For each line and for each condition, six replicates were performed. After exposure, the cells were fixed with 50 µL of 50% cold (4 °C) trichloroacetic acid (Sigma-Aldrich-Merck, Darmstadt, Germany) for one hour. The plate was rinsed five times with running water and then air-dried. Then, 50 µL of 0.4% sulforhodamine B (SRB) (Sigma-Aldrich-Merck, Darmstadt, Germany) in 1% acetic acid was deposited in each well for 30 min at room temperature to stain the proteins. The plate was then washed four times with 1% acetic acid and air-dried. The dye was solubilized with 100 µL of 10 mM tris(hydroxymethyl)aminomethane (TRIS) base buffer, pH 10.5 (Sigma-Aldrich-Merck, Darmstadt, Germany), with 10 min of agitation at room temperature. The OD was then measured at 570 nm using a microplate reader, subtracting the reference OD read at 655 nm to eliminate interference.
The result obtained for each condition was compared with a culture control (DMEM + 10% FBS) consisting of eight replicates for each experimental condition, and the result was expressed as the percentage of cell survival relative to the control. For the extracts, a culture control with 5% DMSO was run in eight replicates in order to estimate the toxicity of the fungal extracts independently of the solvent.

Statistical Analyses
Descriptive statistics were calculated to provide information on the oxidative potential. The rates of DTT consumption by the spore suspensions and fungal extracts were subjected to the Mann-Whitney test. The percentages of cell survival were subjected to the Kruskal-Wallis test. Only results at p < 0.05 were considered statistically significant. Statistical analyses were performed using the SAS system v.9.4 (SAS Institute Inc., Cary, NC, USA) and XLSTAT v.2021.2.1.1120 (Addinsoft, Paris, France).

Isolates Identification
The identification of the isolates by amplification and sequencing of the BenA gene is presented in Table 1. All sequences are presented in Supplementary Table S1. The reference strain CBS 245.65 was correctly identified as belonging to the species Aspergillus amoenus. The other 21 isolates could be identified as belonging to seven different species of the series Versicolores: Aspergillus amoenus (n = 1), A. creber (n = 7), A. fructus (n = 1), A. jensenii (n = 3), A. protuberus (n = 1), A. puulaauensis (n = 1), and A. sydowii (n = 7). Aspergillus creber and A. sydowii were the most represented species (n = 7 isolates for each of the two species). Aspergillus creber was only found in bioaerosols, while Aspergillus sydowii was only found in clinical samples.

Oxidative Potential of Spores
The DTT consumption rate obtained for the blank (natural oxidation in the reaction mixture in open air) was 0.040 nmol/min. We were able to measure an oxidative potential for each of the 22 isolates of the seven species tested, with a DTT consumption rate that decreased in proportion to the dilution of the spore suspensions. However, we observed a very heterogeneous oxidative potential, even between isolates of the same species. Indeed, the lowest concentration causing a significant increase in DTT oxidation compared with the blank for Aspergillus amoenus was 10⁵ and 10 spores/mL for strain CBS 245.65 and isolate HAB06, respectively. The same observation was made for the Aspergillus creber isolates, recovered only from the environment (10⁵ spores/mL for 08FM2_A49, 10³ spores/mL for HAB07 and HOSP150313_5_98, 10² spores/mL for HAB02 and HAB32, and 10 spores/mL for HAB64 and HOSP050413_5_135), and for Aspergillus sydowii, recovered only from clinical samples (10⁴ spores/mL for 8051266672_C3 and 0062415698_C7, 10³ spores/mL for 9071870945_C5 and 0062445522_C8, 10² spores/mL for 4040348777_C2, and 10 spores/mL for 0062445523_C9 and 0112723999_C11). For Aspergillus jensenii, a species for which we identified both clinical and environmental isolates, the finding was the same: the lowest concentration significantly increasing DTT consumption compared with the blank was 10³ spores/mL for 9041799386_C4 and 10 spores/mL for 4070377575_C6 and HAB01. Of the seven species used for the DTT test, Aspergillus protuberus was the species with the lowest oxidative potential, while Aspergillus jensenii was the species with the highest oxidative potential.
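As an illustration of the two readouts defined in the Methods above (the DTT consumption rate and the SRB-based percentage of cell survival), the following sketch converts raw OD readings into both quantities. The linear OD-to-nmol calibration factor and all numeric values are assumptions for illustration, not the study's calibration or data.

```python
import numpy as np

def dtt_consumption_rate(od_412, times_min, nmol_per_od=50.0):
    # OD at 412 nm is proportional to the DTT remaining (via the DTNB/TNB
    # reaction); nmol_per_od is an assumed linear calibration factor.
    dtt_nmol = nmol_per_od * np.asarray(od_412, dtype=float)
    slope = np.polyfit(times_min, dtt_nmol, 1)[0]  # nmol per minute
    return -slope  # positive value = DTT consumed over time

def percent_survival(od_treated, od_control):
    # SRB staining: total protein (OD 570 minus OD 655) is proportional to
    # the number of surviving cells; normalize to the untreated control.
    return 100.0 * np.mean(od_treated) / np.mean(od_control)

# Illustrative numbers only: DTT read at 0, 15 and 30 min of contact.
print(dtt_consumption_rate([0.50, 0.47, 0.44], [0, 15, 30]))  # ~0.1 nmol/min
print(percent_survival([0.92, 0.95, 0.93], [1.00, 1.01, 0.99]))
```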
Oxidative Potential of Fungal Extracts
Kinetics of DTT consumption by the fungal extracts, grouped by species, are presented in Figure 1. Because the majority of the fungal extracts were yellowish, the final concentration chosen for the DTT assays in the reaction mixture was 25 µg/mL, a concentration for which the OD measurement at 412 nm was not different from the blank (DPBS) in the absence of DTT and DTNB. In microtubes containing only DPBS (blank), we measured an average of 23.87 nmol of DTT after 30 min of incubation (a DTT consumption rate of 0.038 nmol/min). Our solvent control (DMSO) showed no oxidative potential (no significant difference from the blank (DPBS)). All extracts showed an oxidative potential at a concentration of 25 µg/mL, with a significantly higher DTT consumption (p < 0.05) than that measured for the blank and the solvent control. Among the fungal extracts (n = 22) of the seven species tested, the extracts (n = 2) of Aspergillus amoenus showed a significantly higher oxidative potential than the other extracts. However, similarly to the spore suspensions, a significant intraspecific variability in DTT consumption was observed.

In order to evaluate the impact of the presence of spores of different species, we prepared spore suspensions containing spores of the two, three, four, or five most recurrent species, at a final concentration of 10⁵ spores/mL, to which our cells were exposed for 72 h, considering the results already obtained. The mixtures were made from the four species found in the environment: Aspergillus amoenus, A. creber, A. jensenii, and A. protuberus, plus Aspergillus sydowii, as it was the most frequent species in our clinical samples. As shown in Figure 2, all combinations containing spores belonging to two different species showed a significant decrease in cell survival (p < 0.05) for the A549 cell line, with the lowest value for the Aspergillus amoenus/A. creber mixture (92.04% ± 0.93) and the highest value for the Aspergillus protuberus/A. sydowii mixture (96.21% ± 1.66). HaCaT cells showed a higher sensitivity than A549 cells at an equivalent exposure time (Figure 2B). Indeed, all mixtures containing spores of two or three different species showed a significant decrease in cell survival (p < 0.001 and p < 0.05, respectively). The lowest percentage of cell survival was observed for the Aspergillus creber/A. jensenii mixture (95.43% ± 0.57), while the highest percentage of cell survival, still significantly lower than the negative control, was observed for the Aspergillus amoenus/A. creber/A. sydowii mixture (97.12% ± 1.22).
In the absence of 50% or lower cell survival for any of these experimental conditions, no IC50 (the concentration inhibiting 50% of cell survival) could be calculated.

Cell Survival after Exposure to Fungal Extracts
The percentages of cell survival for the A549 and HaCaT lines after 72 h of exposure to the fungal extracts are shown in Figure 3. DMEM with 5% DMSO, used to solubilize the fungal dry extracts, did not cause a significant decrease in cell survival in the A549 and HaCaT lines (98.65% and 98.68%, respectively). Cell survival of the A549 and HaCaT lines after exposure to the fungal extracts showed a dose effect for all species. An overall comparison of the cell survival percentages obtained for all concentrations and extracts, regardless of species, revealed a greater sensitivity of HaCaT cells to the fungal extracts than A549 cells (p < 0.0001). For both cell lines, among the fungal extracts tested, Aspergillus puulaauensis was the most cytotoxic (IC50 < 25.0 µg/mL for both cell lines), followed by Aspergillus creber (IC50 = 116.9 µg/mL for the A549 cell line and < 50.0 µg/mL for the HaCaT cell line) and Aspergillus jensenii (IC50 = 148.9 µg/mL and 209.3 µg/mL for the A549 and HaCaT cell lines, respectively). In decreasing order of cytotoxicity, these were followed by Aspergillus amoenus and Aspergillus protuberus (for which the comparison of their cytotoxic activity showed no significant difference), Aspergillus sydowii, and finally Aspergillus fructus, for which no significant decrease in cell survival was observed.
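Since several of the IC50 values above are reported only as bounds, a simple way to estimate an IC50 from a dose-response series is log-linear interpolation between the two concentrations bracketing 50% survival. The sketch below uses made-up survival numbers, not the study's raw data, and the function name is ours.

```python
import numpy as np

def ic50_interpolate(concs_ug_ml, survival_pct):
    # Assumes survival decreases with concentration; interpolate the
    # concentration at 50% survival on a log10 dose scale.
    c = np.log10(concs_ug_ml)
    s = np.asarray(survival_pct, dtype=float)
    for i in range(len(s) - 1):
        if s[i] >= 50.0 >= s[i + 1]:
            frac = (s[i] - 50.0) / (s[i] - s[i + 1])
            return 10 ** (c[i] + frac * (c[i + 1] - c[i]))
    return None  # 50% survival never crossed: report the IC50 as a bound

# Illustrative dose-response at the four tested concentrations (µg/mL).
print(ic50_interpolate([25, 50, 125, 250], [85, 62, 41, 20]))  # ~84 µg/mL
```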
Isolates Identification
Amplification and sequencing of the BenA gene allowed the identification of the 21 collected isolates as belonging to seven different species of the series Versicolores: Aspergillus amoenus, A. creber, A. fructus, A. jensenii, A. protuberus, A. puulaauensis, and A. sydowii. Aspergillus creber was the most common species in the bioaerosols (7 out of 11 isolates), which is consistent with data on fungal diversity in bioaerosols from the USA [17], Croatia [16], Italy [41], and our previous study in France [20]. Aspergillus amoenus, A. jensenii, and A. protuberus have also been found in bioaerosols in other studies [16,20]. However, the specific diversity here was quite low. Indeed, Aspergillus cvjetkovicii, A. fructus, A. griseoaurantiacus, A. pepii, A. tabacinus, A. sydowii, A. tennesseensis, and A. venenatus have also been isolated from bioaerosols [16,17,26], which was not the case in our study. Regarding the species found in clinical samples, our results were only partially consistent with the available clinical data [18,42]: Aspergillus sydowii seemed to be the most frequent species found in clinical samples, but we did not find A. creber or A. amoenus, which are the two most frequently cited species after A. sydowii.

Oxidative Potential
To our knowledge, this study is the first to provide data on the oxidative potential of species belonging to the series Versicolores. Each isolate showed a significant oxidative potential, but at different spore concentrations. The species whose spores showed the highest oxidative potential was Aspergillus jensenii, considered one of the species most frequently found in bioaerosols after A. creber. DTT consumption rates were between 0.129 and 0.085 nmol/min for suspensions with 10⁵ and 10² spores/mL, respectively, which is lower than the rates known for Aspergillus fumigatus (between 0.422 and 0.185 nmol/min for spore suspensions with 10⁵ and 10² spores/mL, respectively), but closer to those measured for Aspergillus brasiliensis (between 0.240 and 0.007 nmol/min for spore suspensions with 10⁵ and 10² spores/mL, respectively) [8]. Due to the lack of studies on this subject, we have no comparative information regarding the intraspecific variability that we observed with our isolates. At a concentration of 25 µg/mL, all extracts showed a higher oxidative potential than the blank. We measured an oxidative potential with an average DTT consumption rate of 0.116 nmol/min for all species of the series Versicolores combined. This value was not significantly different from that obtained for the spore suspensions at 10⁵ spores/mL (p = 0.16). The species whose extracts showed the highest DTT consumption was Aspergillus amoenus, which has previously been found in air but is more common in food [17,43]. Since this is the first study of the oxidative potential of fungal extracts, we had no point of comparison to determine whether these values are high compared with extracts of Aspergilli from the series Versicolores made with other solvents, or with extracts made with the same solvent but from other species.
Thiols (such as DTT) can interact with spores and are able to modify their structure, giving rise to giant cells with a shape close to that of chlamydospores [44]. This interaction could account for some of the DTT consumption; however, the incubation time required to obtain these giant cell forms is three days, which is much longer than the 30 min incubation time in our study. As mentioned in the Introduction and in previous studies, the species of the series Versicolores show a significant phenotypic polymorphism. This polymorphism is found at the microscopic level, with spores of variable colors that can be smooth, rough, or verrucous, and at the macroscopic level, with a variety of textures and the synthesis of pigments and exudates, even between isolates belonging to the same species [17,20,45]. Pigments and exudates contain secondary metabolites with various activities: antifungal and antibacterial agents (asperversin, averufins, aspergillomarasmine A) [46,47] or cytotoxic agents (versixanthones) [27]. One of the molecules found in the pigments of Aspergillus of the series Versicolores is melanin, which, although historically known for its antioxidant properties, in fact has a dual functional activity, through eumelanin (photoprotective and antioxidant) and pheomelanin (phototoxic and prooxidant) [48][49][50]. A variable production of pheomelanin and/or eumelanin, or of other unidentified compounds, from one isolate to another could explain the variability in the oxidative potential measured for extracts from isolates belonging to the same species of the series Versicolores.

Cell Survival
For the first time, we exposed cells of the A549 and HaCaT lines directly to spore suspensions and total acidified ethyl acetate extracts of series Versicolores species. The cell survival measured after 24, 48, and 72 h of exposure did not differ significantly from that measured for the negative control, except for the A549 cell line exposed to spore suspensions at 10⁵ spores/mL of Aspergillus jensenii, considered the species of the series Versicolores most frequently found in bioaerosols after A. creber [16,20]. The tests performed with spore suspension mixtures aimed at mimicking the interactions between the different species of the series Versicolores most frequently and simultaneously found in bioaerosols, and their impact on cell survival. After 72 h of exposure, all mixtures containing 10⁵ spores/mL of two species in equivalent amounts showed a significant decrease in cell survival of the A549 and HaCaT lines. We could make the same observation for mixtures containing 10⁵ spores/mL of three different species in equivalent amounts for the HaCaT cell line. This indicates that the simultaneous presence of spores of at least two different species generates interactions leading to the stimulation of secondary metabolite biosynthetic pathways [51]. These metabolites, absent when spores of only one species were present in suspension, caused a decrease in cell survival. In other words, spore toxicity was exacerbated by the concomitant presence of spores of other species. An increase in the species richness of airborne Aspergilli of the series Versicolores is therefore a risk factor that increases their toxicity. The lack of a significant decrease in cell survival with mixtures of spores from more than three different species was probably related to the relative decrease in the number of spores of each species in the mixture.
Similarly, cytotoxicity was higher for mixtures containing spores from two species than for those containing spores from three different species. The cell survival tests performed on the total extracts with acidified ethyl acetate showed that the HaCaT cell line was more sensitive than the A549 cell line, and that the extract made from Aspergillus puulaauensis was the most toxic. However, this mold is not the most common species of the series Versicolores in the air [20,22,41]. On the other hand, the two other species whose extracts showed the highest decrease in cell survival (Aspergillus creber and A. jensenii) were the most frequent in the air [16,20]. These species are indeed known to synthesize sterigmatocystin, 5-methoxysterigmatocystin, and versicolorins A and B, which are cytotoxic mycotoxins that can be extracted by acidified ethyl acetate [52][53][54].

A Link between Oxidative Potential and Cell Survival?
All spore suspensions showed a significant decrease in the amount of DTT present in the reaction medium, especially the 10⁵ spores/mL Aspergillus jensenii suspensions. Only the A549 cell line, after 72 h of exposure to Aspergillus jensenii spore suspensions at a concentration of 10⁵ spores/mL, showed a significant decrease in cell survival. This may suggest a possible correlation between these two tests. However, the measurement of the oxidative potential revealed that the extracts of Aspergillus amoenus consumed DTT most rapidly, while at an equivalent concentration (25 µg/mL) cell survival was not altered. Conversely, the oxidative potential of A. puulaauensis was not significantly different from that measured for the other species, but its extract showed the highest cytotoxicity. An overall comparison of the data obtained for oxidative potential and cell line survival for the extracts at 25 µg/mL clearly showed that there was no relationship between the results of these two tests. This means that the oxidation generated by the spores and extracts was not sufficient to decrease cell survival, and that the observed toxicity was due to mechanisms other than oxidation alone, such as a direct activation of caspase-3 by sterigmatocystin [55].

Are Clinical Isolates More Dangerous Than Environmental Isolates?
In our study, we observed an important intraspecific heterogeneity in terms of toxicity. The statistical tests performed did not show a higher oxidative potential or cytotoxicity for the clinical isolates than for the environmental isolates. However, it is interesting to note that the most represented species among the clinical isolates was Aspergillus sydowii, which grows more easily at 37 °C than the other species of the series Versicolores [20].

Limitations
In this first study, the measurement of the oxidative potential was only performed with the DTT test, which is a cell-free assay. Similarly, the cytotoxicity tests were conducted on two cell lines (one for the skin and one for the airways) using SRB staining. Moreover, other reference strains (besides Aspergillus amoenus CBS 245.65) should be tested in future studies.

Conclusions
In conclusion, the DTT tests allowed us to highlight that Aspergillus jensenii spores and A. amoenus extracts had the highest oxidative potentials. Cell survival assays showed a decrease in cell survival only for Aspergillus jensenii spores at a concentration of 10⁵ spores/mL after 72 h of exposure of A549 cells, and for all fungal extracts, especially that of A. puulaauensis. We were able to demonstrate a greater sensitivity of HaCaT cells than of A549 cells to the fungal extracts.
The cell survival tests performed with the spore mixtures allowed us to observe a potentiation of toxicity when spores of different species were concomitantly present in suspension. The comparison of the two methods showed that the measurement of an oxidative potential alone is not predictive of cellular toxicity. These first data on the toxicity of spores and extracts of Aspergillus species of the series Versicolores allow us to affirm that a great intraspecific variability exists in terms of biological activity, and that these species do not all present the same hazard to human health, hence the need to identify them within bioaerosols. Identification and quantification of new metabolites must be undertaken to explain this variability in biological activity. Studies on the toxicity of these metabolites are also necessary to contribute to the health risk assessment of molds belonging to Aspergillus series Versicolores.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/microorganisms10020228/s1, Table S1: BenA gene sequences of the isolates used in this study.
The joint effect of insomnia symptoms and lifestyle factors on risk of self-reported fibromyalgia in women: longitudinal data from the HUNT Study

Objectives: To investigate the association between insomnia symptoms and risk of self-reported fibromyalgia in women, and to explore whether leisure time physical activity and body mass index (BMI) modify this association.
Design: Prospective cohort study.
Setting: We used longitudinal data from the Norwegian Nord-Trøndelag Health Study collected in 1995-1997 (baseline) and 2006-2008 (follow-up).
Participants: A total of 14 172 women who reported being free from fibromyalgia at baseline.
Primary outcome measures: We estimated adjusted risk ratios (RRs) with 95% CIs for self-reported fibromyalgia at follow-up associated with baseline insomnia symptoms, leisure time physical activity and BMI.
Results: Overall, 466 incident cases of fibromyalgia were reported during the follow-up period of approximately 11 years, corresponding to a crude absolute risk (AR) of 3.3%. Compared with women without insomnia symptoms (crude AR=2.8%), women who reported one, two or three symptoms had RRs of fibromyalgia of 1.39 (95% CI: 1.08 to 1.80), 1.86 (95% CI: 1.33 to 2.59) and 2.66 (95% CI: 1.75 to 4.06), respectively. Compared with highly physically active women without insomnia symptoms (crude AR=2.7%), women with one or more insomnia symptoms had a RR of fibromyalgia of 1.90 (95% CI: 1.30 to 2.79) if they reported low physical activity and a RR of 1.55 (95% CI: 1.12 to 2.13) if they reported high physical activity. We found no synergistic effect between insomnia symptoms and BMI on risk of fibromyalgia; however, overweight and obese women with one or more insomnia symptoms had RRs of 2.35 (95% CI: 1.73 to 3.21) and 2.18 (95% CI: 1.42 to 3.35) compared with the reference group of normal weight women without insomnia symptoms (crude AR=2.3%).
Conclusions: Insomnia symptoms are strongly and positively associated with risk of fibromyalgia in adult women. Leisure time physical activity may compensate for some of the adverse effect of insomnia symptoms on risk of fibromyalgia.

Introduction
Fibromyalgia is a musculoskeletal pain syndrome with chronic widespread pain as the main symptom. 1 2 The aetiology and pathophysiology remain undetermined, but a disturbance in the central regulation of pain seems to be an important contributor to the development of fibromyalgia. 3 Depending on the diagnostic criteria used, the prevalence of fibromyalgia is between 2% and 7% in the general adult population, but up to fourfold higher among women than men. 4 Almost all women with fibromyalgia report some sleep problems, 5 and several studies applying polysomnographic recordings have documented signs of disordered sleep in fibromyalgia. 6 This is not surprising considering that chronic widespread pain has been identified as a strong and independent risk factor for insomnia. 7 Conversely, epidemiological studies indicate that insomnia symptoms increase the risk of fibromyalgia and widespread pain in an otherwise healthy population. [8][9][10] For instance, in a longitudinal study, we have shown that sleep problems were strongly and positively associated with risk of fibromyalgia. 8 However, this study had several methodological limitations; for example, sleep problems were assessed by a single question and baseline information about fibromyalgia was not available.

Strengths and limitations of this study
► The strengths of the current study include the prospective design, the large study sample of women and the possibility to adjust for several potential confounding factors.
► Fibromyalgia was assessed by self-reports at both baseline and follow-up.
► The questions on sleep at baseline referred to symptoms in the last month, and the question on impaired daytime function was related only to work ability.
► The questions on sleep did not capture whether the insomnia symptoms occurred despite adequate opportunity and conditions for sleep, or whether they were explained by another sleep disorder.
► We have no information about changes in insomnia symptoms, leisure time physical activity and body mass index during the follow-up.

More recently, two longitudinal studies have shown that insomnia symptoms are associated with an increased risk of fibromyalgia and widespread pain in a working population 9 and among the elderly. 10 Although these latter studies indicate an independent association between insomnia symptoms and risk of fibromyalgia, it is not clear whether the number of insomnia symptoms is dose-dependently associated with risk of fibromyalgia, and whether lifestyle factors can modify this association. Some evidence indicates that leisure time physical activity and maintenance of normal body weight can, to some extent, reduce the adverse effect of sleep problems on risk of chronic pain in the low back and neck/shoulders. 11 Furthermore, excessive body weight may represent an independent risk factor for fibromyalgia, 12 whereas regular physical activity seems to reduce the risk of fibromyalgia. 12 Thus, it is conceivable that leisure time physical activity and obesity influence the association between insomnia symptoms and risk of fibromyalgia. Improved knowledge about the interplay between insomnia symptoms and lifestyle factors would be valuable for improved prevention of fibromyalgia. The aim of the current study was to investigate the prospective association between insomnia symptoms and risk of self-reported fibromyalgia in women, and to explore whether leisure time physical activity and body mass index (BMI) modify this association.

Materials and Methods

Study population
This prospective population-based study utilises longitudinal data on women participating in the Nord-Trøndelag Health Study (the HUNT Study). All inhabitants of Nord-Trøndelag County in Norway aged 20 years or older were invited to participate in three consecutive surveys: first in 1984-1986 (HUNT1), then in 1995-1997 (HUNT2) and last in 2006-2008 (HUNT3). Information on lifestyle and health-related factors was collected by questionnaires and a clinical examination at all three surveys. The invitation files were created from periodically updated census data from Statistics Norway. In the second and third surveys, the invitation letter was sent by mail together with a three-page questionnaire. This questionnaire was returned when the participants attended the clinical examination. At the clinical examination, the participants were given a second questionnaire that they were asked to complete at home and return in a pre-stamped envelope. More detailed information about the HUNT Study can be found at http://www.ntnu.edu/hunt.

Information on fibromyalgia and insomnia symptoms was not collected at HUNT1, and the current study is therefore based on data from HUNT2 and HUNT3. At the HUNT2 baseline survey, a total of 47 312 women were invited and 75.5% (n=35 280) participated. At the HUNT3 follow-up survey, 47 293 women were invited and 58.7% (n=27 758) participated.
In the current study, we used data from the 20 415 women who participated in both HUNT2 and HUNT3. Of these, we excluded 1159 women who reported fibromyalgia at baseline (HUNT2). Furthermore, we excluded women with incomplete baseline information on insomnia symptoms (n=3541) and leisure time physical activity (n=761). Moreover, 161 women defined as underweight (BMI <18.5 kg/m²) were excluded due to possible pre-clinical disease that could influence insomnia, lifestyle factors or fibromyalgia. Of the remaining 14 793 women, 14 172 answered the question about fibromyalgia at the follow-up survey (HUNT3).

Fibromyalgia
At baseline, women reported physician-diagnosed fibromyalgia according to the following question: 'Has a doctor ever said that you have fibromyalgia (fibrositis/chronic pain syndrome)?', with response options 'Yes' or 'No'. At follow-up, incident fibromyalgia was identified by the question 'Have you had, or do you have fibromyalgia?', with response options 'Yes' or 'No'.

Insomnia symptoms
At baseline, the classification of insomnia symptoms was based on the following three questions: (1) 'During the last month, have you had problems falling asleep?', (2) 'During the last month, did you ever wake up too early, not being able to fall asleep again?' and (3) 'During the last year, have you been troubled by insomnia to such a degree that it influenced your work ability?' Questions 1 and 2 had the response options 'Never', 'Occasionally', 'Often' and 'Almost every night', whereas question 3 had the response options 'No' and 'Yes'. Participants were classified as having insomnia symptoms if they answered 'Often' or 'Almost every night' on at least one of questions 1-2, or 'Yes' on question 3.

Body mass index
Standardised measurements of body height (to the nearest centimetre) and weight (to the nearest half kilogram) obtained at the clinical examination at baseline were used to calculate BMI (kg/m²). Participants were then classified according to the cut-offs suggested by the WHO 13 : normal weight (BMI: 18.5-24.9 kg/m²), overweight (BMI: 25.0-29.9 kg/m²) or obese (BMI ≥30.0 kg/m²). Women defined as underweight (BMI <18.5 kg/m²) were excluded from the analyses to reduce the possibility of reverse causation due to undetected disease.

Leisure time physical activity
At baseline, leisure time physical activity was assessed by the question: 'How much of your leisure time have you been physically active during the last year? (Think of a weekly average for the year. Your commute to work counts as leisure time.)' The participants were then asked to specify the number of hours per week of light (no sweating or heavy breathing) and/or hard (sweating and heavy breathing) physical activity, with the response options 'None', '<1 hour', '1-2 hours' and '≥3 hours' for both light and hard activities. Based on this information, we constructed a new variable with three categories combining the information on light and hard activities: low activity (<1 hour of light and no hard activity), moderate activity (≥1 hour of light and no hard activity) and high activity (any hard activity).
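A sketch of how these derived baseline variables could be constructed from the questionnaire responses is shown below. The function and category names are ours, but the classification rules follow the definitions above.

```python
def has_insomnia_symptoms(falling_asleep, waking_early, impaired_work):
    # Questions 1-2 take 'Never', 'Occasionally', 'Often' or
    # 'Almost every night'; question 3 takes 'No'/'Yes'. A symptom is
    # present if 1-2 were answered 'Often'/'Almost every night' or
    # question 3 was answered 'Yes'.
    frequent = {"Often", "Almost every night"}
    return (falling_asleep in frequent or waking_early in frequent
            or impaired_work == "Yes")

def bmi_category(bmi):
    # WHO cut-offs; underweight (<18.5 kg/m^2) was excluded from analyses.
    if bmi < 18.5:
        return "excluded"
    if bmi < 25.0:
        return "normal weight"
    if bmi < 30.0:
        return "overweight"
    return "obese"

def activity_category(light_hours, hard_hours):
    # Combined light/hard leisure time activity, per the definitions above.
    if hard_hours > 0:
        return "high"
    return "moderate" if light_hours >= 1 else "low"
```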
Other variables
Potential confounders were assessed at baseline. Age was determined from the Norwegian national identity number and categorised into '20-29 years', '30-39 years', '40-49 years', '50-59 years', '60-69 years' and '≥70 years'. Education was assessed by the question 'What is your highest level of education?' and divided into four categories: 'Primary school', 'High school', 'College ≤4 years' and 'College >4 years'. The Hospital Anxiety and Depression Scale (HADS) was used to assess symptoms of anxiety and depression. HADS is a validated and well-established self-rating questionnaire including seven questions on anxiety and seven questions on depression. 14 As recommended, the cut-off score was set to ≥8 on both the anxiety and depression subscales, and the scores were dichotomised as presence or absence of anxiety and/or depression. 14 15 Smoking was assessed by questions about past and present smoking and then divided into three categories: 'Never smoked', 'Former smoker' and 'Current smoker'. Chronic musculoskeletal pain was assessed by the question: 'During the last year, have you had pain and/or stiffness in your muscles and limbs that has lasted for at least three consecutive months?' Response options were 'Yes' and 'No'. If answering 'Yes', the participants were asked to indicate the affected body area(s): neck, shoulders, elbows, wrists/hands, upper back, low back, hips, knees, ankles/feet (ie, a maximum of nine chronic pain sites). We then constructed a new variable using the number of chronic musculoskeletal pain sites to categorise participants into four strata: no chronic pain, 1-2 chronic pain sites, 3-4 chronic pain sites and ≥5 chronic pain sites. Use of hypnotics and/or sedatives was assessed by the question 'How often have you taken sedatives or sleep medication in the last month?' with the response options 'Daily', 'Weekly, but not every day', 'Not as often as every week' and 'Never'.

Statistical analysis
A modified Poisson regression was used to estimate risk ratios (RRs) of fibromyalgia associated with insomnia symptoms and the number of insomnia symptoms. The precision of the RRs was assessed by 95% CIs using robust variance estimation. Women with insomnia symptoms were compared with the reference group of women with no insomnia symptoms. Crude estimates of absolute risk (AR) were calculated for the total sample, as well as for each of the reference categories, to help determine the clinical importance of the associations. All associations were adjusted for potential confounding by age (20-29, 30-39, 40-49, 50-59, 60-69 and ≥70 years), BMI (18.5-24.9, 25.0-29.9 and ≥30 kg/m²), leisure time physical activity (high activity, moderate activity and low activity), education (primary school, high school, college ≤4 years, college >4 years and unknown) and smoking (never, former smoker, current smoker and unknown). Furthermore, since anxiety and/or depression are associated with both fibromyalgia and insomnia symptoms, we included HADS (no anxiety or depression, anxiety and/or depression, and unknown) in the multi-adjusted model. We estimated the joint effect of insomnia symptoms and leisure time physical activity on risk of fibromyalgia, using highly physically active women without insomnia symptoms as the reference group. Furthermore, in the analysis of the joint effect of insomnia symptoms and BMI on risk of fibromyalgia, normal weight women without insomnia symptoms formed the reference group. These analyses were adjusted for all the potential confounders described above (excluding the variable under study). Potential effect modification between the variables was assessed as departure from additive effects by calculating the relative excess risk due to interaction (RERI). We calculated RERI estimates with 95% CIs by the following equation:

RERI = RR(low activity, insomnia symptoms) − RR(low activity, no insomnia symptoms) − RR(high activity, insomnia symptoms) + 1, 16

that is, RERI >0 indicates a synergistic effect beyond an additive effect.
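A minimal sketch of the RERI point estimate is shown below, reproducing the joint-effect numbers reported in the Results; the function name is ours, and CI estimation via robust variance is omitted.

```python
def reri(rr_both, rr_exposure_only, rr_covariate_only):
    # Relative excess risk due to interaction on the additive scale:
    # RERI > 0 suggests a synergistic effect beyond additivity.
    return rr_both - rr_exposure_only - rr_covariate_only + 1.0

# RRs reported for the insomnia x physical activity analysis:
# low activity + insomnia = 1.90, low activity alone = 0.95,
# insomnia alone (high activity) = 1.55.
print(reri(1.90, 0.95, 1.55))  # 0.40, matching the reported RERI
```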
Supplementary analyses were conducted to assess the robustness of the results. First, we included the use of hypnotics and/or sedatives as a covariate in the multi-adjusted models. Likewise, since some persons with multisite pain may have undiagnosed fibromyalgia, we included the number of chronic pain sites (no chronic pain, 1-2 chronic pain sites, 3-4 chronic pain sites and ≥5 chronic pain sites) as a covariate in the multi-adjusted models. Finally, in the analyses of joint effects, we attempted to classify the participants into more contrasting categories of physical activity, that is, we excluded 1686 women who reported being physically active <1 hour per week from the group of low physical activity.

Patient and public involvement
No patients were involved in the development and design of this prospective study.

Results
Table 1 presents the baseline characteristics of the 14 172 participants stratified by the presence of insomnia symptoms. The proportion of women who reported one or more insomnia symptoms at baseline (HUNT2) was 20% (2397 women). Overall, 466 incident cases of fibromyalgia were reported during the follow-up period of approximately 11 years (crude AR=3.3%). Table 2 shows the association between insomnia symptoms and risk of fibromyalgia. The risk of fibromyalgia increased with the number of insomnia symptoms (table 2). When all symptoms of insomnia were merged into one group, women who reported one or more insomnia symptoms had a RR of 1.64 (95% CI: 1.34 to 2.02) compared with women with no insomnia symptoms (table 2). Table 3 shows the joint association between insomnia symptoms and leisure time physical activity on risk of fibromyalgia. Compared with the reference group of highly physically active women with no insomnia symptoms (AR=2.7%), women with one or more insomnia symptoms had RRs of 1.90 (95% CI: 1.30 to 2.79) if they reported low activity and 1.55 (95% CI: 1.12 to 2.13) if they reported being highly physically active (table 3). Furthermore, women without insomnia symptoms who reported low physical activity had a RR of 0.95 (95% CI: 0.69 to 1.29). The RERI estimate between insomnia symptoms and leisure time physical activity on risk of fibromyalgia was 0.40 (95% CI: −0.37 to 1.19). Table 4 shows the joint association between insomnia symptoms and BMI on risk of fibromyalgia. There was no evidence of interaction, that is, the RERI estimate between insomnia symptoms and BMI was −0.01 (95% CI: −0.99 to 0.97).

Supplementary analyses
The supplementary analysis including hypnotics and/or sedatives as a covariate in the multi-adjusted models had a negligible effect on the estimated associations.
The association between the number of insomnia symptoms and risk of fibromyalgia became somewhat attenuated when adjusting for the number of chronic pain sites (no chronic pain, 1-2 chronic pain sites, 3-4 chronic pain sites and ≥5 chronic pain sites); that is, women who reported one, two or three insomnia symptoms had RRs of 1.04 (95% CI: 0.81 to 1.35), 1.30 (95% CI: 0.94 to 1.80) and 1.67 (95% CI: 1.10 to 2.53), respectively. Comparing inactive (no light and no hard activity) women with highly active women strengthened the association; that is, inactive women with insomnia symptoms had a RR of 2.04 (95% CI: 1.10 to 3.80) compared with highly active women without insomnia symptoms.

[Table 3: The joint effect of insomnia symptoms and leisure time physical activity on risk of fibromyalgia at 11-year follow-up. Insomnia symptoms defined as answering 'Often'/'Almost every night' on the questions about problems falling asleep or waking up too early, or 'Yes' on the question about impaired work ability due to sleep problems; high activity = any hard activity per week, moderate activity = ≥1-hour light and no hard activity per week, low activity = <1-hour light activity per week. Estimates adjusted for age (20-29, 30-39, 40-49, 50-59, 60-69 and ≥70 years), body mass index (18.5-24.9, 25.0-29.9 and ≥30 kg/m²), education (primary school, high school, college ≤4 years, college >4 years and unknown), HADS (no depression and no anxiety, depression and/or anxiety, and unknown) and smoking (never, former, current smoker and unknown). Table 4: The joint effect of insomnia symptoms and BMI, adjusted for the same covariates with leisure time physical activity (high, moderate and low activity) in place of BMI. RR, risk ratio.]

Discussion
The results from this prospective study indicate a strong and independent association between insomnia symptoms and risk of fibromyalgia. The risk increased with the number of insomnia symptoms and was more than twofold higher among women who reported three or more symptoms compared with women who reported no symptoms. A high level of leisure time physical activity may to some extent attenuate the adverse effect of insomnia symptoms on risk of fibromyalgia. We found no synergistic effect of insomnia symptoms and BMI, but overweight and obese women with insomnia symptoms had a more than twofold increased risk of fibromyalgia compared with normal weight women with no insomnia symptoms. Prospective studies have shown that sleep problems increase the risk of localised 11 and generalised chronic pain. 17 18 However, the different definitions of both sleep problems and pain limit the possibility of directly comparing our results with previous findings. In a large longitudinal study based on a previous wave of the HUNT Study, we showed that sleep problems were strongly and positively associated with the risk of fibromyalgia in women at 10-year to 11-year follow-up. 8 However, that study had several methodological limitations; for example, sleep problems were assessed by a single question, and baseline information about fibromyalgia and chronic pain was not available.
More recently, two studies based on the same data as the current study showed that a proxy of the 4th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) insomnia diagnosis was associated with increased risk of fibromyalgia 9 and chronic widespread pain. 17 The current study extends these findings by showing a dose-dependent association between the number of insomnia symptoms and risk of fibromyalgia. Taken together, these findings suggest that reducing both mild and severe sleep problems may be an important target to reduce the incidence of fibromyalgia. The underlying mechanism for the association between insomnia symptoms and susceptibility to developing fibromyalgia is unclear, but it may be related to the possible relation between sleep problems and central sensitisation of the nervous system. 19 For instance, sleep restriction and poor sleep quality may impair endogenous nociceptive-inhibitory function and increase pain, 20 as well as induce generalised hyperalgesia in otherwise healthy people. 21 Furthermore, there may exist a link between poor sleep and low-grade inflammation, 22 which is supported by experimental studies showing that pro-inflammatory cytokines can be involved in the development of hyperalgesia. [23][24][25] Our results show that moderate and high leisure time physical activity may modify the adverse effect of insomnia symptoms on risk of fibromyalgia. Although the precision in our analysis of additive interaction was low, the estimate suggests a synergistic effect of insomnia symptoms and leisure time physical activity on risk of fibromyalgia. This result is partly in line with a previous study showing that leisure time physical activity to some extent compensates for the risk of mild sleep problems on chronic pain in the low back and neck/shoulders. 11 However, in that study sleep problems were assessed by a single question, and the definition of leisure time physical activity differed from the current study. Furthermore, it is possible that pain in the low back and neck/shoulder represents a condition that differs in nature from fibromyalgia, and that insomnia symptoms and physical activity influence these pain conditions differently. Interestingly, in the current study, the beneficial effect of moderate and high physical activity was present only among women with symptoms of insomnia. A possible explanation for this finding is that the anti-inflammatory effect of physical activity 26 27 reduces inflammation induced by disturbed sleep and short sleep duration. 28 29 This notion is supported by studies showing that a single bout of low-intensity physical exercise can induce hypoalgesia and improve fibromyalgia symptoms, 30 indicating that physical exercise reduces pain perception 31 and increases pain tolerance. 32 Although the exact underlying mechanism remains undetermined, our findings suggest that regular recreational physical activity may reduce the risk of fibromyalgia in persons with symptoms of insomnia. Although excessive body weight has been linked to increased risk of fibromyalgia, 12 we found no evidence that BMI modifies the effect of insomnia symptoms on risk of fibromyalgia. However, a high BMI was associated with an increased risk of fibromyalgia within all strata of insomnia symptoms. Strengths of the current study include the prospective design, the large study sample and the possibility of adjusting for several potential confounding factors.
Furthermore, the large sample size allowed us to analyse the joint effect of insomnia symptoms and lifestyle factors. Some limitations should also be considered when interpreting the results. First, no information about the time of the fibromyalgia diagnosis was collected, and fibromyalgia was assessed by self-reports at both baseline and follow-up. These questions have not been validated, and some of the women may not have met the classification criteria for a diagnosis of fibromyalgia. 33 It should also be noted that the data collection was carried out before the classification criteria for fibromyalgia were revised in 2010. 34 Furthermore, we cannot exclude the possibility that women reporting multisite pain have undiagnosed fibromyalgia. Second, our classification of insomnia is somewhat different from the International Classification of Sleep Disorders (3rd edition) criteria for an insomnia diagnosis. 35 For instance, the questions on sleep in HUNT2 only refer to symptoms during the last month, and the question on impaired daytime function in HUNT2 is only related to work ability. Furthermore, the questions on sleep in HUNT2 do not capture whether the insomnia symptoms occur despite adequate opportunity and conditions for sleep, or whether they are explained by another sleep disorder. Furthermore, the assessment of leisure time physical activity was based on self-report. It should also be noted that insomnia symptoms, leisure time physical activity and BMI were collected only at the baseline survey, and we have no data on changes in these variables during the follow-up period. Finally, the study population consisted of a heterogeneous group of women, and future studies should investigate whether there exist subgroups in which insomnia symptoms and lifestyle factors have a different impact on the risk of fibromyalgia. In conclusion, insomnia symptoms are associated with an increased risk of fibromyalgia in adult women. Notably, the risk increases proportionally with the number of insomnia symptoms. Leisure time physical activity may modify some of the adverse effect of insomnia symptoms on risk of fibromyalgia. These findings indicate that preventing sleep problems and promoting a healthy active lifestyle are important to reduce the incidence of fibromyalgia.
Using LinkedIn Endorsements to Reinforce an Ontology and Machine Learning-Based Recommender System to Improve Professional Skills

Nowadays, social networks have become highly relevant in the professional field, in terms of the possibility of sharing profiles, skills and jobs. LinkedIn has become the social network par excellence, owing to its content of professional and training information; it also features endorsements, which are validations of the skills of users that can be taken into account in the recruitment process, as well as in a recommender system. In order to determine how endorsements influence Lifelong Learning course recommendations for the development and enhancement of professional skills, a new version of our Lifelong Learning course recommendation system is proposed. The recommender system is based on an ontology, which allows modelling the data of knowledge areas and job performance sectors to represent the professional skills of users obtained from social networks. Machine learning techniques are applied to group entities in the ontology and to make predictions for new data. The recommender system has a semantic core, content-based filtering, and heuristics to perform the formative suggestion. In order to validate the data model and test the recommender system, information was obtained from web-based lifelong learning courses, and information was collected from LinkedIn professional profiles, incorporating the skills endorsements into the user profile. All possible settings of the system were tested. The best result was obtained in the setting based on the density-based spatial clustering of applications with noise (DBSCAN) algorithm: an accuracy of 94% and a recall of 80% were obtained.

Introduction and Related Studies
Employability requirements have changed to address the new realities and challenges faced by organizations. According to [1], cited by [2], in several fields some discrepancies emerge between employee competences and the labor market. In view of this, professionals must improve and/or develop their competences and skills according to trends in the labor market in order to be competitive. Nowadays, organizations can use different Social Network Sites (SNSs) to research candidates [3], as well as to gather information on labor trends so as to adjust academic plans, analyze student profiles and extract their skills, and identify the most desired skills for the purpose of adjusting curricula [4]. In the professional field, LinkedIn has played a very important role in terms of the dissemination of jobs, where, according to the work presented by [5], one of its missions is to match jobs with suitable professionals. To support this process, Recommender Systems (RSs) have emerged, which collect the information linking users to items and use it to make relevant and meaningful suggestions. According to [6], RSs base the prediction of user interests on their explicit or implicit preferences. Table 1 summarizes related work on the use of ML in recommender systems.
Table 1. Use of ML in recommender systems.

Ref.  Year  Objective                                                      Filtering Technique           ML Technique          Metrics
[20]  2015  Recommend LLL courses according to common professional         Collaborative + ML            Supervised ML         Accuracy/Recall
            interests
[21]  2018  Recommend courses according to academic profile and jobs       Collaborative + Content + ML  Supervised ML         Accuracy/Recall
            on offer
[22]  2019  Recommend exercises based on information regarding academic    ML                            Unsupervised ML       NDCG **, MAE * and F1 ***
            performance
[23]  2019  Recommend courses to maximize learning based on past           ML                            ML by way of support  Accuracy/F1
            performance
[24]  2019  Recommend consultancy according to academic-industrial         ML                            Supervised ML         NDCG
            research interests
[25]  2020  Recommend learning resources according to context, taking      ML                            Unsupervised ML       Accuracy/Recall/F1
            colleagues' learning into account

MAE *: Mean absolute error; NDCG **: Normalized Discounted Cumulative Gain; F1 ***: Harmonic mean between precision and recall.

In some cases, ML is used in RSs to make inferences about how past actions correspond to future outcomes. [23] applies this technique to predict future performance based on students' academic records in order to re-schedule courses for target course preparation and exam preparation. [24] proposes an RS for different users in academia, using ML to extract information about research areas of interest to professors from publications and curricula, and then combining them with their background to assign courses and research work for supervisory purposes. Additionally, semantic web techniques have been combined with ML techniques, allowing knowledge to be exploited from available information, updated, and new relationships between data to be inferred [14]. The combination of an ontology-based recommender system with ML techniques has been used as an approach to improve the accuracy of recommendations, in addition to addressing information overload and solving cold-start problems [26]. Based on the above, [2] propose a hybrid RS to suggest lifelong learning (LLL) courses to improve the professional competences of users whose profiles are built from their LinkedIn profile data. The RS core involves semantic filtering that uses an ontology to model employment sectors and areas of knowledge to represent professional competences. The ontology is updated using events based on professional record profile data extracted from LinkedIn, using ML to cluster entities in order to make predictions about new data. As a line of future work, they recommend considering the use of LinkedIn endorsements to enrich the user profile and thus improve recommendations. Recently, there have been several research works oriented to the analysis of data provided by LinkedIn, which, in addition to data related to academic and professional training, also highlight endorsements, one of the key features of LinkedIn, which asks the viewer to endorse a skill for the candidate as proof of their skill level [27]. In a review of relevant literature, we found the research shown in Table 2, related to the use of LinkedIn endorsements. This is highlighted in [4], in which an RS is proposed that classifies profiles according to skills and peer endorsements, identifying the most desired skills that should be covered by curricula and areas of learning, and suggesting possible corrections in learning programs. [32] indicates that, in addition to this information, endorsements of the skills of LinkedIn users can be analyzed to complement the profile.
The system proposed by [32] recommends relevant job opportunities for users seeking employment in the area of information technology, based on the curriculum entered into the system. In addition, it makes it easy for recruiters to search for the best talent based on job requirements by analyzing not only the resume but also LinkedIn endorsements, which could improve the user profile. Based on the above, an improvement to the RS of [2] is proposed, incorporating LinkedIn endorsements as an additional system input. These are taken into account in order to validate the skills determined in the RS profiles and thereby improve recommendations. In the literature on the subject, we found some RSs aimed at developing professional skills through a wide range of training. In this respect, the objective of the RSs proposed by [33][34][35] is to identify the professional competencies that users need to develop in their current job, so that these can be taken into account in their training plan. For [4], along with RSs, the analysis of SNS data has become a common practice for the collection of information about users, since, according to [36], SNSs such as Facebook and LinkedIn have become integrated into everyday life. Ref. [37] indicates that social media provide innovative pedagogical frameworks for teaching and learning that allow students to develop digital skills deemed useful for a successful professional career. He also points out that semantic social networks offer a series of advantages related to their strategic use, such as the creation of a network of contacts, which can have an impact on professional development by connecting with professionals and following labor market trends. As the use of SNSs has increased decade by decade, so has interest in using SNSs in recruitment and hiring processes [3]. In this sense, for [28], LinkedIn is the most influential web tool in terms of professional use, unlike other SNSs such as Facebook or Twitter, which focus on social relationships. On LinkedIn, users have greater visibility in professional terms, as they can share their training, skills and work experience [4]. For [37], LinkedIn focuses on professional networking and career development, and is designed to help people make connections, share experiences and resumes, and find jobs. It is also a tool that can be used to find relevant, quality content. In addition to allowing users to provide written recommendations that appear on a user's profile [3], LinkedIn has also introduced endorsements, in which users are asked to endorse the skills of other users [27]. The data contained in SNSs are very diverse [3], and RSs also obtain information from multiple sources; to handle this heterogeneity, they have adopted semantic knowledge representation as one of the theories to help deal with it [26]. Based on this review, and lending continuity to the RS presented by [2], it is proposed that LinkedIn skills endorsements be incorporated in order to validate the skills obtained in the user profiling stage, so as to optimize the performance and prediction of the continuing education course recommendation system. The system uses LinkedIn user records in the area of software development. This article is structured as follows: Section 2 details the RS proposal; Section 3 describes the evaluation of the proposal; and Section 4 provides conclusions and recommendations for future research.
Materials
In the field of learning and in personalized recommendation services, different user dimensions, such as personal data, interests, knowledge levels and context, among others, must be taken into account in order to respond appropriately to user needs [38]. Based on this, the LinkedIn records used by [2] were employed to create user profiles, which contained personal, academic and professional data of users, as these in turn contain relevant information to help determine the area of knowledge, skills and labor sector that are taken into account when recommending courses. Additionally, LinkedIn endorsements were incorporated. For the purposes of evaluating the proposal, the system was confined to the specific domain of software development. For user and course profiles, professional skills and job sectors were coded using the taxonomies defined in [2]. The ontology used was stored in Neo4j; the selection of this tool in [2] was based on the fact that it is a NoSQL graph database manager in which the relationship is the most important database element, representing the interconnection between nodes. This makes it ideal for representing knowledge graphs, allowing nodes and relationships to be managed with efficient operations on them (a minimal query sketch is given below). As for the different programs that make up the system, the ones developed by [2] were used as a basis and programmed using Python 3. Cypher, the Neo4j query language, was used to query the graph.

Methods
In order to recommend continuing education courses to improve and develop professional skills, a hybrid RS based on an ontology and ML is proposed to determine the skills to be updated and/or developed according to labor market trends. User profiles are created from LinkedIn records, taking endorsements into account, in order to validate and determine the level of skills. The recommendation process has three filtering stages, at whose core is semantic filtering; this is combined with content-based filtering for the initial prediction of courses, and with a heuristic stage to obtain the final range of courses to be recommended. The following is established as a basis for designing the RS: the system input data are obtained from LinkedIn (for user profiles) and from the web (for courses), without requiring any additional information upload by the users; courses are recommended using the skills they develop as a criterion; skills in the user and course profiles need to be defined under the same terms in the recommendation process; and a modular design allows new algorithms to be incorporated into the system and multiple configuration options to be offered, including parameterizations for the different algorithms, by enabling or disabling filtering stages in order to validate and evaluate the proposals. The RS architecture proposed is shown in Figure 1. The system consists of two main phases: an off-line phase (I), in which the user profiles (A) and (B) are built, definitions are loaded and data models are built from the ontology (C) and ML (D) update processes; and an on-line phase (II), in which the course recommendation process is performed, for which purpose the profile of the user to be given the recommendation (1) must be built, and which is performed in three filtering stages (2, 3 and 4).
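Before describing the two phases in detail, here is the minimal query sketch referred to above. The driver calls are those of the official neo4j Python package, while the connection settings, node labels, properties and Cypher pattern are hypothetical, since the actual graph schema is only outlined at a high level:

from neo4j import GraphDatabase

# Hypothetical schema: Skill and Position nodes linked by a USES
# relationship carrying the frequency, the average level of
# specialization (LS) and the average degree of updating (DU).
driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

QUERY = """
MATCH (p:Position)-[u:USES]->(s:Skill)
WHERE s.code STARTS WITH $area_code
RETURN p.code AS position, s.code AS skill,
       u.frequency AS freq, u.avg_ls AS ls, u.avg_du AS du
ORDER BY u.frequency DESC
"""

with driver.session() as session:
    # area_code is a hypothetical hierarchical taxonomy prefix
    for record in session.run(QUERY, area_code="01.02"):
        print(record["position"], record["skill"], record["freq"])

driver.close()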
Off-Line Phase

Profile Creation
A semantic profiling based on the ontology is performed in order to generate the different profiles used, making use of taxonomies. This process consists of the user profiling process (A) and the course profiling process (B). After analyzing the records of LinkedIn users and the LLL courses extracted from the Web, two taxonomies were defined to hierarchically code the areas of knowledge and job performance sectors needed for the recommendation process. The levels defined for the areas of knowledge are: area; sub-area; specialty; sub-specialization; and knowledge. For job performance sectors, the hierarchy levels are: sector; field; type; domain; and position. The following are coded with the taxonomy of areas of knowledge: user skills with their level of specialization (LS) and their degree of updating (DU), the skills to be developed on LLL courses with an LS, and also the skills required to qualify for a course. The job performance sector taxonomy is used to hierarchically code the different positions held by users. The user profile information is given by:
- Demographic information.
- Job performance sectors: a set of hierarchical codes defined by the taxonomy for job performance sectors, their description and dates.
- Skill sets, where each skill is coded with three arguments: a hierarchical code defined by the taxonomy of areas of knowledge, a level of specialization and a degree of updating.
Course profile information is provided by:
- Demographic information.
- Skills to be developed, where each skill is coded with two arguments: a hierarchical code defined by the taxonomy of areas of knowledge and a level of specialization.
- Required skills, where each is coded with a hierarchical code defined by the taxonomy of areas of knowledge.
- Related skills: a set of codes of areas of knowledge whose skills may be of interest to, or complement, the skills developed on the course.
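To make the profile structure concrete, the following sketch models the two profiles as Python data classes; the field names are illustrative, not necessarily those used in the actual implementation:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Skill:
    code: str        # hierarchical code from the areas-of-knowledge taxonomy
    ls: float = 0.0  # level of specialization (LS)
    du: float = 0.0  # degree of updating (DU), used for user skills

@dataclass
class UserProfile:
    demographics: Dict[str, str] = field(default_factory=dict)
    job_sectors: List[str] = field(default_factory=list)  # sector taxonomy codes
    skills: List[Skill] = field(default_factory=list)     # code + LS + DU

@dataclass
class CourseProfile:
    demographics: Dict[str, str] = field(default_factory=dict)
    develops: List[Skill] = field(default_factory=list)   # code + LS
    requires: List[str] = field(default_factory=list)     # taxonomy codes only
    related: List[str] = field(default_factory=list)      # areas of interest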
Ontology and ML
A key element for the smooth performance of RSs is to ensure rich semantic exploitation of the data and to prevent the loss of information obtained during data retrieval. Another process associated with this phase aims at building data models to represent the domain of skills and job performance sectors, according to areas of knowledge. The ontology, in its broadest sense, seeks to represent positions, skills and the existing relationships between them. Additionally, it should offer the capacity to represent different groupings of positions under a criterion of similarity according to the skills they use. Likewise, the different relationships that can be found between skills must be represented. The classes defined in the ontology are shown in Table 3, and the class hierarchy is given by the different relationships between classes of the ontology, as described in Table 4. The ontology is updated in two stages from user profiles: through events (C), where the model is updated with the relationships between job performance sectors and areas of knowledge according to user profile competences; and through machine learning (D), where the positions of the entities in the ontology are grouped using the density-based spatial clustering of applications with noise (DBSCAN) and k-means algorithms. For [39], the k-means algorithm mainly uses the Euclidean distance function to measure the similarity between data objects, and the sum of squared errors is used to find spherical clusters with a fairly uniform distribution of data. The work presented by [40] defines DBSCAN as a clustering algorithm based on the distribution density of data points. The algorithm can identify the degree of data density and classify the data points in the distribution; at the same time, sporadic data points can be identified as noise, rather than being classified within a class. In the event-driven update, the ontology stores the relationship between the positions and the areas of knowledge, which is given by skills with attributes: the number of times the skill is present, its average LS and its average DU, calculated taking into account the number of users that possess the skill. The relationships are determined, and their attributes calculated, from a training dataset of user profiles via events in the ontology. For the ontology substrate, new entities, relationships and attributes are defined in the taxonomies, together with the inferences that are allowed to be made, which will be used in semantic filtering. Among the new relationships, synonyms and uses of terms in other languages are contemplated; for the purposes of testing this proposal, the use of terms in both Castilian Spanish and English was considered, due to their common use in the chosen domain. In the case of areas of knowledge, relationships of the type 'is of interest' are used in their entities to indicate that users who possess a skill related to those entities may be interested in developing other skills with other knowledge, in order to obtain a more comprehensive profile. Additionally, in job performance sectors, the position entities can have 'opt for' relationships, where from one position one can opt for similar or higher positions in an ordered relationship defined by a ladder. Job performance clusters are created in the machine learning update. An alternative for the calculation of related skills involves using ML: it is proposed that unsupervised clustering algorithms be applied to instances of position entities, in order to group similar positions, with the groupings based on the similarity of the skills associated with the relevant positions.
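A minimal sketch of this clustering step with scikit-learn follows. The binary position-by-skill matrix is generated around random prototypes purely for illustration; apart from the ε = 0.3 value used later in the evaluation, the parameter settings are assumptions:

import numpy as np
from sklearn.cluster import DBSCAN, KMeans

rng = np.random.default_rng(0)

# Hypothetical binary position-by-skill matrix: rows are position
# entities, columns are taxonomy skill codes. Positions are drawn
# around 8 skill "prototypes" so that similar positions share skills.
prototypes = (rng.random((8, 40)) > 0.7).astype(int)
X = np.vstack([p ^ (rng.random(40) < 0.05)
               for p in prototypes for _ in range(25)]).astype(float)

# DBSCAN groups positions by local density, with distance taken as
# 1 - similarity (cosine distance), and labels sparse points as
# noise (-1) rather than forcing them into a cluster.
db = DBSCAN(eps=0.3, min_samples=3, metric="cosine").fit_predict(X)

# k-means partitions all positions into k spherical clusters around
# centroids (Euclidean distance); it has no notion of noise.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

print("DBSCAN clusters:", len(set(db) - {-1}), "noise:", int((db == -1).sum()))
print("k-means cluster sizes:", np.bincount(km))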
This clustering of job performance positions, based on the similarity of the skill sets, can be used to determine the set of related positions for a particular position, or for a particular user's skill set. From the job grouping determined, related skills can be established as those skills that appear most frequently in the positions in the grouping, and which the user does not possess.

On-Line Stage
The course recommendation process is carried out at this stage. As shown in Figure 1, it is made up of the target user profile construction process (1) and the recommendation process, which is carried out in three filtering stages (2), (3) and (4).

(1) Process involved in creating the target user profile
This process is similar to the profile construction of the off-line stage. It differs in the semantic transduction stage in that it incorporates the coding of endorsements in the user profile. The information obtained from the LinkedIn record of the user to whom the recommendation is to be made includes the endorsements; using the skill coding algorithms, the hierarchical code of the area of knowledge of each endorsed skill is determined. To determine the LS and DU, heuristics are used that take into account the relative frequency of the number of users who endorse the skill. The endorsements are part of the profile of the user to whom the recommendation is to be made and are taken into account in the recommendation process as skills that the user has acquired.

Recommendation process
The recommendation process followed is the one proposed in our previous work [2], where the first stage corresponds to a modification of the semantic filtering algorithm (2) in order to determine the user's own, related and ontological skills, given the incorporation of LinkedIn endorsements. Then a second, content-based filtering (3) is applied for the initial course prediction, and finally, filtering and sorting heuristics (4) are applied for the final recommendation of LLL courses to improve and/or develop professional skills based on the user's record. The process diagram represented in Figure 2 is followed in order to prepare course recommendations for each user.

(2) Semantic filtering to determine similar skills
In order to determine related skills for a user, we first determine his or her related work performance areas using the ontology, based on his or her skills. For this process, the set of work performance sectors where the user's own skills appear is determined and filtered as follows: those related work performance sectors whose skills are covered by a given percentage of the user's skills are selected, and/or those related work performance sectors whose associated skills cover a percentage of the user's own skills are selected. In any case, the user's own job performance sectors are excluded from the set of related job performance sectors.
Alternatively, from the user's job performance sectors it can be determined to which groupings (clusters) they belong, as constructed by the ML algorithms. In the case of new data, i.e., a job performance sector that is not part of any of the clusters obtained during training, it is possible to predict the cluster to which the user belongs by making use of the user's skills. Given the groupings, the set of job performance sectors that belong to each of them forms the user's related performance sectors. In the case of DBSCAN, a core node with a distance less than an epsilon value is located and the new data point is associated with the cluster to which that core node belongs; in the case of k-means, the centroid with the smallest distance is located and the point is associated with that cluster. In both cases, one minus the similarity function is used as the distance, according to Equation (1). Finally, the related work performance sectors that are prior to the user's work performance sectors in the ordered relationship imposed by the ladder are determined, and these are removed from the related performance areas. Subsequently, with the sectors of work performance, the user's own and related skills are determined. Related skills are the most frequent skills associated with the related sectors of work performance that add value; in other words, either the skill does not exist in the user's set of skills or, if it does, it has a higher level of specialization or degree of updating than the user's skill. Finally, those skills that add value and are included under the relationship 'is of interest' in the knowledge network are added to the related skills. Given the inputs UJS (the set of user job performance sectors), USK (the set of user skills) and UESK (the set of validated, i.e., endorsed, user skills), Algorithm 1 (the result of modifying the algorithm described in [2] to take endorsements into consideration) is used to semantically determine the related skills (RSK), where:
- Onto_Get_Jobs(sk) returns the job performance sectors whose users have the skill sk registered via events in the ontology; it can be configured to use registered skills belonging to the same sub-specialization as sk when no job performance sectors are obtained for the skill itself.
- Onto_CutOff_Jobs(jobs, sks, uskp, jskp) performs the filtering of job performance sectors js ∈ jobs, according to a percentage jskp of coverage of the sks skill set over the skills associated with the job performance sector js, or a percentage uskp of coverage of the skills of sector js over the sks skill set.
- Onto_Cluster_Pred(js) predicts the cluster for a job sector js according to the machine learning clustering in the ontology.
- Onto_Previous(RJS, UJS) returns the sectors in RJS that are previous to UJS according to the ordered relationship given by the ladder in the ontology.
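The following sketch illustrates this prediction step for new data. Since Equation (1) is not reproduced here, a Jaccard similarity over binary skill vectors is assumed as a stand-in for the similarity whose complement serves as the distance:

import numpy as np

def distance(u, v):
    # Stand-in for the paper's Equation (1): distance = 1 - similarity,
    # here with a Jaccard similarity over binary skill vectors.
    union = np.maximum(u, v).sum()
    inter = np.minimum(u, v).sum()
    return 1.0 - (inter / union if union else 0.0)

def predict_dbscan(x, core_points, core_labels, eps=0.3):
    """Assign a new skill vector to the cluster of the nearest DBSCAN
    core point if it lies within eps; otherwise label it noise (-1)."""
    d = np.array([distance(x, c) for c in core_points])
    i = int(d.argmin())
    return core_labels[i] if d[i] < eps else -1

def predict_kmeans(x, centroids):
    """Assign a new skill vector to the nearest k-means centroid."""
    d = np.array([distance(x, c) for c in centroids])
    return int(d.argmin())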
(3) Content-based filtering to predict the initial course recommendation
In this stage, the algorithm proposed in [2] is applied, where content-based filtering is used to select, from the course catalog, the courses that raise the level of specialization and/or degree of updating of the user's own skills, or that help develop the user's related skills. From this filtering, an initial recommendation is obtained, which is refined in the next stage.

(4) Heuristics for filtering and ordering courses for the final prediction
The initial course prediction from the previous phase is complemented and filtered using the heuristics proposed by [2], while the following is verified against the user profile and course catalog:
1. That the user has the necessary skills to approach the courses; otherwise, courses that develop them are selected by applying demographic restrictions;
2. User demographic restrictions are applied to the initial course prediction and to the result of the previous stage;
3. In the course prediction, the courses whose skills to be developed are the same as, or a subset of, those of another course are eliminated, retaining those with the highest score, to thus determine the course recommendation for the user.

Evaluation and Results
According to [7], RS evaluation can be conducted through on-line and off-line experimentation. The work presented in [2] performed an off-line RS evaluation. This type of evaluation allows different algorithms and approaches to be assessed and is used in experimental environments, since the utmost consistency is desired in order to compare the performance of different proposals under the same conditions. Likewise, the off-line evaluation involves metrics that reflect the effectiveness of the system from the user perspective and provide a widely accepted evaluation, owing to the robustness of the metrics used. To evaluate our proposal, and to compare the results with our previous work [2], we performed an off-line evaluation and calculated the metrics used in it (Table 5).

Table 5. Metrics used in the off-line evaluation (metric, description and calculation for the system, where CRi is the set of recommendations for user ui ∈ U and CEi the recommendation expected by the user).
- Coverage: users to whom the system has made a recommendation; C(S) = |{ui ∈ U, CRi ≠ ∅}|.
- Precision: the fraction of the recommendation that is relevant to the user.
- MAE: the mean absolute error of the given recommendation vs. the recommendation expected by the user (CEi).
- Recall: the ratio between the recommendation and the user's preferences.
- Novelty: the portion of recommendations made to the user that the user is not familiar with or has not seen before.
- Serendipity: the fraction of the recommendation that is unexpected and valuable to the user.
- F1: the harmonic mean between precision and recall, F1 = 2 · Precision · Recall / (Precision + Recall).

RSs are evaluated in batches, with a set of data containing the profiles of both users and courses, as well as user preferences. These were taken from a survey conducted using a Google form, in which respondents were asked to classify a list of courses into the categories desirable, preferred, novel and serendipitous, in order to ascertain the preferences of users regarding the choice of LLL courses. The off-line phase of the system was implemented, whereby data were loaded into the Neo4j database that had previously been populated with the ontology substrate definitions. The ontology was updated via events using the training dataset, and clusters of job performance sectors were determined using the ML k-means and DBSCAN algorithms.
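As a minimal sketch of the per-user computation of the set-based metrics of Table 5 (the course identifiers and the set-based reading of the definitions are assumptions for illustration):

def precision(recommended, relevant):
    # Fraction of the recommendation that is relevant to the user.
    return len(recommended & relevant) / len(recommended) if recommended else 0.0

def recall(recommended, relevant):
    # Fraction of the user's preferred courses that were recommended.
    return len(recommended & relevant) / len(relevant) if relevant else 0.0

def f1(p, r):
    # Harmonic mean between precision and recall.
    return 2 * p * r / (p + r) if (p + r) else 0.0

def coverage(recommendations):
    # Share of users to whom the system made at least one recommendation.
    return sum(1 for cr in recommendations.values() if cr) / len(recommendations)

# Hypothetical example with two users and made-up course identifiers.
rec = {"u1": {"c1", "c2", "c3"}, "u2": set()}
pref = {"u1": {"c2", "c3", "c4"}, "u2": {"c5"}}
p, r = precision(rec["u1"], pref["u1"]), recall(rec["u1"], pref["u1"])
print(p, r, f1(p, r), coverage(rec))  # approx. 0.67 0.67 0.67 0.5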
In order to evaluate RS performance and compare it to our previous work, metrics were calculated for each of the following configurations [2]:
1: Content filtering using only the user's own skills; no related skills are determined, although skills of interest are considered;
2: Collaborative filtering using only the user's own work performance sectors; semantically related skills are determined;
3: Semantic filtering using rules to determine related job performance sectors and related skills;
4: Semantic filtering using 75% coverage of user skills to determine related job performance sectors; in other words, the positions that cover 75% of the user's skills were selected as related job performance sectors;
5: Semantic filtering using 50% coverage of skills from the job performance sector to determine related job performance sectors; in other words, the job performance sectors in which the user's skills cover 50% of the skills associated with the position;
6: Semantic filtering using DBSCAN clustering with ε = 0.3 to determine related job performance sectors, and semantic rules to determine related skills;
7: Semantic filtering using k-means clustering to determine related job performance sectors, and semantic rules to determine related skills;
8: Semantic filtering using both DBSCAN and k-means clustering to determine related job performance sectors, and semantic rules to determine related skills.
Each of the configurations described above was run on the different data sets (training, testing and total). The test results are shown alongside the results obtained in the previous work [2], in order to compare them and evaluate the improvement in the RS. The results using the training data (70% of the data set) to validate the model performance are shown in Table 6. The results obtained from the use of the different configurations with the total number of samples from the dataset are shown in Table 7. Finally, the system was run under the different configurations using the test portion (30%) of the data, with the results shown in Table 8. From the results obtained, it can be observed that both this proposal and the system proposed in [2] exhibit similar behavior. As such, the configurations showing the best performance for all data sets were semantic filtering using rules to determine related job performance sectors and related skills (3), and semantic filtering using DBSCAN clustering with ε = 0.3 to determine related job performance sectors and semantic rules to determine related skills (6). However, there was an improvement in the MAE and RMSE scores, and a slight increase in precision. A measure that summarizes both precision and recall is the harmonic mean between them (F1); Table 9 shows the harmonic means for the best performing configurations. From the harmonic mean comparison, it can be seen in Figure 3 that there is a slight improvement in configuration 6, which makes use of the DBSCAN algorithm, while configuration 3, which corresponds to semantic filtering, remains unchanged. From the different tests, the clustering performed by k-means shows an inferior performance, which could be explained by the nature of the domain and the distribution of the positions in the space. In general, in the graph analysis we observe better clustering (related positions) performed by DBSCAN than by k-means.
Improvements in the recall and serendipity metrics can be associated with the use of the ontology and the estimation of associated skills or skills of interest, together with the filtering of positions prior to the current one in the ordered relationship imposed by the ladder. In our review of different works, we found some similar RS proposals, and so, to evaluate the performance of ours, it is advisable to compare results. The work by [41] is geared to university students and company coordinators and recommends jobs according to user skills, while that by [21] is designed to recommend courses to students, using multiple data sources. On the other hand, the work by [42] recommends career paths and the skills required for different jobs to users, based on their skills and interests. Finally, [35] recommends online courses to professionals according to their professional competences and professional development preferences. Table 10 provides a summary of the results obtained in each of these papers, which use the precision and recall metrics for the evaluation of their RSs, together with the best performing configuration from our proposal. When comparing these works with the best results obtained from our proposal, an improvement over the metrics of all similar RS proposals can be seen, as can an increase in accuracy, recall and harmonic mean, the latter measuring the combined performance of accuracy and recall.

Discussion and Conclusions
In this paper we have presented a new version of the RS proposed in [2], taking into consideration the incorporation of endorsements of users' LinkedIn skills in order to evaluate RS performance. In the review of the literature on LinkedIn endorsements, we noted that very few papers analyze this element.
Those papers are more oriented to analyzing the validity of endorsements as an element to be taken into account in the data extracted from SNSs. The strategy proposed for the recommendation of LLL courses was based on establishing a relationship between users according to work performance sectors and professional skills, in order to identify those skills that should be improved or developed for their current job, or to access another, higher-level job, and, based on these identified skills, to determine an initial prediction of courses that may develop them. This strategy made it possible to establish a mechanism for relating the data which, generally speaking, did not initially have relationships on which to base recommendations. By incorporating the endorsements, it was possible to obtain more information on the user profile, which in turn made it possible to incorporate skills that were not evident in the data related to current employment, and which were useful when refining the course recommendation. When evaluating the incorporation of endorsements in creating user profiles, an improvement in RS performance was observed: the configuration that makes use of the DBSCAN algorithm improves the precision value by 3%, and results in a decrease in the root mean square error (RMSE) and mean absolute error (MAE). When we sought to compare our proposal with similar works, we did not find RSs that made use of endorsements; therefore, we selected some proposals that are similar in terms of RS objectives. One of the main differences concerns the nature of the data, as it was observed that most works dealt with structured data. The work by [42] obtained data directly from a university database, while [41] obtained them from a data repository used for ML experimentation, and [35] extracted them from employee profiles and the training plan directly from a company. The work by [21], on the other hand, proposed using information from LinkedIn to complement the data entered directly by users through the application or obtained from the university database; from these data, a taxonomy was proposed to represent course, job and student information. The metrics used by most works were found to be recall and accuracy when they were analyzed to compare results with this proposal. In terms of the scores obtained, we can conclude that, compared with the results obtained from the tests performed on the total set of data, this proposal improves on the metrics obtained by these previous works by two percentage points in terms of accuracy and ten percentage points in recall. Research on LinkedIn endorsements is oriented towards assessing the veracity of endorsements on LinkedIn profiles: [30] proposes a framework for assessing the trustworthiness of endorsements, and the work of [31] proposes measuring the trustworthiness of job candidates based on their skills and endorsements. Based on this research, one line of future work is to incorporate the validation of endorsements into the RS to determine their veracity when taking them into account for recommendations. In order to evaluate system behavior for multiple domains, the ontology could be updated with new instances associated with new domains; the use of already-built ontologies could also be evaluated, together with new ways to represent areas of knowledge and job performance sectors.
With regard to information sources, and given the continuous changes, the use of other SNSs should be evaluated, as well as different platforms, such as RocketReach and DataLead, from which professional profiles can be obtained. Online applications could also be offered for course recommendations, not only for online evaluation of the system, but also to determine opportunities for improvement based on user suggestions.

Funding: This research received no external funding.

Data Availability Statement: The data presented in this study are available on request from the corresponding authors.
Calculation of HELAS amplitudes for QCD processes using graphics processing unit (GPU)

We use a graphics processing unit (GPU) for fast calculations of helicity amplitudes of quark and gluon scattering processes in massless QCD. New HEGET (HELAS Evaluation with GPU Enhanced Technology) codes for gluon self-interactions are introduced, and a C++ program to convert the MadGraph-generated FORTRAN codes into HEGET codes in CUDA (a C-platform for general-purpose computing on GPU) is created. Because of the proliferation of the number of Feynman diagrams and the number of independent color amplitudes, the maximum number of final-state jets we can evaluate on a GPU is limited to 4 for pure gluon processes (gg → 4g), or 5 for processes with one or more quark lines, such as qq̄ → 5g and qq → qq + 3g. Compared with the usual CPU-based programs, we obtain 60-100 times better performance on the GPU, except for 5-jet production processes and the gg → 4g processes, for which the GPU gain over the CPU is about 20.

1 Introduction

In our previous report [1] we introduced a C-language [2] version of the HELAS codes [3], HEGET (HELAS Evaluation with GPU Enhanced Technology), which can be used to compute helicity amplitudes on a GPU (Graphics Processing Unit). Encouraging results, with 40-150 times faster computation speed over the CPU performance, were obtained for pure QED processes, qq̄ → nγ, for n = 2 to 8 in pp collisions. In this paper, we extend our study to QCD processes with massless quarks and gluons. The HEGET routines for massless quarks and gluons are identical to those for quarks and photons introduced in [1], and the qqg vertex function structure is also the same as that of the qqγ functions. The only new additional routines are those for the ggg and gggg vertices. For the QED processes studied in ref. [1], we found that the present CUDA compiler cannot process the qq̄ → 6γ amplitude with 6! ≈ 700 Feynman diagrams, and we need to subdivide the HEGET codes into small pieces for the 6γ and 7γ processes. In the case of 8γ production with 8! ≈ 4 × 10⁴ Feynman diagrams, we have not been able to compile the program even after subdivision into small pieces. We also encountered serious slowdown when the program accesses global memory during the parallel processing period. Therefore, our concern in evaluating the QCD processes on a GPU is the proliferation of the number of diagrams, as well as the number of independent color amplitudes, which come with different color weights. The paper is organized as follows. In section 2, we present the cross section formula for n-jet production processes in pp collisions in the quark-parton model, or in the leading order of perturbative QCD with scale-dependent parton distribution functions (PDF's). In section 3, we briefly review the structure of GPU computing using the HEGET codes, and give the basic parameters of the GPU and CPU machines used in this analysis. In section 4, we introduce the new HEGET functions for the ggg and gggg vertices. Section 5 gives our results and section 6 summarizes our findings. The Appendix lists all the new HEGET codes introduced in section 4.
2 Physics Process

2.1 n-jet production in pp collisions

The cross section for n-jet production processes can be expressed as

dσ = Σ_{a,b} ∫ dx_a ∫ dx_b D_{a/p}(x_a, Q) D_{b/p}(x_b, Q) dσ̂(ŝ), (1)

where D_{a/p} and D_{b/p} are the scale (Q) dependent parton distribution functions (PDFs), and x_a and x_b are the momentum fractions of the partons a and b, respectively, in the right- and left-moving protons.

Table 1. The number of Feynman diagrams and the color bases for the QCD processes studied in this paper.

No. of jets        gg → gluons         uū → gluons         uu → uu + gluons
in final state   #diagrams #colors   #diagrams #colors   #diagrams #colors
      2                6       6           3       2           2       2
      3               45      24          18       6          10       8
      4              510     120         159      24          76      40
      5             7245     720        1890     120         786     240

For the total pp collision energy squared, s, the product

ŝ = x_a x_b s (2)

gives the invariant mass squared of the hard collision process. The subprocess cross section is computed in the leading order as

dσ̂(ŝ) = (1/2ŝ) ⟨ Σ_{λ_i} Σ_{c_i} |M^{c_i}_{λ_i}|² ⟩ dΦ_n, (4)

where

dΦ_n = (2π)^4 δ^4( p_a + p_b − Σ_{i=1..n} p_i ) Π_{i=1..n} d³p_i / ((2π)³ 2E_i) (5)

is the invariant n-body phase space, λ_i are the helicities of the initial and final partons, n_a and n_b are the color degrees of freedom of the initial partons a and b, respectively, and c_i represents the color indices of the initial and final partons; the brackets ⟨ ⟩ denote the average over the initial helicities and colors, made explicit in eqs. (9) and (12) below. When there are more than one gluon or identical quarks in the final state, an appropriate statistical factor should be multiplied onto the phase space dΦ_n in eq. (5).

The helicity amplitudes for the process (1) can be expressed as

M^{c_i}_{λ_i} = Σ_k (M_k)^{c_i}_{λ_i}, (7)

where the summation is over all the Feynman diagrams k. The subscripts λ_i stand for a given combination of helicities (±1 for both quarks and gluons in the HELAS convention [3]), and the subscripts c_i correspond to a set of color indices (1, 2, 3 for flowing-IN quarks, 1, 2, 3 for flowing-OUT quarks, and 1 to 8 for gluons). In MadGraph [4] the amplitudes are expanded as

M^{c_i}_{λ_i} = Σ_α (J_{λ_i})_α T^{c_i}_α (8)

in the color bases T^{c_i}_α, which are made from the SU(3) generators in the fundamental representation [5]. The color factors are computed as

N_{αβ} = (1/(n_a n_b)) Σ_{{c_i}} T^{c_i}_α (T^{c_i}_β)*, (9)

where n_{a,b} = 3 for q and q̄, n_{a,b} = 8 for gluons, and the summation is over all {c_i} = {c_a, c_b, c_1, . . . , c_n}. The color sum-averaged squared amplitudes are then computed as

(1/(n_a n_b)) Σ_{c_i} |M^{c_i}_{λ_i}|² = Σ_{α,β} (J_{λ_i})_α N_{αβ} (J_{λ_i})_β*. (10)

The cross sections are then expressed as

dσ̂(ŝ) = (1/2ŝ) ⟨Σ⟩_{λ_i} Σ_{α,β} (J_{λ_i})_α N_{αβ} (J_{λ_i})_β* dΦ_n, (11)

where we introduce the helicity sum-average symbol

⟨Σ⟩_{λ_i} ≡ (1/4) Σ_{λ_i}, (12)

with the factor 1/4 averaging over the helicities of the two initial partons.

In this paper the following three types of multi-jet production processes are computed:

gg → ng, uū → ng, uu → uu + ng. (13)

The numbers of contributing Feynman diagrams and of color bases for the above processes are summarized in Table 1, which includes those for the process gg → 5g. We note here that the number of diagrams (7245) for gg → 5g exceeds that of the uū → 7γ process (7! = 5040), for which we could run the converted MadGraph codes on a GPU only after division into small pieces [1]. In fact, we have not been able to run the gg → 5g program on the GPU even after dividing the program into more than 100 pieces, as explained in section 5.4.

Proliferation of the number of independent color basis vectors is also a serious concern for GPU computing, since the color matrix N of eq. (9) has m(m + 1)/2 independent elements when there are m independent basis vectors T^{c_i}_α. For example, the process uu → uuggg has m = 240 color basis vectors from Table 1, and the matrix has about 3 × 10^4 elements. A matrix exceeding 16,000 elements cannot be stored in the 64 kB constant memory, while storing it in the global memory results in a serious loss of efficiency in parallel computing. Therefore, the method used to handle the summation over the color degrees of freedom is a serious concern in GPU computing.

Selection criteria for jets

Total and differential cross sections of the processes (13) in pp collisions at √s = 14 TeV are computed in this paper.
We introduce final state cuts for all the jets as follows:

|η_i| < η^cut = 2.5, (14a)
p_Ti > p_T^cut = 20 GeV, (14b)
p_Tij > p_T^cut = 20 GeV, (14c)

where η_i and p_Ti are the rapidity and the transverse momentum of the i-th jet, respectively, in the pp collision rest frame, with the z axis along the right-moving (p_z = |p⃗|) proton momentum direction, and p_Tij is the relative transverse momentum [6] between the jets i and j, defined by

p_Tij = min(p_Ti, p_Tj) ΔR_ij, (15)

with

ΔR_ij = √( (Δη_ij)² + (Δφ_ij)² ). (16)

Here ΔR_ij measures the boost-invariant angular separation between the jets. As for the parton distribution function (PDF), we use the set CTEQ6L1 [7], and the factorization scale is chosen to be the cut-off p_T value, Q = p_T^cut = 20 GeV. The QCD coupling constant is also fixed, at its value for Q = p_T^cut = 20 GeV, which is obtained from the MS-bar coupling at Q = m_Z, α_s(m_Z)_MS = 0.118 [8], by using the NLO renormalization group equations with 5 flavors.

3 Computation on the GPU

GPU and its host PC

For the computation of the cross sections of the QCD n-jet production processes we use the same GPU and host PC as in the previous report [1]. In particular we use a GeForce GTX280 by NVIDIA [11] with 240 processors, whose parameters are summarized in Table 2. It is controlled by a Linux PC running Fedora 8, on a CPU whose properties are summarized in Table 3. The programs used for the computation of the cross sections were developed within the CUDA [2] environment, introduced by NVIDIA [11] for general-purpose GPU computing.

Program structure

Our program computes the total cross sections and distributions of the QCD n-jet production processes via the following procedure:

1. initialization of the program;
2. generation of random numbers for multiple phase-space points {p_a, p_b, p_1, . . . , p_n} and helicities {λ_i} on the CPU;
3. transfer of the random numbers to the GPU;
4. generation of the helicities and momenta of the initial and final partons from the random numbers, and computation of the amplitudes (J_{λ_i})_α of eq. (8) for all the color bases, on the GPU;
5. multiplication of the amplitudes and their complex conjugates by the color matrix N_{αβ} of eq. (9), summing them up as in eq. (10), and multiplication by the PDFs of the incoming partons, on the GPU;
6. transfer of the momenta and helicities of the external particles, the computed weights and the color-summed squared amplitudes back to the CPU; and
7. summation of all the values to obtain the total cross section and distributions on the CPU.

The program steps between the generation of random numbers (2) and the summation of the computed cross sections (7) are repeated until we obtain sufficient statistics for the cross section and all the distributions.

Color matrix calculation

In order to compute the cross sections of the QCD multi-jet production processes, multiplications of the large color matrix N_{αβ} of eq. (9) with the vector of color-basis amplitudes (J_{λ_i})_α of eq. (8) and its complex conjugate have to be performed, as in eq. (10). For large n-jet processes, like gg → 4g, uū → 5g and uu → uu + 3g, the dimensions of the color matrices exceed 100, and the number of multiplications becomes larger than 10^4. These matrices cannot be stored in the constant memory (64 kB for the GTX280; see Table 2), which is accessed in parallel, while storing them in the global memory (1 GB for the GTX280) results in a serious slow-down of the GPU. We find that the number of multiplications for the color summation in eq. (10) can be reduced significantly as follows. The color matrix of eq. (9) contains many elements with the same value. We count the number of distinct non-zero elements in the color matrix and find the results shown in Table 4.
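The exact color sum of eq. (10) then reduces to a short table lookup: only the distinct values of N_{αβ} need to be kept, together with an index map from the (α, β) pairs into that short table. The following is a minimal host-side C++ sketch of this idea; the array layout, the index map and the function name are our own illustrative assumptions, not the actual HEGET implementation. It also assumes that N is real and symmetric, so that only the α ≤ β elements are stored and each off-diagonal pair contributes 2 Re[J_α N_{αβ} J_β*].

#include <complex>
#include <vector>

using cplx = std::complex<float>;

// Color-summed |M|^2 = sum_{alpha,beta} J_alpha N_{alpha beta} conj(J_beta).
// nIndex holds m(m+1)/2 entries (alpha <= beta, row-major) pointing into
// the short table nValue[] of distinct color-matrix elements (cf. Table 4).
float colorSummedM2(const std::vector<cplx>& J,
                    const std::vector<int>& nIndex,
                    const std::vector<float>& nValue)
{
    const int m = static_cast<int>(J.size());
    float sum = 0.0f;
    int k = 0;
    for (int a = 0; a < m; ++a) {
        // diagonal term: J_a N_aa conj(J_a) is real
        sum += nValue[nIndex[k++]] * std::norm(J[a]);
        for (int b = a + 1; b < m; ++b) {
            // off-diagonal pair (a,b)+(b,a): 2 Re[J_a N_ab conj(J_b)]
            sum += 2.0f * nValue[nIndex[k++]] * std::real(J[a] * std::conj(J[b]));
        }
    }
    return sum;
}

In the GPU version it is the short table of distinct values that is kept in the 64 kB constant memory, which each parallel thread can read quickly; the full m × m matrix does not fit once m(m + 1)/2 exceeds roughly 16,000 elements, as noted above.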
We find, for instance, that among the 240 × (240 + 1)/2 = 28,920 elements of the color matrix for the uu → uu + 3g process, there are only 60 unique ones. In general, the number of distinct elements in the color matrix grows linearly, rather than quadratically, as the number of color basis vectors grows. Since the numbers in Table 4 are small enough, we can store them in the constant memory, which is accessed quickly by each parallel processor.

Before arriving at the above solution adopted in this study, we examined the possibility of summing over colors via Monte Carlo. Let us briefly report, in passing, on this exercise. In the Monte Carlo color summation approach, we evaluate the matrix element M^{c_i}_{λ_i} of eq. (7) for a given set of momenta {p_i}, helicities {λ_i} and colors {c_i}, and sum the squared amplitudes over randomly generated sets of {p_i, λ_i, c_i}. This method turns out not to be efficient, because in the color basis using the fundamental representation of the SU(3) generators adopted by MadGraph, most of the basis vectors T^{c_i}_α vanish for a given color configuration {c_i}. As an example, gg → 4g has 5! = 120 color basis vectors (see Table 1), which take the form

T^{c_i}_α = Tr( λ^{a_1} λ^{a_{σ_α(2)}} ⋯ λ^{a_{σ_α(6)}} )

for the configuration {c_i} = (a_1, a_2, . . . , a_6), where σ_α runs over the permutations of the gluons 2 to 6 and a_i denotes the color index of the gluon i, taking an integer value between 1 and 8. Among the 8^6 ≈ 260,000 configurations, only 12% give non-zero values. Moreover, as many as 75% of the color configurations give vanishing results for all 120 basis vectors. Although the efficiency can be improved by changing the color basis, we find that our solution of evaluating the exact summation over colors is superior to the Monte Carlo summation method for all the processes that we report in this paper.

New HEGET functions

The HEGET functions for massless quarks and gluons are the same as those introduced in the previous report [1]. The qqg vertex functions are identical to the qqγ functions of ref. [1] except for the coupling constant; the vertex is

L_qqg = g_s ψ̄_i γ^µ (T^a)_ij ψ_j G^a_µ,

where g_s = √(4πα_s) is the strong coupling constant and (T^a)_ij is an SU(3) generator in the fundamental representation. For example, the qqg vertex amplitude is computed by the HEGET function iovxx0 as

iovxx0(cmplx* fi, cmplx* fo, cmplx* vc, float g, cmplx& vertex) (20)

where the coupling constant g = g_s follows the convention of MadGraph [4], and the color amplitude is obtained by multiplying the output vertex by the color factor (T^a)_ij.

In the rest of this section, we introduce new HEGET functions for the three-vector-boson (VVV) and four-vector-boson (VVVV) vertices. All the new HEGET functions are listed in Table 5, and their contents are given in the Appendix. Also shown in Table 5 is the correspondence between the HEGET functions and the HELAS subroutines [3].

VVV: three vector boson vertex

For the ggg vertex, described by the Lagrangian

L_ggg = −g_s f^{abc} (∂_µ G^a_ν) G^{bµ} G^{cν}, (23)

we introduce two HEGET functions, vvvxxx and jvvxx0. They correspond to the HELAS subroutines VVVXXX and JVVXXX, respectively, for massless particles; see Table 5.

vvvxxx

The HEGET function vvvxxx (List 1 in the Appendix) computes the amplitude of the VVV vertex from the vector boson wave functions, whether they are on-shell or off-shell. The function has the arguments:

vvvxxx(cmplx* ga, cmplx* gb, cmplx* gc, float g, cmplx& vertex) (24)

where the inputs and the outputs are:

Inputs:
cmplx ga[6]: wavefunction of the gluon with color index a
cmplx gb[6]: wavefunction of the gluon with color index b
cmplx gc[6]: wavefunction of the gluon with color index c
float g: coupling constant of the VVV vertex

Outputs:
cmplx vertex: amplitude of the VVV vertex

The coupling constant g = g_s in the HEGET function (24) follows the convention of MadGraph [4].
In order to reproduce the amplitudes associated with the ggg vertex Lagrangian of eq. (23), the color factor associated with the ggg vertex is i f^{abc}. More explicitly, the full vertex amplitude for eq. (23) is

i f^{abc} (vertex), (27)

in terms of the output vertex of eq. (24). Also note the HELAS convention [3] of using the flowing-OUT momenta and quantum numbers for all bosons.

jvvxx0

The HEGET function jvvxx0 (List 2 in the Appendix) computes the off-shell vector wavefunction from the three-point gauge boson coupling in eq. (23). The vector propagator is given in the Feynman gauge, for massless vector bosons like gluons. It has the arguments:

jvvxx0(cmplx* ga, cmplx* gb, float g, cmplx* jvv) (28)

where the inputs and the outputs are:

Inputs:
cmplx ga[6]: wavefunction of the gluon with color index a
cmplx gb[6]: wavefunction of the gluon with color index b
float g: coupling constant of the VVV vertex

Outputs:
cmplx jvv[6]: vector current j^µ(gc : ga, gb), which carries a color index c (29)

As in eq. (27), the color amplitude for the off-shell current is

i f^{abc} (jvv). (30)

VVVV: four vector boson vertex

For the gggg vertex we introduce two HEGET functions, ggggxx and jgggx0, listed in Table 5. They correspond to the HELAS subroutines GGGGXX and JGGGXX, respectively, for massless particles.

ggggxx

The HEGET function ggggxx (List 3 in the Appendix) computes the portion of the gggg amplitude in which the first and third, and hence also the second and fourth, gluon wave functions are contracted, whether the gluons are on-shell or off-shell. The function has the arguments:

ggggxx(cmplx* ga, cmplx* gb, cmplx* gc, cmplx* gd, float gg, cmplx& vertex)

where the inputs and the outputs are:

Inputs:
cmplx ga[6]: wavefunction of the gluon with color index a
cmplx gb[6]: wavefunction of the gluon with color index b
cmplx gc[6]: wavefunction of the gluon with color index c
cmplx gd[6]: wavefunction of the gluon with color index d
float gg: coupling constant of the VVVV vertex

Outputs:
cmplx vertex: amplitude of the VVVV vertex

The coupling constant gg for the gggg vertex is gg = g_s². In order to obtain the complete amplitude, the function must be called three times (once for each color structure) with the following permutations:

ggggxx(ga, gb, gc, gd, gg, v1) (35a)
ggggxx(ga, gc, gd, gb, gg, v2) (35b)
ggggxx(ga, gd, gb, gc, gg, v3) (35c)

The color amplitudes are then expressed as

f^{abe} f^{cde} (v1) + f^{ace} f^{dbe} (v2) + f^{ade} f^{bce} (v3). (36)

jgggx0

The HEGET function jgggx0 (List 4 in the Appendix) computes an off-shell gluon current from the four-point gluon coupling, including the gluon propagator in the Feynman gauge. It has the arguments:

jgggx0(cmplx* ga, cmplx* gb, cmplx* gc, float gg, cmplx* jggg) (37)

where the inputs and the outputs are:

Inputs:
cmplx ga[6]: wavefunction of the gluon with color index a
cmplx gb[6]: wavefunction of the gluon with color index b
cmplx gc[6]: wavefunction of the gluon with color index c
float gg: coupling constant of the VVVV vertex

Outputs:
cmplx jggg[6]: vector current j^µ(gd : ga, gb, gc), which carries a color index d (38)

The function (37) computes the off-shell gluon wavefunction with a specific color index d, which comes along with a specific color factor. As in eq. (35) it should be called three times,

jgggx0(ga, gb, gc, gg, j1) (39a)
jgggx0(gc, ga, gb, gg, j2) (39b)
jgggx0(gb, gc, ga, gg, j3) (39c)

to give the off-shell gluon with the color factors

f^{abe} f^{cde} (j1) + f^{cae} f^{bde} (j2) + f^{bce} f^{ade} (j3). (40)

Comparison of total cross sections

In order to validate the new HEGET functions introduced in this report, we compare the total cross sections of the n-jet production processes computed on the GPU with those calculated by other programs based on the FORTRAN version of the HELAS library. We use MadGraph/MadEvent [4] and another independent FORTRAN program, which uses the Monte Carlo integration program BASES [12], as references. Because support for double precision computation on the GPU is limited, all computations with HEGET on the GTX280 are done in single precision, while the other programs, with HELAS in FORTRAN, compute the cross sections in double precision.

For the calculation of the n-jet production cross sections we use the same physics parameters as MadGraph/MadEvent for all programs, and the same final state cuts of eq. (14) for all processes and all programs. The parton distribution functions of CTEQ6L1 [7] and the same factorization and renormalization scales, Q = p_T^cut = 20 GeV, are also used.

The results for the computation of the total cross sections are summarized in Tables 6, 7 and 8 for gg → gluons, uū → gluons and uu → uu + gluons, respectively. We find that the results obtained with the HEGET functions agree with those from the other programs within the statistical uncertainties of the generated event samples. We note that multi-jet events satisfying the final state cuts of eq. (14), where all jets are in the central region |η| < 2.5 (14a) and their transverse momenta with respect to the beam direction (14b) and relative to each other (14c) are greater than 20 GeV, are dominated by the pure gluonic processes in Table 6. The cross sections for the uū → ng processes in Table 7 are small because of the uū annihilation. We note that the crossing-related non-annihilation processes, ug → u + (n − 1)g, have exactly the same number of diagrams and color bases, and hence can be evaluated with essentially the same amount of computation time.

Comparison of the processing time

As already described in our previous report [1], we prepare two versions of the program with the same structure for the computation of the total cross sections. One is written in CUDA, a C-based language, and can be executed on the GPU. The other is written in C and can be executed on the CPU. Using a standard C library function, we measure the time between the start of the transfer of the random numbers to the GPU and the end of the transfer of the computed results back to the CPU. In Fig. 1, the measured process time in µsec for one event of the n-jet production processes is shown for the GPU (GTX280) and the CPU (Linux PC with Fedora 8), plotted against the number of jets in the final state. Because the process time per event on the GPU depends strongly [1] on the number of registers allocated at compilation by CUDA and on the size of the thread blocks at execution time, we scan combinations of these parameters for the fastest event process time on the GPU. The upper three lines in Fig. 1 show the event process times on the CPU. They correspond to gg → n-jets, denoted as gg, uū → n-jets as uū, and uu → uu + (n − 2) jets as uu, respectively. For processes with small numbers of jets, e.g. n_jet = 2, the event process times for the different processes are all around 4.5 µsec.
This is probably because they are dominated by computation steps other than the amplitude calculation, such as the computation of the PDF factors and the data transfer between the GPU and the CPU, which are common to all physics processes. When the number of jets becomes larger, the event process time for the same number of jets in the final state is roughly proportional to the number of diagrams of each process listed in Table 1. The lower three lines in Fig. 1 show the event process times on the GTX280. They also correspond to gg → n-jets, denoted as gg, uū → n-jets as uū, and uu → uu + (n − 2) jets as uu, respectively. As the number of jets becomes larger, the process time on the GPU grows more rapidly than that on the CPU. For the n_jet = 4 case, the event process time of gg → 4g is larger than the time expected from proportionality to the number of diagrams of the other processes, uū → 4g and uu → uu + 2g. In other words, the event process time on the GPU grows faster than what we would expect from the growth of the number of Feynman diagrams. For instance, the ratio of the event process times for gg → 4g and gg → 3g on the CPU is roughly 120 µsec/14 µsec ∼ 8.6, which roughly agrees with the ratio of the numbers of Feynman diagrams (Table 1), 510/45 ∼ 11. The corresponding ratio on the GPU is 3.8 µsec/0.1 µsec ∼ 38, which is significantly larger. For the same number of jets, we also observe that the event process times on the CPU are roughly proportional to the number of diagrams. For n_jet = 4, the ratio of the process times for gg → 4g to uū → 4g is about 120 µsec/29 µsec ∼ 4.1 on the CPU, as compared to the ratio of the numbers of Feynman diagrams in Table 1, 510/159 ∼ 3.2. The same applies at n_jet = 5 to uū → 5g and uu → uuggg, where the Feynman diagrams have the ratio 1890/786 ∼ 2.4 from Table 1, and the event process times on the CPU give 300 µsec/180 µsec ∼ 1.7, also in rough agreement. On the other hand, the event process times on the GPU for gg → 4g and uū → 4g have a ratio of 3.8 µsec/0.45 µsec ∼ 8.4, which is much larger than the ratio of the diagram numbers, while that for uū → 5g and uu → uuggg has a ratio of 11 µsec/9.5 µsec ∼ 1.15. Although we do not fully understand the above behavior of the event process times on the GPU, we find that they tend to scale as the product of the number of Feynman diagrams and the number of color bases, while the event process times on the CPU are not sensitive to the latter. This is probably because, as the number of color bases grows, more amplitudes (J_{λ_i})_α in eq. (8) have to be stored and then recalled to compute the color sum, eq. (10). These observations tell us that the relative weight of the color matrix computation in GPU computing is very significant, even after identifying the independent elements of the color matrix N_{αβ} in eq. (9) as listed in Table 4.

Comparison of performance of GPU and CPU

The ratios of the event process times between the CPU and the GPU are shown in Fig. 2. The three lines correspond to gg → n-jets, denoted as gg, uū → n-jets as uū, and uu → uu + (n − 2) jets as uu, respectively. The performance ratios exceed 100 for the processes with small numbers of jets (n_jet ≤ 3) in the final state. For n_jet = 4 and 5, the performance ratios gradually drop to less than 40. For processes with large numbers of color bases, the ratios are smaller: for gg → 4g, which has 120 color bases, the ratio is about 30, and for uu → uu + 3g, which has 240 color bases, the ratio becomes about 20.
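The scaling statements above can be checked directly against Table 1. The short C++ sketch below (our own check, not part of the HEGET package) prints the diagram-count ratios and the diagram × color-basis ratios quoted in the text; the measured GPU ratios (38 and 8.4) lie between the two scalings, consistent with the qualitative statement that the GPU times grow faster than the diagram count alone.

#include <cstdio>

int main() {
    // From Table 1: {diagrams, colors} for gg -> ng and uubar -> ng.
    const double gg3[2]  = {45.0, 24.0};
    const double gg4[2]  = {510.0, 120.0};
    const double uub4[2] = {159.0, 24.0};

    // CPU times scale roughly with the number of diagrams alone:
    std::printf("gg 4j/3j, diagrams:       %.1f\n", gg4[0] / gg3[0]);   // ~11
    std::printf("4j gg/uubar, diagrams:    %.1f\n", gg4[0] / uub4[0]);  // ~3.2
    // GPU times track the product (diagrams x color bases) more closely:
    std::printf("gg 4j/3j, diag*colors:    %.1f\n",
                (gg4[0] * gg4[1]) / (gg3[0] * gg3[1]));                 // ~57
    std::printf("4j gg/uubar, diag*colors: %.1f\n",
                (gg4[0] * gg4[1]) / (uub4[0] * uub4[1]));               // ~16
    return 0;
}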
Note on the gg → 5g study

Among the five-jet production processes, we have not been able to run the program for gg → 5g. This process has 7245 diagrams and 720 color basis vectors. In order to compile the program for the computation of this process, we used the technique developed in the previous study [1]. By dividing the program into about 140 pieces, we were able to compile the gg → 5g program. Compilation takes about 90 min on a Linux PC. The total size of the compiled program exceeds 200 MB, and we were not able to execute this compiled program on the GTX280.

Summary

We have shown the results of our attempt to evaluate QCD multi-jet production processes at hadron colliders on a GPU [11] (Graphics Processing Unit), following the encouraging results obtained for QED multi-photon production processes in ref. [1]. Our achievements and findings may be summarized as follows.

- A new set of HEGET functions written in CUDA [2], a C-language platform developed by NVIDIA for general-purpose GPU computing, is introduced to compute the triple and quartic gluon vertices. The HEGET routines for massless quarks were introduced in ref. [1], and the routine for photons [1] can be used for gluons. In addition, the HEGET functions for the qqg vertex are the same as those for the qqγ vertex introduced in ref. [1].
- The HELAS amplitude code generated by MadGraph [4] is converted to a CUDA program which calls HEGET functions for the following three types of subprocesses: gg → ng (n ≤ 5), uū → ng (n ≤ 5), and uu → uu + ng (n ≤ 3).
- Summation over the color degrees of freedom was performed on the GPU by identifying the equal-valued elements of the color matrix of eq. (9), in order to reduce the required memory size.
- All the HEGET programs for up to 5 jets passed the CUDA compiler after division into small pieces. However, we could not execute the program for the process gg → 5g. Accordingly, comparisons of performance between the GPU and the CPU are done for the multi-jet production processes up to 5 jets, excluding the purely gluonic subprocess gg → 5g.
- Event process times of the GPU program on the GTX280 are more than 100 times faster than the CPU program for all the processes up to 3 jets, while the gain is reduced to 60 for 4 jets with one or two quark lines, and to 30 for the purely gluonic process. It further goes down to 30 and 20 for 5-jet production processes with one and two quark lines, respectively.
- We find that one cause of the rapid loss of the GPU gain over the CPU as the number of jets increases is the growth in the number of color bases. GPU programs slow down for processes with larger numbers of color basis vectors, while the performance of the CPU programs is not affected much.
- All computations on the GPU were performed with single precision accuracy. A factor of 2.5 to 4 slower performance is found for double precision computation on the GPU.

Appendix: HEGET code listings (Lists 1-4). A fragment of List 2 (jvvxx0) shows how the four-momenta are unpacked from elements 4 and 5 of the HELAS-style wavefunction arrays:

p1[0] = (ga[4].re); p1[1] = (ga[5].re); p1[2] = (ga[5].im); p1[3] = (ga[4].im);
p2[0] = (gb[4].re); p2[1] = (gb[5].re); p2[2] = (gb[5].im); p2[3] = (gb[4].im);
q[0] = -(jvv[4].re); q[1] = -(jvv[5].re); q[2] = -(jvv[5].im); q[3] = -(jvv[4].im);
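For reference alongside the listings, the contraction evaluated by vvvxxx is the standard Yang-Mills three-gluon vertex. The following self-contained C++ sketch is our own transcription under the flowing-OUT momentum convention, not the HEGET code itself; the color factor i f^{abc} of eq. (27) is applied separately.

#include <complex>

using cplx = std::complex<float>;

// Minkowski dot product with metric (+,-,-,-).
static cplx mdot(const cplx a[4], const cplx b[4]) {
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3];
}

// Three-gluon vertex contraction, all momenta flowing OUT:
//   g * [ (e1.e2)((p1-p2).e3) + (e2.e3)((p2-p3).e1) + (e3.e1)((p3-p1).e2) ]
cplx vvvAmplitude(const cplx e1[4], const float p1[4],
                  const cplx e2[4], const float p2[4],
                  const cplx e3[4], const float p3[4], float g) {
    cplx d12[4], d23[4], d31[4];
    for (int mu = 0; mu < 4; ++mu) {
        d12[mu] = cplx(p1[mu] - p2[mu], 0.0f);
        d23[mu] = cplx(p2[mu] - p3[mu], 0.0f);
        d31[mu] = cplx(p3[mu] - p1[mu], 0.0f);
    }
    return g * (mdot(e1, e2) * mdot(d12, e3)
              + mdot(e2, e3) * mdot(d23, e1)
              + mdot(e3, e1) * mdot(d31, e2));
}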
2009-09-29T03:34:54.000Z
2009-09-29T00:00:00.000
{ "year": 2009, "sha1": "5a2732a6edfa0488eca93e1dc80f5c70cf783dfd", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-010-1465-5.pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "5a2732a6edfa0488eca93e1dc80f5c70cf783dfd", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics" ] }
15117522
pes2o/s2orc
v3-fos-license
Genome sequencing and analysis of Salmonella enterica serovar Typhi strain CR0063 representing a carrier individual during an outbreak of typhoid fever in Kelantan, Malaysia Salmonella Typhi is a human-restricted pathogen, with a significant number of individuals acting as asymptomatic carriers of the bacterium. Salmonella infection could be controlled effectively if a reliable method for the identification of these carriers were developed. In this context, the availability of whole genomes of carrier strains through high-throughput sequencing, and further downstream analysis by comparative genomics approaches, is very promising. Herein we describe the genome sequence of a Salmonella Typhi isolate representing an asymptomatic carrier individual during a prolonged outbreak of typhoid fever in Kelantan, Malaysia. Putative genomic coordinates relevant to the pathogenesis and persistence of this carrier strain are identified and discussed.

Background

Salmonella enterica serovar Typhi, the aetiologic agent of typhoid fever, still poses a major health problem for the developing world, as about 16 million new cases are reported each year [1]. S. Typhi causes systemic infections (typhoid fever) as well as chronic infections (asymptomatic carriage) in humans; the latter serve as the source of infection [2]. The transmission of S. Typhi is primarily through the faecal-oral route, and a significant number of infected individuals become chronic asymptomatic carriers who keep shedding S. Typhi in their faeces for decades [3]. This results in the endemicity of S. Typhi in regions of the world with underdeveloped sanitation and community hygiene [4]. Carrier identification becomes extremely important, as some of the ancestral haplotypes have been observed in recent isolates, suggesting their persistence in these asymptomatic carriers [5]. Traditional methods, such as the culturing of bacteria from faecal samples, are not foolproof, as carriers shed bacteria intermittently. Serological tests that detect specific antibodies, such as anti-H and anti-O, are unable to differentiate between carriers and individuals who have recovered from the infection [6]. Especially in areas endemic for S. Typhi, serological tests cannot be adopted for the identification of a carrier because of the high background levels of these antibodies [7]. Thus, there is an urgent need for inexpensive and efficient detection methods for the establishment of the carrier state, perhaps based on genomic markers.

Genetic typing tools such as PFGE, AFLP, ribotyping, etc. can resolve only the limited genetic variation occurring within specific sites, and are therefore incapable of differentiating highly clonal strains, such as outbreak-related strains, from those not associated with the outbreak (carrier isolates) [8][9][10]. High-throughput sequencing technologies have already been employed as a high-resolution molecular epidemiologic tool to discern the microevolution of highly related strains [11]. In this study, we attempted to determine whether whole genome sequencing of S. Typhi isolated from a carrier individual can provide insights related to persistence and/or adaptation mechanisms. We describe the genome sequence of a Salmonella enterica serovar Typhi strain (ST CR0063) isolated from a carrier individual during a prolonged outbreak of typhoid fever in Kelantan, Malaysia.
The assembled draft genome shows a high degree of similarity, and shared core genome regions, with Salmonella Typhi ST BL196 [12], the strain identified as associated with a typhoid outbreak in Kelantan during the same period (Figure 2).

Virulence factors

The gene shdA, a key factor predicted to be involved in the persistence of the bacterium in the intestines [14] by binding to its extracellular matrix, was identified and annotated. This gene product, by mimicking the host heparin, is able to bind the extracellular matrix proteins fibronectin and collagen, and probably plays an important role in carriers by contributing to prolonged faecal shedding [15]. The fim gene cluster [16] of the chaperone-usher family, involved in adhesion to non-phagocytic cells, was detected along with its negative regulator fimW. Type IV pili and the agf operon [17,18], encoding curli fimbriae which aid in the attachment of the bacterium to the intestinal villi and to each other, were found in the genome. These adherence factors determine the sites of bacterial colonisation and thereby the adaptation and pathogenicity of a particular strain [19,20]. The S. Typhi strain ST CR0063 genome also revealed the viaA and viaB loci, the prime regulators of Vi antigen expression. The viaB locus contains all the genes for the biosynthesis (tviA-E) and export (vexA-E) of the Vi antigen, a well-known virulence factor [21,22]. The mgtC gene, involved in magnesium uptake, and the ferric uptake regulator (fur) [23] were also identified in ST CR0063. The PhoPQ regulon [24], which induces cytokine secretion and cationic antimicrobial peptide resistance, was also found to be conserved in our carrier strain. The RpoS sigma factor, needed to cope with external stress and nutrient depletion conditions [25], was also identified and annotated. The coordinates of these virulence factors in the genome of ST CR0063 are depicted in Figure 3.

Phages and pathogenicity islands (PAIs)

The phages Gifsy-1 and Fels-2 [27], together with many phage proteins and a few hypothetical proteins, were identified in the genome of ST CR0063 by various algorithms (see Methods for details). These phages are expected to have been acquired through horizontal gene transfer (HGT) events, as they were embedded in some of the recognized genomic islands. The phage encoding the SopE effector protein of SPI-1 (Salmonella Pathogenicity Island 1) was present in ST CR0063, as recognized in other Typhi genomes [28,29]. More than 15 PAIs that encode clusters of virulence-associated genes have been identified across the various serovars of Salmonella enterica. Ten pathogenicity islands were identified by us in ST CR0063 and, as expected [30], they were characterised by a different G + C content and bounded by tRNA genes. The SPI-1 type III secretion system (TTSS) structural genes spaMNOPQRS and invABCEFGH, and their regulatory proteins HilA, HilC and HilD [31], were also identified and annotated. The SPI-1-secreted effector proteins SopE, SopE2, SipA, SipB, SipC and SptP, required for endothelial uptake and invasion [32], are also present. The genes SpiC, SseF, SseG, SifA and SifB, which are secreted by the SPI-2 TTSS and needed for survival in macrophages and the colonisation of host organs [33], were also recognised in the present genome. The known regulators of SPI-2, OmpR-EnvZ and PhoP-PhoQ [34], were present. SPI-3, identified by us, contained the magnesium transport genes mgtC and marT, which are required for survival in macrophages [35].
The type I secretion system and its associated proteins encoded by SPI-4, which are involved in the invasion of the intestinal epithelium [36], were also located in the present genome. The SPI-1 effector proteins SopB and PipB, associated with enteritis and coded by SPI-5 [37], were also detected and annotated. The chaperone-usher fimbrial operons carried by SPI-6 and SPI-10, and the bacteriocin immunity proteins carried by SPI-8 [38], were identified. SPI-7 and SPI-9 were identified in the ST CR0063 genome and were found to encode the viaB locus, type IV pili formation proteins and a T1SS [38,39].

Conclusions and prospective

The genomic blueprint of the Salmonella Typhi isolate ST CR0063 was elucidated in this study. The genome sequence information presented herein may be harnessed to guide comparative genomics and the identification of novel and specific diagnostic markers. However, further studies involving large-scale genome sequencing of strains from several of the endemic countries, and especially those from carrier individuals of different socioeconomic settings, are needed to develop a reliable approach to decipher the characteristics of the carrier state. It will also be necessary to determine the true extent of the diversity of carrier strains as juxtaposed with their acutely pathogenic forms in terms of: (1) gene gain/loss during colonization and adaptation; (2) the dynamics of virulence acquisition/attenuation; (3) possible genomic rearrangements; and (4) the relative preponderance of carrier and virulent strains circulating in different endemic regions of the world. Finally, only an in-depth analysis of the host-pathogen interactions and their influence on the gut microbiota can explain the adaptation and persistence mechanisms of the (asymptomatic) carrier strains.

Genome sequencing

DNA was isolated from the stool sample of an asymptomatic carrier individual from Kelantan, Malaysia in 2007, during a prolonged outbreak. The draft genome sequence of this strain (ST CR0063) was determined on an Illumina Genome Analyzer (GAIIx, pipeline ver. 1.6). The 100 bp paired-end sequencing was done with an insert size of 300 bp. About 67X genome coverage was achieved and 1.9 gigabytes of data were obtained.

Assembly and annotation

The sequence data were assembled de novo, in the same way as described previously [40][41][42][43][44][45], into 538 contigs using Velvet [46] at the optimal hash length of 39. SSPACE [47] was used for scaffolding the pre-assembled contigs using the paired-end data. The gaps within these scaffolds were filled using GapFiller, by aligning the reads against the scaffolds already generated by SSPACE [48]. A reference-guided assembly was generated by aligning the reads to Salmonella Typhi str. CT18 [GenBank: AL513382.1] using bwa tools [49]. This reference-guided assembly was used to re-order the scaffolds generated de novo. In-house Perl scripts were used for this re-ordering process and to finalize the gaps. The de novo and reference-guided approaches were together used to finalize the consensus draft genome. The reference-guided assembly and the re-ordered scaffolds were loaded onto Tablet, an NGS data visualisation tool, to visualise repeats, insertions and deletions [50]. The final draft nucleotide sequence, after manual curation, was annotated in our laboratory using RAST [51] and the ISGA pipeline [52]. The genome statistics were gleaned using Artemis [53]. The data were further validated using gene prediction tools such as Glimmer [54] and EasyGene [55].
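Assembly statistics of the kind gleaned with Artemis follow directly from the contig length distribution; for instance, the N50 of an assembly can be computed as in the following minimal C++ sketch (the contig lengths shown are hypothetical placeholders, not values from this assembly).

#include <algorithm>
#include <cstdio>
#include <functional>
#include <vector>

// N50: the contig length L such that contigs of length >= L together
// cover at least half of the total assembly span.
long n50(std::vector<long> lengths) {
    std::sort(lengths.begin(), lengths.end(), std::greater<long>());
    long total = 0;
    for (long len : lengths) total += len;
    long running = 0;
    for (long len : lengths) {
        running += len;
        if (2 * running >= total) return len;
    }
    return 0;
}

int main() {
    // Hypothetical contig lengths (bp); the real assembly has 538 contigs.
    std::vector<long> contigs = {120000, 95000, 80000, 42000, 15000, 8000, 3000};
    std::printf("N50 = %ld bp\n", n50(contigs));
    return 0;
}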
RNAmmer [56] and tRNAscan-SE [57] were used to identify the rRNAs and tRNAs, respectively.

Phages and PAIs

Prophages and putative phage-like elements in the genome were identified using PhiSpy [58] and Prophage Finder [59]. The putative HGT events were determined using the Alien Hunter tool [60]. The integrated interface IslandViewer was used to predict putative genomic islands within the genome [61].

Sequence data access

The Salmonella enterica subsp. enterica serovar Typhi str. CR0063 whole genome shotgun (WGS) project has been submitted to GenBank under the project accession AKIC00000000. The project version entailing the draft assembly described herein has the accession number AKIC01000000, and consists of sequences AKIC01000001-AKIC01000538.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

NA designed the study, interpreted the results and edited the manuscript. RB and NK managed the Illumina sequencing, made the assemblies, analyzed the genome, and performed the annotations. SS and TS provided computational tools and contributed to the automation of the analysis process. KT provided inputs related to the outbreak and the strain features, characterized the strain and maintained it in pure culture. STN contributed to the microbiology of the strain and prepared high-molecular-weight DNA for genome sequencing. All the authors read and approved the manuscript prior to submission.
2017-06-25T10:52:45.506Z
2012-12-13T00:00:00.000
{ "year": 2012, "sha1": "188f23af79d1d8da53f8990793a8214fab38b274", "oa_license": "CCBY", "oa_url": "https://gutpathogens.biomedcentral.com/track/pdf/10.1186/1757-4749-4-20", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "83dda94bd058fd2dfaceefa3ef01913042b1ba20", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
18737172
pes2o/s2orc
v3-fos-license
Splash Dynamics of Paint on Dry, Wet, and Cooled Surfaces In his classic study in 1908, A. M. Worthington gave a thorough account of splashes and their formation through visualization experiments. In more recent times, there has been renewed interest in this subject, and much of the underlying physics behind Worthington's experiments has now been clarified. One specific set of such recent studies, which motivates this paper, concerns the fluid dynamics behind Jackson Pollock's drip paintings. The physical processes and the mathematical structures hidden in his works have received serious attention and made the scientific pursuit of art a compelling area of exploration. Our current work explores the interaction of watercolors with watercolor paper. Specifically, we conduct experiments to analyze the settling patterns of droplets of watercolor paint on wet and frozen paper. Variations in paint viscosity, paper roughness, paper temperature, and the height of a released droplet are examined from the time of impact, through the transient stages, until the final, dry state. Observable phenomena such as paint splashing, spreading, fingering, branching, rheological deposition, and fractal patterns are studied in detail and classified in terms of the control parameters.

Introduction

The relationship between fluid mechanics and art is certainly not a new field of investigation within mathematics and physics. Leonardo da Vinci's pioneering studies and detailed sketches of water flow date back to 1508. The series of block prints of great waves by the Japanese artist Hokusai was published around 1830 [1]. More recently, fractal patterns and fractal dimension have become fascinating and fertile areas of scientific interest, most notably with the use of viscous oils, latex paints, and varnish fixatives. Recent studies of Jackson Pollock's drip paintings are widely known and quite compelling, as are the analyses of varnish crackling used to authenticate paintings by the great masters [2,3]. However, the physics and mathematics of watercolor painting remain largely unexplored. This could, in large part, be due to the fact that the water quickly disperses, along with the pigment. Furthermore, watercolors permeate the underlying sheet of paper rather than drying on top of a canvas, a phenomenon observed with the more viscous media. This experiment explores the droplet patterns of watercolors with the aim of elucidating the physics behind the process. The results thus far have been quite interesting and varied with respect to the kinds of fractals produced, the branches and tributaries, the rheological formations, and the residual sediment patterns, to name a few. Watercolor appears to be a great medium with which to explore the art of fluid dynamics, or perhaps the fluid dynamics of art, due to water's complex behavior, ultimately yielding some truly spectacular displays.

The dynamics of a splash, such as one produced by the impact of a droplet on a surface, has been extremely well studied, and the literature on the subject dates back to the 1908 work of Worthington [4,5]. The subject continues to be of interest in current times due to diverse applications in forensics, ink-jet printing and micro-fabrication processes that rely upon drop dispensing. Recent review papers [6][7][8] discuss the state of the art in the field up until the last decade. In this regard, the studies concerning the impact of droplets on dry surfaces are the most relevant to our own work. The experimental work by Rioboo et al.
[9] explored the impact of different liquids upon solid surfaces of varying surface roughness. Their investigations revealed six categories of splashes: deposition, prompt splash, coronal splash, receding break-up, partial rebound and complete rebound, which depend upon the properties of the liquid and the surface. In addition to the roughness of the surface, the viscosity, density and surface tension of the liquid droplet have been recognized as being significant, and the different regimes of droplet splashes are often expressed in terms of three dimensionless quantities, namely the Reynolds (Re), Weber (We) and Ohnesorge (Oh) numbers [8], which depend upon these aforementioned physical parameters. The viscosity of the droplet and the friction of the surface determine the initial stages of the splash, while the surface tension takes control of the final stages, when the drop thickness on the surface is diminished. The prominent regimes are distinguished by the parameter K = We √Re, whose critical value determines the onset of splashing.

There has also been a substantial amount of work on the splash patterns of non-Newtonian and particle-laden droplets (see for instance [10][11][12][13][14]). As is expected, non-Newtonian characteristics such as viscoelasticity, yield stress and shear thinning result in a marked variation in the observed post-impact behavior of the droplets. These have been categorized as (a) irreversible viscoplastic behavior, (b) perfect elastic recoil and (c) viscoelasticity [11]. In the case of non-Newtonian fluids, the Deborah number (De), the critical strain parameter (γ_c) and the Mach number (M) have been used to classify the various observed regimes. The splash dynamics of droplets with embedded particles is equally interesting and unique [13][14][15]. For droplets with a sufficiently low particle concentration, the parameter K has been employed to characterize the drop impact, with the viscosity µ replaced by an effective, concentration-dependent viscosity µ_e. In the case of high-concentration droplets, it is observed that a reinterpretation of the Weber number, based on particle characteristics, is needed to accurately capture the onset of splashing.

The present article presents our experimental results for droplets of watercolor impacting wetted canvas maintained at different temperatures, at and below room temperature. Our observations are contrasted with those reported in the literature and inform an unexplored aspect of the bigger problem, which could be of interest to scientists and artists alike. The outline of the paper is as follows. In Section 2, we discuss the experimental setup and procedure. Section 3 presents the results of the experiments, such as the radial growth curves of the droplet patterns and the effect of the control parameters upon the drop shape and size. Section 4 is focused on the fractal characteristics of the splash patterns. Finally, in Section 5, we perform a rigorous statistical study of the experiments to firmly establish causal relationships for the observed patterns.
Materials and Methods

This experiment was conducted utilizing three types of Winsor and Newton Artist Series watercolor paint, namely Permanent Rose, Prussian Blue, and Sepia. These pigments were chosen based upon their individually and significantly differing rheological deposition properties. Each color was available in tube form, whereby the pigment was squeezed from the tube and dissolved in clean, filtered water. Three grades of acid-free watercolor paper were used: Arches 140 lb rough, Arches 140 lb cold-press, and Canson 90 lb paper. The papers were cut into smaller pieces measuring roughly 5 in × 7 in, and liquid frisket was applied to the edges of each rectangular paper to create a thin latex border, in order to reduce and contain the water dispersion. Filtered water was then applied with a soft sea sponge to both sides of the paper, a technique known as wet-on-wet, and the paper (or substrate) was mounted flat on a backing board. Excess water was spilled off, and the paper was left to sit for a few minutes to allow the substrate to thoroughly absorb the water. The papers were then separated into three categories: unfrozen, frozen for 5 min, or frozen for 30 min.

A droplet of paint was released onto a substrate from a height of either 6 inches or 12 inches. For the first set of paintings, the droplet was released from a standard drinking straw by holding a thumb over the top, open end, then removing the thumb and letting gravity pull down the pigment. For the remaining sets of paintings, a medicine dropper (of outer diameter 4 mm and inner nozzle diameter 1 mm) was used to release the droplets. Photographs were taken with a cellular phone camera at roughly 1 min intervals for about 20 min, or until the paint spread reached its maximum dispersion. Each digital photograph was analyzed using the ImageJ software (National Institutes of Health, Bethesda, MD, USA).

The volume of each droplet released from the medicine dropper was, on average, about 0.04 mL, while the volume released from the straw was about 0.18 mL. The density, ρ, of the droplets of the different paints was measured by weighing 10 mL of the solutions, and the volume of each droplet was measured by counting the number of drops that made up 10 mL. The density of the different paints ranged between 0.99-1.02 g/cc. The viscosity of the paints, µ, was measured using a hand-held viscometer and is described in further detail later in this section. The surface tension of the watercolor paints, σ, could not be measured and was therefore assumed to be the same as that of water. The impact velocity of the droplet from the two different heights was computed using the elementary kinematic formula V_h = √(2gh), where V_h refers to the terminal impact velocity of a droplet released from a height h. We find that V_6in = 1.73 m/s and V_12in = 2.45 m/s. Using these parameters, we can estimate the values of the non-dimensional Weber and Reynolds numbers, which are given by

We = ρ V_h² D / σ,   Re = ρ V_h D / µ,

where D refers to the characteristic length in the problem, taken to be the approximate diameter of the droplet. In this study, the Weber number ranged between 150-323, while the Reynolds number varied from 2600-10,200. Comparing with the Re vs. We curve shown in ([17], Figure 1), we note that our experimental conditions put us in a very interesting part of the phase map, between the viscous-inertial and capillary-inertial spreading regimes.
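These estimates are straightforward to reproduce. The following C++ sketch evaluates V_h, We, Re and the splash parameter K = We √Re for the two release heights; the viscosity and droplet diameter used here are illustrative placeholder values (the measured viscosities appear in Table 1 below), with σ taken as that of water, as above.

#include <cmath>
#include <cstdio>

int main() {
    const double rho   = 1010.0;  // paint density, kg/m^3 (0.99-1.02 g/cc)
    const double sigma = 0.072;   // surface tension of water, N/m
    const double mu    = 1.0e-3;  // viscosity, Pa*s (illustrative value)
    const double D     = 3.0e-3;  // droplet diameter, m (illustrative value)
    const double g     = 9.81;    // gravitational acceleration, m/s^2

    for (double h : {6 * 0.0254, 12 * 0.0254}) {   // release heights in meters
        const double V  = std::sqrt(2.0 * g * h);  // impact speed V_h = sqrt(2gh)
        const double We = rho * V * V * D / sigma; // Weber number
        const double Re = rho * V * D / mu;        // Reynolds number
        const double K  = We * std::sqrt(Re);      // splash parameter
        std::printf("h=%.3f m: V=%.2f m/s  We=%.0f  Re=%.0f  K=%.2e\n",
                    h, V, We, Re, K);
    }
    return 0;
}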
Rheology of paint: The rheological properties of the materials used for painting are each important in their respective ways and interact differently with the other agents involved in the process of painting [16]. Overall, we need to pay particular attention to the following: (1) the support, which in these experiments is the watercolor paper; (2) the ground, the first layer on the support, i.e., the water applied to the paper; (3) the paint, one or more pigments, and sometimes a brightener, transparent or "white" crystals that lighten the value and increase the chroma of the dried paint, dispersed in a vehicle or medium. The paint consists of a binder, traditionally and still commonly gum arabic, with glycerin used for softening the dried gum and helping it redissolve. The paint also contains a humectant, made of syrup, honey or corn syrup, to aid in moisture retention; an extender or filler, such as dextrin, to thicken the paint; additives, to prevent clumping of the raw pigment after manufacture and to speed up the milling of the pigment; fungicides or preservatives, to suppress the growth of mold or bacteria; and finally water, which dissolves all the ingredients, transports them onto the paper and evaporates quickly [19]. The pigments within the paints used for this experiment are as follows: Permanent Rose: Quinacridone Red (PV19); Prussian Blue: Alkali Ferriferrocyanide (PB27); and Sepia: Carbon Black and Iron Oxide (PBk6, PR101) [18].

The viscosity of the paints was measured using a hand-held Haake viscometer (Thermo Scientific, Waltham, MA, USA). Experiments were conducted using approximately 5 mL of pigment dissolved in 30 mL of water. Accordingly, the materials were scaled up to 20 mL of pigment dissolved in 120 mL of water for viscosity measurement purposes. The viscometer was first calibrated to zero dPa·s for clean, filtered water, and then the viscosity of each paint type was measured at 15 second intervals for 10 min. An average value for each paint was calculated, along with one standard deviation. Table 1 lists the average viscosity of the various paints used.

The non-Newtonian characteristics of watercolor paint are not presented in this paper. However, there is reason to believe that the paints could be non-Newtonian, based on their various components mentioned above. Such a multi-component suspension has the tendency to display properties such as yield, thixotropy or dilatancy, which are fundamental non-Newtonian characteristics that lend important properties for painting purposes ([19], p. 52). It has long been thought that the low viscosity of the carrier fluid (water) would render the watercolor system Newtonian, but recent work has shown that components such as gum arabic possess non-Newtonian properties when in solution [20][21][22]. The effect of freezing temperatures upon the paint and the surface roughness could also potentially bring out non-Newtonian properties. These effects are, however, not rigorously established at this stage and need to be investigated in greater detail in the future.
Heating Curves for the Canvas: The canvas was wetted on both sides according to the procedure described earlier and placed in a freezer for a duration of 5 or 30 min, corresponding to the two cases of frozen canvas. The freezing times were chosen under the assumption that freezing longer would keep the surface solid for a longer period. The time interval between the removal of the canvas from the freezer and the start of the experiment was about 10-15 s; therefore, no significant melting of the canvas would have occurred during this time. To verify how the temperature of the canvas changed with time, we measured the average surface temperature of the canvas in all three cases using a hand-held infrared thermometer (General IRT206 Infrared Thermometer, Taiwan). Figure 2 shows the "heating curves" for all three cases, which reveal the classical Newtonian profile. After approximately 10 min, the temperatures of all three canvases reach a common value and very slowly converge to room temperature. The melting points of the two frozen canvases are reached at approximately the 1 min and 4 min marks, indicated by the dashed lines in Figure 2. Prolonged residence times in the frozen state result in reduced viscous resistance for the paint to disperse on the canvas.

Porosity of the Canvas: The porosity of the canvas was analyzed using the standard water evaporation method [23]. The pore volume fraction is given by φ = V_p/V_t, where V_t is the total volume and V_p is the volume of the pores,

V_p = (weight of wetted canvas − weight of dry canvas) / (density of water).

Our estimations of the porosity of the three different canvas types reveal that the void fractions for the 90 lb, 140 lb rough and 140 lb cold-pressed papers are 0.489, 0.319 and 0.29, respectively, suggesting that 30% to nearly 50% of the canvas is empty space. Figure 3a-c shows the canvas under a microscope, with the last panel, Figure 3d, showing regular uncoated printer paper under the same magnification for comparison. All the surfaces have been stained with watercolor paint to highlight surface features. The images reveal the substantial coarseness of the surfaces, especially in cases (a)-(c), when compared to plain paper, and the porous nature of the canvas, even at such relatively low levels of magnification.
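The porosity estimate is a one-line computation once the wet and dry weights are known; a minimal C++ sketch with hypothetical weights (not our measured values) is:

#include <cstdio>

int main() {
    // Water-evaporation porosity estimate: phi = V_p / V_t, with
    // V_p = (wet weight - dry weight) / density of water.
    // The weights and sample volume below are hypothetical.
    const double wetWeight   = 12.5;  // g
    const double dryWeight   = 8.0;   // g
    const double rhoWater    = 1.0;   // g/cm^3
    const double totalVolume = 10.0;  // cm^3 (canvas sample)

    const double poreVolume = (wetWeight - dryWeight) / rhoWater;  // cm^3
    const double phi = poreVolume / totalVolume;                   // pore fraction
    std::printf("pore volume = %.2f cm^3, porosity = %.3f\n", poreVolume, phi);
    return 0;
}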
Results

The effective diameter of each splash was measured in centimeters, approximated by the largest width of the splash. The effective radius of each droplet was measured repeatedly, over time intervals of the order of a minute, from the time of impact to the equilibrium state when the paint eventually dried on the paper (see Figure 4). Figure 5 depicts the evolution of the effective radius of the splash as a function of time. In particular, the figure shows a representative image of the impact of the height of the droplet, the temperature of the canvas, the surface roughness of the canvas and the viscosity of the paints upon this saturating curve. One can clearly see, from the figures, that some of these parameters have a stronger impact than others. The effective radius of the paint is most affected by the paint type and the nature of the canvas (Figure 5c,d), which can be attributed to the varying frictional forces caused by the different pigment concentrations of the paints and the surface roughness of the canvas, respectively. The temperature of the canvas (Figure 5b) also shows interesting trends in the initial phase, when the surface characteristics are different. However, the true impact of all the parameters can be procured only by more rigorous statistical means. Statistical analyses were conducted for the 46 different cases that were studied, and the correlations between the size of the splash and the various control parameters are best determined using these tools. The statistical correlations are discussed below in Section 5; in the rest of this section, we discuss some other specific quantitative features, drawing upon similar work in the literature on slightly different systems. As also noted in [24], we only observe two broad categories of splashing, namely deposition and prompt splashing, owing to the rapid absorption of much of the paint into the canvas upon contact. Within these categories, we identified distinct patterns in our experiments, which are classified into four sub-categories: (i) symmetric or circular patterns; (ii) splash patterns with a visible inner stamp; (iii) splashes with strong radial fingering patterns; and (iv) satellite droplets. In Table 2 below, we summarize the qualitative impact of the experimental parameters upon the appearance and strength of these splash patterns, which are also shown in Figure 6. This is further investigated in the statistical analysis section. In the table, the symbol ↑ indicates that an increase in the control parameter increases the strength/magnitude/existence of that particular pattern, while the symbol ↓ indicates the reverse. (Table 2 columns: Property; Symmetric/Circular; Visible Inner Stamp; Radial Fingering; Satellites. Each row records the effect of an increase in one control parameter.)

Rioboo et al.
[25] break down the deposition of paint on a dry surface into two phases, kinetic and actual, with the former displaying a radial growth r ∼ √t, where r is the effective radius of the splash and t is time. Such a profile is also observed in other studies concerning droplet splashes on liquid surfaces [26]. However, the value of the exponent is seen to vary between 0.2-0.5 in some other studies [27,28], which is attributed to the interaction of adjacent splashes. In our experiments, we also fit a power-law function, r ∼ a t^b, to verify the radial growth rate of the splashes (see Figure 7). On average, over all the experiments performed, the values of the fit parameters are ā = 4.39 and b̄ = 0.26, with an average fit correlation R² = 0.86. However, if the data are analyzed in terms of the freezing period of the canvas, we observe distinctly different values of b: (i) when the canvas is not frozen, b = 0.091; (ii) when the canvas is frozen for 5 min, b = 0.194; and (iii) when the canvas is frozen for 30 min, b = 0.471. In the current study, we therefore hypothesize the existence of three phases: (i) initial absorption, (ii) kinetic and (iii) actual. The initial absorption phase, which does not exist in the previous experiments, could potentially leave a relatively smaller volume of the droplet to spread. This would suggest a short kinetic phase, dominated by capillary forces, resulting in a slower growth rate. The relatively low value of our own exponent can be attributed to such an initial absorption phase. (Figure 7 caption: The evolution of the effective radius (cm) of the splash with time (sec). A power-law fit to the data is also shown, with an exponent value of 0.13, which is much smaller than those seen in other kinds of splashes.)

Following the work of Marmanis and Thoroddsen [24], we estimate the "number of fingers" as a function of height for the various cases that show clear fingering patterns. As in [24], "...everything that resembles a finger, no matter how short..." is regarded as a finger. The exact number of fingers is, however, difficult to determine, and the numbers reported must be understood to be approximate. Figure 8 depicts some sample cases of our analysis, showing the count for the two different heights considered in this study (h = 6 in and h = 12 in) and two different paints (Permanent Rose and Prussian Blue). Clearly, in both cases, the height, or impact velocity, is directly related to the number of fingers (also noted in [24]). In addition, the viscosity of the paint is inversely related to the number of fingers, as seen by comparing the two graphs in Figure 8a,b. Other potential factors are more difficult to identify directly from the count and are dealt with in Section 5.

Fractal Dimension

The fractal analysis of drip paintings has become a research area of widespread interest in the past two decades [2,3,29]. The "organic" paintings of Jackson Pollock, in particular, have been identified as having a unique fractal signature. As a result, Taylor et al. [29] have claimed that the fractal dimension can be used as a means to identify artists and expose fakes, a claim which is contested by some [30]. This controversial issue aside, however, the fractal dimensional analysis of art is an interesting issue in itself. Furthermore, fractal patterns have also found unique and interesting applications, such as in the field of environmental psychology for therapeutic purposes (see [31] and references therein).
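Before turning to the box-counting method, we note that the power-law fits r ∼ a t^b reported above amount to ordinary least squares on log-log data. The following minimal C++ sketch illustrates the procedure on synthetic data generated with the average fitted values ā ≈ 4.4 and b̄ ≈ 0.26; it is illustrative only, not the analysis code used for the experiments.

#include <cmath>
#include <cstdio>
#include <vector>

// Fit r = a * t^b by ordinary least squares on (log t, log r).
void fitPowerLaw(const std::vector<double>& t, const std::vector<double>& r,
                 double& a, double& b) {
    const int n = static_cast<int>(t.size());
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; ++i) {
        const double x = std::log(t[i]), y = std::log(r[i]);
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx);   // slope = exponent
    a = std::exp((sy - b * sx) / n);                 // intercept = prefactor
}

int main() {
    // Synthetic radius-versus-time data following r = 4.4 * t^0.26.
    std::vector<double> t, r;
    for (double ti = 1; ti <= 1200; ti *= 2) {
        t.push_back(ti);
        r.push_back(4.4 * std::pow(ti, 0.26));
    }
    double a, b;
    fitPowerLaw(t, r, a, b);
    std::printf("a = %.2f, b = %.2f\n", a, b);   // recovers ~4.40, ~0.26
    return 0;
}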
Following the work of Marmanis and Thoroddsen [24], we estimate the "number of fingers" as a function of height for the cases that show clear fingering patterns. As in [24], "...everything that resembles a finger, no matter how short..." is regarded as a finger. The exact number of fingers, however, is difficult to determine, and the reported counts should be regarded as approximate. Figure 8 depicts some sample cases of our analysis, showing the count for the two different heights considered in this study (h = 6 in and h = 12 in) and two different paints (Permanent Rose and Prussian Blue). Clearly, in both cases the height, or equivalently the impact velocity, is directly related to the number of fingers (as also noted in [24]). In addition, comparing Figure 8a,b shows that the viscosity of the paint is inversely related to the number of fingers. Other potential factors are more difficult to identify directly from the count and are dealt with in the following Section 5.

Fractal Dimension
The fractal analysis of drip paintings has become a research area of widespread interest in the past two decades [2,3,29]. The "organic" paintings of Jackson Pollock, in particular, have been identified as having a unique fractal signature. As a result, Taylor et al. [29] have claimed that the fractal dimension can be used as a means to identify artists and expose fakes, a claim which is contested by some [30]. This controversial issue aside, however, the fractal dimensional analysis of art is an interesting subject in itself. Furthermore, fractal patterns have also found unique and interesting applications, such as in the field of environmental psychology for therapeutic purposes (see [31] and references therein).

The method of box counting was implemented to approximate the fractal dimension of each painting. To estimate a two-dimensional fractal, a grid of boxes, each with a horizontal and vertical dimension of 2^n, n = 0, 1, 2, ..., m, is superimposed over an image, and the total number of boxes, N_m, needed to cover the image is counted. At any given value of m, the fractal dimension, or Hausdorff dimension, is then approximated by D_m = log(N_m)/log(2^m). This procedure is repeated as m → ∞, whereupon D_m → D, the dimension of the figure. A Matlab-based code was used to perform this computation. In practice, a limit must be imposed on m to prevent N_m from becoming negligible while 2^m grows without bound, which would drive the ratio toward zero and thus yield a meaningless logarithmic ratio. Test images with known fractal dimension were used to determine the accuracy of the Matlab code. A numerical analysis of fractal dimension versus number of boxes graphically demonstrates an ideal box-count number around 200. At box-count numbers greater than about 400, the fractal dimension diverges from the target value because the number of boxes covering the image, N_m, becomes negligible in comparison to the total number of boxes, 2^m, just as an object under a microscope might become blurry even as the lens gets closer and closer to the slide. The photographs were saved in 'JPG' format and loaded into Matlab, which converted each color image into a binary data array to produce a black and white figure (see Figure 9) that could then be analyzed with the box-count method. Benchmark tests for the fractal dimension were performed (see Table 3) on several well-known patterns such as the Sierpinski triangle, the Koch snowflake and the Golden dragon [32]. Convergence studies were also conducted for box counts ranging from 25 to 500. The maximum error in our fractal dimension computation was about 0.09% when compared with the known dimensions. The time evolution of the fractal dimension was also computed and shows a similar overall profile to the effective radius. Figure 10 shows a sample curve corresponding to the images in Figure 4. However, this curve does not display a power-law correlation. The factors affecting the fractal dimension are discussed in the following section.
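A compact Python analogue of the Matlab box-counting routine described above (our own sketch, not the original code) is:

    import numpy as np

    def box_dimension(img, sizes=(2, 4, 8, 16, 32, 64)):
        """Box-counting (Hausdorff) dimension estimate of a 2-D boolean image."""
        counts = []
        for s in sizes:
            h = (img.shape[0] // s) * s            # trim so the grid tiles exactly
            w = (img.shape[1] // s) * s
            tiles = img[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(tiles.any(axis=(1, 3)).sum())   # boxes touching the figure
        # D is the slope of log N(s) against log(1/s)
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope

    # sanity check: a filled square should give a dimension close to 2
    square = np.zeros((256, 256), dtype=bool)
    square[64:192, 64:192] = True
    print(box_dimension(square))

Restricting the range of box sizes in this sketch plays the same role as the limit on m discussed above: once the boxes shrink toward the pixel scale, the count saturates and the estimate degrades.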
Statistical Analysis
We first modeled the relationship between the scaled radius and the predictor variables: temperature (1 = unfrozen, 2 = frozen 5 min, 3 = frozen 30 min), height (1 = 6" and 2 = 12"), time, viscosity (1 = Permanent Rose, 2 = Prussian Blue and 3 = Sepia), type of paper (1 = Canson 90 lb, 2 = Arches 140 lb rough, 3 = Arches 140 lb cold press) and volume (1 = medicine dropper and 2 = straw), where time is the only continuous variable and all the others are treated as categorical variables. The fitted model results are given in Table A1 in the appendix. A further ANOVA showed that all of the predictors are significant predictors of the scaled radius. Based on Table A1, we can see that freezing for 30 min significantly decreases the scaled radius compared to the unfrozen case (p-value ≈ 0), but freezing for 5 min is not significantly different from the unfrozen case. A height of 12" significantly increases the scaled radius compared to a height of 6", and the scaled radius also increases significantly with time. The normality and constant-variance assumptions of this multiple regression were verified through residual plot checks.

We then modeled how the same set of covariates affects certain specific patterns (termed hole pattern 1 and hole pattern 2). We created a binary variable for hole pattern 1, which corresponds to a prominent initial droplet stamp where paint has landed and dispersed, such as in Figure 6 (B2), and fit a logistic regression model using the covariates to explain this binary response. The fitted model is given in Table A2 in the appendix. Height, time, paper type and volume are significant predictors of the binary response hole pattern 1. Specifically, a height of 12" increases the odds of hole pattern 1; longer time increases the odds of hole pattern 1; paper type 2 has higher odds of hole pattern 1 than paper type 1; and volume 2 has lower odds of hole pattern 1 than volume 1.

We fit a similar logistic model with the response variable hole pattern 2 (a rheological settling paint pattern within the boundary of the initial droplet, with no interior stamp; for example, Figure 6 (A2)). The fitted model is given in Table A3 in the appendix. The result indicates that all predictors are significant except for height. Freezing for 30 min gives higher odds of hole pattern 2 than the unfrozen case, whereas freezing for 5 min shows no significant difference from the unfrozen case. Longer time increases the odds of hole pattern 2. Prussian Blue and Sepia both have significantly higher odds of hole pattern 2 than Permanent Rose, with Sepia having the highest odds of the three viscosity levels. Canson 90 lb has higher odds of hole pattern 2 than Arches 140 lb rough and Arches 140 lb cold press. Volume 2 (straw) has higher odds of hole pattern 2 than volume 1 (medicine dropper).

We then fit the fractal dimension against all the predictors. For each experiment, we recorded the last fractal dimension value, giving 46 observed values in total. We fit a multiple regression model, found two outliers in the residual plot and normal Q-Q plot, removed them, and refit the model. The final results are given in Table A4 in the appendix. They indicate that freezing for 30 min significantly reduces the fractal dimension value compared to the unfrozen case, and that Prussian Blue and Sepia significantly reduce the fractal dimension value compared to Permanent Rose. The residual-against-fitted plot (see Figure 11a) and the normal Q-Q plot (see Figure 11b) indicate that the model assumptions are met and that the inference obtained from this model is valid.

Conclusions
In summary, our experimental investigation of drop patterns of watercolors on canvas held at different temperatures reveals patterns which depend upon the material properties of the paint and canvas, and also on the impact velocity and the temperature and wetting properties of the canvas. The radial growth pattern of the splash from the time of impact to equilibrium, achieved upon evaporation and settling, is qualitatively similar to that seen in previous studies, but the growth exponent can vary between 0.1 and 0.47, depending upon the level of freezing of the canvas. The range of We and Re puts this study in a well-studied part of the experimental phase space, which has been examined before; however, there have been no previous studies on the effects of temperature and wetting on canvas in quite the same physical context.
Figure 12 gives a qualitative idea of paint absorption, which appears to be greatest at the impact point and is caused by the penetration of paint into the depths of the wet canvas during first impact (referred to here as "initial absorption"). Details of the absorption process were not pursued in this study, which has focused on the long-term pattern evolution. However, we do recognize this to be an important aspect of the fluid dynamics involved here, which will be investigated in our follow-up work. We also recognize that the absorption of paint into the canvas might continue to occur post impact, while the surface is in a liquid state. Therefore, in principle, the overall dynamics could be characterized by the competition of not only We and Re, but perhaps also by an additional group such as the Blake number, B = UρD/(µ(1 − ε)), which characterizes flows through porous media. Here, U is the flow speed, D is the characteristic length, ρ is the density, µ is the dynamic viscosity and ε is the void fraction. In addition, non-Newtonian effects arising from the pigment concentration could also play an important role, especially during the impact phase when shear stresses on the droplet are at a maximum, making this a truly complex problem. Based on the material and flow parameters of this study, the overall values of B range from 394 to 2155 and show sensitivity to the canvas, the paint type and the release height of the droplet. To qualitatively understand the effect of canvas porosity, we took a few images of a cross-sectional slice of a droplet splash stain about 5 min after the experiment, before the paint had had a chance to evaporate. The image was obtained with a confocal microscope set at a magnification of about 300×. Figure 12 reveals some penetration, inferred from the pinkish hue of the paint at the center, i.e., the impact point, with not much absorption (at least not at this scale) elsewhere over the observed time. We therefore believe that, while absorption might play some role at the early stages of the spread, and paint penetration might continue to play a small role at later times as well, the phenomenon still maintains the same overall properties as in the case of a non-porous surface. In the case of 30 min canvas freezing, the surface is frozen and stays as such for several minutes past impact, reducing the absorption time; in this case, viscous and capillary regimes could dominate the absorption phase. Absorption dynamics appears to occur at two extremes of time scale: (a) the very short impact time scale, where the maximum penetration is likely to occur, and (b) the very long time scales, where slow diffusion into the canvas occurs in the liquid, post-molten state of the canvas. Both of these phases need further investigation.

We identify specific properties/patterns emerging in our experiments, and a rigorous statistical analysis validates the qualitative observations discussed in Table 2. A box-count analysis of the images reveals that the splashes display fractal structure. The fractal dimensions of the observed patterns are analyzed and contrasted with each other, revealing significant correlations to the environmental and material factors in this study. The fingering patterns observed in some of the experiments are seen to correlate strongly with the impact speed of the droplets and also with the freezing temperature and the wetting of the canvas.
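For a sense of scale, the Blake number defined above can be evaluated with representative values; every number below is an illustrative assumption of ours, not a measured parameter of this study:

    rho = 1100.0   # paint density, kg/m^3 (assumed)
    U = 1.7        # impact speed, m/s (roughly a 6 in free fall)
    D = 0.004      # droplet diameter, m (assumed)
    mu = 0.01      # dynamic viscosity, Pa.s (assumed)
    eps = 0.3      # canvas void fraction (assumed)

    B = U * rho * D / (mu * (1.0 - eps))
    print(f"B = {B:.0f}")   # about 1070, inside the 394-2155 range quoted above

Plausible inputs of this kind land comfortably within the reported range, illustrating how sensitive B is to the paint and canvas properties.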
Overall, a thorough analysis of the physics of watercolor images on canvas can be extremely beneficial to watercolor artists, helping them render more controlled artistic works. Paint droplets on a substrate frozen for 30 min often produce a branch structure similar to microscopic blood vessels after thawing for two to five minutes. Refreezing a painting at this moment may then permit the pigment to settle into the paper in this formation and allow for the emergence of new paint patterns, not only for creative exploration but also for further scientific analysis. The beauty of watercolor painting lies in the complex flow of water along with the physical processes and the chemical reactions that occur within a single pigment and between different paints. The nature of water allows pigment to open up and become translucent. The technique of wet-on-wet paint-canvas interaction is certainly not the only way to approach watercolor painting; wet-on-dry and dry-on-dry are equally valid methods, but wet paper makes for more vibrant colors. By freezing the paper, however, new and different patterns, e.g., fingering, branching, and confined sedimentation, become possible, which would otherwise quickly disperse and be a fleeting moment in the lifespan of the painting process.

The current study is only the first step in our understanding of the physics of watercolor painting. Several interesting key questions remain, including the effect of the brush (shear stresses) on the canvas and the surface tension of the paints. A more rigorous analysis of impact velocity is also desired. In addition, while a very complicated task, a theoretical/numerical analysis of the problem is also necessary for us to really appreciate the underlying physics. There have been some attempts at providing analytical explanations for the drop impact and spreading of liquids on solid surfaces [11,33,34]. These previous studies have incorporated Newtonian and non-Newtonian aspects of the liquids and have also considered the effect of drying of the liquid. In the future, with the inclusion of absorption, these models could be extended toward application to our problem. Several of these ideas are either currently being pursued or will be taken up in our future, ongoing work on this subject.

Figure 2. Temperature of the canvas as a function of time for all three cases. The dashed lines point to the times when the canvas surface appears to be in a molten liquid state.
Figure 3. Closeup view of the three canvases used in this study showing their surface features and porous nature. The pictures were taken with a Leica CME microscope (Meyer Instruments, Houston, TX, USA) at a magnification of 100×. Images (a)-(c) correspond to the 90 lb, 140 lb and 140 lb cold pressed canvases, respectively. Image (d) is of stained, uncoated printing paper at the same magnification.
Figure 4. A time sequence of the splash of Permanent Rose on canvas.
Figure 5. Evolution of the effective radius of the splash as a function of time.
Figure 6. Examples of the four distinct splash patterns seen in our experiments. A single splash can contain one or more of these patterns. The images are organized in pairs, with the first image (X1, where X = A, B, C, D) taken immediately upon impact and the final image (X2) taken at the final time, at equilibrium.
Figure 7. The graph shows the evolution of the effective radius of the splash with time. A power law fit to the data is also shown, with an exponent value of 0.13, which is much smaller than those seen in other kinds of splashes.
Figure 8. Number of fingers as a function of height of release, or impact velocity. Panel (a) shows the results for Permanent Rose and panel (b) shows the count for Prussian Blue. The x-axis represents the different experimental cases where fingering was observed.
Figure 9. A black and white image of a splash.
Figure 10. Evolution of the fractal dimension of a splash.
Figure 11. (a) Residual against fitted value plot for the multiple regression model of fractal dimension. (b) Normal Q-Q plot for the same model.
Figure 12. Cross-sectional image of canvas showing absorption and penetration of paint through the canvas around the droplet impact point.
Table 1. Average viscosity measurements of the watercolor paints.
Table 2. Repeated patterns observed in our experiments, analyzed for their qualitative dependence upon canvas roughness, paint viscosity, height (or impact speed) and temperature of canvas.
Table 3. Fractal dimension of some sample cases.
Table A2. Logistic model results for hole pattern 1.
Table A3. Logistic model results for hole pattern 2.
Table A4. Multiple regression model results for fractal dimension.
2016-04-15T09:12:14.267Z
2016-04-14T00:00:00.000
{ "year": 2016, "sha1": "2315d34b859cb2697b6404e7ded0200b13b6f043", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2311-5521/1/2/12/pdf?version=1460637268", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "2315d34b859cb2697b6404e7ded0200b13b6f043", "s2fieldsofstudy": [ "Art", "Physics" ], "extfieldsofstudy": [] }
253436682
pes2o/s2orc
v3-fos-license
Early Changes in Androgen Levels in Individuals with Spinal Cord Injury: A Longitudinal SwiSCI Study
We aimed to explore longitudinal changes in androgen levels in individuals with spinal cord injury (SCI) during the initial inpatient rehabilitation stay and to identify clinical/injury characteristics associated with hormone levels. Linear regression analysis was applied to explore the association between personal/injury characteristics and androgen hormones (total testosterone, free testosterone, sex hormone-binding globulin (SHBG), dehydroepiandrosterone (DHEA), and dehydroepiandrosterone sulfate (DHEA-S)) at admission to rehabilitation. Longitudinal changes in androgen levels were studied using linear mixed models. Analyses were stratified by sex and by injury type. We included 70 men and 16 women with SCI. We observed a non-linear association between age, time since injury, and androgens at baseline. At admission to initial rehabilitation, mature serum SHBG (the full-length protein form, which lacks the N-terminal signal peptide) was higher, while DHEA and DHEA-S were lower, among opioid users vs. non-users. Serum levels of total testosterone and DHEA-S increased over the rehabilitation period (β 3.96 (95%CI 1.37, 6.56), p = 0.003 and β 1.77 (95%CI 0.73, 2.81), p = 0.01, respectively). We observed no significant changes in the other androgens. Restricting our analysis to men with traumatic injury did not materially change our findings. During the first inpatient rehabilitation, over a median follow-up of 5.6 months, we observed an increase in total testosterone and DHEA-S in men with SCI. Future studies need to explore whether these hormonal changes influence neurological and functional recovery, as well as metabolic parameters, during the initial rehabilitation stay.

Introduction
A decline in androgen hormones and abnormalities of the hypothalamic-pituitary-gonadal (HPG) axis have been repeatedly reported in individuals with chronic spinal cord injury (SCI), with more than 40% of men having testosterone levels below normal age-specific cut-offs [1,2]. Deficiency of androgen hormones primarily influences sexual function and fertility [3]. Low testosterone may, in addition, accelerate aging processes in individuals with SCI by promoting the development of sarcopenic obesity, metabolic disorders and a hyperinflammatory state [4,5]. Dehydroepiandrosterone (DHEA, an endogenous steroid hormone precursor) and its sulphated ester (DHEA-S) have been linked with immune function and inflammation, bone metabolism and physical strength in frailty, as well as risk of diabetes [6-8]. Sex hormone-binding globulin (SHBG), whose primary role is to bind testosterone, was inversely associated with insulin resistance, inflammation, diabetes, and metabolic syndrome [9-13]. In addition, in animal models of SCI, endogenous and exogenous estradiol were linked with improved recovery post-injury in both sexes [14,15], while androgens (testosterone and DHEA), on the other hand, may exert sex-specific effects in SCI (e.g., high testosterone in women and low testosterone in men have mirroring effects on metabolism) [15,16]. Although testosterone decline occurs earlier in life in men with SCI as compared to able-bodied individuals (ABI) [17,18], the evidence on the association between injury duration and testosterone levels is contradictory [18,19].
Comprehensive studies in the subacute phase of the injury are scarce, making it difficult to understand the trajectory of early changes in androgen levels following the injury, which is of utmost importance for understanding the role of androgens in modifying metabolic changes, rehabilitation outcomes and functioning post-injury [20,21]. In addition, current studies have predominantly focused on testosterone and the HPG axis, while studies on DHEA and DHEA-S (the latter being the most abundant circulating steroid hormone, with important biological functions) remain scarce. Further, despite the important physiological role of androgens in females, studies in women are uncommon [22,23]. Women were often purposely excluded from analyses, further widening the literature gap [24]. Finally, previous studies often did not account for SCI characteristics, body morphology, physical activity, underlying comorbidities and medication in their analysis, all of which have been shown to influence hormone levels in SCI [1,20,25,26]. Thus, in the current study, we aimed to: (i) explore the longitudinal changes in androgen levels in men and women with SCI during the initial rehabilitation stay and (ii) identify clinical characteristics associated with hormone levels, using a multicenter SCI cohort in Switzerland.

Study Setting
The inception cohort of the Swiss Spinal Cord Injury study (SwiSCI) is a prospective multicenter study that recruits participants across four major rehabilitation centers in Switzerland, namely the Swiss Paraplegic Center (Nottwil), Clinique Romande de Readaptation (Sion), Balgrist Spine Center (Zurich), and Basel Rehabilitation Clinic (Basel) [27]. Study participants were involved in an interdisciplinary rehabilitation approach tailored to each person's specific needs and aimed at optimizing their functioning. The SwiSCI inception cohort collects numerous demographic, biopsychosocial and clinical parameters at five time points after the date of SCI diagnosis: at 28 days (range 16-40 days, T1), 84 days (70-98 days, T2), 168 days (150-186 days, T3), at discharge (10-0 days before discharge, T4) and one year after diagnosis. In addition, the SwiSCI biobank provides a platform for conducting research within the inception cohort of SwiSCI by cryopreserving serum, plasma, peripheral blood mononuclear cells (PBMC), RNA, DNA and urine for research purposes. Biobank sampling (at T1 and T4) started on 27 June 2016 in the largest center (Nottwil), followed by two other centers, Basel and Sion, on 23 August 2018 and 15 January 2019, respectively. Detailed information on the study design and collected data can be found elsewhere [27].

Study Population, Inclusion and Exclusion Criteria
The SwiSCI study enrolled individuals aged over 16 years with traumatic or non-traumatic SCI receiving their first specialized rehabilitation in Switzerland. Individuals with injuries attributable to a congenital condition, a neurodegenerative disorder, or Guillain-Barré syndrome, or who had a new SCI in the context of palliative care, were excluded from the study. All SwiSCI study participants who provided serum samples at both the beginning and the end of rehabilitation between 27 June 2016 and 20 January 2021 were eligible for inclusion. In addition to the SwiSCI exclusion criteria, we excluded individuals with congestive heart failure or inflammatory bowel disease and users of sex hormone therapy.
Study Measures
Androgen Hormones
The SwiSCI study participants had blood drawn between 7:00 a.m. and 2:00 p.m. from the antecubital vein for serum processing; samples were spun at 1800× g, separated, and stored at −80 °C until batch processing for the subsequent quantification of androgens (which were not routinely measured in the SwiSCI study). All hormones were measured using an enzyme-linked immunosorbent assay (ELISA). Total testosterone, free testosterone, and SHBG were measured using ELISA kits (Abcam, Lucerna-Chem AG, Luzern, Switzerland; cat. no. ab174569, ab178663 and ab260070, respectively). DHEA and DHEA-S were measured using ELISA kits (Abnova, Lucerna-Chem AG, Luzern, Switzerland; cat. no. KA0315 and KA0920, respectively). All plates were scanned on a Beckman Coulter, Inc. (Brea, CA, USA) multimode analyzer, and the final levels of the various sex hormones were determined on the MyAssays Ltd. online platform [28] using a four-parameter logistic fit according to the manufacturer's instructions. Details are provided in Supplementary Table S1.

Clinical and SCI Characteristics
Demographic characteristics such as age at baseline and sex, information on comorbidities and medication use, duration of injury and SCI characteristics were derived from the patients' medical records. The level of injury was classified as tetraplegia (levels C2-C8) or paraplegia (levels T1-S5), and the completeness of injury as motor complete (AIS A and B) or incomplete (AIS C and D), based on the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) [29]. The "International SCI Basic Data Sets" suggested by the International Spinal Cord Injury Society were used to collect information on sexual dysfunction [30]. Waist circumference (WC) was measured using a pliable tape measure and expressed in cm. Body mass index (BMI) was computed employing the standard formula [weight in kilograms/(height in meters)²].

Power Calculation
Three sources helped guide the estimation of the sample size for the study: (1) previous similar research, (2) general statistical principles, and (3) the G*Power (ver. 3.1.9.7; Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany; http://www.gpower.hhu.de/, accessed on 4 October 2022) power analysis program. Sample sizes in similar research ranged from 20 to 92 SCI participants [20,21,31]. The general consensus on statistical principles suggests that multiple regression analyses should have 10 participants per independent variable (6 variables in our case). The G*Power program estimated that a minimum of 57 subjects was needed to achieve 80% power with an anticipated large Cohen's f-squared effect size of 0.35 for multiple regression.

Statistical Analyses
We summarized our data using medians and interquartile ranges (IQR), or counts with percentages, as appropriate. The Wilcoxon signed-rank test (continuous data) or Chi-squared test (nominal/dichotomous data) was used to determine differences in demographic and clinical profiles between men and women. At baseline, we used linear regression analysis to determine the association between the independent variables (injury characteristics: etiology, level and completeness; opioid use; corticosteroid use) and the androgens (dependent variables). We fit restricted cubic splines to describe the trend between age and androgen hormones, and between time since injury and androgens.
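As a rough illustration of this last step, the following minimal Python sketch fits a spline trend of total testosterone against age using statsmodels with patsy's natural (restricted) cubic spline basis, cr(). The data are synthetic and the variable names are ours, not SwiSCI's:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    df = pd.DataFrame({"age": rng.uniform(20, 75, 70)})          # 70 hypothetical men
    df["total_testosterone"] = (18 - 0.1 * df["age"]
                                + rng.normal(0, 3, len(df)))     # toy outcome, nmol/L

    # natural (restricted) cubic spline in age with 4 degrees of freedom
    fit = smf.ols("total_testosterone ~ cr(age, df=4)", data=df).fit()
    print(fit.summary())

The fitted spline basis allows the age trend to bend between knots while remaining linear in the tails, which is the defining property of a restricted cubic spline.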
We applied both the paired t-test and a multilevel mixed model with a random intercept, estimated by residual (restricted) maximum likelihood, to determine the longitudinal changes in androgens across the period of inpatient rehabilitation. The model was adjusted for age, BMI, level of injury, completeness of injury, time since injury, opioid use, and corticosteroid use, as in [32]. All analyses were stratified according to sex and were performed using Stata 16.1 (StataCorp LLC, College Station, TX, USA) for Windows. All computations were done using two-tailed tests, and a p-value of <0.05 was considered statistically significant.

Baseline Characteristics
A total of 86 individuals with SCI, 70 men (81%) and 16 women (19%), were included in the study (Figure 1); the median age of the study population was 51 years (IQR 36-64). The majority of study participants (n = 61, 70.9%) had traumatic SCI, 33 (38.4%) individuals had a motor complete injury, and the median duration of injury at the moment of admission to first inpatient rehabilitation was 15.5 days (IQR 10-27). In Table 1, we present the most important clinical characteristics of the study participants at admission to rehabilitation, stratified by sex. Besides statistically significant differences in the causes of traumatic SCI and in body weight, personal and clinical characteristics did not differ between the sexes. In men, median total and free testosterone levels were 12.5 nmol/L (IQR 7.9-17.7) and 27.5 pmol/L (IQR 16.9-36.6), respectively. Total and free testosterone levels were significantly lower in women (1.9 nmol/L (IQR 1.4-2.5) and 2.9 pmol/L (IQR 2.3-3.7)). No differences between the sexes were observed in the other androgens. The median DHEA level was 20.2 nmol/L (IQR 12.8-40.2) in men and 18.9 nmol/L (IQR 15.7-35.0) in women, while median DHEA-S was 3.6 µmol/L (IQR 1.6-6.9) and 3.1 µmol/L (IQR 1.5-4.6) in men and women, respectively. SHBG levels were 2558 pg/mL (IQR 2053-2777) and 2890 pg/mL (IQR 2020.5-3947) in men and women, respectively. We provide details of the medical conditions associated with non-traumatic SCI in Table S2. Forty-eight men (88.89%) expressed willingness to respond to questions related to sexual function. Orgasmic function, psychogenic erection, reflex erection, and ejaculation were either absent or reduced among the majority of men who responded to the questionnaire (Supplementary Figure S1).

Association between Clinical Characteristics and Androgens at Baseline
The associations between age, time since injury and androgen hormones were non-linear in both men and women, as well as in men with motor complete traumatic SCI, and were described using restricted cubic splines. The results can be found in the online supplement (Supplementary Figures S2-S6). In regression analysis, DHEA-S levels were lower among individuals with incomplete as compared to complete injury (β −1.78 (95%CI −3.43, −0.12), p < 0.05). Men who used opioids, as compared to men who did not, had lower DHEA (β −9.02 (95%CI −17.01, −1.03), p < 0.001) and DHEA-S (β −2.18 (95%CI −3.58, −0.78), p < 0.001) and higher SHBG levels (β 557.76 (95%CI 281.48, 834.03), p < 0.001). No differences were observed between individuals with traumatic and non-traumatic injury, between tetra- and paraplegia, or between users and non-users of corticosteroids (Table 2). When restricting our analysis to men with traumatic SCI, the results did not materially change (Supplementary Table S3). Women with non-traumatic as compared to traumatic injury had lower DHEA (β −18.5 (95%CI −35.51, −0.58), p < 0.05). SHBG was higher among opioid users (β 1158.41 (95%CI 310.59, 2006.23), p < 0.05), whereas the lower trends of DHEA and DHEA-S seen among opioid users did not reach statistical significance (Table 2).
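Before turning to the longitudinal results, a minimal sketch of the random-intercept model specified in the Statistical Analyses subsection is given below, using Python's statsmodels on synthetic long-format data (the original analysis used Stata; the covariate set here is abbreviated and all names and values are illustrative):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 70
    person = rng.normal(12, 4, n)                    # person-specific baseline, nmol/L
    long_df = pd.DataFrame({
        "pid": np.repeat(np.arange(n), 2),
        "timepoint": np.tile([0.0, 1.0], n),         # 0 = admission, 1 = discharge
        "age": np.repeat(rng.uniform(20, 75, n), 2),
    })
    long_df["total_testosterone"] = (np.repeat(person, 2)
                                     + 4.0 * long_df["timepoint"]  # built-in increase
                                     + rng.normal(0, 2, 2 * n))

    # random intercept per participant, fitted by restricted maximum likelihood
    m = smf.mixedlm("total_testosterone ~ timepoint + age",
                    data=long_df, groups=long_df["pid"]).fit(reml=True)
    print(m.summary())

The coefficient on timepoint in such a model is the adjusted change between admission and discharge, which is how the β values reported below should be read.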
Longitudinal Changes in Androgen Levels
In the fully adjusted linear mixed model (age, BMI, level and completeness of injury, time since injury, use of opioids and corticosteroids), we observed a significant increase in total testosterone (β 3.96 (95%CI 1.37, 6.56), p = 0.003) and DHEA-S (β 1.77 (95%CI 0.73, 2.81), p = 0.001) in men with SCI when comparing the beginning and end of the rehabilitation period. We observed no significant changes in free testosterone, DHEA, or SHBG in men with SCI (Table 3). The results remained stable when restricting our analyses to men with traumatic injury, as well as when comparing the results of the linear mixed models with those of the paired t-test and Wilcoxon signed-rank test (Supplementary Tables S4 and S5). In women, we observed a significant decrease in mean DHEA levels when comparing the beginning and end of the rehabilitation period (β −11.91 (95%CI −24.39, −4.11), p < 0.001). No significant longitudinal changes were observed in the other hormones (Table 3). Figure 2 and Supplementary Figures S7 and S8 show individual trajectories of the changes in androgen levels.

Discussion
Over the period of the initial rehabilitation stay (between admission, within 16-40 days post-injury, and up to 10 days before discharge), total testosterone and DHEA-S levels increased significantly, while we observed no significant changes in the other hormones (free testosterone, SHBG, or DHEA). At admission to initial rehabilitation, serum SHBG was higher, while DHEA and DHEA-S were lower, among opioid users vs. non-users. We observed a non-linear association between age, injury duration and hormone levels. Our findings in women should be interpreted with caution, considering that only 16 women were available for analysis.

Androgen Changes during Initial Inpatient Rehabilitation
A recent meta-analysis including 37 observational studies reported significantly lower total testosterone among individuals with chronic SCI compared to ABI, with no differences in free testosterone or SHBG between the two groups [33]. The prevalence of low testosterone levels reported in the literature varied from 10% to more than 70% of men with SCI [34]. A higher prevalence of testosterone deficiency was reported among individuals with motor complete (as compared to motor incomplete) and cervical (as compared to thoracic/lumbosacral) injuries, and among individuals using narcotic medications for pain management [17,19,35].
Although scarce, studies in the subacute phase of the injury show similar patterns in early testosterone changes. Naftchi et al. measured sex hormones in urine once a week for four months, starting from the onset of the injury [21]. Individuals with paraplegia (as compared to age-matched ABI) had lower luteinizing hormone (LH) and follicle-stimulating hormone (FSH) for two weeks, and lower levels of testosterone for six weeks after the injury, subsequently reaching normal levels. In individuals with tetraplegia, serum testosterone concentrations remained significantly lower than those of the controls during the entire 4-month testing period [21]. In a study by Schopp et al., the time since injury was associated with testosterone levels, with those having an acute injury being more likely to have low testosterone than those with a chronic injury [20]. Similarly, among men recruited at an inpatient rehabilitation unit, mean total and free testosterone levels were lower among individuals ≤12 months post-injury as compared to individuals >12 months post-injury [31]. Due to the variability in the methods used to measure androgens (ELISA in the current study; the Levy and Schwartz (1973) modification of the Bradlow (1968) method, chemiluminescent microparticle immunoassay, or radioimmunoassay in previous studies), it remains challenging to compare the prevalence of low androgen levels across studies. The European Academy of Andrology (EAA) guidelines on hypogonadism in males suggest liquid chromatography-mass spectrometry (LC-MS) as the preferred method for androgen assessment [36]. They further recommend using standardized methods, such as immunoassay, for research purposes, as these show high correlation with LC-MS/MS within the adult male testosterone range (although they offer less precision in the hypogonadal range). In our study, at admission to rehabilitation, low total (<8 nmol/L or <231 ng/dL) and free testosterone (<220 pmol/L or <6.3 ng/dL) were seen in 27% and 36% of men with SCI, respectively. We observed significant increases in total testosterone and DHEA-S over rehabilitation, while free testosterone, SHBG and DHEA did not change significantly. The lack of significant variation in free testosterone could be explained by the low albumin levels observed in the acute phase of the injury. Testosterone is predominantly bound to SHBG (60-70% of total testosterone), while around 30% to 40% is loosely bound to albumin [37]. Lower albumin therefore leaves a higher amount of free testosterone in circulation, meaning that, at baseline, free testosterone levels in blood may have been overestimated. Further, our findings may be at least partially explained by the decreased prevalence of opioid medication use over the period of rehabilitation (from 32.9% to 11.4%). Indeed, at baseline, we observed significantly lower levels of DHEA and DHEA-S and higher levels of SHBG among opioid users as compared to non-users (with no differences observed in free and total testosterone). In addition, elevated levels of corticosterone/cortisol (either exogenous or endogenous) may drive a decrease in testosterone acutely following injury [38]. Cortisol levels typically revert to normal within 6 months of injury, which is in line with the improvement in hormonal status within a year of injury reported earlier [39], and with the average 5.6 months of follow-up in our study.
Finally, the increase in testosterone levels observed in our study may also be driven by the thorough exercise prescription within the rehabilitation program, which may stimulate the production of testosterone [40].

Clinical Implications of Our Findings and Directions for Future Research
Within weeks of injury, testosterone levels decrease and may thereafter reach normal values within 4-6 months post-injury, which is broadly supported by our study, which found a significant increase in total testosterone and DHEA-S prior to discharge from the first inpatient rehabilitation. The evidence in chronic SCI indicates that men with SCI may be a target population for testosterone deficiency screening. However, the timing of such screening strategies remains debated. For example, Sullivan et al. suggested implementing a routine testosterone deficiency screening strategy among men with SCI beginning at least 1 year post-injury and/or by the 3rd decade of life [34]. To develop SCI-specific testosterone deficiency screening guidelines, a comprehensive systematic overview of the literature engaging the GRADE approach [41] should identify the subgroups of SCI individuals who are at a higher risk of developing testosterone/androgen deficiency and provide a rationale for early testosterone screening. In addition, sex steroids, especially 17β-estradiol and progesterone, have shown neuroprotective effects and led to the improvement of functional deficits in animal models of SCI [14]. Future human studies should investigate whether sex steroids influence changes in the metabolic milieu in subacute injury and affect functional recovery during rehabilitation. In line with this, testosterone treatment, alone or combined with resistance training, may be a reasonable strategy to slow down bone loss and fat accumulation following the injury and to improve muscle size, strength and contractile properties [42-46]; a rationale for incorporating testosterone supplementation into rehabilitation practice should therefore be provided. In addition, DHEA supplementation, which has been shown to increase the levels of downstream sex steroids and thereby improve immune and stress responses, glucose metabolism and body fat ratio, may be an option to enhance the performance of existing rehabilitation strategies after the injury [47-50]. We observed significantly lower levels of DHEA and DHEA-S among opioid users. Previous studies reported a dose-related DHEA-S deficiency in adults chronically consuming sustained-action oral or transdermal opioids [51]. In our study, >30% of participants used opioids at baseline, while only 11% were discharged with these medications. Opioid-induced endocrinopathy is a common adverse effect of long-term opioid therapy [52]. In SCI individuals using opioids, symptoms of endocrinopathy (such as sexual dysfunction, decreased muscle mass, anemia, or low testosterone) may remain unrecognized and be attributed to the injury. Thus, monitoring hormone levels in this subpopulation may be crucial. Our findings in women are only exploratory and were based on limited data. We observed lower levels of DHEA in women with non-traumatic as compared to traumatic injury. Furthermore, DHEA levels decreased during the rehabilitation period. In this study, we did not have information about menopausal or menstruation status, which makes the interpretation of our findings challenging. Total testosterone levels were above 2.4 nmol/L in 37.5% of women at baseline.
In women, hyperandrogenism has been associated with an increased prevalence of several metabolic factors (dyslipidemia, insulin resistance, hypertension, obesity), which could lead to an increased risk of diabetes as well as cardiovascular disease and stroke [53]. Thus, a detailed hormonal status assessment in the context of both SCI and women-specific factors (such as menstruation, menopause, polycystic ovarian syndrome, etc.) is warranted to understand the pathophysiological changes in the metabolic milieu post-injury.

Strengths and Limitations of the Current Study
To our knowledge, this is the first study to explore longitudinal changes in androgen levels using a linear mixed model approach, which is a robust and powerful tool for analyzing complex datasets with repeated or clustered observations. We adjusted our analyses for factors such as injury characteristics and medication use, and when we further restricted the analyses to individuals with traumatic injury, our results remained stable. In addition, we measured SHBG and other androgens, such as DHEA and DHEA-S, that have not often been studied in the SCI population but have been identified as key determinants of health and wellbeing in ABI [5,12,13,54-56]. Our study has some limitations that need to be mentioned. First, androgen levels were measured using ELISA rather than LC-MS, which is considered the diagnostic gold standard for steroid hormone monitoring. Although ELISA measurements correlate highly with LC-MS/MS within the adult male testosterone range, their precision in the hypogonadal range is lower. Therefore, our measurements in this range may underestimate the true prevalence of low total and free testosterone. Our study, having a strictly research purpose, did not aim to provide a clinical diagnosis of hypogonadism. Instead, the main contribution of this work is the identification of changes in the longitudinal hormonal trajectories during rehabilitation. Importantly, using ELISA facilitates the comparison of our findings with previously published works on the SCI population, in which ELISA or radioimmunoassay (RIA) were the most commonly applied analytical methods. Second, ELISA kits from two manufacturers, Abnova and Abcam, were used to measure androgen levels; thus, an assessment of the correlation between androgen levels was not feasible. Third, the SHBG assay sensitivity is not optimized for the natural range of the analyte, and the levels measured in the current study cannot be compared to clinical standards (the ELISA kit detection range was between 62.5 pg/mL and 4000 pg/mL). Finally, we were not able to explore the association between testicular blood flow and androgen levels; testicular blood flow has previously been linked with testicular function and may be a crucial factor influencing androgen levels [57].

Conclusions
In this study, we observed an improvement in total testosterone and DHEA-S in men over the first inpatient rehabilitation. Androgens may play a pivotal role in functional recovery and early metabolic changes in SCI. Thus, to develop timely preventive strategies, future methodologically sound longitudinal studies are required to disentangle the complex associations between hormone levels and aging, visceral adiposity, physical inactivity and functional recovery post-injury.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm11216559/s1.
Figure S1: Sexual dysfunction as reported by study participants at T2 (70-98 days post-injury); Figure S2: Association between age and androgens in men; Figure S3: Association between age and androgens in women; Figure S4: Association between age and androgens in men with motor complete traumatic SCI; Figure S5: Association between time since injury and androgens in men; Figure S6: Association between time since injury and androgens in women; Figure S7: Bivariate plots of changes in androgen sex hormone levels in men with spinal cord injury; Figure S8: Bivariate plots of changes in androgen sex hormone levels in women with spinal cord injury; Table S1: Detailed description of hormone measurements; Table S2: Causes of non-traumatic spinal cord injury; Table S3: Association between individual patient and injury characteristics and hormone levels at baseline, analysis restricted to men with traumatic injury; Table S4: Longitudinal changes in androgens, analysis restricted to men with traumatic spinal cord injury; Table S5: Longitudinal changes in sex hormones (Wilcoxon signed-rank test and linear mixed models).

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data sets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
2022-11-10T17:23:09.644Z
2022-11-01T00:00:00.000
{ "year": 2022, "sha1": "a0a57abc53bd085845a7b2c7dddeedabe3daee8f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/11/21/6559/pdf?version=1667887941", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ade8c2444fd42f97e1899c9b1931df4c8176872e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245324461
pes2o/s2orc
v3-fos-license
Effect of Solution Miscibility on the Morphology of Coaxial Electrospun Cellulose Acetate Nanofibers
The coaxial electrospinning (co-electrospinning) technique has greatly expanded the universality of fabricating core-shell polymer nanofibers. However, the effect of solution miscibility on the morphology of co-electrospun products remains unclear. Herein, different cellulose acetate (CA) solutions with high solution miscibility but distinctly different electrospinnability were used to survey the effect of solution miscibility on the co-electrospinning process. Structural characterizations show that the co-electrospun products are composed of nanofibers with and without the core-shell structure. This indicates that partial solution mixing occurred during the co-electrospinning process, rather than either no mixing at all or complete mixing. Importantly, the solution miscibility also shows a significant influence on the product morphology. In particular, a transformation from nanofibers to microparticles was realized with increasing core-to-shell flow ratio during the co-electrospinning of a core electrosprayable CA/dimethylacetamide (DMAc) solution and a shell electrospinnable CA/acetone-DMAc (2/1, v/v) solution. The results show that solution miscibility exerts a significant effect not only on the formation of the core-shell structure but also on the product morphology. This work provides new insight for an in-depth understanding of the co-electrospinning process.

Introduction
Electrospinning and electrospray are kindred electrohydrodynamic (EHD) techniques used to produce ultrafine polymer fibers and particles [1,2], and they have been extended to various fields such as nanosensors [3], drug delivery [4], tissue engineering [5], energy [6] and environmental [7] applications. In electrospinning, a continuous electrified jet is ejected from the tip of a Taylor cone and subsequently solidified into fibers. However, if the viscoelasticity of the polymer solution cannot suppress the Rayleigh instability induced by the surface tension, the jet will break up into small droplets before its solidification, producing particles instead [8,9]. In the past two decades, newly developed coaxial electrospinning/electrospray (co-electrospinning/co-electrospray) techniques have greatly expanded the universality of fabricating polymer fibers/particles with complex structures [10-12]. In these processes, dissimilar solutions are usually delivered into different channels of a coaxial multichannel spinneret to achieve various structures, such as core-shell [13,14], hollow [15,16], multichannel [17], multiwall [18] or wire-in-tube [19] structures. In a typical co-electrospinning process, the electrostatic forces focused on the shell fluid drive the core and shell fluids to form a core-shell compound Taylor cone [10,20]. A core-shell electrified jet is then ejected from the tip of the compound Taylor cone and subsequently solidified into core-shell fibers. For the co-electrospinning technique, there are two basic issues, as discussed in detail in the review by Moghe and Gupta [21]. The first is the role of the core and shell fluids in the co-electrospinning process. It has already been revealed that both the core and shell fluids play stable but different roles depending on their respective conductivity and electrospinnability [10,15,22-24]. The other issue is the effect of the miscibility between the core and shell fluids on the formation of core-shell fibers, which has not been fully understood.
Computational [25] and experimental [26] studies have both demonstrated that two miscible or partially miscible fluids usually favor a low interfacial tension, which is beneficial for achieving a stable co-electrospinning process. However, there is still divergence on whether core-shell fibers can be formed during the co-electrospinning of miscible core and shell solutions [12,21]. Some studies have shown that core-shell fibers with sharp boundaries can be fabricated by co-electrospinning of miscible or even identical solutions [13,26,27]. Because the electrospinning process (~1 ms) was much faster than the diffusive spreading of the boundary between two miscible solutions (0.01-1 s), no mixing took place during the co-electrospinning of two polyethylene-oxide/water-ethanol solutions with different concentrations [13]. However, this neglects that solution mixing might occur before the ejection of the compound jet, as the two solutions first meet in the Taylor cone and remain in contact for several seconds [28]. On the contrary, a few researchers have reported that significant interdiffusion between two miscible solutions is possible during the co-electrospinning process and that it might lead to partial or even complete mixing of the core and shell layers [15,28,29]. However, as far as we know, few have reported on the effect of solution miscibility, and of the resulting solution mixing, on the morphology of co-electrospun products. It has been widely proven that a highly electrospinnable shell solution can carry a non-electrospinnable core polymer solution, or even a non-polymeric liquid, to form core-shell nanofibers. This means that the effect of the core fluid is modest. Morphological transformations have typically been observed in modified co-electrospinning processes, where a solvent flow [8,30,31] or solvent-saturated airflow [32] is used as the core or shell fluid instead of two viscous polymer solutions. These limited observations indicate that a clearer understanding of the effect of solution miscibility on the co-electrospinning process and the product morphology needs to be developed through further research. The selected core and shell solutions should have high miscibility; meanwhile, their electrospinnability should differ, in order to isolate the effect of solution miscibility on the product morphology. Herein, we present a simple method to prepare such solution couples with high miscibility but different electrospinnability by using the same polymer with different solvents. Cellulose acetate (CA) is a biodegradable and eco-friendly polymer derived from natural cellulose [33]. It has been widely fabricated into nanofibers and microparticles via electrospinning and electrospraying for environmental and biological applications [33]. Furthermore, poly(lactic-co-glycolic acid) (PLGA) is a biodegradable copolymer of poly(lactic acid) and poly(glycolic acid), which has been widely used in drug delivery and biomaterial applications [34]. Importantly, CA dissolved in different solvents shows different electrospinnability. The CA/acetone-DMAc (2/1, v/v) solution (denoted as CA-AD21) and the CA/acetone solution (denoted as CA-A) are electrospinnable, while the CA/DMAc solution (denoted as CA-D) is non-electrospinnable but shows good electrosprayability [35-37].
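One quick way to see why acetone- and DMAc-based CA solutions should mix freely is to compare the Hildebrand solubility parameters of the two solvents, as is done in Table S1 of the paper. The sketch below uses approximate literature values of ours, not the values tabulated in Table S1:

    # Approximate Hildebrand solubility parameters in MPa^0.5 (literature values,
    # quoted here only for illustration)
    delta = {"acetone": 19.9, "DMAc": 22.7}
    gap = abs(delta["acetone"] - delta["DMAc"])
    print(f"|delta(acetone) - delta(DMAc)| = {gap:.1f} MPa^0.5")
    # A small difference of a few MPa^0.5 is conventionally read as good miscibility.

A small parameter gap implies similar cohesive energy densities, which is the standard rough criterion for solvent-solvent miscibility.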
Here, two solution couples, core CA-D solution with shell CA-AD21 solution, and core CA-A solution with shell CA-D solution, were selected to survey the effect of solution miscibility, as shown in Figure 1. For comparison, a core electrosprayable PLGA/DMAc (denoted as PLGA-D) solution with a shell CA-AD21 solution was also selected as a solution couple with a much lower miscibility. It has been observed that partial mixing of the core and shell solutions occurred at the tips of the Taylor cones during the co-electrospinning of the core electrosprayable CA-D solution and the shell electrospinnable CA-AD21 solution. Furthermore, scanning electron microscopy (SEM), transmission electron microscopy (TEM) and fluorescence microscopy characterizations show that products with and without the core-shell structure were both produced, indicating partial mixing rather than either no mixing at all or complete mixing during the co-electrospinning process. In addition, the proportion of core-shell nanofibers increased with the reduction of solution miscibility, as proved by the solution couple of the core electrospinnable CA-A solution and the shell electrosprayable CA-D solution. Importantly, with the increase of the core-to-shell flow ratio, a fiber-to-particle or particle-to-fiber transformation was achieved during the co-electrospinning of the above two highly miscible solution couples. However, only a slight morphology variation was observed during the co-electrospinning of the core electrosprayable PLGA-D and shell electrospinnable CA-AD21 solutions. We concluded that solvent diffusion from the core fluid to the shell solution affected the properties of the electrified jet, such as the surface tension and solidification speed. Meanwhile, the partial solution mixing induced by high miscibility hastened this process, resulting in the significant morphological transformation. The final product morphology depended on the synergistic effect of the core and shell solutions rather than the solo effect of the core or shell solution. Briefly, this work indicates that high solution miscibility and the resulting partial solution mixing have a significant effect on the product morphology.

Materials
Cellulose acetate (Mw = 30,000) and pentanediol were purchased from Sigma-Aldrich (Shanghai, China). Poly(lactic-co-glycolic acid) (GA/LA, 50/50) was supplied by Jinan Daigang Biomaterial Co., Ltd. (Jinan, China). AgNO3, NaCl, poly(vinylpyrrolidone)

Preparation of Polymer Solutions
The 15% (w/v) CA-D, PLGA-D and CA-A solutions were prepared by dissolving 2.25 g of polymer (CA or PLGA) in 15 mL of solvent (DMAc or acetone) under magnetic stirring at 45 °C overnight. The 15% (w/v) CA-AD solutions with different solvent ratios were prepared by dissolving 2.25 g of CA in 15 mL of mixtures of DMAc and acetone with acetone-to-DMAc ratios of 2/1, 1/2, 1/4 and 1/8, respectively, under magnetic stirring at 45 °C overnight.
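For reference, the solute masses above follow directly from the w/v definition (grams of solute per 100 mL); a one-line helper of ours makes the arithmetic explicit:

    def solute_mass_g(percent_wv: float, volume_ml: float) -> float:
        """Grams of solute for a percent w/v solution: percent_wv grams per 100 mL."""
        return percent_wv / 100.0 * volume_ml

    print(solute_mass_g(15, 15))  # 2.25 g of CA in 15 mL, matching the recipe above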
For preparing the Ag nanoparticle (NP)-dispersed CA solution, Ag NPs with an edge length of ~50 nm were first prepared according to our previously reported procedures [38], which mainly involved the reduction of AgNO3 by pentanediol in the presence of PVP and NaCl at 155 °C. The as-prepared Ag NPs were then re-dispersed in the CA-D solution, followed by vigorous ultrasonication to obtain a homogeneous solution.

Single-Nozzle and Coaxial Electrospinning/Electrospray Experiments
For the single-nozzle electrospinning/electrospray experiments, a 1 mL PTFE pipe assembled with a single-nozzle spinneret was used to load the as-prepared CA solutions. The electrospinning/electrospray experiments were then carried out at a 15 kV voltage and a 12.5 cm tip-to-receiver working distance. The flow rates were all set to 0.2 mL/h. The obtained membranes were collected onto aluminum foils and then dried in an oven overnight at 60 °C to remove the residual solvents. The environment was controlled at 20-30 °C and a humidity of ~40%. In the co-electrospinning/co-electrospray experiments, the single-nozzle spinneret was replaced by a coaxial bi-channel spinneret (Figure 1d,e). The core and shell solutions were then fed into the inner and outer channels of the bi-channel spinneret, respectively. The flow rate of the solution with the higher flow rate was maintained at 0.3 mL/h in all co-electrospinning/co-electrospray experiments, while the flow rate of the other solution was adjusted to achieve different core-to-shell flow ratios as required. The co-electrospinning/co-electrospray experiments were carried out at around 15 kV and a 15 cm tip-to-receiver working distance. The voltage needed to be adjusted in order to achieve stable co-electrospinning/co-electrospray processes under different core-to-shell flow ratios. The environment was controlled at 20-30 °C and a humidity of ~40%.

Characterization
The as-prepared electrospinning/electrospray membranes were characterized by scanning electron microscopy (Hitachi SU8020, Tokyo, Japan), transmission electron microscopy (JEOL JEM-2010, Tokyo, Japan), inverted fluorescence microscopy (Leica DMI3000 B, Wetzlar, Germany), Fourier transform infrared spectroscopy (FTIR, Nicolet Nexus, Madison, WI, USA), thermogravimetric analysis (TGA, Mettler-Toledo TGA/DSC 3+, Greifensee, Switzerland) and X-ray diffraction (XRD, Purkinje XD6, Beijing, China). For the FTIR characterization, the samples collected on Al foils were characterized directly with the FTIR spectrometer equipped with a smart diffuse reflectance accessory, at a spectral resolution of 2 cm−1 with 64 scans performed. For the TGA characterization, samples weighing around 10 mg were measured at a heating rate of 5 °C/min under an Ar atmosphere. Furthermore, X-ray diffraction patterns were collected over the angular range of 10-80° in 2θ steps of 0.03°, at a 4°/min scanning rate with one accumulation. The XRD system was equipped with Cu Kα radiation (36 kV, 20 mA, λ = 0.15406 nm) and a diffracted-beam graphite monochromator. The slit arrangement for data collection consisted of a 1/6° divergence slit, a 0.10 mm receiving slit and a 1/2° anti-scatter slit. For the TEM characterization, a piece of membrane was immersed in ethanol and sonicated gently to obtain a nanofiber suspension. The as-prepared nanofiber suspension was then dropped onto a copper mesh and left to dry naturally for TEM observations.
Characterization

The as-prepared electrospinning/electrospray membranes were characterized by scanning electron microscopy (Hitachi SU8020, Tokyo, Japan), transmission electron microscopy (JEOL JEM-2010, Tokyo, Japan), inverted fluorescence microscopy (Leica DMI3000 B, Wetzlar, Germany), Fourier transform infrared spectroscopy (FTIR; Nicolet Nexus, Madison, WI, USA), thermogravimetric analysis (TGA; Mettler-Toledo TGA/DSC 3+, Greifensee, Switzerland), and X-ray diffraction (XRD; Purkinje XD6, Beijing, China).

For the FTIR characterization, the samples collected on Al foils were characterized directly with the FTIR spectrometer equipped with a smart diffuse reflectance accessory, at a spectral resolution of 2 cm−1 with 64 scans. For the TGA characterization, samples weighing around 10 mg were measured at a heating rate of 5 °C/min under an Ar atmosphere. Furthermore, X-ray diffraction patterns were collected over the angular range of 10-80° (2θ) in steps of 0.03°, at a scanning rate of 4°/min and with 1 accumulation. The XRD system was equipped with Cu Kα radiation (36 kV, 20 mA, λ = 0.15406 nm) and a diffracted-beam graphite monochromator. The slit arrangement for data collection consisted of a 1/6° divergence slit, a 0.10 mm receiving slit, and a 1/2° scattering-prevention slit.

For the TEM characterization, a piece of membrane was immersed in ethanol and sonicated slightly to obtain a nanofiber suspension. The suspension was then dropped onto a copper mesh and left to dry naturally for TEM observation. However, for directly observing the core layer, severe sonication was applied for 2 h to break the shells of some nanofibers. The images of the Taylor cones were taken with a digital camera. In addition, the solution viscosity was tested with a rotary viscometer (Brookfield DVS+, Middleboro, MA, USA) using an S02 spindle at 20 rpm and an environment temperature of 24 °C. The surface tension was measured with a tensiometer (Kruss DSA100, Hamburg, Germany). A needle with a diameter of 0.518 mm was used to form a pendant droplet of the polymer solution with a volume of 15.5 µL. An image was then taken and analyzed with the Pendant Drop shape-analysis program supplied by the manufacturer to calculate the interfacial tension.

Results and Discussion

Both acetone and DMAc are good solvents for CA; meanwhile, acetone and DMAc are also highly miscible owing to their similar Hildebrand solubility parameters (Table S1) [39]. Furthermore, a solvent-mixing experiment also demonstrated the high miscibility of acetone and DMAc (Figure S1). Therefore, CA, acetone, and DMAc were selected to prepare the highly miscible solution couples. In addition, PLGA was selected to prepare the solution couple of CA-AD21 solution and PLGA-D solution with much lower miscibility. Table 1 shows the viscosity and surface tension of some CA solutions and the PLGA-D solution. Both the solution viscosity and the surface tension increase with a rising proportion of DMAc in the CA solutions. The surface tension of the polymer solutions is close to that of the solvents used (32.4 mN/m for DMAc and 23.7 mN/m for acetone) [35]. Furthermore, the diameter of all fibers obtained in this work was measured from 100 fibers in different regions. The corresponding diameter distributions are shown in the histograms of Figure S2, and the average fiber diameters are presented in Tables 1 and S2.

Single-Nozzle Electrospray/Electrospinning of CA Solutions

First, the single-nozzle electrospray and electrospinning of the different CA solutions were performed. Although DMAc is a good solvent for CA, the CA-D solution was not electrospinnable and only irregular particles could be produced (Figure 2a), possibly owing to the high surface tension and high boiling point of DMAc [35,36]. Nevertheless, a few thick fibers were fabricated by electrospinning of the CA-A solution (Figure 2b), proving its electrospinnability. Unfortunately, the electrospinning process was soon interrupted by spinneret clogging induced by the rapid evaporation of acetone. Meanwhile, the CA-AD21 solution showed good electrospinnability and could be continuously electrospun into ultrafine nanofibers (Figure 2c), as the rapid solvent evaporation was suppressed by the presence of DMAc. Furthermore, various products with different morphologies were achieved by decreasing the acetone-to-DMAc ratio in the CA solutions, as shown in Figure 2d-f. Evidently, the higher the proportion of DMAc, the more likely it is that microparticles rather than nanofibers are produced. Nevertheless, the fiber diameter also decreased gradually with the reduction of the acetone-to-DMAc ratio (Table 1, Figure S2a-c).
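Because the reported average diameters (Table 1, Figure S2) come from measuring 100 fibers per sample, a minimal sketch of that summary step looks like the following (illustrative only; the paper does not describe its analysis code, and the sample values are made up):

```python
import statistics

# Hypothetical diameters (in nm) measured from SEM images of one sample;
# in the paper, 100 fibers from different regions were measured per sample.
diameters_nm = [310, 295, 342, 288, 330, 301, 317, 276, 360, 298]

mean = statistics.mean(diameters_nm)
stdev = statistics.stdev(diameters_nm)  # sample standard deviation
print(f"average fiber diameter: {mean:.0f} ± {stdev:.0f} nm (n={len(diameters_nm)})")
```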
Co-Electrospinning of Core CA-D Solution and Shell CA-AD21 Solution

In order to observe the influence of solution miscibility on the product morphology, the electrospinnability of the core and shell solutions should differ discernibly. When the core and shell solutions are the same, the morphology of the co-electrospun products should be the same as, or similar to, that of the single-nozzle electrospun products, as proved by the co-electrospinning of two well-electrospinnable CA-AD21 solutions (Figure S3). Therefore, the core solution was replaced by the electrosprayable CA-D solution, while the electrospinnable CA-AD21 solution was still selected as the shell solution. Owing to the low liquid-liquid interfacial tension [26], stable electrospinning proceeded successfully over a wide range of core-to-shell flow ratios from 1:6 to 1:1. Figure 3a-c show the Taylor cones formed under core-to-shell flow ratios of 1:6, 1:1.5, and 1:1, respectively. Core-shell compound Taylor cones were formed, while partial solution mixing was observed at the tips of the Taylor cones. As shown in Figure 3d-h, an increased core-to-shell flow ratio led to fewer and thinner nanofibers (Table S2, Figure S2d-g) accompanied by more numerous and more spheroidized beads. However, when the core-to-shell flow ratio exceeded 3:1, it was no longer easy to achieve a stable co-electrospinning process and reproducible products. The core jet tended to eject from the shell jet owing to insufficient confinement, resulting in the splitting of the compound jet. Nevertheless, microparticles were occasionally produced by careful operation (Figure 3i). As a result, the transition from bead-free nanofibers to beaded nanofibers, and further to microparticles, was achieved. Evidently, the higher the flow ratio of the core solution, the more likely beads and particles are produced, which is similar to the result of the earlier single-nozzle electrospinning/electrospray of CA solutions.
The Structure of Co-Electrospun CA Nanofibers

Because the core and shell are formed from the same polymer (CA), FTIR, TGA, XRD, and contact angle characterizations cannot distinguish the single-nozzle electrospun/electrosprayed CA fibers/particles from the co-electrospun/co-electrosprayed products (Figure S4). TEM characterization is usually used to identify the core-shell structure of co-electrospun nanofibers, but it is also difficult to distinguish the two layers in this case owing to the lack of contrast between the same material. Indeed, there is no apparent contrast difference under TEM observation for many nanofibers in our case; meanwhile, slight contrast differences could be observed for some nanofibers (Figure 4a,b). However, TEM images with such poor contrast cannot be taken as direct evidence of the core-shell structure. To observe the core layer directly, the as-prepared nanofibers were severely sonicated to break the shell layer before TEM observation. Evidently, some nanofibers indeed have a core-shell structure (Figure 4c). In addition, the as-prepared nanofibers were immersed in liquid nitrogen and subsequently fractured for SEM observation of the fractured surface. As shown in Figure 4d, discernible core and shell layers could be observed, whereas it was difficult to clearly distinguish the core and shell layers of some nanofibers from the same sample (Figure 4e). Meanwhile, it was observed that some nanofibers were covered with an ultrathin sheath (Figure 4f), consistent with the TEM observation of Figure 4b. These shell layers were much thinner than they should be, implying that significant solution mixing occurred during the formation of these nanofibers. In addition, the TEM image of a bead and the SEM image of particles also show their core-shell structure (Figure S5). The above characterizations show that the co-electrospun products have a complicated structure that differs from conventional single-nozzle electrospinning-derived mono-phase products or co-electrospinning-derived core-shell products. The as-prepared products are composed of nanofibers both with and without the core-shell structure. The fibers without the core-shell structure could be derived from the mixing of the core and shell layers. However, if the core or shell layer was interrupted in some regions, mono-layer fibers could also be obtained.
To determine how the fibers without the core-shell structure formed, Ag-NPs were dispersed in the core CA-D solution to indirectly reveal the distribution of the polymer from the core Ag/CA-D solution within the co-electrospun fibers (Figure S6). If no or only slight solution mixing occurred, the Ag-NPs should be embedded only in the core region of the fibers. On the contrary, if significant or complete solution mixing took place, the Ag-NPs should be randomly distributed throughout the nanofibers owing to the diffusion of the core fluid into the shell solution. As shown in Figure 4g, the Ag-NPs were embedded only in the central region in some cases, indicating slight solution mixing during the formation of these fibers. Meanwhile, for some nanofibers, the Ag-NPs were randomly located throughout the whole fiber (Figure 4h), implying significant mixing of the core and shell solutions. Ag-NPs were even occasionally embedded only in the marginal area of the fibers (Figure 4i), proving that the solution-mixing process was relatively complex. Furthermore, fluorescence microscopy characterization was also used to confirm whether the core or shell layers were interrupted in some regions. The corresponding images in Figure 5 show that both the shell (red) and core (green) polymer components are usually continuous, proving that the nanofibers without the core-shell structure were mainly derived from the mixing of the core and shell layers in local regions. The above results indicate that partial solution mixing indeed occurred during the co-electrospinning of the highly miscible CA solutions, while complete mixing did not take place owing to the rapid travel of the electrified jet [13].
The Mechanism of the Fiber-to-Particle Morphological Transformation

As the core-to-shell flow ratio increased, the transformation from bead-free nanofibers to beaded nanofibers, and further to microparticles, was realized. Generally, this morphological transition is mainly determined by the competition between viscoelasticity and Rayleigh instability [8,21]. The former is mainly dependent on polymer chain entanglement, while the latter is mainly driven by surface tension [8,21]. For single-nozzle electrospinning, the particle-to-fiber transition can be easily achieved by increasing the degree of polymer chain entanglement. In contrast, for the co-electrospinning of the lower-miscibility couple of core electrosprayable PLGA-D solution and shell electrospinnable CA-AD21 solution (Figure 6), only the transformation from bead-free nanofibers to fusiform-beaded nanofibers was achieved with the increase of the core-to-shell flow ratio (Figure 6b-d). Obviously, this morphological variation was much smaller in comparison with the result of co-electrospinning with the CA-D solution as the core solution. This result indicates that the high solution miscibility indeed facilitated the morphological transformation from fibers to particles during the co-electrospinning of the CA solutions.
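For readers less familiar with the Rayleigh instability invoked in this mechanism, the classical Rayleigh-Plateau result (standard textbook background, not a statement from this paper) says that a liquid cylinder of diameter $d$ is unstable to axisymmetric perturbations whose wavelength exceeds its circumference:

$$
\lambda > \pi d \quad \Longrightarrow \quad \text{the perturbation grows and the jet breaks up into droplets.}
$$

Sufficient polymer chain entanglement (viscoelasticity) resists this surface-tension-driven breakup, which is why strongly entangled solutions yield continuous fibers while weakly entangled ones yield beads or particles.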
Co-Electrospinning of Core CA-A Solution and Shell CA-D Solution

Further, the electrospinnable CA-A solution and the electrosprayable CA-D solution were selected as the core and shell solutions, respectively. In this case, the solution miscibility was slightly reduced. A few studies have shown that an electrospinnable core solution, acting as a spinning aid, can carry a non-electrospinnable shell solution along to form core-shell nanofibers [24,25]. Nevertheless, the co-electrospinning process was not as stable as in the above experiments, probably owing to the insufficient electrospinnability of the shell solution. Specifically, it was not easy to obtain a stable startup process. When the high voltage was applied to the spinneret, splitting of the compound jet occurred frequently. By finely adjusting the working voltage, a stable co-electrospinning process could often be achieved and maintained for more than 15 min. As expected, the inverse morphological transformation, from particles to fibers, was also achieved. When the core-to-shell flow ratio was lower than 1:1, the resultant products were mainly composed of particles and beads (Figure 7a,b), while beaded nanofibers were fabricated when the core-to-shell flow ratio exceeded 1:1 (Figure 7c,d). In addition, the shells of the resultant nanofibers were sometimes slightly broken, indicating that these nanofibers have a core-shell structure, as shown in the amplified SEM images (Figures 7e,f and S8). The proportion of core-shell nanofibers was much higher in this case, possibly owing to the reduced solution miscibility.

Conclusions

In summary, we experimentally investigated the effect of solution miscibility on the morphology as well as the structure of co-electrospun products by co-electrospinning different CA solutions. It was found that partial mixing of the core and shell solutions occurred during the co-electrospinning of the highly miscible CA-D and CA-AD21 solutions, resulting in products composed of fibers both with and without the core-shell structure, while complete mixing did not take place owing to the rapid travel of the electrified jet. Importantly, the partial solution mixing facilitated the morphological transformation from nanofibers to microparticles with the increase of the core-to-shell flow ratio. In this process, the final product morphology depended on the synergistic effect of the core and shell solutions rather than the solo effect of the core or shell solution. In short, this work indicates that partial solution mixing occurs during the co-electrospinning of highly miscible solutions, and subsequently exerts a significant effect on not only the structure but also the morphology of the co-electrospun products.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/polym13244419/s1. Table S1: Solubility parameters of CA, acetone and DMAc; Table S2: The morphologies of co-electrospun products prepared by different solution couples; Figure S1: Photographs of the mixing process between a rhodamine/acetone solution and DMAc solvent, showing the high miscibility of acetone and DMAc; Figure S2: The corresponding histograms of fiber diameter. (a-c) Single-nozzle electrospinning of (a) CA-A solution, (b) CA-AD12 solution and (c) CA-AD21 solution; (d-g) Co-electrospinning of core CA-D and shell CA-AD21 solutions under core-to-shell flow ratios of (d) 1:6, (e) 1:4.5, (f) 1:3, (g) 1:1.5; (h,i) Co-electrospinning of core CA-A and shell CA-D solutions under core-to-shell flow ratios of (h) 1.5:1 and (i) 3:1; (j-l) Co-electrospinning of core PLGA-D and shell CA-AD21 solutions under core-to-shell flow ratios of (j) 1:3, (k) 1:1.5 and (l) 1:1; Figure S3: (a,b) Images of compound Taylor cones formed in the co-electrospinning of two CA-AD21 solutions under core-to-shell flow ratios of 1:3 and 1:1. (c,d) SEM images of co-electrospun CA nanofibers corresponding to (a,b). (e,f) The histograms of fiber diameter corresponding to (c,d); Figure S4: (a) FTIR characterizations of the single-nozzle electrospun CA fibers, co-electrospun CA fibers, co-electrospun beaded fibers, co-electrosprayed particles and single-nozzle electrosprayed CA particles. (b-d) TGA, XRD and contact angle of the single-nozzle electrospun CA fibers, co-electrospun CA fibers and single-nozzle electrosprayed CA particles; Figure S5: (a) A TEM image of a bead and (b) an SEM image of particles prepared by co-electrospinning of CA-D and CA-AD21 solutions; Figure S6: (a-c) SEM images of Ag-NP-loaded products prepared by co-electrospinning of core Ag-NPs-CA/DMAc solution and shell CA/acetone-DMAc (v/v, 2/1) solution under core-to-shell flow ratios of (a) 1:3, (b) 1:1.5 and (c) 1:1, respectively. (d-f) The histograms of fiber diameter corresponding to (a-c); Figure S7: (a) FTIR characterizations of the single-nozzle electrospun CA fibers, co-electrospun PLGA@CA fibers and single-nozzle electrosprayed PLGA particles; Figure S8: An SEM image of nanofibers prepared by co-electrospinning of core CA/acetone solution and shell CA/DMAc solution under a core-to-shell flow ratio of 3:1, showing that most nanofibers have a core-shell structure.
Scrotal Neoplasia: Would Truck Drivers Be at Greater Risk?

Objective: To analyze how scrotal neoplasias have been managed during the past decade and to question possible factors or professions associated with their presence. Materials and Methods: We retrospectively evaluated every case reported from 1995 to 2005 at our hospital. We described the clinical scenario, complementary exams, treatments and outcomes. We also tried to verify whether there was any risk factor, predisposing factor or profession that would explain the cancer origin. Results: Six cases were reviewed. Of these, three patients were truck drivers. Five of them showed restricted lesions without inguinal lymph node enlargement. Histologically, all six patients presented squamous cell carcinoma, two of them of the verrucous type. The median age of the patients was 52 years (31 to 89). The five patients who are still alive had their lesions completely removed with a safety margin and primary closure. Conclusions: We have noticed that the behavior of scrotal carcinoma is similar to that of penile carcinoma, where removal of the lesion and study of the regional lymph nodes help to increase the patient survival rate. The outstanding fact was that three out of six patients were truck drivers, raising the hypothesis that this profession, perhaps due to contact or attrition with the diesel exhaust expelled by the engine or to sexual promiscuity, would imply a larger risk of developing this rare neoplasia.

INTRODUCTION

Malignant neoplasia of the scrotum is a rare disease and has been only occasionally reported. Its historical context is always remembered: it was initially described by Bassius, in 1731, and soon after that by Treyling, in 1740. However, in 1775, in the famous report "Cancer Scroti", Sir Percival Pott was the first to link these tumors to chimney sweepers and, since then, this disease has been considered the first occupational neoplasia described in the medical literature. Noticing that these professionals had precarious hygiene, he advised them to take a daily bath. Soon after, the Danish association of chimney sweepers requested daily hygiene from their members, which reduced the incidence of the disease (1,2). This incident is considered one of the first and most effective interventions in public health. Nowadays, scrotal cancer corresponds to 0.1/100,000 cases a year in the USA. Even in America, the largest oncological hospitals have, at most, a few dozen cases in their files (3). In Brazil, it has been reported in a ship's engine operator (4). Scrotal cancer is extremely similar to penile tumors, and its management has been based on the protocols adopted for the latter. We report the six cases seen at our hospital during the last decade and review the literature on the subject.

MATERIALS AND METHODS

We retrospectively analyzed the cases of malignant scrotal neoplasia seen in our service from 1995 to 2005. Six cases were reported. Even if briefly, we took the time to describe the clinical scenario, the diagnostic and therapeutic strategies adopted, and the evolution of each specific case. In addition, we tried to verify whether there was some risk, predisposing factor or profession that would explain the origin of this neoplasia.
CASE REPORTS

Case #1 - B.S., 45-year-old black male, truck driver. Nine months before the initial visit, he noticed the appearance of a lesion in the scrotum, which had developed into a urethro-scrotal fistula. The patient underwent incisional biopsy, which demonstrated a squamous cell carcinoma. The lesion was considered unresectable and, during staging exams, bilateral inguinal lymph node and pulmonary metastases were noticed. In July 1995, the patient underwent systemic radiotherapy and chemotherapy with BEP (bleomycin, etoposide, and cisplatin). The overall clinical status of the patient worsened and he died of sepsis in November 1995.

Case #2 - J.C.S., 49-year-old black male, farmer. Ten years before the initial visit, he had undergone perineal cutaneous urethrostomy in another hospital due to complex and recurrent stenosis of the urethra. Three months prior to coming to our hospital, he noticed a scrotal nodule close to the urethrostomy. The patient underwent incisional biopsy, which was diagnosed as squamous cell carcinoma. Staging exams did not show any metastasis. In August 1998, the patient underwent complete removal of the lesion with a safety margin of 2.0 cm. The lesion dimensions were 6.0 x 4.0 x 2.5 cm. The result was compatible with the biopsy, with free margins. He progressed with stenosis of the perineal urethrostomy, which was solved with urethral dilations. The patient is still alive and well.

Case #3 - J.A., 52-year-old white male, truck driver. Five years earlier, the patient had noticed a slowly growing vegetative tumor mass in the scrotum, which ulcerated and did not heal. In May 2004, the patient underwent excisional biopsy of the lesion, which measured 4.0 x 2.5 x 2.5 cm. The result demonstrated a well-differentiated infiltrating squamous cell carcinoma (grade I) with a verrucous pattern. The staging exams did not show any metastasis. The patient is doing well, without signs of recurrence or dissemination.

Case #4 - C.F.B., 89-year-old white male, retired farmer. Eighteen months prior to his first visit, the patient noticed a 3.0 cm ulcerated lesion of tumoral aspect in the left hemiscrotum. The patient underwent incisional biopsy, and the result showed squamous cell carcinoma (grade II). In April 2004, the patient underwent wide resection of the lesion (4.5 x 3.5 x 2.9 cm) with a 1.0 cm safety margin. The result was similar to that of the biopsy, with free margins. The patient missed follow-up and returned only in August 2005, presenting left inguinal tumoral lymph nodes, 8.0 cm wide, fixed, with possible deep invasion, and considered unresectable. The scrotal scar had a good aspect. The patient was referred to radiotherapy and chemotherapy. Since then he has been seen on an outpatient basis.
Case #5 - J.C.A., 51-year-old brown male, truck driver. Ten years prior to the initial visit, the patient noticed the onset of verrucous lesions in the pubic and scrotal areas. He sought medical aid and was treated for condylomatosis with topical application of podophyllin countless times. Most of the lesions disappeared, except for one scrotal lesion that continued progressing and, three months before evaluation, had reached 8.0 cm and was ulcerated. In December 2004, the patient underwent resection of the left hemiscrotum and study of the inguinal sentinel lymph node through the dynamic lymphoscintigraphy technique with 99mTc and patent blue dye. The result was a grade I verrucous carcinoma with free margins, and the removed lymph node was negative. The patient is doing well, without signs of recurrence.

Case #6 - A.A.S., 31-year-old male, clerk, HIV positive. Two years prior to the initial visit, he noticed a red, flat, somewhat squamous lesion in the right hemiscrotum. He sought medical aid and was treated with topical corticosteroid therapy without success. The lesion developed, increased in size, and ulcerated. In July 2005, the patient underwent excisional biopsy, which evidenced a low-grade squamous cell carcinoma. The patient did not present evidence of inguinal lymph node enlargement and is seen on an outpatient basis.

COMMENTS

First described over 250 years ago and soon afterwards associated with soot exposure among chimney sweepers, scrotal neoplasia is considered the first occupational neoplasia recorded in the medical literature (1,2). Today, it is known that the agent responsible for those cases of neoplasia is the carcinogen 3:4-benzpyrene, a hydrocarbon found in coal (5). The disease has some defined iatrogenic causes in its genesis: Fowler's solution, an arsenical preparation formerly used to treat psoriasis; the association of psoralen with ultraviolet A radiation (PUVA), also employed in the treatment of this disease, causing solar keratosis and epidermoid dysplasia (6,7); and radiotherapy used in the treatment of scrotal eczema or groin lymphoma (8). Besides the probable causative role of poor hygiene, mechanical or chemical irritation is also questioned, because cases have already been reported in patients with hypospadias, scarring from Fournier's gangrene, and spinal cord injury with urinary incontinence and chronic use of rubber urinals (9-11). Of our patients, three were truck drivers, professionals who do not always practice ideal hygiene, besides being exposed to diesel exhaust and to mechanical attrition of the scrotal area. In these workers, there are significant positive trends in lung cancer risk with increasing cumulative exposure to diesel exhaust (12). High risks have also been reported for other sites: skin, larynx, bladder, and kidney (12). A slightly higher incidence of scrotal carcinoma has been recorded in the Iranian nomad (old Persia) population, who used to carry bags containing coal embers underneath their clothes to keep warm in the winter. Another issue to be raised is the role of HPV viruses, especially HPV16 and HPV18, in the genesis of a less aggressive variant, the verrucous carcinoma. These viruses are the same ones related to penile cancer (8). Likewise, truck drivers are traditionally considered one of the most sexually promiscuous groups in Brazil (13).
In the genesis of basal cell carcinoma, which corresponds to 5% of scrotal neoplasias, the etiologies in question are immunosuppression due to aging, UV rays used on other sites, and previous use of radiotherapy.

The natural history of scrotum cancer seems to be very similar to that of the penis, and the protocols applied to the latter can be applied to the former (1).

Clinically, the lesion usually presents as an isolated growth in the sixth decade of life, growing slowly and ulcerating after six months. Since it takes patients from eight to twelve months to seek medical help, a biopsy of the scrotum should be performed whenever suspicious growth is present (1).

The preferred diagnostic method is excisional or incisional biopsy, depending on the extension of the neoplasia.

Staging follows the basic principles of penile neoplasia staging: physical exam describing the extension and depth of the lesion, palpation of the inguinal lymph nodes, pelvic imaging exams (CT or MRI) to evaluate pelvic lymph nodes, and thorax X-ray to evaluate the lungs. There are records of dynamic scintigraphy (study of the sentinel lymph node) with the use of 99mTc and patent blue dye, similar to the method described for penile cancer (21). In cases #5 and #6 described here, we used this technique, and it was possible to remove one inguinal lymph node ipsilateral to the scrotal lesion.

When treating the primary lesion, the intervention must be fast, just as in penile cancer. The reason is that the survival rate is low if the disease progresses, with 30% of deaths happening soon after progression (22). This is what happened in case #1 of the present series.

Excision with a surgical margin of 2 cm is recommended, followed by primary closure of the incision or use of grafts or flaps if the wound is large. The testicles should be preserved whenever possible by maintaining them in their own hemiscrotum or transferring them to the contralateral hemiscrotum (23). Whenever this procedure is not feasible, the testicles should be buried in the subcutaneous tissue of the thigh or protected with musculoskeletal flaps. If the testis is affected, radical inguinal orchiectomy should be performed, similar to the treatment given to a primary testicular tumor (1).

Inguinal lymphadenectomy, or prophylactic inguinal-iliac lymphadenectomy for non-palpable lymph nodes, is controversial and should be reserved for nodes that remain palpable after a course of antibiotic therapy, a protocol that is also similar to that of penile carcinoma (1). Lymphadenectomy should be bilateral, since the superficial lymph vessels of the scrotum communicate freely. In the presence of pelvic invasion, the prognosis has been poor. Simplified inguinal lymphadenectomy, with preservation of the saphenous vein, should be the method of choice (1).

Radiotherapy can also be applied, especially for verrucous carcinoma or for patients who do not accept surgery. In one case report, a dose of 6200 cGy, in 31 fractions, allowed local control and significant reduction of symptoms (24). In another report, radiotherapy was given as initial treatment to 9 of 65 cases and showed no improvement in survival rate after adjustment for other variables (22).

Reports on the treatment of systemic disease are scarce. BEP is the regimen applied most often, again similar to what is done in penile cancer (22,25).
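As a worked check of the radiotherapy schedule cited above (our arithmetic, not part of the original report), the quoted total dose corresponds to conventional 2 Gy fractions:

$$
\frac{6200\ \mathrm{cGy}}{31\ \mathrm{fractions}} = 200\ \mathrm{cGy/fraction} = 2\ \mathrm{Gy/fraction}.
$$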
The most important predictors of survival are stage and age at diagnosis. Survival varies progressively with combinations of these two variables: subjects younger than 65 years with localized disease at diagnosis present a 5-year survival rate of 75% or more, compared with 17% for subjects 65 years old and older with regional or distant spread (22).

Contrary to what happens with most neoplasias, scrotal carcinoma seems to be heading for extinction. Most cases were reported during the first half of the last century and, nowadays, they are reported only anecdotally (2). If such a forecast does not come true, it should be established, among other things, whether the HPV virus plays any role in the genesis of this neoplasia, in addition to performing genomic studies in the few described cases and confirming the role of sentinel lymph node study with the dynamic lymphoscintigraphy technique.
Insulin Therapy for the Management of Diabetes Mellitus: A Narrative Review of Innovative Treatment Strategies

The discovery of insulin was presented to the international medical community on May 3, 1922. Since then, insulin has become one of the most effective pharmacological agents used to treat type 1 and type 2 diabetes mellitus. However, the initiation and intensification of insulin therapy is often delayed in people living with type 2 diabetes due to numerous challenges associated with daily subcutaneous administration. Reducing the frequency of injections, using insulin pens instead of syringes and vials, simplifying treatment regimens, or administering insulin through alternative routes may help improve adherence to and persistence with insulin therapy among people living with diabetes. As the world commemorates the centennial of the commercialization of insulin, the aims of this article are to provide an overview of insulin therapy and to summarize clinically significant findings from phase 3 clinical trials evaluating less frequent dosing of insulin and the non-injectable administration of insulin.

INTRODUCTION

Between January and February 1922, insulin was successfully used to lower blood glucose levels and resolve glycosuria and ketonuria in a teenage boy living with diabetes mellitus (DM) [1,2]. The groundbreaking research conducted at the University of Toronto during the period of insulin's discovery [3] was presented to an international audience for the first time at the Annual Meeting of the American Medical Association held on May 3, 1922 [4]. Thereafter, the physicochemical characterization of insulin [5-7] and the synthesis of highly pure preparations for treating DM [8,9] enabled insulin to become the life-saving antidiabetic medication that it is today.

Insulin Structure-Function Relationship

Insulin, a peptide hormone that regulates carbohydrate metabolism in vertebrates, belongs to the α + β class of evolutionarily conserved globular proteins [11,12]. It consists of 51 amino acids organized into two chains: the A chain (glycine A1 to asparagine A21) and the B chain (phenylalanine B1 to threonine B30). The amino acids that constitute the A and B chains influence the natural tendency of insulin to self-associate and to bind the insulin receptor [13]. Modifying specific amino acids in the two chains alters molecular stability and the dynamics of hexamer-to-monomer dissociation without disrupting insulin's ability to lower blood glucose levels [1]. Consequently, most therapeutic insulins that are currently available have modified amino acids and different capacities for self-association compared to endogenous human insulin [14,15]. The molecular pharmacology of various therapeutic insulins is summarized in Table 1.
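As a small illustration of the two-chain organization described above (an added sketch; the sequences are the canonical human insulin chains and are not quoted from this article), the chain lengths and terminal residues can be checked programmatically:

```python
# Canonical human insulin chains (one-letter amino acid codes).
A_CHAIN = "GIVEQCCTSICSLYQLENYCN"            # Gly(A1) ... Asn(A21)
B_CHAIN = "FVNQHLCGSHLVEALYLVCGERGFFYTPKT"   # Phe(B1) ... Thr(B30)

assert len(A_CHAIN) == 21 and len(B_CHAIN) == 30
assert len(A_CHAIN) + len(B_CHAIN) == 51     # 51 amino acids in total

# Terminal residues match the spans given in the text:
print(A_CHAIN[0], A_CHAIN[-1])  # G (glycine A1),       N (asparagine A21)
print(B_CHAIN[0], B_CHAIN[-1])  # F (phenylalanine B1), T (threonine B30)
```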
Classifying Insulins

The earliest method for classifying therapeutic insulins was based on duration of action [16,17]. More recently, therapeutic insulins, particularly those providing basal coverage, have been classified by generation [18,19] in order to more effectively highlight the evolving therapeutic landscape. A generation-based approach to classification is useful because it allows clinically relevant characteristics of various insulin preparations to be emphasized, such as concentration, glycemic management, and approximate time-action profile.

Aims

Initiation and intensification of insulin in patients with T2DM is often delayed due to limited acceptance of, adherence to, or persistence with insulin therapy, which leads to poor glycemic management and suboptimal treatment outcomes [31]. Innovative treatment strategies for improving insulin adherence and persistence include less frequent dosing [28], non-injectable administration [32], the simplification of complex regimens [33], and the use of insulin pen technologies [34]. Since the latter two approaches have been reviewed by other authors [35,36], this article will summarize clinically significant developments in the less frequent dosing of insulin and the non-injectable administration of insulin based on phase 3 randomized controlled trials (RCTs) retrieved from PubMed and ClinicalTrials.gov between January 1, 2023 and July 31, 2023. This article is based on previously conducted studies and does not contain any new studies with human participants or animals performed by any of the authors.

Once-Daily Dosing of Basal Insulin

Patients with diabetes who do not achieve glycemic targets with once-daily or twice-daily dosing of a first-generation basal insulin may benefit from once-daily dosing of a second-generation basal insulin [37]. However, unlike endogenous insulin secretion, conventional basal insulins do not reproduce the physiological hepatic-to-peripheral insulin gradient (threefold higher insulin levels in the liver compared to skeletal muscle and adipose tissue) [38,39].

Eli Lilly and Company developed basal insulin peglispro (BIL), the first hepato-preferential insulin analogue formulated for once-daily dosing [40,41]. This development was motivated by the need for a basal insulin with a more physiological, hepato-preferential action profile.

IMAGINE 7, a phase 3, randomized, crossover trial comparing 8-h to 40-h variable-time dosing to 24-h fixed-time dosing of BIL, showed that variable-time dosing provided a reduction in baseline HbA1c that was non-inferior to fixed-time dosing after 12 weeks of treatment in adult participants with T1DM who were previously treated with insulin. Lastly, IMAGINE 8, a phase 3, randomized, crossover trial evaluating the incidence of hypoglycemia 84 h after administering a double dose, demonstrated that double dosing of BIL was associated with a significantly lower risk of clinically significant hypoglycemia (blood glucose ≤ 3.0 mmol/L or symptoms of severe hypoglycemia) compared to double dosing of glargine in adult participants with T2DM who were previously treated with insulin.

Despite these positive findings, the IMAGINE clinical development program was ultimately terminated because participants treated with BIL developed elevated levels of alanine aminotransferase and serum triglycerides as well as increased liver fat content [41,50].

Table 3 summarizes the IMAGINE clinical development program evaluating once-daily dosing of basal insulin.
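To make the hypoglycemia endpoint above concrete, here is a minimal, illustrative Python sketch (our own, not from the trials) that flags readings at or below the 3.0 mmol/L threshold used in IMAGINE 8; the 3.9 mmol/L alert level is an additional, commonly used cutoff that also appears later in this review as the lower bound of the target glucose range:

```python
def classify_glucose(mmol_l):
    """Classify a blood glucose reading (mmol/L) against common thresholds.

    3.0 mmol/L: 'clinically significant' hypoglycemia threshold (per IMAGINE 8);
    3.9 mmol/L: commonly used hypoglycemia alert level (assumed here).
    """
    if mmol_l <= 3.0:
        return "clinically significant hypoglycemia"
    if mmol_l < 3.9:
        return "hypoglycemia alert"
    return "not hypoglycemic"

for reading in (2.8, 3.5, 5.6):
    print(reading, "->", classify_glucose(reading))
```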
Once-Weekly Dosing of Basal Insulin

Extensive research has been conducted in an attempt to develop a basal insulin with an extended half-life, prolonged glucose-lowering activity, and potential for improving treatment adherence [28]. As a consequence, several insulin preparations have been formulated for once-weekly dosing. Novo Nordisk developed an ultra-long-acting basal insulin analogue (icodec) and a fixed-ratio combination of a basal insulin and a GLP-1RA (icodec + semaglutide). Eli Lilly and Company developed an ultra-long-acting, single-chain insulin variant fused to the fragment crystallizable region of an immunoglobulin G2 (insulin efsitora alfa).

Icodec (NN1436)

The ONWARDS clinical development program is a series of six active-controlled, treat-to-target, phase 3a RCTs [51-56]. The primary objective of the program is to evaluate change in HbA1c from baseline to the end of the treatment period by comparing once-weekly dosing of icodec to once-daily dosing of a conventional basal insulin (glargine or degludec) in three diverse populations: adult participants with T2DM who are insulin-naïve (ONWARDS 1, 3, and 5); adult participants with T2DM who were previously treated with insulin (ONWARDS 2 and ONWARDS 4); and adult participants with T1DM who were previously treated with insulin (ONWARDS 6).

Published results demonstrate that icodec provided reductions in baseline HbA1c that were non-inferior (ONWARDS 1-4) and statistically superior (ONWARDS 1-3) to degludec U-100 and glargine U-100. Results from ONWARDS 5 and ONWARDS 6 are not yet published, but they are expected to provide key insights that will inform various clinically relevant aspects of once-weekly dosing of icodec, including dose titration in adult participants with T2DM and administration of basal-bolus insulin therapy in adult participants with T1DM.

Icodec + Semaglutide (NN1535, IcoSema)

The ongoing COMBINE clinical development program comprises three active-controlled, open-label, phase 3 RCTs [57-59]. The primary objective of the program is to evaluate change in HbA1c from baseline to the end of the treatment period in adult participants with T2DM who were previously treated with either basal insulin or a GLP-1RA. These phase 3 trials are comparing once-weekly dosing of the fixed-ratio combination icodec + semaglutide to once-weekly dosing of icodec (COMBINE 1), once-weekly dosing of semaglutide (COMBINE 2), and once-daily dosing of glargine (COMBINE 3).

CLINICAL SIGNIFICANCE OF LESS FREQUENT DOSING OF INSULIN

Glycemic management with conventional insulin therapy is typically suboptimal, necessitating treatment intensification with either multiple daily injections (MDI) of insulin or continuous subcutaneous insulin infusion (CSII) [65,66]. The need for daily subcutaneous injections is reduced with CSII because the site of infusion must be changed only every 48-72 h [67,68]. However, adherence and persistence rates of insulin therapy are still lower than for other antidiabetic medications [69]. Although several negative predictive factors have been identified [70], the inverse relationship between frequency of insulin injections and treatment adherence and persistence [71] has not been effectively tackled by MDI or CSII.

Less frequent dosing of insulin has major clinical implications because it may help patients living with DM achieve desired outcomes by overcoming the known barriers to optimal use of insulin therapy [31,72]. Once-weekly dosing of GLP-1RAs is associated with higher rates of treatment adherence and persistence compared to once-daily dosing [73]. By reducing the burden of injections, it is likely that once-weekly dosing of insulin will lead to similar improvements in adherence and persistence [74].

There are very few studies evaluating adherence to and persistence with less frequent dosing of insulin therapy [75]. A recently published cross-sectional study found that a reduced number of injections was the most common patient-reported factor that may improve treatment adherence [76]. More research into once-weekly dosing of insulin is needed to provide robust evidence of the impact of less frequent dosing on adherence to and persistence with insulin therapy [77].

Basal Insulin Peglispro, Icodec, Icodec + Semaglutide, and Insulin Efsitora Alfa

One hepato-preferential insulin preparation, two ultra-long-acting insulin preparations, and one fixed-ratio combination have been studied in phase 3 RCTs. Figure 1 summarizes clinically significant characteristics of these innovative insulins.

The IMAGINE Trials

BIL was designed to pharmacologically replicate the physiological hepatic-to-peripheral insulin gradient. Unfortunately, the IMAGINE clinical development program was discontinued because transaminases, serum triglyceride levels, and liver fat content were elevated in insulin-treated, but not insulin-naïve, adult participants with T2DM who were treated with BIL.
The ONWARDS and QWINT Trials

By significantly reducing the burden of injection, once-weekly basal insulin has the potential to improve adherence to and persistence with insulin therapy among patients living with DM. However, there is concern that icodec and insulin efsitora alfa may be associated with excessive day-to-day glycemic variability. The increasing use of continuous glucose monitoring (CGM) in research and clinical practice has enabled dynamic fluctuations in blood glucose levels to be studied more conveniently. Time in range (TIR), which is defined as the percentage of time that blood glucose is between 3.9 and 10.0 mmol/L [83,84], is a clinically relevant indicator of glycemic management that is inversely correlated with HbA1c [85]. For adults with T1DM or T2DM, the recommended TIR is > 70%, meaning that blood glucose levels should be within the target range for more than 70% of the time. CGM data for adult populations with T1DM (ONWARDS 6 and QWINT-5) and adult populations with T2DM (QWINT-2, QWINT-3, and QWINT-4) will provide additional clinically significant information on the quality of glycemic management resulting from less frequent dosing of insulin.

Insulin efsitora alfa protracts insulin action by binding to the neonatal Fc receptor, whereas icodec reversibly binds to human serum albumin [87]. It is unclear whether these different mechanisms of protraction will lead to clinically significant differences in efficacy and safety. A head-to-head trial comparing insulin efsitora alfa and icodec may be needed in order to resolve this uncertainty.

The COMBINE Trials

Intensification of basal insulin with once-weekly dosing of icodec + semaglutide would be a clinically significant treatment option for adult patients with T2DM because it has the potential to significantly reduce injection burden, provide complementary basal and prandial glycemic management with a limited risk of hypoglycemia, reduce body weight, and manage cardiovascular risk factors [88]. Consequently, results from the COMBINE program are eagerly awaited due to the frequent association between obesity and T2DM [89] and the urgent need for safe and effective medications that manage the cardiometabolic complications of DM.
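As an illustration of the TIR metric defined above (an added sketch with made-up readings, not data from any trial), TIR can be computed directly from a series of CGM glucose values:

```python
def time_in_range(readings_mmol_l, low=3.9, high=10.0):
    """Percentage of CGM readings within the target range [low, high] mmol/L."""
    in_range = sum(low <= g <= high for g in readings_mmol_l)
    return 100.0 * in_range / len(readings_mmol_l)

# Hypothetical CGM trace (mmol/L), e.g., one reading every 5 minutes:
cgm = [5.2, 6.1, 7.8, 10.4, 9.6, 3.7, 4.4, 8.9, 11.2, 6.5]
tir = time_in_range(cgm)
print(f"TIR = {tir:.0f}%")  # 70% here; guidelines recommend TIR > 70%
```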
NON-INJECTABLE ADMINISTRATION OF INSULIN: SUMMARY OF PHASE 3 CLINICAL TRIALS

Insulin therapy is primarily administered via subcutaneous injection. However, missed and mistimed dosing of subcutaneous insulin occurs frequently among people living with DM [90], contributing to the suboptimal use of insulin therapy and poor treatment outcomes. Consequently, the suitability of non-injectable administration of insulin has been intensely investigated [91]. Two prandial insulins, Exubera (developed jointly by Nektar Therapeutics, Pfizer, and Sanofi-Aventis) and Technosphere insulin (developed by MannKind Corporation), have been formulated for inhaled administration. Additionally, Oramed Pharmaceuticals developed a basal insulin called ORMD-0801, which has been formulated for oral administration.

Exubera

In two phase 3 RCTs evaluating long-term pulmonary safety in adult participants with insulin-treated T1DM [92] or insulin-treated T2DM [93], Exubera caused non-progressive and reversible declines in baseline forced expiratory volume in 1 s (FEV1) and baseline carbon monoxide diffusing capacity that were slightly greater in magnitude, but clinically non-meaningful, compared to regular human insulin (RHI), lispro, and aspart.

The efficacy of Exubera has been compared to oral antidiabetic drugs (OADs) or RHI in five phase 3 RCTs [94-98] with the primary objective of evaluating change in HbA1c from baseline to the end of the treatment period in insulin-naïve or insulin-treated adult participants with T1DM or T2DM.

In insulin-naïve participants with T2DM, Exubera provided a reduction in baseline HbA1c that was superior to both metformin monotherapy and dual oral therapy consisting of an insulin secretagogue (sulfonylurea or repaglinide) + an insulin sensitizer (thiazolidinedione or metformin). Additionally, Exubera provided a non-inferior reduction in baseline HbA1c compared to RHI in participants with T1DM or T2DM who were previously treated with insulin.

Technosphere Insulin

In a phase 3 clinical trial evaluating long-term pulmonary safety in adult participants with T1DM or T2DM, Technosphere insulin caused a small and non-progressive decline in baseline FEV1 compared to the usual antidiabetic treatment (OADs alone or OADs + insulin) [99].

The efficacy of Technosphere insulin has been evaluated in five phase 3 RCTs [100-104] that had the primary objective of evaluating the change in HbA1c from baseline to the end of the treatment period in adult participants with insulin-treated T1DM, insulin-treated T2DM, or insulin-naïve T2DM.

Inhaled administration of Technosphere insulin demonstrated consistently positive results across the phase 3 clinical trials: a non-inferior reduction in baseline HbA1c compared to biaspart in participants with insulin-treated T2DM; a non-inferior reduction in baseline HbA1c compared to aspart or lispro in participants with insulin-treated T1DM; and a superior reduction in baseline HbA1c compared to OADs in insulin-naïve participants with T2DM. Lastly, in participants with insulin-treated T2DM, Technosphere insulin provided a reduction in baseline HbA1c that was not equivalent to aspart.

INHALE-1 [105] is an ongoing open-label, active-controlled, phase 3 RCT that is comparing Technosphere insulin to rapid-acting insulin analogues (lispro, aspart, or glulisine) with the primary objective of evaluating the change in HbA1c from baseline to the end of the treatment period in participants ≤ 18 years of age with T1DM or T2DM who were previously treated with insulin. This non-inferiority clinical trial is expected to provide high-level evidence that will support the use of inhaled insulin in children and adolescents living with DM.

ORMD-0801

Two placebo-controlled, phase 3 RCTs [106,107] evaluating the change in HbA1c from baseline to the end of the treatment period in insulin-naïve adult participants with T2DM were terminated early following the completion of only 26 weeks of treatment with ORMD-0801. As a consequence, the clinical need for a safe and efficacious oral insulin preparation remains unmet.

Table 5 summarizes the phase 3 RCTs evaluating the non-injectable administration of insulin.
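Most of the trials above use a non-inferiority design for the HbA1c endpoint. As a hedged illustration of that logic (our own sketch; the actual margins and statistical methods vary by trial and are not specified here), non-inferiority is typically concluded when the upper bound of the confidence interval for the treatment difference stays below a pre-specified margin:

```python
def non_inferior(ci_upper_bound, margin=0.3):
    """Illustrative non-inferiority check for an HbA1c difference (test - comparator).

    `margin` is a hypothetical non-inferiority margin in percentage points of
    HbA1c; real trials pre-specify their own margin, which may differ.
    """
    return ci_upper_bound < margin

# Hypothetical example: estimated difference 0.05 with 95% CI (-0.15, 0.25)
print(non_inferior(0.25))  # True -> non-inferiority would be concluded
```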
CLINICAL SIGNIFICANCE OF NON-INJECTABLE ADMINISTRATION OF INSULIN

The non-injectable administration of insulin has been investigated since the 1920s, when alcoholic solutions containing insulin were administered orally [108]. However, this approach was abandoned due to limited efficacy compared to the subcutaneous administration of insulin. Pulmonary administration of insulin was proposed as an alternative to subcutaneous administration due to the large surface area, high permeability, and extensive vascularization of the deep lung [109]. However, pulmonary administration is challenging due to diffusional deposition of the medication in the mucus layer and mucociliary advection/clearance [110]. The innovative formulation of insulin into a dry powder consisting of very small particles allowed insulin to be successfully delivered to the alveoli, thereby surmounting the barriers to the pulmonary administration of peptide medications [111,112].

Exubera, Technosphere Insulin, and ORMD-0801

Two inhaled insulin preparations and one oral insulin preparation have been studied in phase 3 RCTs. Figure 2 summarizes clinically significant characteristics of these innovative insulins.

The Exubera and Technosphere Insulin Clinical Trials

Due to positive evidence of pulmonary safety and efficacy, Exubera became the first inhaled insulin to be approved, in 2006, for use in adult patients with DM in the United States (US) and Europe [113]. However, the withdrawal of Exubera from the market in the US (2007) and Europe (2008) due to poor sales [113,114] created an opportunity for the development of other inhaled insulins for patients preferring a non-injectable treatment option. Technosphere insulin was subsequently developed and approved in 2014 [115] following positive results from phase 3 RCTs, and it is currently the only inhaled insulin preparation available in the US for the management of post-prandial hyperglycemia in adult patients living with DM.

Intensification of antidiabetic treatment in the pediatric population seems to be the next frontier for inhaled insulin. Since the INHALE-1 trial is expected to provide a new therapeutic option for managing post-prandial hyperglycemia in children and adolescents with DM, findings from this RCT are eagerly awaited.

The impact of non-injectable administration on adherence to and persistence with insulin therapy has been previously studied. Some authors have suggested that inhaled administration of insulin may improve treatment adherence [116,117]. In several empirical studies, inhaled administration of insulin was associated with higher treatment satisfaction than subcutaneous administration among participants with DM [118-122]. Furthermore, adolescent and adult participants with T1DM who were treated with inhaled insulin self-reported lower barriers to treatment adherence [123]. Since real-world evidence (RWE) has been shown to play a critical role in assessing treatment adherence [124], there is an urgent need for RWE that corroborates the positive findings from empirical research on the inhaled administration of insulin.
The ORMD-0801 Clinical Trials

The early termination of the phase 3 RCTs evaluating ORMD-0801 is disappointing. Consequently, the clinical significance of oral insulin remains unclear due to a lack of robust clinical evidence. To overcome this limitation, research into the chemical, formulation, and physical barriers to the oral administration of insulin [125] should continue to be prioritized in order to ensure that other therapeutic insulins designed for oral administration reach advanced stages of clinical development.

CONCLUSIONS

Less frequent dosing of insulin has been evaluated by numerous phase 3 clinical trials and has yielded mixed results. In the IMAGINE trials, once-daily dosing of basal insulin peglispro provided glycemic management that was non-inferior to glargine and NPH. However, the development of this hepato-preferential insulin was discontinued due to transaminitis, elevated serum triglyceride levels, and increased liver fat content. In the completed ONWARDS trials, once-weekly dosing of icodec provided non-inferior and statistically superior glycemic management compared to glargine and degludec. Based on these positive results, icodec is likely to be the world's first-in-class ultra-long-acting basal insulin approved for the medical management of diabetes mellitus. The ongoing COMBINE and QWINT trials are expected to provide substantive evidence of the efficacy and safety of icodec plus semaglutide and insulin efsitora alfa, respectively. Phase 3 clinical trials evaluating the non-injectable administration of insulin have culminated in Technosphere insulin being the only inhaled antidiabetic medication currently available to people living with diabetes. The need for an oral insulin remains unmet because the two clinical trials evaluating ORMD-0801 were terminated early. We therefore look forward to continuous innovation in insulin therapy to overcome existing and emerging treatment challenges.

Authorship

All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work as a whole, and have given their approval for this version to be published.

Author Contributions. Ken Nkonge, Dennis Nkonge, and Teresa Nkonge contributed by collecting relevant articles, performing the literature review, preparing figures and tables, and writing the first draft of the manuscript. All authors have provided their final approval of the submitted version of the manuscript.

Funding. No funding or sponsorship was received for this study or the publication of this article.

Data Availability. Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Icodec

[…] including dose titration in adult participants with T2DM and administration of basal-bolus insulin therapy in adult participants with T1DM.

Semaglutide (NN1535, IcoSema)

The ongoing COMBINE clinical development program comprises three active-controlled, open-label, phase 3 RCTs [57-59]. The primary objective of the program is to evaluate change in HbA1c from baseline to the end of the treatment period in adult participants with T2DM who were previously treated with either basal insulin or GLP-1RA. These phase 3 trials are comparing once-weekly dosing of fixed-ratio combination icodec plus semaglutide to once-weekly dosing of icodec (COMBINE 1), once-weekly dosing of semaglutide (COMBINE 2), and once-daily dosing of glargine (COMBINE 3).
Fig. 2 Schematic of clinically significant characteristics of insulins formulated for non-injectable administration. ALP alkaline phosphatase, ALT alanine aminotransferase, AST aspartate aminotransferase, DLCO diffusing capacity of the lungs for carbon monoxide

Table 1 Molecular pharmacology of therapeutic insulins

Table 2 Classification of therapeutic insulins according to generation

Table 3 Summary of phase 3 randomized controlled trials evaluating once-daily dosing of basal insulin

Table 4 Summary of phase 3 randomized controlled trials evaluating once-weekly dosing of basal insulin

Table 5 Summary of phase 3 randomized controlled trials evaluating the non-injectable administration of insulin

Conflict of Interest. Ken Nkonge, Dennis Nkonge, and Teresa Nkonge declare that they have no competing interests.

References

51. Bain SC, Gowda A, et al. Weekly icodec versus daily glargine U100 in type 2 diabetes without previous insulin. N Engl J Med. 2023;389:297-308.

52. Philis-Tsimikas A, Asong M, Franek E, et al. Switching to once-weekly insulin icodec versus once-daily insulin degludec in individuals with basal insulin-treated type 2 diabetes (ONWARDS 2): a phase 3a, randomised, open-label, multicentre, treat-to-target trial. Lancet Diabetes Endocrinol. 2023;11:414-25.

53. Lingvay I, Asong M, Desouza C, et al. Once-weekly insulin icodec vs once-daily insulin degludec in adults with insulin-naïve type 2 diabetes: the ONWARDS 3 randomized clinical trial. JAMA.

55. ClinicalTrials.gov. A research study to compare a new weekly insulin, insulin icodec used with DoseGuide app, and daily insulins in people with type 2 diabetes who have not used insulin before (ONWARDS 5). 2021. https://clinicaltrials.gov/study/NCT04760626. Accessed 10 Mar 2023.

56. ClinicalTrials.gov. A research study to compare a new weekly insulin, insulin icodec, and an available daily insulin, insulin degludec, both in combination with mealtime insulin in people with type 1 diabetes (ONWARDS 6). 2021. https://clinicaltrials.gov/study/NCT04848480. Accessed 10 Mar 2023.

57. ClinicalTrials.gov. A research study to see how well the new weekly medicine IcoSema, which is a combination of insulin icodec and semaglutide, controls blood sugar level in people with type 2 diabetes compared to weekly insulin (COMBINE 1). 2022. https://clinicaltrials.gov/study/NCT05352815. Accessed 10 Mar 2023.

58. ClinicalTrials.gov. A research study to see how well the new weekly medicine IcoSema, which is a combination of insulin icodec and semaglutide, controls blood sugar level in people with type 2 diabetes compared to weekly semaglutide (COMBINE 2). 2022. https://clinicaltrials.gov/study/NCT05259033. Accessed 10 Mar 2023.
59. ClinicalTrials.gov. A research study to see how well the new weekly medicine IcoSema, which is a combination of insulin icodec and semaglutide, controls blood sugar level in people with type 2 diabetes compared to insulin glargine taken daily with insulin aspart (COMBINE 3). 2022. https://clinicaltrials.gov/study/NCT05013229. Accessed 10 Mar 2023.

91. Pandey M, Choudhury H, Yi CX, et al. Recent updates on novel approaches in insulin drug delivery: a review of challenges and pharmaceutical implications. Curr Drug Targets. 2018;19:1782-800.

92. Rosenstock J, Cefalu WT, Hollander PA, et al. Two-year pulmonary safety and efficacy of inhaled human insulin (Exubera) in adult patients with type 2 diabetes. Diabetes Care. 2008;31:1723-8.

93. Skyler JS, Jovanovic L, Klioze S, Reis J, Duggan W, Inhaled Human Insulin Type 1 Diabetes Study Group. Two-year safety and efficacy of inhaled human insulin (Exubera) in adult patients with type 1 diabetes. Diabetes Care. 2007;30:579-85.

94. Barnett AH, Dreyer M, Lange P, Serdarevic-Pehar M. An open, randomized, parallel-group study to compare the efficacy and safety profile of inhaled human insulin (Exubera) with metformin as adjunctive therapy in patients with type 2 diabetes poorly controlled on a sulfonylurea. Diabetes Care. 2006;29:1282-7.

95. Quattrin T, Bélanger A, Bohannon NJV, Schwartz SL, Exubera Phase III Study Group. Efficacy and safety of inhaled insulin (Exubera) compared with subcutaneous insulin therapy in patients with type 1 diabetes: results of a 6-month, randomized, comparative trial. Diabetes Care. 2004;27:2622-7.

96. Skyler JS, Weinstock RS, Raskin P, et al. Use of inhaled insulin in a basal/bolus insulin regimen in type 1 diabetic subjects: a 6-month, randomized, comparative trial. Diabetes Care. 2005;28:1630-5.

97. Rosenstock J, Zinman B, Murphy LJ, et al. Inhaled insulin improves glycemic control when substituted for or added to oral combination therapy in type 2 diabetes: a randomized, controlled trial. Ann Intern Med. 2005;143:549-58.

98. Hollander PA, Blonde L, Rowe R, et al. Efficacy and safety of inhaled insulin (Exubera) compared with subcutaneous insulin therapy in patients with type 2 diabetes: results of a 6-month, randomized, comparative trial. Diabetes Care. 2004;27:2356-62.

99. Raskin P, Heller S, Honka M, et al. Pulmonary function over 2 years in diabetic patients treated with prandial inhaled Technosphere insulin or usual antidiabetes treatment: a randomized trial. Diabetes Obes Metab. 2012;14:163-73.

100. Rosenstock J, Lorber DL, Gnudi L, et al. Prandial inhaled insulin plus basal insulin glargine versus twice daily biaspart insulin for type 2 diabetes: a multicentre randomised trial. Lancet. 2010;375:2244-53.
A Failure to Communicate: The Fact-Value Divide and the Putnam-Dasgupta Debate

AUTHOR'S NOTE: We would like to thank Partha Dasgupta for helpful comments on an earlier version of this paper. We would also like to thank the anonymous referees and especially the editor Thomas Wells for his careful editing and many suggestions for improvement, many of which we followed. We remain responsible for any errors.

Abstract: This paper considers the debate between economists and philosophers about the role of values in economic analysis by examining the recent debate between Hilary Putnam and Sir Partha Dasgupta. It argues that although there has been a failure to communicate there is much more agreement than it seems. If Dasgupta's work is seen as part of the methodological tradition expounded by John Stuart Mill and John Neville Keynes, economists and philosophers will have a better basis for understanding each other. Unlike the logical-positivist tradition, which treats facts and values as two mutually exclusive concepts, the Mill-Keynes tradition recognizes that facts and values are intertwined. Unlike the Smithian tradition, which blends the study of facts and normative rules, it divides economics into a science that studies "what is" and an art which considers "what ought to be done".

In thinking about the on-going debate between philosophers and economists about the place of values in economics, one cannot help but be reminded of that famous line in the movie Cool Hand Luke, "What we've got here is a failure to communicate". Despite attempts to resolve the debate, there seems to be little agreement, with many economists continuing to believe that economics should study and indeed does study facts, not values; many philosophers continuing to believe that economists are hopelessly confused; and neither side recognizing the other's position as defensible.

A recent flare up of this debate can be seen in the on-going exchange between Hilary Putnam, writing together with Vivian Walsh (2007a; 2007b; 2012), and Sir Partha Dasgupta (2005; 2007a; 2009), both representative of the best in their field. The debate between them began in an unusual manner. In his book An inquiry into well-being and destitution (1993, 6-7), Dasgupta cited Putnam (1981; 1989) to the effect that an entanglement of facts and values is unavoidable and that that entanglement would influence the way he argued. Based on that citation, and a reading of Dasgupta's work, Putnam saw Dasgupta as an example of how economists can do economic policy analysis right, i.e., by explicitly including ethical judgements in their work. If Putnam believed that he and Dasgupta were in the same camp, that belief was shattered when, in a 2005 article 'What do economists analyze and why: values or facts?' published in the journal Economics and Philosophy, Dasgupta took issue with claims that Putnam had made about how he was including values in his economic analysis.
Dasgupta argued that what economists do is analyze facts, and that in professional debates on social policy economists differ primarily on their reading of the facts, not on their values. He further claimed that "Ethics has taken a back seat in modern economics not because contemporary economists are wedded to a 'value-free' enterprise, but because the ethical foundations of the subject were constructed over five decades ago and are now regarded to be a settled matter" (Dasgupta 2005, 221-222). Dasgupta suggested that Putnam was promoting the false impression that modern economics is an "ethical desert".

Dasgupta's paper led to a strong response by Putnam and Walsh in Economics and Philosophy (2007a), to which Dasgupta replied (2007a), and a longer response in the Review of Political Economy (2007b). That ultimately led to a co-edited book (2012), which reprinted their articles together with others by philosophers on their side of the argument. In all these works Putnam and Walsh argue forcefully that Dasgupta has failed to understand Putnam's account of the entanglement of fact and value. The new version of Dasgupta's article made some clarifications in the introductory sections, added a discussion of why Sen's capabilities cannot be seen as primitive ethical notions, and included a short section on estimating poverty. But these changes amplified and clarified his points; they did not change his position. Likewise, Putnam and Walsh did not change their position when revisiting the debate in The end of value-free economics (2012) by reprinting their original contributions (2007a; 2007b; 2009). Given the lapse of time, both sides clearly had the chance to amend their published positions if they wanted to. They chose not to.

By examining the debate this paper attempts to clarify the issues in dispute and facilitate communication between philosophers such as Putnam and economists such as Dasgupta. The paper is organized as follows. In Section 1 we review the origins of the debate between Putnam and Dasgupta. In Section 2 we identify two different issues in relation to the debate, the concept of value and the methodology of economics, and argue that these two issues need to be treated separately. We examine the first issue in Section 3 by placing the Putnam-Dasgupta debate in the context of more recent debate about the role of facts and values in the philosophy of science and the philosophy of economics. We examine the second issue in Sections 4 and 5, arguing that the methodology of economics advocated by Dasgupta does indeed belong to a broad classical tradition as Putnam suggested, but to a Mill-Keynes tradition rather than to the Smithian approach presumed by Putnam and Walsh. In Section 6 we conclude by arguing that seeing Dasgupta as a follower of the Mill-Keynes tradition makes it easier to see precisely where Putnam and Dasgupta disagree. Both are convincing within their own context, but outside of that context there is ambiguity and a resulting lack of communication.

INTELLECTUAL BACKGROUND TO THE PUTNAM-DASGUPTA DEBATE

To understand the Putnam-Dasgupta debate, it is useful to review its origins. In a series of works since the 1980s Putnam has argued against the idea that there is a sharp metaphysical dichotomy between facts and values, and that facts and values are entangled in scientific knowledge (1981; 1990; 1993; 2002; 2003).
The main target of Putnam's discussion is logical positivism, which holds that ethical values cannot be legitimate subject-matter of science because they are cognitively meaningless. Putnam's fact-value entanglement arguments are applicable to all sciences, but economics has been of particular interest to him because he believes that logical positivism strongly affected the development of economics in the 1930s, and that its influence still lingers in economics today. According to Putnam, the logical-positivist movement, combined with several other intellectual currents of the time, shaped economists' idea of economics as a scientific discipline in the twentieth century. Among the results of these influences, Putnam argued, was Lionel Robbins's position requiring a clear-cut distinction between economics and ethics, with ethical judgments having no place in the science of economics (Putnam 2002, 53-54).[1] In Putnam's view, the exclusion of ethics has impoverished economics since then. In particular, the fact-value dichotomy has impoverished the ability of welfare economics to evaluate economic well-being.

[1] While Putnam follows the standard way of interpreting Robbins, there is an alternative interpretation that sees Robbins's contribution differently (see Colander 2009). In this alternative view, instead of wanting to keep ethical values out of economics, what Robbins actually wanted to do was to reduce some of the most blatant blending of value judgments and supposedly scientific policy conclusions. We do not discuss such points extensively here since they involve history of thought issues rather than philosophical issues.

Putnam argues that just as economics was embedding a positivist methodology into its vision of itself, philosophy was moving away from logical positivism. As early as 1951 Willard Van Orman Quine launched an attack on the analytic-synthetic dichotomy which, in Putnam's view, eventually collapsed the fact-value dichotomy that lay at the foundation of the logical-positivist approach. In his works Putnam has extended Quine's insights and reinforced the argument against the fact-value dichotomy by exploring the phenomena that he has called the entanglement of fact and value.

The core of Putnam's idea of the entanglement of fact and value is that "the very vocabulary in which we describe human facts […] frequently fails to be factorable into separate and distinct 'factual' and 'evaluative' components" (Putnam and Walsh 2007b, 185). One of Putnam's own examples can help us understand better what Putnam means by this. According to Putnam, when we say a sentence like 'He is a cruel person', we do not simply 'describe' the person, but also 'evaluate' the person (Putnam 2002, 34-35). It is Putnam's view that when we describe a fact we almost inevitably make an evaluation or value judgment as well. Since making a factual judgment almost inevitably involves value judgments, description and valuation are interdependent and entangled. Note that what Putnam argues against is not the practical distinction between facts and values but the metaphysical dichotomy or dualism of fact and value (2002, 9-10). The former still considers that fact and value are not the same. Putnam refutes the dichotomy on the ground that the factual and evaluative components in the vocabulary we use are often simultaneously present.
While the "cruelty" case may overstate the point, since scientific technical language is generally structured to avoid such obvious entanglements, we fully agree that if one digs deep enough, all descriptive language, and hence all language in science is inevitably value-laden. That is what might be called a base-line metaphysical entanglement that cannot be avoided. But, as a practical matter, one might still want to call a primarily logical proposition, for example, 'Given a utility function with appropriate assumptions, a derived demand curve will be downward sloping', a fact to be distinguished from a relatively more value laden proposition such as, 'Society will be better off if income is redistributed in some fashion'. One of Putnam's goals is to enrich modern economics by getting economists to recognize not only the negative critique of the fact-value dichotomy but also the positive opportunities of the entanglement of facts and values. Entanglement demonstrates the legitimacy-indeed necessity-of ethical judgments in economic analysis. A major example cited by Putnam of how this opportunity can be taken up by economists is Amartya Sen's capability approach to studying economic well-being. Several of Dasgupta's works can be seen as practical demonstrations of Putnam's position. His 1993 book An inquiry into well-being and destitution, among many other works, shows how economists can and should integrate ethical concerns into their research, and even cites Putnam's work as a justification for this approach. Thus it probably came as some surprise to Putnam that Dasgupta's 2005 article advanced a quite different interpretation of what economists, including Dasgupta himself, were doing. In the resulting exchange both sides seemed to be talking past each other. THE ENTANGLEMENT OF FACT AND VALUE: THE DISAGREEMENT In a reply jointly written with Walsh, Putnam argues that Dasgupta completely misread his position on the entanglement of facts, theories, SU AND COLANDER / A FAILURE TO COMMUNICATE VOLUME 6, ISSUE 2, AUTUMN 2013 6 and values (Putnam and Walsh 2007a). In response, Dasgupta insists that he understood entanglement perfectly and had no quarrel with it (Dasgupta 2007a). In examining why they disagree, let us start with an example where their disagreement is evident. In closing his paper, Dasgupta (2005) offers two quotations-from Reutlinger and Pellekaan (1986) and from the World Bank's 1986 World development report-to support his central claim that economists have shared ethical values, but differ in their reading of the facts. The same quotations are also used by Putnam and Walsh as evidence that Dasgupta had failed to understand what they meant by the entanglement (Putnam and Walsh 2007b, 185-187). 2 These two quotations are as follows: [L]ong run economic growth is often slowed by widespread chronic food insecurity. People who lack energy are ill-equipped to take advantage of opportunities for increasing their productivity and output. That is why policymakers in some countries may want to consider interventions that speed up food security for the groups worst affected without waiting for the general effect of long-run growth (Reutlinger and Pellekaan 1986, 6). The best policies for alleviating malnutrition and poverty are those which increase growth and the competitiveness of the economy, for a growing and competitive economy facilitates a more even distribution of human capital and other assets and ensures higher incomes for the poor. 
Progress in the battle against malnutrition and poverty can be sustained if, and only if, there is satisfactory economic growth (World Bank 1986, 7).

In this case, in saying that economists have shared values, Dasgupta means that the ethical desirability of eliminating destitution is presumed by both sets of authors. He sees the difference in policy recommendations as disagreements concerning the most effective means of eliminating destitution that follow from the two parties' differing views of the central causal mechanisms. In contrast, in arguing that the disagreement between the two sets of authors is of an entangled character, Putnam and Walsh mean that the apparent divergence in views regarding the most effective means is actually the result of the authors' different values. In their view, the authors of the World development report do not truly share the value of eliminating destitution with Reutlinger and Pellekaan: the apparent value agreement is just a disguise for their real unspeakable values (Putnam and Walsh 2007b, 186).

Our claim is that the arguments of both sides can be seen as convincing within their own context while simultaneously being seen as incomplete from the perspective of the other side. Dasgupta is clearly aware that ethical values are often the motivation for economic studies, and hence he agrees that economics is not value-free. Moreover, he believes, rightly or wrongly, that the ethical values which motivate most economic research are widely shared by economists. There is little doubt that Dasgupta recognizes the entanglement of fact and value at the initial stage of a research project, but he seems to believe that at the later stages of the research, the evaluation of facts will not be entangled with ethical values, though he does not deny that other types of values may be involved (Dasgupta 2007a, 471). Putnam disagrees with him on the latter point. For Putnam, it is impossible to make a statement about facts without making an ethical value judgment. He believes that on this point Dasgupta has failed to comprehend the true meaning of his analysis of entanglement and its implications.

Putnam and Walsh argue that the values held by Reutlinger and Pellekaan are different from those of the World Bank, and that this difference in values is at the root of their different reading of the facts. Their sharp critique points out the problem that economists may use so-called 'scientific' theory as cover for ideological beliefs. But can this argument alone defeat Dasgupta's position that economists, even when sharing ends, would still have different views regarding which means would be most effective for achieving them due to their different readings of the facts? And isn't it possible that economists do genuinely agree about some ends, yet still disagree about means due to different understandings of the relevant facts, such as causal mechanisms? We believe that it is indeed possible, and that as a practical matter good economists, such as Dasgupta, focus their applied work on an analysis of "facts", while recognizing that on a deeper metaphysical level facts and values are intertwined. In developing that applied empirical work, for example in identifying and studying specific causal mechanisms, they will come to different judgments about the facts and their real world significance, but those differing judgments do not mean that they differ about the ultimate goal.

VALUE-FREE ECONOMICS?
The debate between Putnam and Dasgupta is just part of a more general debate between philosophers of science. Insight can be gained into their debate by considering that broader philosophical debate, specifically the work of Andrea Scarantino (2009), who divided the relationship between science and values into three types: the 'naïve positivist view', the 'separatist view', and the 'non-separatist view'. The naïve positivist view is that values should not play any role at any stage of the activities of scientific economists and that, if they do, economists have violated the methodological conventions that make economics a science. Neither Putnam nor Dasgupta holds those views. Where they differ is that Dasgupta is more of a separatist, and Putnam is more of a non-separatist.

Following Scarantino (2009), in order to distinguish the separatist and non-separatist views we need to distinguish both between epistemic values and non-epistemic values, and between internal activities and bordering activities. The epistemic/non-epistemic distinction is similar to the distinction made by Mark Blaug between 'methodological values' and 'normative values' (Blaug 1992, 114; 1998, 372). The term 'epistemic value' is used by philosophers of science to refer to those values which govern the meaning and formulation of scientific knowledge, for instance, accuracy, consistency, and simplicity. In contrast, 'non-epistemic value' is used to refer to all other values that may be involved, i.e., values which are not instrumental to the establishment of scientific knowledge. Ethical, political, and sociocultural values belong to this category. Internal activities are the core activities that economists do, the research that determines what will be considered economic facts (Scarantino 2009, 465-466). They relate to what philosophers call the context of justification. Bordering activities refer to the selection of which economic problems to investigate, or what philosophers call the context of discovery, and to the use made of economic knowledge once acquired.

According to Scarantino, the non-separatist view holds that "both epistemic and non-epistemic values have a legitimate role to play in the 'internal activities' of scientific economists" (2009, 466). Putnam can thus be seen as a non-separatist. For him, it is impossible to exclude values, both epistemic and non-epistemic, from either the internal or the bordering activities of economists. The separatist view lies in between the naïve positivist view and the non-separatist view. While the naïve positivist view represents the ideal of science as free from all values, the separatist view represents the ideal of science as free only from non-epistemic values because it recognizes the inevitability of epistemic values in scientific activities. Moreover, as Scarantino points out, it is compatible with separatism to see the bordering activities of scientific economics as laden with non-epistemic values. But the legitimate influence of non-epistemic values is restricted to the prior and posterior stages of the pursuit of economic knowledge, such as choosing socially significant problems to work on and interpreting the policy relevance of results.

Using Scarantino's classification, the disagreement between Putnam and Dasgupta about Dasgupta's position can be better understood. Putnam sees Dasgupta as a naïve positivist whereas the view Dasgupta actually holds seems closer to separatism.
This understanding of their debate by no means allows us to resolve the ongoing disagreement between non-separatism and separatism. Nevertheless, the removal of an apparent misunderstanding can be a first step to more effective communication between them, since they would at least be in agreement about what it is they are disagreeing about. Putnam is fully aware of the distinction between epistemic and non-epistemic values. But he does not put much weight on it, because he considers that both types of values are ultimately inseparable (Putnam 2002, 31-33). Indeed, it is likely that non-epistemic values would indirectly influence economists' research by influencing how epistemic values are taken up. But the distinction does help us to clarify that whether economics is value-free is not the key point in the debate between Putnam and Dasgupta: both believe that economists' bordering activities are laden with non-epistemic values and that their internal activities are laden with epistemic values. The real disagreement between them is about whether any part of economic analysis can be free from ethical value judgments, or, more precisely, whether economists can avoid making ethical judgments in their internal activities. In our view, Putnam does not respond to this question adequately in his reply to Dasgupta, even if his non-separatist view is the right one.

Several outstanding economists and economic methodologists have advocated a careful study of the impact of values on the scientific activities of economists. For instance, back in the 1930s Gunnar Myrdal (1953 [1930]) argued that economists' personal traits, disciplinary traditions, and the interests and prejudices of the society they lived in would inevitably influence their research through influencing the approach they chose, their explanatory models and theories, the concepts they used, and the procedures they followed in making observations and drawing inferences. In 1973 Myrdal reiterated his argument, emphasizing the importance of studying the sociology and psychology of economists (Myrdal 1973). However, until recently the exploration of these fields remained a "neglected agenda" (see Backhouse 2005). How the formation of economic knowledge is influenced by non-epistemic values acting through epistemic values is indeed an important question. But in addition to pursuing a full account of such issues, there might be some other ways in which economists can improve the quality of economic studies. We argue that Dasgupta believes so and that this is the key message of his 2005 article.

DASGUPTA'S MISSED MESSAGE ABOUT ECONOMIC METHODOLOGY

The title of Dasgupta's 2005 paper 'What do economists analyze and why: values or facts?' implies the dichotomy of facts and values rather than their entanglement, as Putnam and Walsh commented. It reinforces the puzzle of why Dasgupta would insist that economists study facts not values if he accepts the entanglement of facts and ethical values, at least to some degree. We believe that Dasgupta had an important message to convey but failed to communicate it clearly, and we suggest that Putnam and Walsh's failure to understand him was partly due to their reading of him as under the influence of the logical-positivist tradition with its demarcation between fact-based science and value-based ethics. Dasgupta's position cannot actually be understood in this logical-positivist tradition.
For Dasgupta, the main challenge for policy analysis in the economics profession at present is not the lack of ethical foundations. The much more pressing issue for economists is to improve their understanding of the factual side of social problems. In our view, Dasgupta's claim that economists share many ethical values is an overstatement, but one that can be justified as a reasonable simplification that explains and justifies why economists try to structure their debates so as to focus on issues where their ethical differences are not in play. The simplification is a useful idealization because it allows Dasgupta to focus on the more important claim that refining our understanding of the factual aspects of a social phenomenon can benefit the policy debate regardless of what one's ethical views are. In our view, this key point in Dasgupta's argument did not receive enough attention from Putnam and Walsh.

As an economist, and perhaps especially as a development economist, Dasgupta's main concern is with how to refine our understanding of facts for policy analysis. That is a question about the pragmatic methodology that economists should use. Dasgupta's aim is mainly practical, not theoretical or philosophical. He does not so much downplay the significance of ethics as play up the significance of operational solutions that improve policy analysis. As he put it bluntly, "I am a practicing economist, not a philosopher" (Dasgupta 2007a, 370).

Dasgupta is not alone. The goal of improving the reading of facts for practical purposes has a long history in economics. Pursuing this goal does not really distinguish him from other contemporary economists. What makes Dasgupta unusual is his practice of economics, which, as recognized by Putnam and Walsh, distances him from mainstream neo-Walrasian theory and puts him more in line with classical economic theory (Putnam and Walsh 2007b, 195). We also see Dasgupta's approach as in line with the classical tradition. But unlike Putnam, who associated Dasgupta with Adam Smith, we argue that Dasgupta's approach to economic policy analysis is better placed in the Mill-Keynes tradition. Looking through this lens, what Dasgupta is doing is consistent with what he claims he is doing. Putnam and Walsh (2007b, 193-195) quoted extensively from Dasgupta's discussion of destitution to demonstrate that Dasgupta's work belonged to the classical tradition. Using the same passages quoted by Putnam and Walsh, we will provide an alternative reading of Dasgupta.

DASGUPTA AND THE MILL-KEYNES TRADITION OF METHODOLOGY

[A]ll the equilibria in the timeless economy are Pareto-efficient […] This means, among other things, that there are no policies open to the government for alleviating the extent of undernourishment other than those that amount to consumption or asset transfers. A common wisdom is that such policies impede the growth of an economy's productive capacity because of their detrimental effect on saving and investment, incentives, and so forth. But this is only one side of the picture. Our model will stress the other side, which is that a transfer from the well-off to the undernourished can enhance output via the increased productivity of the impoverished (Results 7 and 8). We don't know in advance which is the greater effect, but to ignore the latter yields biased estimates of the effects of redistributive policies.
[…] By developing the economics of malnutrition, I will offer a final justification for the thesis that it is the singular responsibility of the State to be an active participant in the allocation mechanism guiding the production and distribution of positive and negative freedoms. This justification is built on the idea that in a poor economy markets on their own are incapable of empowering all people with the opportunity to convert their potential labour power into actual labour power. As a resource allocation mechanism, markets on their own simply aren't effective. The theory I will develop below also shows how a group of similar poor people can become fragmented over time into distinct classes, facing widely different opportunities. Risk and uncertainty will play no role in this. It is a pristine theory of class formation (Dasgupta 1993, 476-477).

Putnam and Walsh used these passages as evidence of the fact-value entanglement in Dasgupta's work and the concordance between Dasgupta's and Smith's economic writings. But reading Dasgupta through the Mill-Keynes lens gives us what seems a better view of his true intentions. We suggest the similarities of Dasgupta's approach with the Mill-Keynes tradition can be identified from the following two aspects.

a) The knowledge of 'what ought to be' is distinct from, but based on, the knowledge of 'what is'. Dasgupta's work suggests that he would accept the science-art distinction proposed by John Stuart Mill. On the one hand, science and art are distinct (Mill 1967 [1844], 312). Science, which concerns the knowledge of 'what is', is different in nature from art, which concerns the knowledge of 'what ought to be'. On the other hand, science and art are closely interrelated. Art assigns ends to science; science informs art of the means available for achieving those ends; based on the knowledge provided by science, art decides what ought to be done to achieve the ends (Mill 1974 [1872]). Note that the science-art distinction is not equivalent to the fact-value dichotomy. A key difference between the two is that while the latter implies that science deals with facts and art deals with values, the former does not.

From the second passage cited above, we can see how Dasgupta intends to base his normative judgment on the knowledge of facts provided by science. The statement that "it is the singular responsibility of the State to be an active participant in the allocation mechanism guiding the production and distribution of positive and negative freedoms" is a normative one. It is clear in Dasgupta's writing that this normative judgment "is built on" the idea that "in a poor economy markets on their own are incapable of empowering all people with the opportunity to convert their potential labour power into actual labour power", which is a reading of fact derived from his scientific economic analysis of malnutrition (Dasgupta 1993, 477). Dasgupta would not deny that his claim that markets are incapable of empowering all people might involve a value judgment, but for him the statement is a positive statement, not a normative one. The statement does not indicate what ought to be done. It alone cannot tell us why the State rather than non-governmental organizations should be the remedy for the failure of markets.
It does not even suggest that leaving the markets alone should not be an option, unless we already consider it desirable to try to empower all people to convert their potential labour power into actual labour power and this aim is not trumped by other aims.

b) It is necessary to adopt an interdisciplinary approach to reading facts to remedy the limitations of mainstream models relating to their unrealistic assumptions. Despite being critical of mainstream economic models, Dasgupta does not deny their contribution. He has issues with them because he believes they present an unrealistic view of the world, because their construction neglects crucial facts, such as basic needs and physiological phenomena, and hence they are unable to provide an accurate reading of economic phenomena. For Dasgupta, the mainstream models can be a poor guide to the causal mechanisms involved because of inappropriate assumptions and construction. The ethical values held by economists might be the cause of the problem, but not necessarily. In his 2005 article, Dasgupta shows that as a practicing economist he aims to deal with those cases in which ethical values are not the cause of economists' mistaken reading of causality.

In view of the limitations of the standard models, Dasgupta includes scientific knowledge from outside economics in his analysis of policy. In his research, the knowledge provided by disciplines such as physiology, the science of nutrition, ecology, and so on, plays an important role in understanding the factual side of social phenomena. At the very beginning of chapter 16 of his 1993 book, Dasgupta points out that the standard theory of resource allocation fails to take into account the fact that meeting physiological maintenance requirements is a precondition of labour power. The term 'economic disfranchisement' is used by Dasgupta to point out the illusion, suggested by the standard theory, that every labourer is on an equal footing in terms of converting potential labour power into real labour power in the labour market. He therefore attempted to construct a theory that took human physiology into account. It is true that the ethical values held by Dasgupta may have contributed to his interest in the phenomenon of economic disfranchisement and redistributive policies. Yet it is also true that although concluding that "models that are dissonant with physiological truths are hopelessly incomplete" (1993, 475), Dasgupta does not attack the standard theory from an ethical point of view, but from a factual point of view.

From the first passage cited above, we can see that Dasgupta intends to disprove the "common wisdom" by showing that the outcomes derived from the standard model will not come about if the positive effects on productivity of a transfer from the well-off to the undernourished are greater than its negative effects on saving and investment. The approach he took to refute the standard theory is very much 'scientific' in Mill's sense, rather than 'ethical' or 'normative'. According to Mill, social science is a deductive enterprise, but one which follows the model of the physical sciences, rather than that of geometry. Social science, he wrote, infers the law of each effect from the laws of causation on which that effect depends; not, however, from the law merely of one cause, as in the geometrical method; but by considering all the causes which conjunctly influence the effect, and compounding their laws with one another (Mill 1974 [1872], 895).
In Mill's view, the complexity of social phenomena does not arise from the number of the laws, but "from the extraordinary number and variety of the data or elements-of the agents which, in obedience to that small number of laws, co-operate towards the effect" (Mill 1974 [1872], 895). Dasgupta's approach to asset transfer policies is a good example of Mill's deductive method. Dasgupta identifies two main effects of a transfer: decreasing savings and investment on the one hand while increasing the productivity of the impoverished on the other hand. These two tendencies can be seen as co-existent intermediate mechanisms which will have different effects on economic growth. According to the physical 'deductive method', the final result of the transfer policy should be estimated by summing up the individual effects of the co-existent intermediate causes. In contrast, the approach adopted by the standard model is equivalent to the 'geometrical method' because it does not admit the modification of the presumed psychological law (the behaviour of saving and investing will be negatively affected by the transfer) by another law (the improvement in nutrition will increase productivity).

It is worth noting that Mill does not pretend that it is possible to calculate the aggregate result of many co-existent causes with complete precision. In his view, it is beyond human faculties to take into account all the causes which happen to exist in one case (Mill 1974 [1872], 898). But, as a practical science, if economics can provide us with knowledge of tendencies, it gives us a considerable power to "surround [our] society with the greatest possible number of circumstances of which the tendencies are beneficial, and to remove or counteract, as far as practicable, those of which the tendencies are injurious" (Mill 1974 [1872], 898).

From the above discussion, we can see that the scientific aspirations of Dasgupta's economic writings are clearly in line with the approach explicitly stipulated by Mill. This scientific dimension is absent from Smith's work. Indeed, Mill's proposal of the science-art distinction specifically took Smith as a target. In Mill's view, the title and arrangement of Smith's book An inquiry into the nature and causes of the wealth of nations, despite being suitable for the purpose of his work, had caused a general misunderstanding of the nature of economics as a science. Smith's approach tended to mix up what makes a nation rich (what is) with what a nation ought to do to increase its wealth (what ought to be done). For Mill, the latter is not an appropriate subject for scientific economics; it should be the subject of political economy as art (Mill 1967 [1844], 312). Moreover, according to Smith the object of political economy is firstly to enable the country's people to provide sufficient necessaries and conveniences of life for themselves and secondarily to supply the state with a revenue sufficient for the public service (Smith 1976 [1776], book 5, Introduction). For Mill, the desirability of these objects is determined by art, not by science (Mill 1967 [1844], 312).

Dasgupta is not the only economist whom Putnam and Walsh have held up as a paradigm of Smithian methodology, and not the only one who turns out not to fit that model quite as well as they supposed.
Putnam and Walsh have also suggested that Sen's work, and especially his capability approach, is in the Smithian tradition (Putnam 2002; 2003; Putnam and Walsh 2007b). In terms of Sen's methodology, we do not see it that way: Smith blended normative and positive analysis without separating normative and positive economics in any logical way. Sen does the opposite; he carefully specifies what in his analysis is normative and what is positive, and explains why his normative analysis is much more consistent with most people's normative views than are the implicit normative judgments in standard analysis. This, in our view, puts him in the Mill-Keynes methodological tradition, which evolved from Smith's partly by criticizing Smith for his lack of clarity about the difference between what economics studies and what the ends of economics and economic policy ought to be.

In the first chapter of his book On ethics and economics (1987), Sen identifies two origins for economics in ethics and engineering. Sen groups Smith and Mill together in the ethics-related tradition, which is correct in the sense that both Smith and Mill see economics as a branch of moral philosophy (i.e., the ultimate end of economic knowledge is to make life better, and hence ultimately economics cannot be independent from ethics). But we would add an extra distinction to Sen's classification that allows us to distinguish Smith and Mill in terms of their methodology. Whereas Smith blended his normative and positive analysis together, Mill carefully attempted to distinguish art from science. Thus, like Putnam and Walsh, we see Sen as following Smith's (and Mill's) ethical tradition, in the sense of seeing economics as a branch of moral philosophy. But unlike them we see Sen's methodology as deriving from the more sophisticated Mill-Keynes tradition rather than Smith's. This is what we mean by saying that Sen belongs to the Mill-Keynes approach, not the Smithian approach.

It is intriguing to note that enriching the nation, the major goal of Smith's political economy, has been implicitly taken over by many modern economists as a value-neutral goal, while equitable distribution, which is less directly addressed by Smith, is considered as value-laden and hence as an illegitimate subject for economics. Mill's distinction between science and art could in effect support Putnam's intention of revealing the biased attitude of some economists towards different ethical values that leads to biased readings of facts.

Dasgupta rarely if ever refers to Mill in his work. However, it is not entirely surprising to find similarities between their methods of doing economics. Daniel Hausman once commented that "[t]he temper and character of modern economics still embodies the Millian vision of the discipline as a separate science" (Hausman 1992, 225). Modern economics may not have developed in quite the way Mill had hoped, but it is fair to say that his analysis of the nature and methodology of economics was indirectly and partially inherited by contemporary economists through the influence of John Neville Keynes and Robbins. In The scope and method of political economy (1917 [1890]), J. N.
Keynes took up Mill's distinction between positive science and normative art and further developed it into a tripartite division of economics in accordance with his classification of knowledge. According to this classification, a positive science is a body of systematized knowledge concerning what is; normative or regulative science is a body of systematized knowledge relating to the criteria of what ought to be; and an art is a system of rules for the attainment of a given end. Each has its own distinct objectives: for a positive science the objective is to establish uniformities; for a normative science it is to determine ideals; for an art it is to formulate precepts. Accordingly, investigations into economic uniformities, economic ideals, and economic precepts can be categorised respectively as the positive science of political economy, the ethics of political economy, and the art of political economy (see 1917 [1890], 31-36).

In our view, the Millian approach did not end with J. N. Keynes. In particular, we have argued elsewhere (Colander 2009) that Robbins is best interpreted as working within this tradition, and that that sheds a quite different light on his message. Specifically, we argue that Robbins (1945 [1932]) advocated not only the importance of separating positive economics from ethics but also a separate, non-scientific branch of economics to deal with issues of values. Robbins noted that the majority of classical economists used the term political economy to cover "a mélange of objective analysis and applications involving value judgments" (1976, 1; 1981, 7). In his 1981 Ely Lecture and in the introduction to his 1976 book Political economy, past and present, Robbins suggested that the use of the term 'political economy' should be revived, to maintain a space in economics where ethical values play a central role (1976, 2-3; 1981, 7-8).[6] According to Robbins, this political economy is not part of economic science, but it is an integral part of economic studies.

[6] Robbins uses the term in a narrower sense than Smith: Robbins uses the term to designate only the prescriptive part of economic investigation, whereas Smith's political economy concerned both what we have been calling positive science and normative art.

Mill's call for economics as a science separate from art has been largely realized in the economics profession over the past 150 years, but the line of descent from Mill through Keynes and Robbins to today took various turns. Each inflexion caused some changes to the direction of the development of economics, and the final outcome is very different from what Mill would have expected. We do not deny the problems of modern economics that emerged during its formation as a separate discipline. But, with a correct understanding of the Mill-Keynes tradition of methodology, and particularly by recovering the integral role of art in economic studies, the economics profession could do a much better job than it does now to highlight the way values are integrated into economic analysis.[7]

[7] We have discussed elsewhere how the economics profession can improve by reintroducing the Mill-Keynes methodological tradition (see Colander 1992; 2001; 2013; Su 2012). It involves distinguishing separate methodological approaches for applied policy economics and for the pure science of economics, along the lines suggested by J. N. Keynes.

Specifically, we believe that when Dasgupta's arguments are interpreted through the Mill-Keynes lens, rather than a Smithian one, his arguments make much more sense philosophically. They are not deep philosophical arguments but pragmatic arguments about how to move forward in tentatively separating positive truths from normative rules, even while accepting that on a deep level they may not be fully separable. Instead of letting fact-value entanglement lead one to an impasse, one distinguishes those factual judgments and normative judgments that are most separable, accepts that others are not, and gets on with one's analysis.
We are not especially concerned with whether Dasgupta is actually a follower of either Smith or Mill. Our argument is that seeing Dasgupta within the Mill-Keynes tradition helps clarify his methodology. The Mill-Keynes interpretation allows us to understand how Dasgupta considers himself able to integrate ethical considerations into his economic policy analysis without sacrificing the scientific character and objectivity of his economic analysis. In the Mill-Keynes methodological tradition, the scientific branch of economic studies is separated from applied economic policy analysis. The separation is meant to enhance the quality of the latter by improving the understanding of economic phenomena through adopting appropriate scientific methods. Putnam may disagree with the Mill-Keynes methodology, but we believe his criticisms would be better understood by Dasgupta, and other economists, if they took explicit account of the pragmatic art-science foundations of his methodology, and did not reduce them immediately to the fact-value dichotomy associated with the logical-positivist tradition, and which the Mill-Keynes economic tradition did not embrace.

CONCLUSIONS

The debate between Putnam and Dasgupta was perceived by Putnam to be about whether economics is value-free or not, as indicated by the title of his recent book with Walsh about their side of the debate, The end of value-free economics. We have suggested in this paper that this was a misperception. The fact-value divide is problematic, but it is not the key to the Putnam-Dasgupta debate. We have argued that Dasgupta was mistakenly understood by Putnam and Walsh as holding a naïve positivist view, which insists on a dichotomy between fact-based science and value-based ethics and argues that economics should be free from all sorts of values. In our view, the confrontation between Putnam and Dasgupta is actually between a non-separatist view and a separatist view. More specifically, the disagreement between them is about whether it is possible for economists to avoid making ethical value judgments when they try to explain observed economic phenomena in an objective factual way.

The philosophy of science debate between the non-separatist view and the separatist view is on-going. The implications of these two views for scientific activities require more investigation. In particular, if ethical value judgments cannot be avoided even in internal scientific activities, as the non-separatist view claims, then it is important for economists to understand how this entanglement occurs in order to know how to minimize the resulting biases in their research, as much as one can. However, real-world economic problems are pressing and cannot wait for solutions until we have a satisfactory answer to these profound questions.
Moreover, even if it is true that economists' reading of facts is inevitably influenced by their personal values, it is not necessarily the case that their different readings of the facts can be solely explained by differences in their ethical values. For these reasons, the value of Dasgupta's call for refining the reading of facts should be acknowledged, and the Mill-Keynes tradition rediscovered.

Huei-chun Su is an honorary research associate at the Bentham Project at University College London, UK. She is the author of the book Economic justice and liberty: the social philosophy in John Stuart Mill's utilitarianism (Routledge, 2013). Her research interests include history of economic thought, philosophy of social and economic policies, and moral philosophy. Contact email: <h.su@ucl.ac.uk>

David Colander is College Professor at Middlebury College. He has authored, co-authored, or edited over 40 academic books as well as numerous textbooks, and 150 articles on a wide range of topics.
THE SCIENCE OF COLOUR AND COLOUR VISION

Colour science concerns the process of colour vision and those features of the environment that affect the colours that we see and how we see them. Colour vision has been studied systematically from a variety of points of view since the nineteenth century. The science we discuss below draws on optics, psychology, neuroscience, neurology, ophthalmology, and biology. And, although the relevant basic facts of optics and physiology and their contribution to colour vision have been known for at least a century and a half, there are still many aspects of colour vision, including some quite fundamental ones, that are poorly understood. In what follows we will provide an overview of what is known and indicate matters of current controversy. We will concentrate on giving the background necessary to understand those parts of colour science that are potentially relevant to philosophical work on colour. Our account is necessarily quite sketchy and we won't be able to do more than provide a starting point for those interested in the topic.

2 Light

Light is a form of electromagnetic radiation, and so can be described in both wave and particle terms. The particles of light, photons, are usefully characterized in terms of their energy (the usual unit is the electron-volt (eV), 1.6 × 10⁻¹⁹ joules) while the waves associated with the photon are usefully characterized by their wavelength (the usual unit is the nanometer (nm), 10⁻⁹ metres). These are not independent characterizations: specifically, the energy of a photon is inversely proportional to its wavelength. The intensity or power of a light is the amount of energy it delivers per unit time. Most light sources emit light at a variety of wavelengths so a complete characterization of a light in these terms requires describing how its power is distributed across wavelengths. The spectral power distribution (SPD) of a light specifies the proportion of the total power of that light that is carried by the photons at each wavelength. For many purposes in colour science, overall intensity is held fixed and it is the varying SPD that is the explanatory variable.

Only a very small segment of the total electromagnetic spectrum is relevant to most questions in colour science because the receptors in the eye only respond directly to a narrow range of wavelengths. The precise boundaries are somewhat arbitrary but the visible spectrum runs roughly from 400 nm (3.1 eV) at the violet end to 700 nm (1.8 eV) at the red end. The range of intensities that are relevant is much larger: the ratio of the intensities of the illumination provided by direct summer sunlight to that available on a moonless night is about 10 billion to one. Normal indoor lighting typically lies somewhere near the centre of this range.
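The inverse relation between wavelength and photon energy can be made concrete with the standard conversion E = hc/λ; a minimal Python sketch (the constants are textbook physical values, not from this article):

```python
# Photon energy from wavelength via E = h*c / lambda: energy is inversely
# proportional to wavelength, as stated above.
H = 6.626e-34   # Planck constant, joule-seconds
C = 2.998e8     # speed of light, metres per second
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Return the energy in eV of a photon of the given wavelength in nm."""
    return H * C / (wavelength_nm * 1e-9) / EV

print(photon_energy_ev(400))  # about 3.1 eV, the violet end of the visible spectrum
print(photon_energy_ev(700))  # about 1.8 eV, the red end
```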
The light sources that initiate the process of vision can be described in terms of two kinds of characteristics: spatial and spectral. First, light sources can be divided into those that are of significant spatial extent, like the sky on an overcast day or a bank of fluorescent tubes behind a diffusing panel, and those that approximate point sources, like the sun or a street lamp. An extended source can provide much more uniform illumination across the scene while a point source illuminates objects in a way that depends much more strongly on their position and orientation with respect to the source and the other objects in the scene. Second, as just noted, the light emitted by a source can be characterized in terms of its overall intensity and spectral power distribution. The SPD of light sources is critical to understanding how the process of colour vision works.

3 Objects

When light falls on an object some proportion of the light at each wavelength is reflected, some proportion is absorbed and, for transparent and translucent objects, some proportion is transmitted. Reflection can be quite complicated but for many purposes it is useful to separate the reflected light into two components. First, a diffuse component, in which the intensity of the reflected light displays relatively little dependence on the angle between the eye, the object's surface, and the light source. Second, a specular component in which the reflection is mirrorlike and highly directional. Typically, the diffuse component is much more influenced by characteristics of the object, while the specularly reflected light often approximates the SPD of the light source. A number of the characteristics of an object affect the way in which it modifies the light it reflects, most notably its chemical composition and the roughness of its surface. Since many objects are heterogeneous in their composition the reflecting characteristics of an object are typically variable and the variation often is found at several different spatial scales, giving rise to both visible patterns and visible texture.

The reflectance of an object (or surface) at a given wavelength is the ratio of the light (number of photons) it reflects at that wavelength to the incident light at that wavelength. The surface spectral reflectance (SSR) of an object is the reflectance of the object at each wavelength (in practice narrow bands of wavelengths) in the visible spectrum. Displaying an object's SSR graphically results in its spectral reflectance curve. In order to achieve a widespread system of colour measurement the illuminants need to be standardized. The most important of these is CIE illuminant C, an approximation to average daylight that has the virtue of being reproducible in the laboratory using a standard light source and filter.

The visible light reaching the eye from an (opaque, non-luminous) object is the joint product of its SSR and the SPD of the incident light. Ignoring the effects of scene composition, these exhaust the physical characteristics of objects and light relevant to predicting colour appearance.
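The joint-product relation just stated is, band by band, a single multiplication; a minimal sketch with invented illuminant and reflectance numbers rather than measured SPD or SSR data:

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)             # visible spectrum in 10 nm bands
spd = np.ones(len(wavelengths))                   # a flat, equal-energy illuminant
ssr = np.clip((wavelengths - 400) / 300.0, 0, 1)  # a surface reflecting more at long wavelengths

# The SPD of the light reaching the eye from the surface:
colour_signal = spd * ssr
```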
What is missing, however, from this physical description is any way of relating this information to perceived colour. First, not all differences in the SSR of the object or the SPD of the illuminant are perceptually detectable. Second, and more importantly, a pair of spectral reflectance curves is little help by itself as to whether or not the corresponding two objects will appear to match in colour when viewed in a given illuminant. Unsurprisingly, the physics of light and its interaction with objects is not enough to explain how we perceive colour.

4 Basic physiology of colour vision

Perceived colour is, in complicated ways, dependent on the spectral power distribution of the light reaching the eye from the objects in the scene. This entails that there are mechanisms in the eye/brain that respond differentially to light of different wavelengths. A large amount of research in colour science, going back to the early nineteenth century, concerns the properties of those visual mechanisms that generate the differential response to wavelength.

The process of vision is initiated by the absorption of light by specialized cells in the retina called photoreceptors. A given photoreceptor will respond strongly to light at some wavelengths and much less strongly at other wavelengths, keeping intensity constant. The specification of how strongly a photoreceptor responds to light in the visible spectrum is known as the spectral sensitivity of the photoreceptor. Displaying a photoreceptor's spectral sensitivity graphically results in its spectral sensitivity curve: very roughly a bell shape, with the peak centred over wavelengths to which the photoreceptor is maximally sensitive, and tails of diminishing sensitivity on either side. In spite of responding differently to light of different wavelengths, the behaviour of a single photoreceptor does not by itself contain any information about the SPD of the light to which it is responding. Photoreceptors provide the same response to an absorbed photon, no matter what its wavelength. Although photons of different wavelengths have different probabilities of being absorbed, the response of a single photoreceptor is the same to a dim light at a wavelength to which it is highly sensitive and a brighter light at a wavelength to which it is less sensitive. Since colour vision requires the ability to distinguish between lights with different wavelengths, that means that colour vision requires contributions from at least two types of photoreceptors that differ in their spectral sensitivity. In fact, as we will discuss in the next section, human colour perception is primarily driven by three distinct photoreceptor types.

4.1 Rods and cones

The human retina contains two morphologically and physiologically distinct classes of photoreceptors. The rods, so-called because of their characteristic shape, are active mainly at low light levels and play little role in colour vision.
The photoreceptors that play the major role are the cones (similarly so-called), active mainly at high light levels. The cones are subdivided into three types on the basis of their differences in spectral sensitivity. One type has a peak sensitivity in the short-wavelength end of the visible spectrum and the other two types have closely spaced peaks near the middle of the spectrum. The three cone-types are morphologically indistinguishable, and although their existence was inferred in the nineteenth century in order to explain the observed characteristics of human colour vision, it was only in the late twentieth century that direct measurements of their spectral sensitivities were made, and the light absorbing photopigments they contain were isolated (see Merbs and Nathans 1992).

Since the ability to discriminate between spectrally different stimuli depends entirely on the differences in spectral sensitivity among the three cone-types it is possible to compare the spectral sensitivities required to explain discrimination performance to the measured characteristics of the cones and their photopigments. The agreement is in general very good and simple colour discrimination tasks are an unusual case in which human behaviour (of a very specialized kind) can be predicted on the basis of knowledge of basic neurophysiology. This is possible because the later stages of visual processing preserve the information present in the cone responses and the behavioural response (under carefully controlled conditions) makes use of all the information available.

Although the cone spectral sensitivities largely determine the ability to discriminate among coloured stimuli, their relation to colour appearances is much more complicated. Since the visible spectrum, under ordinary viewing conditions, has a characteristic colour appearance, it is tempting to apply colour labels to the individual cones based on the appearance of the region of the spectrum to which they are most sensitive. The usual labels are "blue" for the short wavelength receptors (S-cones), "green" for the middle wavelength receptors (M-cones), and "red" for the long wavelength receptors (L-cones). This labelling can suggest the theory, sometimes found in popular discussions, that the perceived colour of a light is the result of mixing blue, green, and red, in proportion to the excitation of the corresponding cone-type. However, the usual labelling is misleading and the theory is incorrect. One reason why the labelling is misleading is that the wavelength of peak sensitivity for the L-cones is actually in the yellow-green part of the spectrum. And even if the "red" cones were well-named, the idea that all colours are mixtures of blue, green, and red doesn't fit the phenomenological facts. Admittedly, purple is, in some intuitive sense, a mixture of red and blue, but what about yellow? That seems to be just as basic as red, green, and blue. In any event, yellow doesn't appear to be a mixture of these colours in the way that purple appears to be a mixture of red and blue. Further, how does the mixing theory explain the appearance of a green light that is neither yellowish nor bluish? Presumably this is because the light excites only the "green" cones; but because of the overlap in the spectral sensitivities of the three cone-types, there is no such light. As we will see, the problem of explaining colour appearance is a difficult one that does not yet have a fully satisfactory solution.
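The principle of univariance described in the previous section can be made concrete in a few lines: a photoreceptor's output is a single number, so a dim light at a well-absorbed wavelength and a suitably brighter light at a poorly absorbed wavelength are indistinguishable to it. The Gaussian sensitivity below is a crude stand-in, not measured cone data:

```python
import numpy as np

wl = np.arange(400, 701, 1.0)

def sensitivity(peak_nm, width_nm=40.0):
    # A bell-shaped spectral sensitivity curve (toy stand-in for real data).
    return np.exp(-0.5 * ((wl - peak_nm) / width_nm) ** 2)

s = sensitivity(550)                       # a receptor most sensitive near 550 nm

def response(spd):
    # The receptor's output: one sensitivity-weighted sum over wavelengths.
    return float(np.sum(s * spd))

dim = np.where(wl == 550, 1.0, 0.0)        # dim light at the peak wavelength
scale = float(s[wl == 550] / s[wl == 610]) # how much brighter the 610 nm light must be
bright = np.where(wl == 610, scale, 0.0)   # brighter light, less absorbed wavelength

print(response(dim), response(bright))     # equal: one receptor cannot tell them apart
```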
One important fact about photoreceptors, and neurons in general, helps explain one of the difficulties in predicting colour appearance given just the characterization of the stimulus. Although the relative sensitivity of the photoreceptors to light of different wavelengths is fixed, the absolute sensitivity of the photoreceptors dynamically adjusts to the light level. This adaptation allows the cones to provide usable signals at the very wide range of light intensities that we encounter as we move about the environment. One consequence of this is that the cone outputs provide relatively little information about the absolute intensity of the light stimulating them. The darkest areas of a scene lit by direct daylight are comparable in absolute intensity to the brightest areas of a scene viewed under a typical reading light, even after correcting for the change in pupil size. Another consequence is that the same stimulus can produce very different cone outputs depending on the recent history of stimulation of the cones. After adaptation to short-wavelength light the S-cones will have decreased sensitivity and a given stimulus will tend to look less blue than it would if the adapting stimulus had consisted of long-wavelength light. Adaptation of various kinds is not unique to the cones but plays a role throughout visual processing.

4.2 Chromatic processing in the retina

The processing of visual information begins within the retina itself and its output neurons, the ganglion cells, have very different response properties, both spatial and spectral, from the photoreceptors themselves. A ganglion cell receives inputs (via other cells) from multiple photoreceptors arranged in a patch on the back of the retina: the cell's receptive field. Ganglion cells have centre-surround receptive fields, meaning that they are excited/inhibited by light in the centre of the receptive field and inhibited/excited by light in the periphery or surround. Importantly for understanding colour vision, the centre and surround can also differ in their sensitivity to light of different wavelengths. In foveal or central vision, where both spatial and spectral discrimination are best, in many cases the centre response is driven by a single photoreceptor while the surround draws on inputs from neighbouring photoreceptors. Consequently, ganglion cells respond best to spectral and spatial contrast. For example, a +L-M cell (one whose centre is excited by L-cone input and whose surround is inhibited by M-cone input) will respond well to a small red or white spot on a dark or blue background, less well to uniform red light (which will stimulate the M-cones to some degree) and poorly to uniform white light. Cells with this kind of opponent structure transform the original three cone channels into new channels based on contrast.

Retinal processing also begins a tendency towards specialization that continues through later stages of the visual system. The most important is the subdivision of retinal ganglion cells into two separate processing streams known as the parvocellular (P) and magnocellular (M) streams. The P-stream carries chromatic information and information about sustained, high spatial resolution aspects of the retinal image. The M-stream is responsive to rapidly changing stimuli, has lower spatial resolution, and is relatively insensitive to chromatic information. These two pathways are driven by the M- and L-cone outputs; the S-cone signal is carried by a separate pathway whose properties are less well understood.
It is important to note that there is no purely chromatic channel originating in the retina. Not only are the outputs of the three cone-types subject to an opponent transformation almost immediately, but the cells in the P-stream combine spectral, intensity, and spatial information. It is only by comparing the responses of multiple cell-types to the same stimulus that it is possible to separate the chromatic information from the spatial and intensity information. It is not until the cortex that cells are encountered whose responses disambiguate the spatial and spectral information that jointly determines the activity of cells earlier in the visual pathway.

5 The psychophysics of colour

So far we have looked at colour vision from the point of view of physiology. Alternatively, we could look at how people (and other animals) behave in response to coloured stimuli. This kind of approach, in which very constrained responses to carefully constructed and varied stimuli are measured and analysed, has been central to colour science. As we saw in discussing the cone sensitivities, the physiology is intimately connected with measures of psychophysical performance, like spectral discrimination. Colour science has been traditionally characterized by an unusually integrated approach to its subject matter with studies of animal behaviour motivating and justifying physiological theorizing and vice versa. To give just two examples, the most widely used values for the cone spectral sensitivities derive from behavioural data, and what is known about the colour discrimination behaviour of many non-human animals is largely based on properties of the photopigments found in their eyes.

5.1 Trichromacy, primaries, and colour spaces

Any colour can be matched with an appropriate mixture of only three primaries. As might be suspected, this is a consequence of trichromacy: the fact that exactly three types of photoreceptors contribute to human colour vision.

The claim about matching needs to be qualified, in large part because of the many complicated effects of the viewing context on perceived colour. These effects can be largely discounted if we create a very simple perceptual situation, e.g. a bisected circle on a uniform neutral background. The two halves of the circle will appear identical in colour if and only if the light reaching the eye from each half produces the same output from each of the three cone-types. In this (somewhat artificial) situation, we can choose three lights such that, for any light projected on to the left half of the circle, an appropriate weighted mixture of the three lights projected on to the right half will result in uniform cone output across the circle's retinal image.
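In the bisected-circle situation, finding the match is a small linear-algebra problem: choose primary weights so the mixture reproduces the test light's three cone excitations. A sketch under toy assumptions (Gaussian cone curves and monochromatic primaries, neither taken from real data):

```python
import numpy as np

wl = np.arange(400, 701, 1.0)

def gaussian(peak, width):
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

# Toy L, M, S sensitivities: Gaussian stand-ins, not measured cone fundamentals.
LMS = np.stack([gaussian(565, 35), gaussian(535, 35), gaussian(440, 35)])

def cone_output(spd):
    return LMS @ spd          # three numbers: the L, M, S excitations

def mono(nm):
    return np.where(wl == nm, 1.0, 0.0)

primaries = [mono(645), mono(526), mono(444)]   # an RGB-like set of primaries
test = gaussian(580, 15)                        # a yellowish test light

A = np.column_stack([cone_output(p) for p in primaries])   # 3x3 system matrix
weights = np.linalg.solve(A, cone_output(test))
print(weights)   # a negative weight means that primary is added to the test side
```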
All this only applies to additive mixtures, like mixtures of lights in which each element of the mixture simply adds to the light reaching the eye. In subtractive mixtures, like pigment mixtures, the contributions of the components of the mixture to the visual stimulus are much more complicated and it may take more than three elements to match an arbitrary stimulus. Another qualification is that some matches will require the addition of one of the primaries to the light to be matched rather than to the other two primaries: in effect, negative amounts of one of the primaries. A final point to note is that there are numerous sets of primaries. In fact, any three lights, no two of which can be mixed to match the third, will serve as primaries. The traditional red, green, and blue additive primaries used in television and computer screens have the virtue of matching a very large set of lights without using any negative amounts, but this is only of technological significance.

These facts about matching and primaries lead to an obvious method for a systematic representation of colour stimuli: represent the colour of each stimulus by the amounts of a certain set of primaries required to match it. In such a system, stimuli with the same coordinates will appear the same colour (at least in highly constrained viewing conditions). And given the coordinates of a stimulus in such a system, it will be possible to produce a new stimulus that will be an exact match by adding together the specified amounts of the three primaries. Since coordinates in one system can be transformed into corresponding coordinates in any other, the new stimulus need not even be constructed using the original primaries to guarantee a match. Many of the standard colour spaces used in science and industry employ this basic method. For example, the widely used CIE XYZ space is just a set of functions that take the spectral power distribution of a light into the amounts of three specially chosen primaries that match that light. These functions are based on colour-matching data collected on a relatively modest number of individuals in the early twentieth century. Many other more recent standards have a similar structure. RGB coordinates use an idealized set of monitor primaries to represent colour and although the primaries are very different the basic principle is the same.

Such systems for representing colour based on three primaries are very useful for many purposes in research and industry, but they have two significant drawbacks. First, they do a relatively poor job of representing perceived colour similarity, especially for stimuli that are distant from each other in the space. Second, a system based solely on matching will fail to capture perceived colour since two stimuli may change their colour appearance substantially while still remaining matched. The fundamental problem is that the simple colour matching experiment that motivates these systems idealizes away from many factors that profoundly affect perceived colour.
5.2 Colour appearance and opponent-process theory

Neither the physics of light, nor the cone outputs, nor the primaries used in matching provides an adequate basis for understanding colour appearance. One very influential attempt to provide the outlines of a theory of colour appearance involves combining psychophysical experimentation with speculative physiology. As we saw earlier (section 4.1), attempting to account for colour appearance in terms of the three cone-types leaves us with one too few basic colours. Red, yellow, blue, and green all have a plausible claim to being basic colours, unlike purple, orange, turquoise, and olive which appear to be mixtures (in some intuitive sense) of the basic colours. In addition, these four basic colours are naturally sorted into two "opponent" pairs: red and green on the one hand and blue and yellow on the other. Red and green are opposed in the sense that there are no reddish greens or greenish reds, and similarly for yellow and blue. Red and green are so famously opposed that there is a significant philosophical literature devoted to explaining the nature of the opposition. Opponent-process theory is a physiological hypothesis put forward to explain these observations, together with many others.

The core of opponent-process theory is that information about the spectral characteristics of a stimulus is carried by two opponent channels (plus a non-opponent channel for intensity). In the simplest model, one channel is generated by subtracting the M-cone signal from the L-cone signal (L-M) while the other channel results from subtracting the sum of the L- and M-signals from the S-cone signal (S-(L+M)). The L-M (or red-green) channel results in the perception of reddishness when positive and greenishness when negative, while the S-(L+M) (or yellow-blue) channel results in the perception of bluishness when positive and yellowishness when negative. Thus a stimulus that looks bluish-red will produce a high positive value for the L-M channel and a (less high) positive value for the S-(L+M) channel. Since no channel can produce a signal that is both negative and positive, the hue incompatibilities mentioned above are explained. This framework, motivated by phenomenological observations about basic colours and opponency, proved to be a powerful unifying tool that allowed a simple and intuitive understanding of a diverse set of colour phenomena. When the chromatic opponency of cells in the peripheral visual pathway was first discovered in the 1960s it seemed as if direct experimental support for the hypothetical opponent processes had been found. Unfortunately, in the subsequent decades the status of opponent-process theory has become less clear. Although chromatic information is encoded in the visual pathways using opponent coding, the response properties of these cells don't match the characteristics of the psychophysically characterized opponent processes. Unlike the good fit between the measured cone spectral sensitivities and the hypothesized sensitivities required to explain the psychophysical discrimination data, the hoped-for match between physiology and psychophysically characterized opponent processes has failed to materialize. Although this is an area of current controversy it seems safe to say that the simple opponent-process model that seemed so promising in the late twentieth century is at best a very rough approximation.
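The simplest opponent model just stated is a two-line computation; the input numbers below are purely illustrative:

```python
def opponent_channels(L, M, S):
    """Recode cone signals into the two opponent channels of the simple model."""
    red_green = L - M          # positive: reddish; negative: greenish
    blue_yellow = S - (L + M)  # positive: bluish; negative: yellowish
    return red_green, blue_yellow

# A bluish-red stimulus drives both channels positive, which the model permits;
# a reddish-greenish stimulus would need one channel to be positive and
# negative at once, which is impossible by construction.
print(opponent_channels(L=0.9, M=0.3, S=1.5))   # (0.6, 0.3)
```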
The uncertain status of opponent-process theory leaves the field with no unified physiological account of the elementary facts about colour appearance that helped motivate it. Although there have been claims to find some basis for the special status of the unique hues in the response properties of some cortical neurons, the claims are controversial and anyway don't provide the kind of unifying framework that earlier looked to be on the cards (see Stoughton and Conway 2008; Conway and Tsao 2009; Mollon 2009; Wool et al. 2015). Further, the phenomenological foundations have themselves been disputed, with some claiming that there are more than four basic colours, or even that the notion of a basic colour is suspect (Saunders and van Brakel 1997).

These controversies aside, there is still a need for colour order systems that capture central facts about colour appearance and that provide a more natural representation of colour similarity than the primary-based models that were discussed in section 5.1. There are a number of such systems and they all share one significant feature: the colours are represented in terms of three dimensions. It is tempting to assume that this is because the three-dimensionality that originates with the cones is maintained in the ultimate cortical representation of colour, but if so this is a peculiarity of colour vision, not an instance of a general truth about perception. The human auditory system samples the frequency spectrum much more densely, but the representation of pitch is essentially one-dimensional. Moreover, there are reasons to doubt that three dimensions are, in fact, capable of fully capturing all of the variation in colour appearance (Fairchild 1998). Nevertheless, three dimensions do an efficient job for most practical purposes.

One way to construct an ordering system that reflects colour appearances starts with the phenomenological claims that underpinned opponent-process theory. The Natural Colour System (NCS) is an example of a system with this structure (Hård et al. 1996a, 1996b; Kuehni 2003: 301-9). The NCS represents colour using two opponent axes (red-green and blue-yellow) and a non-opponent lightness axis. No physiological interpretation is associated with this system, and it is not directly tied to any system of primary-based matching. To classify colours with the NCS, samples are matched to standards generated in accordance with the underlying opponent model. As the name suggests, the system is intended to be a better fit to our perceptual representation of colour than other alternatives. In this form, the representation of colour embodied in opponent-process theory can be maintained independently of its success or failure as a physiological theory.
A widely used alternative is to represent colour in terms of the three dimensions of hue, brightness, and saturation (HBS). These representations give rise to the familiar colour solid with hue being represented by a circle around the origin, brightness by the vertical axis, and saturation by horizontal distance from the origin. The popular Munsell system is a variant of the HBS system with its three dimensions of hue, value (brightness), and chroma (a relative of saturation). One reason for the popularity of the Munsell system is that brightness and saturation are very difficult to estimate visually and the Munsell system has a physical realization that allows colours to be placed in the system by comparison to samples. Although the system was constructed to do a good job of capturing perceived similarity, the visual inaccessibility of the brightness and saturation dimensions suggests that it is not a good match for the way colour is represented by the visual system.

5.3 Contrast, adaptation, and other psychophysical effects

As we saw in the discussion of basic physiology above, the cones do not provide a fixed response to a fixed stimulus, and the channel carrying chromatic information from the retina to the brain combines spatial and spectral information. These and other physiological features have measurable (and sometimes very large) effects on how we perceive colour.

To start with a simple example, we are all familiar with the large changes in perceived lightness and colour when going inside on a bright day. Many parts of the visual system (pupil, cones, retinal ganglion cells, etc.) have adapted to the bright light and, at varying speeds, will then adapt to the much dimmer (and spectrally different) illumination indoors. The initial perception of dark and desaturated colours gradually moves back towards the brighter and more saturated colours perceived outside and there may be shifts in hue as well. One way to understand the overall effect of adaptation at the various levels of visual processing is that the visual system changes to maximize the amount of information it can extract from the visual stimulus. For example, as noted in section 4.1, the range of responses that the cones can produce is orders of magnitude smaller than the variation in the intensity of the stimulation they receive. If the cones did not adapt to changes in light intensity then they would provide useful information about only a very narrow range of stimuli. By becoming less sensitive as the stimulus intensity increases and more sensitive as it decreases, the cones preserve their ability to signal differences in stimulation across a much broader range of stimuli. One consequence of the various forms of adaptation is that large changes in the stimulus (resulting from changes in the illumination) typically produce much smaller changes in perceived colour once adaptation has run its course. Adaptation contributes to the relative stability of perceived colour across changes in illumination known as colour constancy (discussed in more detail in section 5.4 below).
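A standard way to model this gain control is von Kries scaling, offered here as an illustration rather than as anything asserted in the text above: each cone class rescales its output by its response to the prevailing illumination. A minimal sketch with invented numbers:

```python
import numpy as np

def von_kries_adapt(test_lms, adapting_lms):
    """Rescale each cone class by its response to the adapting light."""
    return np.asarray(test_lms) / np.asarray(adapting_lms)

test = np.array([1.0, 1.0, 1.0])             # L, M, S responses to some stimulus
bluish_field = np.array([0.8, 0.9, 1.6])     # adapting light stimulating S strongly

# After adapting to the bluish field the S signal shrinks relative to L and M,
# so the same stimulus tends to look less blue, as described above.
print(von_kries_adapt(test, bluish_field))
```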
As we have seen, the chromatic and spatial characteristics of stimuli interact in early colour processing. One illustration of this fact can be found in the familiar phenomena of colour contrast. If a neutral grey square is viewed surrounded by a larger coloured background it will appear tinted with a hue contrary to that of the background: reddish backgrounds thus induce greenish tints and greenish backgrounds induce reddish tints. Not all spatial effects involve the induction of a contrasting hue and, in assimilation, the colour of thin, but clearly visible, lines spreads to neighbouring areas. It needn't be only the directly adjacent regions of a scene that influence perceived colour. In the watercolour illusion, the colour of an appropriately chosen border spreads to large areas of the white space it encloses (Pinna et al. 2001). Even simple patterns, like a disc surrounded by concentric rings, can produce greater effects on perceived hue than a uniform background (Monnier and Shevell 2003). The causes of these kinds of effects are understood to varying degrees but in general they fall into two overlapping classes. First, there is averaging over stimulus areas at different spatial scales resulting from the underlying physiology. For example, assimilation is due, in part, to the fact that the visual system has higher resolution for achromatic contrast than for chromatic contrast. The dark lines in a typical stimulus that produces assimilation are visible to the luminance channel but not resolvable by the chromatic channels, which then average their lower lightness in with surrounding areas. Similar effects can occur with hue alone since there are many fewer S-cones than there are L- and M-cones so the averaging occurs over larger areas for the S-cone input than for the other two cone-types. A different way of looking at these kinds of effects is that they are consequences of the visual system's attempt to use all of the information available to it in arriving at a representation of the spatial layout of the perceived scene and to assign visual features to different regions of it. Chromatic information is useful in extracting the spatial features of the scene from the stimulus and the spatial layout is useful in generating stable and useful colour assignments to the different areas of the scene. We will return to some of these issues later in the discussion of colour constancy.
The variety and quantity of informative and sometimes surprising interactions known to exist between perceived colour and various features of the stimulus other than the SPD of the light coming from an object is much too large to catalogue here. There are two important points worth keeping in mind with respect to the large literature on the psychophysics of colour vision. First, the psychophysics is often very informative as to the underlying physiological mechanisms, and much of the empirical literature in colour psychophysics is aimed at illuminating the underlying physiology using behavioural data collected in response to carefully controlled stimuli. Knowledge of the existence and response characteristics of the three human cone-types was almost entirely based on psychophysical data. For these purposes, the choice of stimuli need not reflect important features of the kind of stimuli encountered outside the laboratory. Second, the fact that many factors other than the character of the light reaching the eye from an object can influence its perceived colour should not be surprising. The point of vision is not to accurately characterize the proximal stimulus but rather to guide action. For the purpose of guiding action the properties of a distal object are important, and so to ignore factors other than the light an object sends to the eye would be to throw away valuable information about it.

5.4 Colour constancy

We have already mentioned simultaneous contrast, in which the perceived colour of an object is influenced by the colour of its surround. This phenomenon illustrates the important point that the relation between stimuli and perceived colour cannot be fully understood by taking each point in the scene before the eyes in isolation. Holding the subject's perceptual apparatus constant, the perceived colour of an object is determined by the character of the light produced by the entire scene before the eyes.

Colour constancy, the stability of perceived colour across alterations in the character of the illuminant, is another manifestation of these non-local influences. Recall that the light reaching the eye from an area of a surface is the joint product of the SPD of the illuminant and the SSR of the surface. As the illuminant varies, so does the SPD of the light reaching the eye. In spite of this variation in the local visual stimulus, under many conditions the perceived colour of an object will not appreciably change. However, it is an important (although entirely unsurprising) fact that colour vision (in humans and other animals) is only approximately colour constant. (Similarly, shape constancy is only approximate.) It is easy to devise scenes and viewing conditions for which constancy effects are minimal or non-existent and, as it happens, these kinds of viewing conditions are favoured for colorimetric and many experimental tasks. An interesting but virtually intractable question is how much colour constancy human colour vision displays under natural conditions. The difficulty is partly conceptual: is it constancy in colour phenomenology or colour judgement that we are attempting to measure? It is also partly technical: how can we construct a representative sample of natural viewing conditions and scenes in order to make laboratory measurements? In spite of these problems there has been a great deal of both experimental and theoretical work done on the nature of the constancy mechanisms.
One important but controversial approach to colour constancy treats it as the result of the visual system's attempt to estimate object reflectances from the light reaching the eye. The perceived colour of objects is approximately constant under many conditions because under those conditions the reflectance estimate generated by the visual system is reasonably accurate. In this framework, the most common strategy is to first generate an estimate of the SPD of the illumination in a scene and use that estimate to compute the reflectance of an object from the light reaching the eye from that object. A simple example of a theory of this kind involves the assumption that the environment, on average, is grey. That is, if the reflectances of the objects in a scene are averaged together the resulting curve will be flat across the visible spectrum and approximately ½. Given this assumption, averaging the light reaching the eye across the entire scene and dividing at each wavelength by ½ gives an estimate of the illuminant on the scene. Unfortunately, the grey world assumption is false for many scenes in which humans have reasonably good constancy, so this cannot be the entire explanation. More sophisticated theories of this kind have been developed and this is still an area of active research.

A wide variety of other factors have been invoked to explain constancy effects in various circumstances. The ratios of cone outputs across a scene contain important information about whether changes in the retinal image are due to changes in the illumination or to changes in the surface (although not about the absolute reflectance) (see Foster 2003). There are many different types of contrast, spatial and spectral, that seem to have some relationship with colour constancy. There can also be a powerful influence of perceived scene geometry on how the visual system disentangles illumination and surface properties. Although human colour vision displays some degree of several different kinds of constancy there is no current consensus on the best explanation of the various constancy phenomena or even of the best way to characterize those phenomena.
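The grey-world estimate described above fits in a few lines; the toy scene below is constructed so that the assumption holds exactly, making recovery exact:

```python
import numpy as np

def grey_world(light_from_scene):
    """light_from_scene: array of shape (num_patches, num_wavelength_bands).
    Returns the estimated illuminant SPD and per-patch reflectance estimates."""
    illuminant_estimate = 2.0 * light_from_scene.mean(axis=0)   # divide by 1/2
    reflectance_estimates = light_from_scene / illuminant_estimate
    return illuminant_estimate, reflectance_estimates

# Two patches, three wavelength bands; reflectances really do average to 1/2.
true_reflectances = np.array([[0.2, 0.5, 0.8],
                              [0.8, 0.5, 0.2]])
illuminant = np.array([1.0, 0.7, 0.4])

est_illum, est_refl = grey_world(true_reflectances * illuminant)
print(np.allclose(est_illum, illuminant), np.allclose(est_refl, true_reflectances))
```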
6 Colour in the cortex

6.1 The role of chromatic information in the cortex

Chromatic discrimination is extraordinarily precise in some ways and extraordinarily coarse in others. Extremely small differences in SPD are discriminable in the right circumstances and, by some measures, the visual system is better at detecting this type of chromatic contrast than achromatic contrast (differences in the overall intensity of illumination). On the other hand, the spatial and temporal resolution for chromatic contrast is much worse than for achromatic contrast, and consequently chromatic contrast makes hardly any contribution to high-resolution spatial or temporal vision. This and other factors lead to a picture of cortical colour processing in which chromatic and achromatic information are combined in the eye and mid-brain areas but separated in the cortex, and contribute very differently to visual processing. In particular, chromatic contrast does not contribute to spatial vision in ordinary contexts and only plays a role in perceiving colour (and via that in tasks like object identification). Contrariwise, information derived from the achromatic signal plays only a minimal role in perception of colour. The one thing that's safe to say about colour in the cortex is that this picture has been rejected, at least in anything like its original form. In cortical area V1, the first cortical visual area, there are very few cells that are responsive only to chromatic signals and even that small minority are also orientation sensitive, so their behaviour reflects both chromatic and spatial information. The overwhelming majority of cells in V1 are sensitive to both chromatic and achromatic inputs. S-cone input, which does not contribute to the achromatic pathway, is found throughout the visual areas that receive inputs from V1 including areas that have nothing to do with perceiving colour, like area MT which is thought to play a role in motion perception. Similarly, chromatic contrast plays a role in spatial vision and vice versa, as can be shown using psychophysical methods. Although the precise details are still a matter of controversy, it's clear that colour is a cue used by the brain to perform a variety of tasks and that the information about the SPD of the stimulus delivered by the cones is utilized for many purposes other than that of discriminating and recognizing colour.
6.2 The organization of cortical colour processing

There are two central issues involved in accommodating this new understanding of the role of chromatic information in the cortex. The first is partly conceptual. The results of the previous section are often described as showing that colour vision contributes to spatial vision. Although all that is intended is that chromatic information contributes to spatial vision, it can be read as implying that colour as perceived contributes to spatial vision, and this is a much more controversial claim. It's important to keep separate the role of chromatic information in, for example, the perception of shape and the perception of hue. It is unlikely that perceived hue is an input to the perception of shape even though both draw on the chromatic signal originating in the cones. This is supported by studies of achromatopsia (colour blindness resulting from cortical damage). Some achromatopsics can continue to perceive shapes that are defined solely by chromatic information even though they cannot discriminate, sort, or recognize hues at all. They have lost the ability to see colour but not the ability to utilize chromatic information for other visual functions.

The second issue is primarily empirical. Although the explanation of why human beings experience colour as they do is presumably to be found in the cortex, the identified cortical cells and cortical areas do not seem well-suited to explaining the details of how we visually represent colour. Related to this is the extended controversy over whether there is a cortical area specifically dedicated to colour and, if so, where it is. Much of this controversy has centred on Zeki's controversial identification of the human analogue of macaque V4 as the brain area responsible for the perception of colour (Lueck et al. 1989). What does seem clear is that there are neurons responsive specifically to chromatic information in V1 and there are clusters of such neurons in areas outside of V1 as well. Our ability to discriminate and identify colour presumably relies on these neurons but going beyond that is highly speculative. Cortical processing of colour, beyond the clarification of the role of chromatic information in spatial vision, remains a confused (and confusing) topic. (For recent overviews see Conway 2014 and Johnson and Mullen 2016.)

7 Defects of colour vision and naming

Colour vision, like any other biological characteristic, varies from individual to individual. A familiar and extreme example of such variation is that a non-negligible proportion of human beings are colour "blind", most of them being specifically insensitive to the difference between red and green. In light of the salience of colour and, in particular, the striking difference between red and green for those of us with normal colour vision, it is a surprising fact that colour blindness was first clearly characterized around 1800. Thus colour blindness does not appear to be a functionally significant problem in most practical contexts.
Colour blindness is of great theoretical interest. Study of such defects has proven very illuminating in understanding normal colour vision and also raises some interesting questions about the contribution of the photoreceptors to the character of colour experience. Most colour blind individuals are not, in fact, colour blind in any strict sense of the phrase. Rather their colour vision differs from that of colour normal individuals in several well-defined respects, none of which amount to a complete loss of colour vision. The most common form of colour blindness is dichromacy. Dichromats require only two primaries in matching experiments, and lack the ability to discriminate some stimuli that are readily discriminable by normal (trichromatic) subjects. For example, all dichromats will accept a match between some monochromatic lights and a white light. Dichromacy results from a loss of function of one of the three cone photoreceptor types, and comes in three corresponding forms.

What is commonly called red-green colour blindness actually consists of two different defects depending on whether it is the long or middle wavelength receptor whose function has been lost. Protanopes have no functioning long wavelength receptor and deuteranopes have no functioning middle wavelength receptor. They can be differentiated by, among other methods, the loss of long wavelength sensitivity relative to normals that is found in protanopia but not in deuteranopia. Although both protanopes and deuteranopes are unable to distinguish spectral lights in the middle to long wavelengths that appear green to red to normal observers (hence the name red-green colour blindness), their ability to discriminate non-spectral lights is substantially different. Subjects having any of the three forms of dichromacy will accept all matches made by a normal observer, although not vice versa. Protanopia and deuteranopia are the overwhelmingly most common forms of dichromacy, and most cases are the result of recessive inherited abnormalities in genes on the X chromosome which code for the photopigments contained in the long and middle wavelength photoreceptors. Consequently, red-green colour blindness is much more common among males than among females. The third form of dichromacy, tritanopia, is much less common and is due to the loss of function of the short wavelength receptor. Monochromacy is much rarer than dichromacy and is most often due to the loss of all cone function. Monochromatic individuals are only able to make light-dark distinctions and are strictly speaking colour blind.

The genes coding for the three cone photopigments have now been isolated and sequenced. This achievement has provided new methods for understanding the early stages of colour vision and also for investigations of the evolution of colour vision. It is now known precisely what genetic abnormalities are responsible for the two varieties of red-green dichromacy and how these abnormalities affect the spectral sensitivity of the photopigments in colour blind individuals. The genetics has also helped in the discovery of the detailed structure of the photopigment proteins themselves which in turn has led to a more detailed understanding of normal variation in human colour vision (Neitz and Neitz 2011). In addition, it is now possible, using the methods of molecular genetics, to trace evolutionary relationships among the photopigments found in different species.
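Why a dichromat accepts matches a trichromat rejects can be illustrated numerically: with one cone class missing, only two quantum catches must agree. A sketch with invented Gaussian sensitivities, constructing a pair of lights that agree in L and S catches but differ in M:

```python
import numpy as np

wl = np.arange(400, 701, 1.0)

def gaussian(peak, width):
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

# Toy cone sensitivities (stand-ins for measured data).
L, M, S = gaussian(565, 35), gaussian(535, 35), gaussian(440, 35)

spd_a = gaussian(560, 20)                   # some test light

# Build a perturbation with zero L and S catch but a nonzero M catch, using
# the null space of the 2x3 matrix of L and S catches of three basis shapes.
P = np.stack([gaussian(500, 15), gaussian(550, 15), gaussian(620, 15)])
A = np.stack([L, S]) @ P.T                  # L and S catches of each basis shape
null_combo = np.linalg.svd(A)[2][-1]        # combination invisible to L and S
spd_b = spd_a + 0.2 * (null_combo @ P)      # may dip negative; fine for illustration

for name, cone in [("L", L), ("M", M), ("S", S)]:
    print(name, round(float(cone @ spd_a), 4), round(float(cone @ spd_b), 4))
# L and S catches agree while M differs: a deuteranope, lacking functioning
# M-cones, accepts this match; a trichromat rejects it.
```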
With the precise characterization of the different forms of colour blindness in the nineteenth century arose a puzzle as to what the visual experience of colour blind individuals is like. Protanopes and deuteranopes, for example, perceive only a single hue in the regions of the spectrum between 550 and 700 nm, but it is difficult to get empirical evidence for which hue it is. Opponent-process theory suggests that, as protanopes and deuteranopes have no functioning red-green opponent channel, they should see only yellow, blue, black, and white. But colour blind subjects talk about colour just like the rest of us, only making mistakes normal observers would never make. They know that grass is green and tomatoes are red and although deuteranopes may have trouble telling the difference between ripe and unripe tomatoes, they will not say they are yellow or blue. Some very unusual individuals have normal vision in one eye and a colour deficiency in the other. These subjects might seem ideal, since they are familiar with the full range of colour experience due to their normal eye, and so can report on what they see through their colour-deficient eye. Unfortunately, the small but much discussed literature on such subjects has produced more controversy than consensus. (For a brief review see Boynton 1979: 380-2.)

Most defects of colour vision are due to receptoral abnormalities. These cases are in most respects well understood, partly because there are many examples to study and partly because the role the photoreceptors play in colour vision is well understood. But receptoral abnormalities are not the only cause of defects of colour vision: as mentioned in section 6.2, damage to areas of the visual cortex is another cause. These achromatopsic disorders are, in general, less well characterized and understood than the much more common disorders discussed above. In addition, there is very little understanding of what contribution the damaged areas make to normal colour vision.

In some well studied cases of achromatopsia it has been established that all three cone-types are present and contributing to visual functioning. Even more striking is that serious impairments of colour vision can be accompanied by essentially normal perception of luminance, resulting in subjects who appear to perceive the world in shades of white, grey, and black. Not all cases of achromatopsia are total and there is a great deal of variation in the severity of the impairment. There can be some remaining degree of colour vision and the defect may even be limited to some areas of the visual field. However, the specific characteristics of the colour abnormality in at least some cases of achromatopsia are very different from the forms of dichromacy.
Cortical damage can cause other kinds of colour-related deficits where the pattern of which abilities are spared and which are lost is complicated. Colour agnosia is an inability to recognize the colours of seen objects with other aspects of colour vision remaining apparently intact. One colour agnosic performed normally on many non-verbal tests of colour perception, had a normal colour vocabulary and was able correctly to remember common colour associations, for example that grass is green and blood is red. When presented with an object and asked for its colour he would reply with a colour term, but his performance was no better than chance. He performed well on tasks that involve arranging colour samples in terms of similarity but poorly on sorting them into categories on the basis of similarity (van Zandvoort et al. 2007).

8 Animal colour vision

Some degree of colour vision is widely distributed throughout the animal kingdom, and appears to have evolved independently in several groups. Almost all vertebrates that have been studied possess some form of colour vision, although many only have a rudimentary ability which may not play a significant role in guiding behaviour. Although comparatively few have been tested, many invertebrates also possess colour vision, which in some (e.g. bees) is highly developed. The number of photoreceptor types and the spectral characteristics of the photoreceptors varies from species to species. Among mammals only (some) primates are known to have trichromatic colour vision. All other species of mammal that have been studied are dichromats with possibly a few, such as rats, lacking colour vision altogether. Some birds and fish are tetrachromats. Further, the spectral range over which their vision extends is broader, particularly into the ultraviolet. Colour vision in these groups is phylogenetically older, and in some respects more highly developed, than it is among mammals.

An organism is said to have colour vision if and only if it is able to discriminate between some spectrally different stimuli that are equated for brightness (or luminance).
There are two basic methods for determining the presence or absence of colour vision in non-human organisms. The first is behavioural: the organism's ability to discriminate equiluminant stimuli is tested directly. A complication arises because stimuli that are equiluminant for a human observer will not, in general, be equiluminant for a non-human observer. The luminance of stimuli for an organism can be equated if its spectral sensitivity function (the function from stimulus wavelength to stimulus brightness) can be determined. Alternatively, the relative luminance of the stimuli can be randomly varied over a wide range, assuming that consistently successful discrimination can only be based on colour differences. (For a review of these techniques see Jacobs 1981: 5-11.) Both techniques are somewhat tedious, and consequently have only been used to investigate a relatively small number of animals. The second method is physiological: the visual capacities of an organism are inferred from information about the physiological characteristics of its visual system. For example, it is possible to measure the absorption spectra of individual photoreceptor cells using a technique known as microspectrophotometry. Establishing the existence of two cone photoreceptor types in this way provides reasonably good evidence that the organism in question is a dichromat. These measurements, and other physiological techniques, although not easy to perform, are often less time-consuming than behavioural methods.
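The equating step in the behavioural method is simple in outline: rescale one stimulus so both deliver the same sensitivity-weighted power for the organism in question. A sketch with a toy spectral sensitivity function standing in for a measured one:

```python
import numpy as np

wl = np.arange(400, 701, 1.0)
V = np.exp(-0.5 * ((wl - 555) / 50.0) ** 2)   # toy spectral sensitivity function

def luminance(spd):
    return float(np.dot(V, spd))

def equate(spd_ref, spd_test):
    """Scale spd_test so its luminance matches spd_ref for this observer."""
    return spd_test * (luminance(spd_ref) / luminance(spd_test))

greenish = np.exp(-0.5 * ((wl - 530) / 20.0) ** 2)
reddish = np.exp(-0.5 * ((wl - 620) / 20.0) ** 2)
reddish_eq = equate(greenish, reddish)

# The two stimuli now differ only chromatically for this observer, so
# consistent discrimination between them indicates colour vision.
print(np.isclose(luminance(greenish), luminance(reddish_eq)))   # True
```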
Smooth *-algebras

Looking for the universal covering of the smooth non-commutative torus leads to a curve of associative multiplications on the space $\mathcal O_M'(\mathbb R^{2n})\cong \mathcal O_C(\mathbb R^{2n})$ of Laurent Schwartz which is smooth in the deformation parameter $\hbar$. The Taylor expansion in $\hbar$ leads to the formal Moyal star product. The non-commutative torus and this version of the Heisenberg plane are examples of smooth *-algebras: smooth in the sense of having many derivations. A tentative definition of this concept is given.

Introduction

The noncommutative torus in its topological version (C*-completion) as well as in its smooth version [6] is one of the most important examples in noncommutative geometry. Beside the fact that the classical tools of differential geometry have unambiguous generalizations to it, it provides a very nontrivial example of noncommutative geometry satisfying the axioms of [7] (see also [8], [9]). We looked at its smooth version and asked for its universal covering. We found the Heisenberg plane as it is presented in this paper: a twisted convolution on a carefully chosen space of distributions, namely the topological dual space O'_M of the Schwartz space O_M of smooth slowly increasing functions at ∞, [29], [30]. It is large enough to contain the space of rapidly decreasing measures with support in the lattice (2πZ)², that is, a space isomorphic to the space of smooth functions on the noncommutative torus (as well as on the usual commutative torus). The multiplication turns out to be a smooth curve in the deformation parameter ℏ. Moreover, looking at it via Fourier transform, Taylor expansion of the multiplication in the deformation parameter ℏ leads to the formal Moyal star-product which is well known from deformation quantization, [24], [1]. Then we noticed that we found examples of noncommutative *-algebras generalizing algebras of complex smooth functions. These *-algebras, which can be realized as *-algebras of unbounded operators in Hilbert space, admit "many" derivations, specifying thereby the generalized smooth structure (see below). These algebras are defined in Section 1 and are tentatively called smooth *-algebras. Section 2 contains our treatment of the smooth non-commutative torus, and also some related material like the smooth non-commutative circle of rational slope b/a, a quotient of the smooth non-commutative torus. The appendix in section 4 gives an overview of convenient calculus in infinite dimensions which is necessary to obtain our results about smoothness in the deformation parameter ℏ, and which also gives the right setting for multilinear algebra with locally convex vector spaces.

Work on this paper started in 1996, but we were unable to prove that the Heisenberg plane is a smooth *-algebra. Finally we gave up and stated this as a conjecture. The problem is finding enough states.

1. Smooth *-algebras

1.1. Preliminaries. Throughout this paper by a *-algebra we always mean a complex associative algebra A with unit equipped with an antilinear involution f → f* which reverses the order of products, i.e. which satisfies (fg)* = g*f* for all f, g ∈ A. Given a *-algebra A, a hermitian representation [25] of A in a Hilbert space H is a homomorphism π of unital algebras of A into the algebra of endomorphisms of a dense subspace D(π) of H satisfying (Ψ, π(f)Φ) = (π(f*)Ψ, Φ) for any f ∈ A and Ψ, Φ ∈ D(π); the dense subspace D(π) of H is referred to as the domain of π.
The image of a hermitian representation in $H$ is a unital subalgebra of the algebra of endomorphisms of the dense domain $D$ of the representation which is also a *-algebra for an obvious involution; such a *-algebra will be referred to as a *-algebra of (unbounded) operators in the Hilbert space $H$ with domain $D$. A linear form $\varphi$ on a *-algebra $A$ is said to be positive if $\varphi(f^*f) \geq 0$ for all $f \in A$. Such a positive linear form satisfies $\varphi(f^*) = \overline{\varphi(f)}$ (for all $f \in A$), and $(f,g)_\varphi = \varphi(f^*g)$ is a pre-Hilbert scalar product on $A$ which induces a Hausdorff pre-Hilbert structure on the quotient $D_\varphi = A/I_\varphi$, where $I_\varphi = \{f \in A \mid \varphi(f^*f) = 0\}$. In view of the Schwarz inequality, $I_\varphi$ is a left ideal of $A$, so one has a homomorphism of unital algebras $\pi_\varphi$ of $A$ into the endomorphisms of $D_\varphi$ which is in fact a hermitian representation of $A$ in the Hilbert space $H_\varphi$ obtained by completion of $D_\varphi$, with domain $D(\pi_\varphi) = D_\varphi$. Let $\Omega_\varphi \in D_\varphi$ be the canonical image of the unit $1 \in A$ under the projection $A \to D_\varphi = A/I_\varphi$. Then one has $\varphi(f) = (\Omega_\varphi, \pi_\varphi(f)\Omega_\varphi)$ for any $f \in A$, and $D_\varphi = \pi_\varphi(A)\Omega_\varphi$. This construction, which associates to a positive linear form $\varphi$ on $A$ the triplet $(\pi_\varphi, H_\varphi, \Omega_\varphi)$ of a hermitian representation $\pi_\varphi$ of $A$ in a Hilbert space $H_\varphi$ with $\Omega_\varphi$ in the domain of $\pi_\varphi$ such that $\pi_\varphi(A)\Omega_\varphi$ is dense in $H_\varphi$ and $\varphi = (\Omega_\varphi, \pi_\varphi(\cdot)\Omega_\varphi)$, is known as the GNS construction; given $\varphi$, the triplet $(\pi_\varphi, H_\varphi, \Omega_\varphi)$ is unique up to a unitary. Given a hermitian representation $\pi$ of a *-algebra $A$ with domain $D(\pi)$, to each vector $\Phi \in D(\pi)$ corresponds the positive linear form $\varphi$ on $A$ defined by $\varphi(f) = (\Phi, \pi(f)\Phi)$. Conversely, the GNS construction shows that any positive linear form on $A$ can be realized in this manner. There is a natural action of $A$ on the set $A^*_+$ of positive linear forms, given by $(f, \varphi) \mapsto f \cdot \varphi$ with $(f \cdot \varphi)(g) := \varphi(f^* g f)$.

1.2. Proposition. The following conditions (i) and (ii) are equivalent for a locally convex *-algebra $A$. (i) $A$ is a *-algebra of unbounded operators in a Hilbert space $H$ with domain $D$, and its locally convex topology is generated by the seminorms $f \mapsto \|f\Phi\|$, $\Phi \in D$. (ii) There is a subset $S$ of positive linear forms on $A$ which is invariant by the action of $A$ on $A^*_+$ and which is such that the locally convex topology of $A$ is generated by the seminorms $f \mapsto (\varphi(f^*f))^{1/2}$, $\varphi \in S$, and is Hausdorff.

(ii) $\Rightarrow$ (i). Let $(\pi_\varphi, H_\varphi, \Omega_\varphi)$ denote the GNS triplet associated to $\varphi \in S$. Take $H$ to be the Hilbertian direct sum $\widehat{\bigoplus}_{\varphi \in S} H_\varphi$, take $D = \bigoplus_{\varphi \in S} \pi_\varphi(A)\Omega_\varphi$, and notice that it follows from the assumptions that $\pi = \bigoplus_{\varphi \in S} \pi_\varphi$ is injective, so $A$ identifies canonically with the *-algebra $\pi(A)$ of unbounded operators in $H$ with domain $D$. It is clear that the locally convex topology on $A$ generated by the seminorms $f \mapsto (\varphi(f^*f))^{1/2}$, $\varphi \in S$, is the same as the one generated by the seminorms $f \mapsto \|\pi(f)\Phi\|$, $\Phi \in D$. Notice that if $\varphi$ is a positive linear form on $A$ one has $|\varphi(f)| \leq (\varphi(1))^{1/2}(\varphi(f^*f))^{1/2}$ for any $f \in A$ (Schwarz inequality), so any $\varphi \in S$ is automatically continuous (notice also that the same inequality shows that $\varphi = 0$ whenever $\varphi(1) = 0$ for $\varphi \in A^*_+$).

1.3. Definition. Let $A$ be a *-algebra, $S$ a subset of positive linear forms on $A$ invariant by the action of $A$ on $A^*_+$, and let $\mathcal{D}$ be a Lie subalgebra of the Lie algebra $\mathrm{Der}(A)$ of derivations of $A$ which is also a $Z(A)$-submodule of $\mathrm{Der}(A)$, where $Z(A)$ denotes the center of $A$. Assume that: (1) the locally convex topology on $A$ generated by the seminorms $f \mapsto \nu_\varphi(f) = (\varphi(f^*f))^{1/2}$, $\varphi \in S$, is Hausdorff; (2) $\bigcap\{\ker(X) \mid X \in \mathcal{D}\} = \mathbb{C}1$; (3) the locally convex topology $\tau(S, \mathcal{D})$ on $A$ generated by the seminorms
$\nu_\varphi \circ X_1 \circ \cdots \circ X_p$, $\varphi \in S$, $X_i \in \mathcal{D}$, $p \in \mathbb{N}$, is such that $(A, \tau(S, \mathcal{D}))$ is complete. Then $A$ will be said to be a smooth *-algebra relative to $S$ and $\mathcal{D}$, or simply a smooth *-algebra when no confusion arises, the topology $\tau = \tau(S, \mathcal{D})$ being called the smooth topology of $A$.

2. The non-commutative torus

2.1. The smooth torus $C^\infty(\mathbb{T}^2)$ consists of all elements of the form
(1) $f = \sum_{k,l \in \mathbb{Z}} f_{k,l}\, u^k v^l$,
where $(f_{k,l})$ is any rapidly decreasing sequence of complex numbers, i.e. for each $m \in \mathbb{N}$ the seminorm $\|f\|_m = \sup_{k,l} |f_{k,l}|\,(1+|k|+|l|)^m$ is finite, and where $u = \exp(2\pi i t)$ and $v = \exp(2\pi i s)$ are the coordinates on the torus. Let us fix a complex number $q$ with $|q| = 1$. Then the smooth $q$-torus $C^\infty(\mathbb{T}^2_q)$ is the convenient associative algebra (in fact a Fréchet algebra) which is given by all elements of the form (1), but where we assume now that $U$, $V$ are two unitary indeterminates which satisfy
(2) $UV = qVU$, $\quad U^*U = UU^* = 1 = V^*V = VV^*$.
Using the normal-ordering convention for the monomials $U^k V^l$ we get explicit descriptions of the product and the adjoint $f^*$. If (the argument of) $q$ is rational (mod $2\pi$), let $N \in \mathbb{N}$ be the smallest positive natural number such that $q^N = 1$. If $q$ is irrational, we put $N = 0$.

2.2. Proposition. If $q$ is rational, then there exists a smooth vector bundle $A_q \to S^1 \times S^1$ with standard fiber the algebra $\mathrm{Mat}_N(\mathbb{C})$ of all complex $(N \times N)$-matrices and with transition functions in $GL(N, \mathbb{C})$ acting on $\mathrm{Mat}_N$ by conjugation, such that the non-commutative torus $C^\infty(\mathbb{T}^2_q)$ is isomorphic to the algebra $\Gamma(A_q)$ of all smooth sections of the algebra bundle $A_q \to S^1 \times S^1$. The center of $C^\infty(\mathbb{T}^2_q)$ is isomorphic to $C^\infty(S^1 \times S^1, \mathbb{C})$. The first Chern class of the complex vector bundle $A_q$ vanishes. Moreover, there is a smooth vector bundle $E_q \to S^1 \times S^1$ with standard fiber $\mathbb{C}^N$ such that $A_q$ is the full endomorphism bundle $\mathrm{End}(E_q)$. The first Chern class of $E_q$ also vanishes.

Proof. We first claim that the algebra $\mathrm{Mat}_N$ is the unique algebra generated by two unitary elements $U_0$ and $V_0$ which are subject to the relations
(1) $U_0 V_0 = q V_0 U_0$, $\quad U_0^N = V_0^N = 1$.
To see this, note that each element in the algebra generated by $U_0$ and $V_0$ may be written in the form $\sum_{0 \leq k,l \leq N-1} a_{k,l}\, U_0^k V_0^l$, so this algebra is of dimension $\leq N^2$. On the other hand we consider the standard "clock" and "shift" matrices in $\mathrm{Mat}_N$, which satisfy relations (1) and thus generate a C*-subalgebra which clearly commutes only with the multiples of the identity, so it has to be the full matrix algebra. Now we consider the trivial bundle $S^1 \times S^1 \times \mathrm{Mat}_N \to S^1 \times S^1$. The space of smooth sections is then $C^\infty(S^1 \times S^1, \mathrm{Mat}_N) = C^\infty(S^1 \times S^1, \mathbb{C}) \otimes \mathrm{Mat}_N$, which is generated by the unitary central elements $u$, $v$, and unitary $U_0$, $V_0$ with the relations (1), where the coefficients are again rapidly decreasing with respect to the powers of $u$ and $v$. Consider now the cyclic group $\mathbb{Z}_N = \mathbb{Z}/N\mathbb{Z}$, the $q$-action of $(m,n) \in \mathbb{Z}_N \times \mathbb{Z}_N = \mathbb{Z}^2_N$ on $S^1 \times S^1$ given by $(u,v) \mapsto (q^m u, q^n v)$, and the $q$-action on $\mathrm{Mat}_N$ given by conjugation, so that $(m,n)$ maps $U_0$ to $q^m U_0$ and maps $V_0$ to $q^n V_0$. Note that inside the adjoint action of $GL(N, \mathbb{C})$ the matrices $U_0$ and $V_0$ commute, since they do so in $PGL(N, \mathbb{C})$. We may consider the following diagram, where the horizontal arrows are covering mappings since all involved actions are strictly discontinuous, and where the left vertical arrow is $\mathbb{Z}^2_N$-equivariant. Since the action of $\mathbb{Z}^2_N$ on $\mathrm{Mat}_N$ is by algebra automorphisms, the resulting smooth mapping $A_q \to S^1 \times S^1$ is a smooth algebra bundle. The sections of $A_q$ correspond exactly to the $\mathbb{Z}^2_N$-equivariant sections of the left hand side. A section $f = \sum c_{k,l,s,t}\, u^k v^l U_0^s V_0^t$ is $\mathbb{Z}^2_N$-equivariant if and only if the following condition is satisfied: $c_{k,l,s,t} = 0$ unless $k \equiv s \bmod N$ and $l \equiv t \bmod N$.
But then we may put $c_{k,l} = c_{k,l,s,t}$, where $s \equiv k \bmod N$ and $t \equiv l \bmod N$, and the section $f$ can be written as $f = \sum_{k,l} c_{k,l}\, (uU_0)^k (vV_0)^l$. We just have to note that $U = uU_0$ and $V = vV_0$ satisfy exactly the relations (2) of 2.1 of the noncommutative torus. The first Chern class $c_1(A_q)$ of the complex vector bundle $A_q$ vanishes, by the following argument: the mapping $\pi: S^1 \times S^1 \to S^1 \times S^1$ in the diagram above is an $N^2$-sheeted covering and has mapping degree $N^2$; thus the induced mapping $\pi^*$ on $H^2(S^1 \times S^1; \mathbb{Z}) \cong \mathbb{Z}$ is injective, and since the pullback $\pi^* A_q$ is trivial, $\pi^* c_1(A_q) = 0$ and hence $c_1(A_q) = 0$.

Now we will construct the bundle $E_q \to S^1 \times S^1$. We cannot push it down from a trivial bundle via the group action above; we have to absorb the non-commutativity into a larger group action. Thus we consider a suitable semidirect product group, its action on $S^1 \times S^1 \times S^1$, and its unitary representation on $\mathbb{C}^N$. Using these actions we can define the bundle $E_q \to S^1 \times S^1$ as an associated bundle. It is easy to check that all these actions are compatible with each other in such a way that we get a free fiberwise action of the algebra bundle $A_q$ on the vector bundle $E_q$. By counting dimensions we see that $A_q = \mathrm{End}(E_q)$. For the first Chern class we can repeat the argument from above.

2.3. Corollary. Let $q$ be a primitive $N$-th root of unity. Then the noncommutative torus algebra $C^\infty(\mathbb{T}^2_q)$ is Morita equivalent to the commutative torus algebra $C^\infty(\mathbb{T}^2)$.

Proof. By Proposition 2.2 we have the algebra isomorphism $C^\infty(\mathbb{T}^2_q) \cong \Gamma(\mathrm{End}(E_q))$. But for any vector bundle the full endomorphism algebra, which acts from the left on the space of smooth sections of the vector bundle, is Morita equivalent to the algebra of smooth functions on the base, which we may view as acting from the right.

2.4. Derivations of the non-commutative torus. Let $D \in \mathrm{Der}(C^\infty(\mathbb{T}^2_q))$, and let us assume that $D$ is bounded. Then $D$ is uniquely determined by the values $D(U) = \sum u_{k,l}\, U^k V^l$ and $D(V) = \sum v_{k,l}\, U^k V^l$. The relation $D(U)V + UD(V) = qD(V)U + qVD(U)$, by comparison of coefficients, leads quickly to
(2) $(1 - q^{1-k})\, u_{k,l-1} + (1 - q^{1-l})\, v_{k-1,l} = 0$ for all $k, l$.
Now let $N$ be the smallest integer with $q^N = 1$ for rational $q$, and let $N = 0$ for irrational $q$. Then for $k \equiv 1 \pmod N$ equation (2) implies that we have $v_{k-1,l} = 0$ for $l \not\equiv 1 \pmod N$, and that $v_{k-1,l}$ can be prescribed arbitrarily (but rapidly decreasing) for $l \equiv 1 \pmod N$. This means that we may prescribe $D(V)$ arbitrarily within $V \cdot Z(C^\infty(\mathbb{T}^2_q))$. Similarly, for $l \equiv 1 \pmod N$ equation (2) implies that we have $u_{k,l-1} = 0$ for $k \not\equiv 1 \pmod N$, and that $u_{k,l-1}$ can be prescribed arbitrarily (but rapidly decreasing) for $k \equiv 1 \pmod N$. This means that we may prescribe $D(U)$ arbitrarily within $U \cdot Z(C^\infty(\mathbb{T}^2_q))$. Let us write $D_U$ for the derivation given by $D_U(U) = U$ and $D_U(V) = 0$, and similarly $D_V$ for the one with $D_V(V) = V$ and $D_V(U) = 0$; any combination $z\,D_U + w\,D_V$ with central $z, w$ describes a derivation which is not inner, since it acts on the center (if $N > 0$). On the other hand, for any $a = \sum a_{k,l}\, U^k V^l$ the inner derivation $\mathrm{ad}(a)$ satisfies $\mathrm{ad}(a)(U) = \sum a_{k,l}(q^{-l} - 1)\, U^{k+1} V^l$ and $\mathrm{ad}(a)(V) = \sum a_{k,l}(1 - q^{-k})\, U^k V^{l+1}$, so that all other derivations specified by (2) are inner derivations. So we see that $D_U$ and $D_V$ correspond to differentiation in the coordinates $u = e^{2\pi i t}$ and $v = e^{2\pi i s}$, with respect to the unique flat connection on the algebra bundle $A_q \to S^1 \times S^1$ which is induced by the description in 2.2 and which respects the fiberwise 'matrix'-multiplication. In this case the outer derivations correspond exactly to the derivations of the center. For $q$ irrational this is not the case: here $\mathrm{Out}(C^\infty(\mathbb{T}^2_q))$ is linearly generated by the two derivations $D_U$ and $D_V$.

2.5. Conjecture. It might be the case that every (algebraic) derivation of the noncommutative torus is automatically bounded. This would follow from an automatic continuity result for algebra homomorphisms. One can find such results in the literature, but they have too strong assumptions to be immediately applicable.
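The proof of Proposition 2.2 above appeals to a concrete pair of unitary $N \times N$ matrices satisfying the relations (1); the standard choice is the clock and shift pair. The following is a minimal numerical check of exactly those relations (the names U0, V0 follow the text; the clock/shift construction itself is the classical realization, supplied here since the explicit matrices were lost in extraction):

```python
import numpy as np

N = 5                                 # any N with q a primitive N-th root of unity
q = np.exp(2j * np.pi / N)

U0 = np.diag([q ** k for k in range(N)])   # "clock" matrix, eigenvalues 1, q, ..., q^{N-1}
V0 = np.roll(np.eye(N), 1, axis=0)         # "shift" matrix: e_k -> e_{k+1 mod N}

# The defining relations (1): U0 V0 = q V0 U0 and U0^N = V0^N = 1.
assert np.allclose(U0 @ V0, q * V0 @ U0)
assert np.allclose(np.linalg.matrix_power(U0, N), np.eye(N))
assert np.allclose(np.linalg.matrix_power(V0, N), np.eye(N))
```

Since this pair commutes only with multiples of the identity, the algebra it generates is all of $\mathrm{Mat}_N$, in line with the dimension count in the proof.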
The following argument shows how to carry over continuity from algebra homomorphisms to derivations: a linear mapping $D: A \to A$ is a derivation if and only if $f \mapsto f + D(f)\varepsilon$ is an algebra homomorphism into $A[\varepsilon] = A \oplus A\varepsilon$, where $\varepsilon$ is in the center and $\varepsilon^2 = 0$, so that the multiplication in $A[\varepsilon]$ is $(f + g\varepsilon)(f' + g'\varepsilon) = ff' + (fg' + gf')\varepsilon$.

2.6. The non-commutative torus is a smooth *-algebra. In fact we will show that the topology described in Definition 1.3 is the one we started with in 2.1. What are the states on $C^\infty(\mathbb{T}^2_q)$? We consider first the trace $\mathrm{tr}(\sum_{k,l} c_{k,l}\, U^k V^l) = c_{0,0}$. We will use only states of the form $f \mapsto \mathrm{tr}(g^* f g)$ for some $g \in C^\infty(\mathbb{T}^2_q)$, and indeed $g = 1$ will suffice. We start to check that we can reproduce a generating system of seminorms. For that it suffices to consider $f \mapsto \mathrm{tr}(f^* f)^{1/2} = (\sum_{k,l} |c_{k,l}|^2)^{1/2}$ and to compose it with an appropriate composition of the two basic derivations $D_U$ and $D_V$ from 2.4, which give us the weighted norms $(\sum_{k,l} |c_{k,l}|^2\, k^{2p} l^{2r})^{1/2}$; these generate the topology of rapidly decreasing coefficients from 2.1. It remains to show that an arbitrary state $\omega$ on $C^\infty(\mathbb{T}^2_q)$ is bounded. We use the Gelfand-Naimark-Segal construction. The subspace $\pi_\omega(C^\infty(\mathbb{T}^2_q))\Omega_\omega$ is dense in $H_\omega$; since $U$ and $V$ are unitary, each $\pi_\omega(U^k V^l)$ is a unitary operator, so $\|\pi_\omega(f)\| \leq \sum_{k,l} |c_{k,l}|$ and $\pi_\omega$ is bounded. Thus the representation $\pi_\omega$ and the state $\omega$ can be extended to the 'C*-algebra completion'.

2.7. Higher dimensional non-commutative tori. Let us fix a complex number $q$ with $|q| = 1$, and let us consider the algebra $C^\infty(\mathbb{T}^n_q)$ consisting of all $f = \sum_{k \in \mathbb{Z}^n} f_k\, S_1^{k_1} \cdots S_n^{k_n}$, where $(f_k)$ is any rapidly decreasing sequence of complex numbers, so that for each $m \in \mathbb{N}$ the seminorm $\|f\|_m = \sup_k |f_k|\,(1+|k|)^m$ is finite, and where the generators $S_1, \ldots, S_n$ satisfy the commutation rules $S_i S_j = q S_j S_i$ for $i < j$. This looks like an interesting generalization of the non-commutative torus $C^\infty(\mathbb{T}^2_q)$. But it is not so interesting, as the following result shows: for $n = 2p$ the algebra is the projective tensor product of $p$ copies of the non-commutative 2-torus, where we may use the projective tensor product; for $n = 2p+1$ we have the projective tensor product of $p$ copies of the non-commutative 2-torus with one 1-torus.

Proof. Let first $n = 2p$. Consider a suitable new set of generators of the algebra $\mathbb{T}^{2p}_q$ (products of the $S_i$). Then obviously $U_j V_j = q V_j U_j$ and all other pairs commute, so that the first result follows. If we have moreover an element $S_{2p+1}$, then we also consider the last generator $Z = S_1 S_3 \cdots S_{2p+1}$, which lies in the center of $\mathbb{T}^{2p+1}_q$ (it even generates the center if $q$ is irrational) and thus splits off a central subalgebra isomorphic to $C^\infty(S^1)$.

2.8. The non-commutative circle. We look for the non-commutative circle as a smooth algebra which is a quotient of the non-commutative torus. Since $C^\infty(\mathbb{T}^2_q)$ is a simple algebra for irrational $q$, we will succeed only for rational $q$; thus let us take $q \in S^1 \subset \mathbb{C}$ with $q^N = 1$ for minimal $N$. As in 2.1 let $u = \exp(2\pi i t)$ and $v = \exp(2\pi i s)$ be the coordinates on the torus $S^1 \times S^1$, and let $z = \exp(2\pi i x)$ be the coordinate on $S^1$. Let us consider the embedding $i: S^1 \to S^1 \times S^1$, $i(z) = (z^a, z^b)$, where $a, b \in \mathbb{Z}$ are relatively prime. Then we consider the algebra bundle $A_q \to S^1 \times S^1$ with typical fiber $\mathrm{Mat}_N$ constructed in the proof of Proposition 2.2, and take the pullback bundle $i^* A_q \to S^1$; the space of smooth sections is then viewed as the non-commutative $q$-circle. We want to describe it by generators and relations. For that consider the following diagram, where all diagonal mappings are covering maps with the indicated groups of covering transformations. The outer horizontal mappings are equivariant with respect to the homomorphism $\mathbb{Z}_N \to \mathbb{Z}^2_N$ which is given by $p \mapsto (ap, bp)$. So the smooth sections of the algebra bundle $i^* A_q \to S^1$ correspond to the $\mathbb{Z}_N$-equivariant smooth functions $S^1 \to \mathrm{Mat}_N$. A function $f = \sum c_{k,s,t}\, z^k U_0^s V_0^t$ is $\mathbb{Z}_N$-equivariant if and only if the following condition is satisfied: $c_{k,s,t} = 0$ unless $k \equiv as + bt \bmod N$. But then the function $f$ can be written as $f = \sum c_{m,s,t}\, Z^m U^s V^t$, where $Z := z^N$, $U := z^a U_0$ and $V := z^b V_0$ satisfy the relations
(1) $UV = qVU$, $\quad U^N = Z^a$, $\quad V^N = Z^b$, with $Z$ central and unitary.
We also have $Z = U^{Na'} V^{Nb'}$, where $a', b' \in \mathbb{Z}$ satisfy $aa' + bb' = 1$.
So the noncommutative $q$-circle of slope $b/a$ in the non-commutative $q$-torus is the associative algebra generated by two elements $U$, $V$ with the relations (1), and with rapidly decreasing coefficients. If $q = 1$ we have $N = 1$, thus $U = Z^a$, $V = Z^b$, and clearly we just have the algebra of smooth functions on $S^1$.

3. The smooth Heisenberg algebra

3.1. We recall here (see [22], [29], or [30]) some well-known results from the theory of distributions which we shall need in the following. We consider the following spaces of smooth functions on $\mathbb{R}^n$. The space $\mathcal{S}(\mathbb{R}^n)$ of all rapidly decreasing smooth functions $f$, for which $x \mapsto (1+|x|^2)^k\, \partial^\alpha f(x)$ is bounded for all $k \in \mathbb{N}$ and all multiindices $\alpha \in \mathbb{N}^n_0$, with the locally convex topology described by these conditions, a nuclear Fréchet space. Its dual space $\mathcal{S}'(\mathbb{R}^n)$ is the space of tempered distributions. The space $\mathcal{O}_C(\mathbb{R}^n)$ of all smooth functions $f$ on $\mathbb{R}^n$ for which there exists $k \in \mathbb{Z}$ such that $x \mapsto (1+|x|^2)^k\, \partial^\alpha f(x)$ is bounded for each multiindex $\alpha \in \mathbb{N}^n_0$, with the locally convex topology described by this condition (a nuclear LF space). Its dual space $\mathcal{O}'_C(\mathbb{R}^n)$ is usually called the space of rapidly decreasing distributions (see [29]). The space $\mathcal{O}_M(\mathbb{R}^n)$ of all smooth functions $f$ on $\mathbb{R}^n$ such that for each multiindex $\alpha \in \mathbb{N}^n_0$ there exists $k \in \mathbb{Z}$ for which $x \mapsto (1+|x|^2)^k\, \partial^\alpha f(x)$ is bounded, with the locally convex topology described by this condition (a nuclear space). This is the space of tempered smooth functions. Its dual space $\mathcal{O}'_M(\mathbb{R}^n)$ will be called the space of speedily decreasing distributions. There are the inclusions $\mathcal{S} \subset \mathcal{O}_C \subset \mathcal{O}_M$ between these spaces, and dually $\mathcal{O}'_M \subset \mathcal{O}'_C \subset \mathcal{S}'$. The Fourier transform of functions $f \in \mathcal{S}$ and its inverse extend to isomorphisms of $\mathcal{S}'$ and map $\mathcal{O}'_M$ onto $\mathcal{O}_C$ and $\mathcal{O}'_C$ onto $\mathcal{O}_M$; we write $\widehat\otimes$ for the completed projective tensor product, which agrees with the injective one for these nuclear spaces. Since we have been unable to locate this result in the literature, we sketch a proof. We start with $\mathcal{O}_M$. By ([29], p. 246) the space $\mathcal{O}_M(\mathbb{R}^n)$ is the space of the multipliers in $L_b(\mathcal{S}(\mathbb{R}^n), \mathcal{S}(\mathbb{R}^n))$, with the induced topology, where $L_b$ denotes the space of continuous linear mappings with the topology of uniform convergence on bounded sets (i.e. on compact sets, since $\mathcal{S}$ is Montel), whose bornology is the same as that from 4.5.1. It is well known that $L_b(\mathcal{S}(\mathbb{R}^n), \mathcal{S}(\mathbb{R}^n)) \cong \mathcal{S}(\mathbb{R}^n)'\, \widehat\otimes\, \mathcal{S}(\mathbb{R}^n)$. Thus we have the corresponding diagrams of embeddings. It remains to check that the spaces of smooth functions with compact support are dense in $\mathcal{O}_M$, which is easy, and that the trace topology on subspaces of functions with fixed compact support is the usual Fréchet topology, so that the claimed tensor product description follows.

3.2. The Heisenberg relation. Let $Q$, $P$ be two generators which satisfy the Heisenberg relation
(1) $[Q, P] = QP - PQ = i\hbar$.
We suppose that they are hermitian: $Q^* = Q$ and $P^* = P$, which implies that $\hbar$ should be real.

Lemma. Then the unitary generators $e^{itQ}$ and $e^{isP}$ satisfy the Weyl relation
(2) $e^{itQ}\, e^{isP} = e^{-i\hbar ts}\, e^{isP}\, e^{itQ}$ for $(t,s) \in \mathbb{R}^2$.

Algebraic proof. We claim that the Heisenberg relation implies that for all $m, n \in \mathbb{N}_0$ we have
(3) $Q^n P^m = \sum_{k \geq 0} k!\, \binom{n}{k} \binom{m}{k}\, (i\hbar)^k\, P^{m-k} Q^{n-k}$,
which is in fact a finite sum. In the simplest cases (3) boils down to $QP^m = P^m Q + m\, i\hbar\, P^{m-1}$ and $Q^n P = PQ^n + n\, i\hbar\, Q^{n-1}$, which follow easily from (1). From these simple cases one may then prove (3) by induction. Finally (2) follows from (3) by a simple power series calculation.

Analytic proof. Another proof of (2) goes as follows. Let $Q$ and $P$ act on the space $\mathcal{S}(\mathbb{R})$ of all rapidly decreasing functions, by $(Qf)(u) = u f(u)$ and $(Pf)(u) = -i\hbar\, \partial_u f(u)$. Then the operators $Q$ and $P$ satisfy the Heisenberg relation (1), and they are selfadjoint with respect to the inner product $\int_{\mathbb{R}} \overline{f(u)}\, g(u)\, du$.
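Since the displayed computation chain is partly lost in this copy, here is a compact reconstruction of the Weyl relation from the Baker-Campbell-Hausdorff formula, using only that $[Q,P] = i\hbar$ is central (a sketch consistent with relations (1) and (2) above, not a verbatim restoration of the authors' display):

\[
e^{itQ}e^{isP} \;=\; e^{\,itQ+isP+\frac{1}{2}[itQ,\,isP]}
\;=\; e^{-\frac{i\hbar ts}{2}}\,e^{i(tQ+sP)},
\qquad
e^{isP}e^{itQ} \;=\; e^{+\frac{i\hbar ts}{2}}\,e^{i(tQ+sP)},
\]

hence $e^{itQ}e^{isP} = e^{-i\hbar ts}\, e^{isP}e^{itQ}$, which is exactly the Weyl relation (2).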
It is more difficult to see that there are no other relations between these operators. Let us consider the smooth 1-parameter groups of isomorphisms $e^{isP}$ and $e^{itQ}$ with infinitesimal generators $iP$ and $iQ$: $(e^{itQ}f)(u) = e^{itu}\, f(u)$ and $(e^{isP}f)(u) = f(u + s\hbar)$.

Using the Baker-Campbell-Hausdorff formula. Recall that (for finite dimensional matrices) we have $e^Q e^P = e^{C(Q,P)}$ where
\[
C(Q,P) = P + \int_0^1 \sum_{n=0}^\infty \frac{(-1)^n}{n+1}\, \bigl(e^{t\,\mathrm{ad}\,Q}\, e^{\mathrm{ad}\,P}\bigr)^n\, Q\; dt.
\]
Since we have $[Q,P] = i\hbar$, we see that $C(Q,P) = Q + P + \frac{i\hbar}{2}$. Thus we may formally use new generating elements $W(t,s) := e^{i(tQ+sP)}$, where
(5) $e^{itQ}\, e^{isP} = e^{itQ + isP - \frac{i\hbar}{2}ts} = e^{-\frac{i\hbar}{2}ts}\, e^{i(tQ+sP)}$,
and we see that the multiplication then will be $W(x)W(y) = e^{-\frac{i\hbar}{2}\omega(x,y)}\, W(x+y)$, where $\omega(x,y) = x_1 y_2 - x_2 y_1$ is the symplectic form on $\mathbb{R}^2$. If we multiply two expressions of the form $\int_{\mathbb{R}^2} a(t,s)\, e^{itQ} e^{isP}\, dt\, ds$ and compute (formally, but see below) in the space of endomorphisms of $\mathcal{S}(\mathbb{R})$, we again get an expression $\int_{\mathbb{R}^2} (a \star b)(t,s)\, e^{itQ} e^{isP}\, dt\, ds$, so that we may consider the 'twisted convolution' (formally, but see below)
(4) $(a \star b)(t,s) = \int_{\mathbb{R}^2} a(t',s')\, b(t-t', s-s')\, e^{i\hbar (t-t')s'}\, dt'\, ds'$.
For a speedily decreasing distribution $a(t,s) \in \mathcal{O}'_M(\mathbb{R}^2)$ we consider the formal expression $\int_{\mathbb{R}^2} a(t,s)\, e^{i(tQ+sP)}\, dt\, ds$. If we multiply two such expressions and compute as above, we get an expression of the same form, which motivates the 'other twisted convolution' for speedily decreasing distributions,
(5') $(a\, \tilde\star\, b)(x) = \int_{\mathbb{R}^2} a(y)\, b(x-y)\, e^{-\frac{i\hbar}{2}\omega(y,x)}\, dy$.
Moreover, for both multiplications the algebras $\mathcal{O}'_M(\mathbb{R}^{2n})$ decompose as the (bornological or projective or injective) tensor product of $n$ commuting factors.

Proposition. Formula (4) defines a bounded bilinear multiplication on $\mathcal{O}'_M(\mathbb{R}^2)$, and $a \mapsto \int a(t,s)\, e^{itQ} e^{isP}\, dt\, ds$ defines a bounded linear mapping $\mathcal{O}'_M(\mathbb{R}^2) \to L(\mathcal{S}(\mathbb{R}), \mathcal{S}(\mathbb{R}))$ which is injective if $\hbar \neq 0$, and is an algebra homomorphism from the twisted convolution (4) to the composition. Likewise, formula (5') defines a bounded linear mapping $\mathcal{O}'_M(\mathbb{R}^2) \to L(\mathcal{S}(\mathbb{R}), \mathcal{S}(\mathbb{R}))$ which is injective if $\hbar \neq 0$, and is an algebra homomorphism from the other twisted convolution (5') to the composition. The rapidly decreasing measures supported on a lattice form a subalgebra realizing the smooth noncommutative torus. The analogues on $\mathbb{R}^{2n}$ also hold.

Proof. We have to check that $a \star b$, given by (4), defines a distribution in $\mathcal{O}'_M(\mathbb{R}^2)$. So let $g \in \mathcal{O}_M(\mathbb{R}^2)$; then $\langle a \star b, g \rangle = \langle a \otimes b,\; (t,s,u,v) \mapsto e^{i\hbar su}\, g(t+u, s+v) \rangle$, which makes sense since we shall see that $(t,s,u,v) \mapsto e^{i\hbar su}\, g(t+u, s+v)$ is an element of $\mathcal{O}_M(\mathbb{R}^4)$, and moreover that $\hbar \mapsto ((t,s,u,v) \mapsto e^{i\hbar su}\, g(t+u, s+v))$ is a smooth curve $\mathbb{R} \to \mathcal{O}_M(\mathbb{R}^4)$. All this is a consequence of the following facts: (6) $\mathcal{O}_M(\mathbb{R}^n)$ is a bounded algebra for the pointwise multiplication; (7) for a polynomial $p: \mathbb{R}^n \to \mathbb{R}^m$ the mapping $p^*: \mathcal{O}_M(\mathbb{R}^m) \to \mathcal{O}_M(\mathbb{R}^n)$ is bounded linear; (8) for a real polynomial $p$ on $\mathbb{R}^n$ the curve $\hbar \mapsto (x \mapsto e^{i\hbar p(x)})$ is smooth $\mathbb{R} \to \mathcal{O}_M(\mathbb{R}^n)$; (9) $\mathcal{O}_M(\mathbb{R}^m)\, \widehat\otimes\, \mathcal{O}_M(\mathbb{R}^n)$ embeds in $\mathcal{O}_M(\mathbb{R}^{m+n})$. This shows that $a \star b$ is a bounded (thus continuous, since $\mathcal{O}_M$ is bornological by [15], II, §4.4, Théorème 16, page 131) linear functional on $\mathcal{O}_M(\mathbb{R}^2)$, and that $(a,b) \mapsto a \star b$ is bounded. It is easy to see that $\star$ is an associative product, since this is clear for $\hbar = 0$, and for $\hbar \neq 0$ we have an injective algebra homomorphism into $L(\mathcal{S}(\mathbb{R}), \mathcal{S}(\mathbb{R}))$. The statement about the noncommutative torus is clear. The statement about the other twisted convolution follows via the isomorphism. The extension to $\mathbb{R}^{2n}$ is obvious, and the decomposition into the tensor product follows from the considerations in 3.1. Finally, on $\mathbb{R}^2$, the statement about the representation on $\mathcal{S}(\mathbb{R})$ can be proved as follows. Using the formulas in 3.2 we compute, for $f \in \mathcal{S}(\mathbb{R})$, the action of $\int a(t,s)\, e^{itQ} e^{isP}\, dt\, ds$ on $f$. We observe that for $u \in \mathbb{R}$ and $f \in \mathcal{S}(\mathbb{R})$ the mapping $(t,s,u) \mapsto e^{itu}\, f(u + s\hbar)$ belongs to $\mathcal{O}_M(\mathbb{R}^2)\, \widehat\otimes\, \mathcal{S}(\mathbb{R})$, but not to $\mathcal{O}_C(\mathbb{R}^2)\, \widehat\otimes\, \mathcal{S}(\mathbb{R})$. This follows from (6)-(9) and from the fact that for a polynomial $p: \mathbb{R}^n \to \mathbb{R}^m$ the mapping $p^*: \mathcal{S}(\mathbb{R}^m) \to \mathcal{S}(\mathbb{R}^n)$ is bounded linear. This implies the result, since the extension to $\mathbb{R}^{2n}$ is again obvious.

Remarks. The twisted convolution $\star$ is not well defined on the classical space $\mathcal{O}'_C$ of rapidly decreasing distributions, [29], p. 245. Property (7) is wrong for $\mathcal{O}_C$, but it holds for linear mappings $p$. Is it true that $\mathcal{O}'_M$ is the optimal space of distributions on which the twisted convolution defines an algebra structure?
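With the phase convention reconstructed above, associativity of the $\tilde\star$-product reduces to a 2-cocycle identity for $\omega$, which bilinearity gives immediately (a one-line check, added here because the original display chain is incomplete):

\[
\omega(x,y)+\omega(x+y,z) \;=\; \omega(x,y)+\omega(x,z)+\omega(y,z) \;=\; \omega(y,z)+\omega(x,\,y+z),
\]

so that $(W(x)W(y))W(z)$ and $W(x)(W(y)W(z))$ both equal $e^{-\frac{i\hbar}{2}(\omega(x,y)+\omega(x,z)+\omega(y,z))}\, W(x+y+z)$.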
The statement that $a \star b$ is smooth in $\hbar$ cannot be improved to real analytic $\mathbb{R} \to \mathcal{O}'_M(\mathbb{R}^2)$ in the weak sense of [20]. The source of this is the fact that $\hbar \mapsto (x \mapsto e^{i\hbar x})$ is not real analytic $\mathbb{R} \to \mathcal{O}_M(\mathbb{R})$, even after composing with a linear functional: let $f \in \mathcal{S}(\mathbb{R}) \subset \mathcal{O}'_M(\mathbb{R})$ be such that the Fourier transform $\mathcal{F}f \in \mathcal{S}(\mathbb{R})$ is not real analytic. Then $\hbar \mapsto \int e^{i\hbar x} f(x)\, dx$, which is $\mathcal{F}f$ up to reflection and normalization, is not real analytic. This is related to the fact that the Moyal star product is only formal in $\hbar$, although there exist integral expressions in the sense of distributions which are smooth in $\hbar$, see 3.5 and 3.6 below. In [23] J. Maillard defined spaces of distributions $\mathcal{O}'_\hbar(\mathbb{R}^2)$ as follows, depending on $\hbar$: $\mathcal{O}'_\hbar(\mathbb{R}^2)$ consists of all distributions $a \in \mathcal{S}'(\mathbb{R}^2)$ such that the formal expression from above makes sense as an operator.

3.5. The Fourier transform of the twisted convolution. Suppose that $a = \mathcal{F}f$ and $b = \mathcal{F}g$ for $f, g \in \mathcal{O}_C(\mathbb{R}^2)$. Then the twisted convolution $a \star b$ can be computed in the weak sense (as distributions) as an explicit oscillatory integral over $\mathbb{R}^4$ in the variables $y, z, u, v$. Let us now use $\mathcal{F} = \mathcal{F}_1 \circ \mathcal{F}_2$, the composition of the two one-dimensional Fourier transforms in both variables separately, and recall that the integrals above are weak: the integrands are in $\mathcal{O}_M(\mathbb{R}^2) \subset \mathcal{S}'(\mathbb{R}^2)$, so they make sense only when applied to test functions in $\mathcal{S}$. Using $i\partial_x f(x) = \mathcal{F}^{-1}_y(y\, (\mathcal{F}f)(y))(x)$, the resulting expression is half of the Moyal star product, represented by a convergent integral. Obviously the corresponding series can only be interpreted as a formal power series in $\hbar$. But note that the divergence appears only after the interchange of the sum with the integral; before that, the expressions are bounded bilinear in $f$ and $g$, and even smooth in $\hbar$. Also one should compare this result with the treatment of the Weyl calculus in [21], III, 18.5.

3.6. The Fourier transform of the other twisted convolution. Let us apply the other twisted convolution to $a = \mathcal{F}f$ and $b = \mathcal{F}g$. Let us now use $(i\partial_1)^m (i\partial_2)^n f(x) = \mathcal{F}^{-1}_y(y_1^m y_2^n\, (\mathcal{F}f)(y))(x)$, which also holds in the weak sense for tempered distributions. Then we may continue to compute in the weak sense of distributions. This is now really the Moyal star product, expressed as a sum of bidifferential operators.

Let us consider the bounded linear mapping between the spaces of speedily decreasing distributions induced by the Heisenberg group $\mathrm{He}^2_\hbar$. Since the Haar measure on $\mathrm{He}^2_\hbar$ is just the usual measure $dx_1 \wedge dx_2 \wedge d\alpha$, where we choose $\int_{S^1} d\alpha = 1$, we can then compute the convolution as a weak integral (in the sense of tempered distributions). The groups $\mathrm{He}^2_\hbar$ are all isomorphic for $\hbar \neq 0$; an isomorphism $\mathrm{He}^2_\hbar \to \mathrm{He}^2_1$ is given by $(x_1, x_2, \alpha) \mapsto (\hbar x_1, x_2, \alpha)$. Thus all the algebras $(\mathcal{O}'_M, \star_\hbar)$ are isomorphic for $\hbar \neq 0$, in strong contrast to the behaviour of the subalgebras $\mathbb{T}^2_{e^{i\hbar}}$, the noncommutative tori. Applying the Fourier transform we have to find $\hat{b} \in \mathcal{O}_C$ which satisfies the corresponding equation. It remains to show that a bounded derivation $D$ which vanishes on $Q$ and on $P$ must vanish on $\mathcal{O}'_M$. For that we note the following facts: the curve $t \mapsto e^{itQ}$ is a smooth 1-parameter group of isomorphisms of $\mathcal{S}(\mathbb{R})$ with infinitesimal generator $iQ$, and it is the unique 1-parameter group with this generator, since for any other $C(t)$ we have $\partial_t(e^{itQ} C(-t)) = e^{itQ}\, iQ\, C(-t) - e^{itQ}\, iQ\, C(-t) = 0$, so that $e^{itQ} C(-t)$ is the constant $\mathrm{Id}$. Thus $t \mapsto e^{itQ} + D(e^{itQ})\varepsilon$ is a smooth 1-parameter group in the semidirect product $A \oplus A\varepsilon$ of 2.5, with infinitesimal generator $iQ + D(iQ)\varepsilon = iQ + 0$ and with second 1-parameter group $e^{itQ} + 0$; thus $D(e^{itQ}) = 0$ for all $t$. Similarly $D(e^{isP}) = 0$ for all $s$. Thus $D$ vanishes on $e^{itQ} e^{isP}$ for each $t$ and $s$.
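Since the displayed sum of bidifferential operators is lost in this copy, here is the standard formal Moyal expansion from deformation quantization (a reconstruction under the usual sign conventions, not a verbatim restoration of the authors' display):

\[
f \star g \;=\; \sum_{k=0}^{\infty} \frac{1}{k!}\left(\frac{i\hbar}{2}\right)^{\!k}
\sum_{j=0}^{k} (-1)^{j} \binom{k}{j}\,
\bigl(\partial_{1}^{\,k-j}\partial_{2}^{\,j} f\bigr)\,
\bigl(\partial_{1}^{\,j}\partial_{2}^{\,k-j} g\bigr)
\;=\; fg + \frac{i\hbar}{2}\{f,g\} + O(\hbar^{2}),
\]

with Poisson bracket $\{f,g\} = \partial_1 f\, \partial_2 g - \partial_2 f\, \partial_1 g$; the zeroth order term is the pointwise product and the first order term is the Poisson bracket, as expected of a deformation quantization.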
In fact we think that the topology described in Definition 1.3 is the one of $\mathcal{O}'_M(\mathbb{R}^{2n}) \cong \mathcal{O}_C(\mathbb{R}^{2n})$. One has to show that each state is a bounded linear functional, and that we are able to find enough states and derivations in order to generate the topology described in 3.1.

APPENDIX: Calculus in infinite dimensions and convenient vector spaces

4.1. The notion of convenient vector spaces arose in the quest for the right setting for differential calculus in infinite dimensions: the traditional approach to differential calculus works well for Banach spaces, but for more general locally convex spaces there are difficulties. The main one is that the composition of linear mappings stops being jointly continuous beyond the level of Banach spaces, for any compatible topology, so that even the chain rule is not valid without further assumptions. In addition to their importance for differential calculus, convenient vector spaces together with bounded linear mappings and the appropriate tensor product form a monoidally closed category, the only useful one which functional analysis offers beyond Banach spaces. In this section we sketch the basic definitions and the most important results concerning calculus for convenient vector spaces. All locally convex spaces will be assumed to be Hausdorff. Proofs for the results sketched here can be found in [12] (except for 4.8, which was proved in [5]). A complete coverage is in the book [20]; [5] contains an overview and a presentation of non-commutative geometry based on convenient vector spaces.

Smooth curves. Let $E$ be a locally convex vector space. A curve $c: \mathbb{R} \to E$ is called smooth or $C^\infty$ if all derivatives exist (and are continuous); this is a concept without problems. Let $C^\infty(\mathbb{R}, E)$ be the space of smooth curves. It can be shown that the set $C^\infty(\mathbb{R}, E)$ depends on the locally convex topology of $E$ only through its underlying bornology (system of bounded sets).

Convenient vector spaces. Let $E$ be a locally convex vector space. $E$ is said to be a convenient vector space if one of the following equivalent conditions is satisfied (called $c^\infty$-completeness): (1) any Mackey-Cauchy sequence (i.e. one for which there are scalars $\lambda_{n,m} \to \infty$ such that $\{\lambda_{n,m}(x_n - x_m) : n, m \in \mathbb{N}\}$ is bounded) converges. Here a mapping $f: \mathbb{R} \to E$ is called $\mathrm{Lip}^k$ if all partial derivatives up to order $k$ exist and are Lipschitz, locally on $\mathbb{R}$. To be scalarwise $C^\infty$ means for a curve $f$ that $\lambda \circ f$ is $C^\infty$ for all continuous (equivalently: all bounded) linear functionals $\lambda$ on $E$. Obviously $c^\infty$-completeness is weaker than sequential completeness, so any sequentially complete locally convex vector space is convenient. From 4.2.4 one easily sees that (sequentially) closed linear subspaces of convenient vector spaces are again convenient. We always assume that a convenient vector space is equipped with its bornological topology. All spaces which a working mathematician needs in daily life are convenient. For any locally convex space $E$ there is a convenient vector space $\tilde{E}$ called the completion of $E$, and a bornological embedding $i: E \to \tilde{E}$, which is characterized by the property that any bounded linear map from $E$ into an arbitrary convenient vector space extends to $\tilde{E}$.

Smooth mappings. Let $E$ and $F$ be locally convex vector spaces. A mapping $f: E \to F$ is called smooth or $C^\infty$ if $f \circ c \in C^\infty(\mathbb{R}, F)$ for all $c \in C^\infty(\mathbb{R}, E)$; so $f_*: C^\infty(\mathbb{R}, E) \to C^\infty(\mathbb{R}, F)$ makes sense. Let $C^\infty(E, F)$ denote the space of all smooth mappings from $E$ to $F$.
For $E$ and $F$ finite dimensional (or even Fréchet spaces) this gives the usual notion of smooth mappings (already for $E = \mathbb{R}^2$ this is a non-trivial statement). Multilinear mappings are smooth if and only if they are bounded. We denote by $L(E, F)$ the space of all bounded linear mappings from $E$ to $F$.

4.5. Differential calculus. We equip the space $C^\infty(\mathbb{R}, E)$ with the bornologification of the topology of uniform convergence on compact sets, in all derivatives separately. Then we equip the space $C^\infty(E, F)$ with the bornologification of the initial topology with respect to all mappings $c^*: C^\infty(E, F) \to C^\infty(\mathbb{R}, F)$, $c^*(f) := f \circ c$, for all $c \in C^\infty(\mathbb{R}, E)$. We have the following results: (1) If $F$ is convenient, then $C^\infty(E, F)$ is also convenient, for any $E$. (2) The space $L(E, F)$ is a closed linear subspace of $C^\infty(E, F)$, so it is convenient also. (3) The exponential law holds: $C^\infty(E, C^\infty(F, G)) \cong C^\infty(E \times F, G)$, which is even a homeomorphism. Note that this result, for $E = \mathbb{R}$, is the prime assumption of variational calculus. As a consequence, evaluation mappings, insertion mappings, and composition are smooth. (4) The differential $d: C^\infty(E, F) \to C^\infty(E, L(E, F))$, given by $df(x)v := \lim_{t \to 0} \frac{f(x+tv) - f(x)}{t}$, is well defined and smooth.

For convenient vector spaces $E_1, \ldots, E_n$, and $F$ we can now consider the space of all bounded $n$-linear maps, $L(E_1, \ldots, E_n; F)$, which is a closed linear subspace of $C^\infty(\prod_{i=1}^n E_i, F)$ and thus again convenient. It can be shown that multilinear maps are bounded if and only if they are partially bounded, i.e. bounded in each coordinate, and that there is a natural isomorphism (of convenient vector spaces) $L(E_1, \ldots, E_n; F) \cong L(E_1, \ldots, E_k;\, L(E_{k+1}, \ldots, E_n; F))$.

4.7. Result. On the category of convenient vector spaces there is a unique tensor product $\widetilde\otimes$ which makes the category symmetric monoidally closed, i.e. there are natural isomorphisms of convenient vector spaces $L(E_1 \widetilde\otimes E_2, F) \cong L(E_1, L(E_2, F))$.
Prevalence of EGFR Tyrosine Kinase Domain Mutations in Head and Neck Squamous Cell Carcinoma: Cohort Study and Systematic Review

Background: Mutations in the epidermal growth factor receptor (EGFR) tyrosine kinase domain (TKD) are associated with response and resistance to targeted therapy. The EGFR mutation status in patients with advanced oral and oropharyngeal squamous cell carcinoma (OOSCC) was evaluated. A systematic literature review was undertaken to summarize current evidence and estimate the overall prevalence of EGFR TKD mutations in patients with head and neck squamous cell carcinoma (HNSCC). Materials and Methods: Genomic DNA was extracted from formalin-fixed, paraffin-embedded tumor samples of 113 patients with OOSCC. Pyrosequencing was performed to investigate mutations in EGFR exons 18 to 21. Medline databases were searched for relevant studies. Studies reporting mutations in the EGFR TKD in HNSCC were eligible for inclusion in the systematic review. Results: No mutations in the EGFR TKD were observed in 113 samples of OOSCC. A total of 53 eligible studies were included in the systematic review. In total, from the review, 117 patients harboring a total of 159 EGFR TKD mutations were reported among 4122 patients with HNSCC. The overall prevalence of EGFR TKD mutations in HNSCC was 2.8%. Conclusion: Large-scale studies are warranted to provide further evidence regarding the mutation status of EGFR in patients with HNSCC.

Head and neck squamous cell carcinoma (HNSCC) remains a challenging disease despite intensive clinical and translational research (1-3). A subset of head and neck cancer is caused by human papillomavirus (HPV) and represents a biologically distinct entity (4). In the past decades, several treatment strategies have been applied to treat HNSCC; however, survival outcomes have not substantially changed, emphasizing the need for more personalized medicine (5-8). Many efforts have, therefore, been made to identify predictive biomarkers and tailor treatment to the individual patient based on their own genetic and molecular profile. The epidermal growth factor receptor (EGFR) is a transmembrane cell surface receptor belonging to the human epidermal growth factor receptor (HER) family of receptor tyrosine kinases. EGFR overexpression occurs in more than 90% of HNSCCs and has been correlated with poor outcome (9). Robust preclinical evidence underlines the role of EGFR in the development of HNSCC, showing that EGFR activation triggers several downstream signaling pathways that play a crucial role in cancer pathogenesis (3, 10, 11). In this context, strategies for inhibition of EGFR signaling using monoclonal antibodies and tyrosine kinase inhibitors (TKIs) have been investigated intensively in clinical trials. De novo or acquired resistance to EGFR-targeted therapy, however, has led to only a modest survival benefit for patients with HNSCC, while, to date, predictive biomarkers of treatment response remain elusive (8, 12, 13). In non-small cell lung carcinoma (NSCLC), patients with activating mutations in the EGFR tyrosine kinase domain (TKD) are sensitive to small-molecule EGFR TKIs such as gefitinib, erlotinib, and afatinib (14-17). Given that mutations in the EGFR TKD may help in the selection of patients for EGFR TKIs or other targeted therapies, the EGFR mutation status in treatment-naïve patients with locally advanced oral and oropharyngeal squamous cell carcinoma (OOSCC) was retrospectively evaluated.
In addition, a systematic literature review was undertaken to summarize current evidence regarding the EGFR mutation status in HNSCC. The present study aimed to estimate the overall prevalence of EGFR TKD mutations in patients with HNSCC and to assess whether differences in the prevalence of EGFR mutations exist between patients with HNSCC across different countries and geographic regions.

Materials and Methods

Population of cohort study. This study included 113 patients diagnosed with primary locally advanced OOSCC who underwent neoadjuvant chemoradiation followed by tumor resection at the Department of Radiotherapy and the Department of Cranio-Maxillofacial and Oral Surgery at the Medical University of Vienna between 2000 and 2009. All of them had biopsy-proven OOSCC, available pretreatment biopsy tumor tissues, clinical TNM stage III or IV disease, no distant metastasis, no previous history of head and neck cancer, and performance status and laboratory parameters permitting chemoradiotherapy and surgery, and all had undergone complete resection (R0) of the primary tumor.
The multimodal treatment comprised neoadjuvant chemotherapy with mitomycin C (15-20 mg/m², i.v. bolus injection on day 1) and 5-fluorouracil (750 mg/m²/day, continuous infusion on days 1-5), concurrent with radiation therapy delivered over 5 weeks up to a total dose of 50 Gy (25 fractions of 2 Gy per day), followed by post-treatment radical surgery. Surgery was performed 4-8 weeks after the end of radiotherapy. Patients were followed up regularly for a further 5 years. Patient data were obtained from the Vienna General Hospital Patient Information System (AKIM). Clinical and pathological TNM staging was based on the seventh edition of the Union for International Cancer Control (UICC) classification (18). The surgical specimens were histopathologically evaluated by means of an institutional standard protocol. Pathological complete response (pCR) was defined by the absence of residual cancer within both the primary tumor site and regional lymph nodes. A study-specific patient number was given to patients to protect their identity.

After the extraction of genomic DNA, the DNA concentration of each sample was adjusted to 2 ng/μl and EGFR was amplified by PCR with HotStarTaq DNA Polymerase (Qiagen). The PCR conditions were 95°C for 15 min for the initial activation of HotStarTaq DNA Polymerase, followed by three-step cycling: denaturation at 95°C for 20 s, annealing at 53°C for 30 s and extension at 72°C for 20 s, for 42 cycles. Finally, incubation at 72°C for 5 min was performed for the final extension. After amplification, the immobilization of PCR products on Streptavidin Sepharose High-Performance beads (GE Healthcare, Uppsala, Sweden) was performed using a volume of 10 μl PCR product, which was added to a 24-well PCR plate containing 70 μl master mix [2 μl Streptavidin Sepharose High-Performance beads, 40 μl of PyroMark Binding Buffer (Qiagen) and 28 μl water]. The next step was the preparation of single-stranded DNA with a PyroMark Q24 Vacuum Workstation (Qiagen) and the annealing of the sequencing primer (included in the Therascreen Pyro Kit; Qiagen) to the template. Pyrosequencing of the samples was then carried out on a PyroMark Q24 MDx system (Qiagen). The results were analyzed with PyroMark Q24 software (version 2.0.6; Qiagen). Pyrosequencing results in the initial round of sequencing were confirmed by subsequent runs of independent PCRs and pyrosequencing, as well as by Sanger sequencing.

Systematic literature review: data sources, search strategy, selection of studies, and data extraction. This study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (19). Medline databases (hosts: PubMed and OVID) from inception up to October 20, 2016 were searched for relevant studies using the key words "head and neck cancer", "EGFR", and "mutation". No search restriction was applied. The complete search strategy can be found in Appendix A. In addition, manual searches were conducted on the web and by reviewing the reference lists of the retrieved articles. Studies reporting the mutation status of the EGFR TKD in tumor tissues of patients with HNSCC were eligible for inclusion in the systematic review. For quantitative synthesis, only studies reporting the prevalence of EGFR TKD mutations in HNSCC were considered. Letters and unpublished research were not included in the present review. Case reports were considered as qualitative evidence. Two reviewers (CP and RP) independently carried out study selection and data extraction.
Any disagreements between reviewers were resolved by consensus involving a third reviewer (JE). The reviewers independently screened all records that were identified by the search strategy. Duplicate publications were excluded both electronically and manually. The selected records were pooled, retrieved as full-text publications, and assessed for eligibility. The two reviewers independently extracted data from each eligible study using a predefined data-abstraction sheet. The following data were collected: name of the first author, year of publication, study location, characteristics of study cohorts (sample size, tumor stage, tumor site), source of tumor profiled, exon location and type of EGFR mutations, detection methods, prognostic effect of EGFR mutations, and the prevalence of EGFR mutations. The PRISMA flow diagram was used to describe the study selection processes.

Figure 2. Representative pyromarks of wild-type EGFR in exons 18-21.

Statistical analysis. For the cohort study, patient characteristics were summarized using descriptive statistics. Categorical variables are described with frequencies and percentages. Patient demographic, clinical, and tumor characteristics were tested for association with pathological response using the chi-square test for categorical variables. Overall survival was defined as the time from surgery to death from any cause or to the date of last follow-up. The Kaplan-Meier method was used for overall survival assessment and the log-rank test to compare differences in survival between groups. A two-sided p-value of less than 0.05 was considered statistically significant. The systematic review was quantitatively analyzed to pool the overall prevalence of EGFR mutations in HNSCC. Subgroup analysis was performed to assess the prevalence of EGFR mutations according to geographic region. Prevalence of EGFR mutations was defined as the proportion of patients with EGFR-mutated tumors among patients who underwent mutation testing and was assessed as a percentage with the 95% confidence interval (CI) (20). Subgroups of geographic regions (Europe, North America, Southeast Asia, and South Asia) were generated if two or more studies on a specific geographic region were present. Study location was defined based on the country where the patients were recruited into the study. Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS, version 21.0; IBM Corp., Armonk, NY, USA).

Results

Description of patient cohort and survival analysis. The clinical and histopathological characteristics of the 113 study patients are presented in Table I. The median age of patients was 58 years (range 24-79 years), and most of the patients were male (73%) and current smokers (83%). The primary tumor was predominantly located in the oral cavity (86%). Pathological complete response to neoadjuvant chemoradiotherapy was observed in 46 patients (41%). Nine patients (8%) were HPV-positive. Among HPV-positive patients, two (22%) achieved a complete pathological response. Of 16 oropharyngeal tumours, two samples (13%) were positive for HPV, compared to seven HPV-positive samples (7%) out of 97 oral cavity tumors. At 2 years, the overall survival rate of the cohort was 66%, and at 5 years, 46%. The median follow-up time was 4.6 years, by which time 56 (50%) patients had died. The overall survival of the 113 patients with OOSCC was assessed according to pathological response status using the Kaplan-Meier method (Figure 1).
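A minimal sketch of this kind of survival comparison with the Python lifelines package follows; the file name and column names are hypothetical placeholders, and the original analysis was performed in SPSS as stated above:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical layout: one row per patient with follow-up time in years,
# an event indicator (1 = death from any cause) and pCR status (1 = complete response).
df = pd.read_csv("oosc_cohort.csv")
pcr, no_pcr = df[df["pcr"] == 1], df[df["pcr"] == 0]

kmf = KaplanMeierFitter()
kmf.fit(pcr["time"], pcr["event"], label="pathological CR")
print(kmf.median_survival_time_)          # median overall survival in the pCR group

# Log-rank test for the difference between the two survival curves
res = logrank_test(pcr["time"], no_pcr["time"], pcr["event"], no_pcr["event"])
print(res.p_value)
```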
The median overall survival was significantly higher in patients with tumors showing pathological complete response compared with those having non-complete tumor regression (7.9 versus 2.8 years, respectively; log-rank p=0.001).

Mutation status of EGFR. The kinase domain of EGFR (exons 18-21) was analyzed in tumor samples from 113 treatment-naïve patients with primary locally advanced OOSCC. Using pyrosequencing technology, no EGFR mutations were detected in any of the 113 tumor tissues (Figure 2).

Discussion

In this study, pyrosequencing technology was used to identify the EGFR mutation status in pretreatment tumor samples of patients with locally advanced OOSCC. No EGFR TKD mutations were observed among 113 cases of OOSCC. The results of this cohort study showed, however, a strong association of pathological complete response to neoadjuvant treatment with improved overall survival of patients with OOSCC, thus indicating the need for discovery of predictive biomarkers. The identification of activating mutations in the EGFR TKD in a subset of NSCLC and their association with substantial sensitivity to gefitinib, erlotinib or afatinib represents an important milestone in the therapy of this malignancy (14, 17, 24). Driven by the paradigm in NSCLC, several studies in HNSCC attempted to define the mutational spectrum of the EGFR TKD. To date, genomic data from whole exome sequencing and targeted next-generation sequencing studies have provided a comprehensive landscape of genomic alterations in HNSCC (25-27). In a recent study, Ock et al., using targeted next-generation sequencing, identified EGFR TKD mutations in 19 out of 71 (26.7%) HNSCCs (28). The Cancer Genome Atlas data from whole exome sequencing of HNSCCs demonstrated, however, that only one out of 279 (0.4%) tumor samples from HNSCCs harbored a missense mutation in the EGFR TKD (25). Given this background, the present systematic review aimed to summarize current evidence regarding the EGFR mutation status in HNSCC. Based on the quantitative data analysis, this study demonstrated that the overall prevalence of EGFR TKD mutations in HNSCC is 2.8%. This study revealed that the EGFR mutation prevalence in patients with HNSCC varies modestly across geographic regions, with the highest prevalence (4.9%) in Southeast Asia and the lowest in South Asia (0%). The EGFR mutation prevalence within the population of Southeast Asia varies by country, from approximately 1% in Taiwan to 15% in the Republic of Korea. In addition, this study showed that the EGFR mutation status in HNSCC has been insufficiently assessed worldwide, as evident from the limited number of studies conducted in Australia (n=1) and South America (n=1), and the lack of data from several large geographic regions, particularly Africa, Central America, the Middle East, and Central Asia. Therefore, it is apparent that large-scale and multicenter studies are necessary to provide more definitive answers regarding the prevalence of EGFR mutations across geographic regions and countries and to assess their potential clinical value in patients with HNSCC. In this systematic review, the overall EGFR mutation status in HNSCC according to exon location and mutation type was explored. The data showed that the most prevalent EGFR kinase domain mutations, accounting for 73% of all EGFR mutations in HNSCC, are missense mutations in exons 18-21.
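For readers who want to attach an uncertainty interval to the pooled 2.8% figure, here is a short numerical sketch. The counts (117 mutation carriers among 4122 patients) are from the review's quantitative synthesis; the Wilson score interval is one standard choice of CI and is not necessarily the method prescribed by reference 20:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

k, n = 117, 4122          # pooled counts reported in the review
lo, hi = wilson_ci(k, n)
print(f"prevalence = {k/n:.1%}, 95% CI = ({lo:.1%}, {hi:.1%})")
# prevalence = 2.8%, 95% CI approximately (2.4%, 3.4%)
```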
The L858R substitution, well known in NSCLC, which comprises about 40% of all EGFR mutations in NSCLC and is associated with sensitivity to EGFR TKIs, was found in only 2.5% of all EGFR-mutated HNSCCs (29). The missense mutation T790M in exon 20, which is associated with acquired resistance to EGFR TKIs in about half of all patients with NSCLC, was found in 7.5% of all EGFR mutations in HNSCC (17). In-frame deletions in exon 19, which account for about 45% of all EGFR mutations in NSCLC and are linked to responsiveness to EGFR TKIs, were observed in 22% of all EGFR-mutated HNSCCs (24). Insertion mutations in exon 20, which occur in about 3% of all EGFR mutations in NSCLC and are frequently associated with resistance to EGFR TKIs, were observed in 5% of all EGFR mutations in HNSCC (30). Taken together, it is clear that substantial differences exist between HNSCC and NSCLC regarding the distribution of mutations within exons 18-21 of the EGFR TKD. Unlike in NSCLC, EGFR mutations in HNSCC do not involve specific hotspots but are rather scattered throughout exons 18 to 21. Thus, mutation screening in HNSCC should not be limited to the NSCLC hotspot regions in exons 19 and 21 of EGFR. Moreover, given that the overall prevalence of EGFR TKD mutations in HNSCC is 2.8%, it is challenging to identify specific EGFR mutations related to response or resistance to anti-EGFR therapy or other targeted therapies (31).

The present cohort study has some weaknesses, including its retrospective nature and the relatively small sample size. Additionally, next-generation sequencing was not used to compare and validate the results of the EGFR mutation testing by pyrosequencing. However, recent studies have shown that pyrosequencing can detect EGFR mutations at a low ratio of mutant to wild-type alleles and thus provides high analytical sensitivity for identifying EGFR mutations (32, 33). The systematic review is limited in several ways. Firstly, high heterogeneity has to be assumed across the study populations, given the differences in study location, tumor site, stage, and interventions. Secondly, various mutation testing methods with different sensitivities in detecting EGFR mutations were used across studies. Thirdly, a number of studies limited their EGFR mutation testing to hotspot regions in exons 19 and 21, so the true prevalence of EGFR mutations might in fact be underestimated. Taken together, the results of the quantitative synthesis should be interpreted cautiously.

In the emerging era of personalized medicine, the identification of clinically useful prognostic and, most importantly, predictive biomarkers to guide treatment decisions in patients with cancer is urgently needed. In this study, no mutations were detected using pyrosequencing when analyzing the EGFR TKD mutation status in a cohort of 113 patients with advanced OOSCC. In addition, the systematic review demonstrated that EGFR TKD mutations are rare in HNSCC, with an overall prevalence of 2.8% and modest variation in the prevalence across countries and geographic regions. Large-scale studies are warranted to provide further up-to-date evidence regarding the mutation status of EGFR in patients with HNSCC and to investigate whether the EGFR mutation profile of individual tumors is associated with sensitivity or resistance to targeted therapy.

Conflicts of Interest

None declared.

Financial Disclosure

None.
Grant Support

This project was supported by funds from the Oesterreichische Nationalbank (Anniversary Fund, project number 13469).
Two new species of Spiniphallellus Bidzilya & Karsholt, 2008 (Lepidoptera, Gelechiidae) from Afghanistan and Iran

Spiniphallellus eberti sp. nov. (Iran) and Spiniphallellus naumanni sp. nov. (Afghanistan) are described. The position of the genus within the subfamily Anomologinae is briefly discussed, as is the degree of development of the gnathos in the male genitalia of two species within the same genus. A key to all Spiniphallellus species is given, and adults and male and female genitalia of the new species are illustrated.

Introduction

The genus Spiniphallellus was established for three species of Gelechiidae, respectively from the deserts of Kazakhstan, Uzbekistan and Turkmenistan (S. desertus Bidzilya & Karsholt, 2008), the mountains of Kazakhstan (S. stonisi Bidzilya & Karsholt, 2008) and the mountains of Turkey (S. fuscescens Bidzilya & Karsholt, 2008). Recently an additional new species, S. chrysotosella Junnilainen, 2016, was described from Bulgaria, Turkey and Georgia. The first three species are externally very similar, but can easily be separated by their genitalia. S. chrysotosella looks externally quite distinct from the other species, both in its wingspan and wing pattern, but its genitalia match well the configuration for the genus, being most similar to those of S. fuscescens. The host plant is known only for S. desertus, whose larva feeds on Rheum sp. (Polygonaceae) in Kazakhstan (Falkovitsh and Bidzilya 2009). The adults of S. chrysotosella were observed around Jasminum fruticans L. (Oleaceae) in all three localities where this species was recorded (Junnilainen 2016). As a result of studying collected material in the Staatliches Museum für Naturkunde in Karlsruhe, five plain-coloured, rather narrow-winged, greyish black specimens of Gelechiidae were discovered amongst material from Afghanistan and Iran. Their assignment to the genus Spiniphallellus was confirmed by study of the genitalia. It turned out that they represent two different species, which do not match any known species in the genus. Their description is given below.

Material and methods

Male and female genitalia were dissected and prepared using standard methods. Pinned specimens were photographed with an Olympus E-410 digital camera attached to an Olympus SZX12 microscope. Slide-mounted genitalia were photographed with a Canon EOS 600D digital camera mounted on an Olympus U-CTR30-2 combined with a Carl Zeiss microscope. Sets of 4-7 images were taken of each specimen, montaged using Helicon Focus 6, and edited in Adobe Photoshop CS5. The descriptive terminology of the genitalia structures generally follows Bidzilya and Karsholt (2008) and Huemer and Karsholt (2010). The type material is deposited in the Staatliches Museum für Naturkunde, Karlsruhe, Germany (SMNK).

Spiniphallellus eberti sp. nov.

Diagnosis. The new species is characterized superficially by a greyish brown forewing with black markings. It can be separated from its congeners by the hindwing, which is distally more narrowed. The male genitalia are unique in having a short and broad valva with a lateral process and a well developed distal triangular sclerite of the gnathos. The female genitalia are defined by the presence of distinct medial sclerites on sternum VIII, a strongly sclerotized anterior margin of segment VIII, and a long anterior apophysis. S. fuscescens differs in the more weakly sclerotized anterior margin of sternum VIII, the shorter posterior apophysis, less distinct medial sclerites, and a rounded rather than tubular antrum.
Description. Adult (Figs 1-3). Wingspan 15-17 mm. Head, thorax and tegulae covered with grey scales with light brown tips; labial palpus greyish brown, segment 2 twice as broad as and slightly longer than segment 3, lower surface with short brush of modified scales, apex and upper surface light grey; scape grey with pale apex; flagellum ringed black and grey; forewing narrow, pale greyish brown, with indistinct black spots at base and in middle, sub-costal vein mottled with grey, light grey sub-apical transverse fascia at 3/4 wing length, cilia grey. Hindwing covered with grey, brown-tipped scales, medial third pale grey, distinctly narrowed from base to 3/4 length of wing.

Variation. The female is more unicolorous brown, and the grey pattern on the subcostal vein and the sub-apical fascia are not developed.

Male genitalia (Figs 5, 6). Uncus broadly rounded, posterior margin with long setae; distal sclerite of gnathos short, triangular, strongly edged; tegumen broader than long in middle, anteromedial emargination trapezoidal, about 1/3 length of tegumen; valva about 1.5 times as long as broad, strongly sclerotized, with distinct lateral process, posterior margin weakly serrated and thickened, densely setose, extending to the top of uncus; transtilla lobes reduced; vinculum 2.5 times as broad as long, posterior margin broadly emarginated with narrow drop-shaped medial incision; saccus twice as broad as long, narrowed at base, anterior margin broadly rounded. Caecum as long as and twice as wide as phallus, rounded; distal part of phallus gradually narrowing towards rounded apex; lateromedial process thorn-shaped.

Female genitalia (Fig. 7). Papilla analis sub-ovate, densely covered with short setae; posterior apophysis as long as the ductus bursae; anterior apophysis twice as long as segment VIII; sternum VIII sub-rectangular, slightly broader than long, anterior margin strongly sclerotized, paired narrow ribbon-like gradually curved sclerites extending from posterolateral corner of sternum VIII to sub-rhomboid ostium; antrum short, tubular, as broad as ductus bursae, strongly sclerotized laterally; ductus bursae long, of nearly equal width throughout; corpus bursae sub-oval, elongated; signum a sub-oval plate with serrated margins and a transverse medial ridge, near the entrance of corpus bursae.

Spiniphallellus naumanni sp. nov.

Diagnosis. The new species can hardly be recognized externally without examination of the genitalia. The male genitalia are characterized by a rounded valva densely covered with short, strong setae, a very short and broad saccus, and a phallus with a narrow, weakly S-curved distal portion and a reduced lateral process.

Description. Adult (Fig. 4). Wingspan 15 mm. Head, thorax, tegulae and labial palpus black, segment 2 twice as broad as and slightly longer than segment 3; forewing narrow, plain greyish-brown, with diffuse light brown costal spot at 3/4 wing length, cilia grey. Hindwing light grey.

Male genitalia (Fig. 8). Uncus three times as long as broad, posterior margin weakly rounded, covered with long setae; gnathos reduced; tegumen as broad as long in middle, anteromedial emargination very short; valva rounded, extending to about the tip of uncus, anterolaterally covered with strong setae; transtilla lobes reduced; vinculum 2.5 times broader than long, posterior margin broadly emarginated with very narrow medial incision; saccus four times as broad as long. Caecum rounded; distal part of phallus twice as long as caecum, weakly S-curved and gradually narrowed towards pointed apex, without lateral process.
Female genitalia. Unknown.

Biology. Host plant unknown. The holotype was collected in late July at an elevation of about 3500 m.

Distribution. Afghanistan.

Etymology. The new species is named in honour of one of its collectors, the late Clas M. Naumann, a famous German lepidopterist.

Note. The holotype is rather greasy, a situation often seen in other specimens of Spiniphallellus (Bidzilya and Karsholt 2008). One can argue that a new species should not be based on a single, greasy holotype. Even if the holotype had been in perfect condition, it would probably have added little to the diagnosis of this species. As mentioned above, most Spiniphallellus species are externally similar, with the diagnostic characters being found in the structures of the genitalia. The male genitalia of S. naumanni sp. nov. show some distinct characters which add to our knowledge of the diversity of the genus. A further argument for describing this species is that it is very unlikely that additional material will become available in the foreseeable future, if the distribution of S. naumanni is restricted to the high mountains of Afghanistan.

Discussion

The genus Spiniphallellus was placed in Anomologinae based on the general similarity of male genitalia characters, such as separate sternum VIII and tergum VIII, a tendency to reduction of the gnathos, and short valvae covered with hairs. Within the subfamily the genus was provisionally associated with a group of genera related to Monochroa Heinemann, 1870, namely Eulamprotes Bradley, 1971, Metzneria Zeller, 1839, Ptocheuusa Heinemann, 1870 and Isophrictis Meyrick, 1917 (Bidzilya and Karsholt 2008). However, it was noted that the phallus without cornuti and the well developed uncus of Spiniphallellus are not characteristic of the above group of genera. The discovery of a distinct distal sclerite of the gnathos in S. eberti sp. nov. indicates that Spiniphallellus is less closely related to Monochroa and related genera than was initially argued. The position of the genus within Anomologinae remains rather unclear and may be clarified in the context of a global revision of this subfamily with the application of data obtained from DNA studies. So far, only a DNA barcode for S. chrysotosella (cluster number BOLD:ACW1628) is available, and its placement is uninformative.

In the original description of Spiniphallellus it is stated that the gnathos of the male genitalia is absent. However, a gnathos is at least to some extent present in all species of the genus, but in different stages of reduction. This is true for S. naumanni, which has a reduced distal sclerite of the gnathos, whereas the male genitalia of S. eberti have a short, triangular, strongly edged distal sclerite of the gnathos. In most Lepidoptera families the presence or absence of a gnathos would be considered a character important at genus level, but several genera of Gelechiidae (especially within the Anomologinae and the Litini) show a tendency to reduction of the gnathos and sometimes also the uncus. Based on other characters, S. eberti fits well into Spiniphallellus. The species of Spiniphallellus also vary in the degree of development of the transtilla lobes. This character is represented by slender or broad medially projecting processes in S. desertus, S. fuscescens and S. chrysotosella. The transtilla lobes are reduced in S. stonisi and in both species described here.
2019-07-26T08:08:03.652Z
2019-02-07T00:00:00.000
{ "year": 2019, "sha1": "998356c7e74c2f063fccb01548f65a0c5e589e0d", "oa_license": "CCBY", "oa_url": "https://nl.pensoft.net/article/34484/download/pdf/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a9d45c89af17ba2627f2350b5644ca8665b9c781", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
12016695
pes2o/s2orc
v3-fos-license
KI and WU Polyomaviruses in Patients Infected with HIV-1, Italy

To the Editor: Before 2007, two human polyomaviruses were known to infect humans: BK virus and JC virus (1,2). Recently, 2 novel polyomaviruses, KI polyomavirus (KIPyV) and WU polyomavirus (WUPyV), were identified in the respiratory secretions of children with signs of acute respiratory illness (3,4); little evidence exists to suggest that these viruses are causative agents of respiratory tract disease (5). To determine the prevalence of WUPyV and KIPyV in the plasma of HIV-1-infected patients, we screened 62 persons who were HIV-1 positive by using PCR to detect the 2 viruses. We also conducted phylogenetic analysis of the identified strains.

Plasma specimens were collected at Istituto di Ricovero e Cura a Carattere Scientifico Istituti Fisioterapici Ospitalieri-San Gallicano Institute and Tor Vergata University Hospital, Rome, Italy, from April 2005 through September 2008. Patients were adults (37-54 years of age, median age 45.5 years) and were being treated with antiretroviral drugs. HIV-1 viral load determination, CD4+ counts, and HIV-1 genotyping were performed as part of the routine investigation. Plasma viremia levels ranged from <50 to 2,877,764 copies/mL, and CD4+ counts ranged from 150 to 1,218. Most patients (64.5%) were infected by HIV-1 subtype B. Other subtypes found were F, G, and C.

To date, KIPyV and WUPyV have been detected in respiratory secretions and stool and serum specimens from pediatric patients with acute respiratory symptoms and have been found in respiratory tissue of adults and children (3,4,6,8). Few data are available on the detection and reactivation of these novel polyomaviruses in immunocompromised patients (9,10). In this study, KIPyV and WUPyV sequences were found in 3.2% and 1.6% of HIV-1-infected patients, respectively. None of the patients had respiratory symptoms, so the presence of the 2 viruses in plasma raises the question of whether they play a pathogenic role in immunocompromised patients.
Molecular analysis of the KIPyV and WUPyV identified in plasma showed that these polyomaviruses were not closely related to strains identified previously in other countries, nor to the KIPyVs and WUPyVs identified in Italy in stool, respiratory tract tissue, and tonsils. Whether this difference reflects a tropism of some strains for a particular tissue or organ remains to be established. Further studies are needed to clarify the possible pathogenic role of KIPyV and WUPyV in immunocompromised patients.

Figure legend (phylogenetic tree). The analysis included previously described strains (6,7) and the prototype strains for KIPyV (GenBank accession nos. EF127906, EF127908, EF520288) and WUPyV (GenBank accession nos. EF444549-EF444554, EU711054-EU711058, EU296475, EU358768, and EU358769). GenBank accession numbers for all virus strains are shown in parentheses. Multiple nucleotide sequence alignments were performed by using ClustalX software (http://bips.u-strasbg.fr/fr/documentation/clustalx/#g), and the phylogenetic tree was constructed by using the neighbor-joining algorithm with LogDet-corrected distances (http://paup.csit.fsu.edu/about.html) (8). An asterisk (*) beside a branch represents significant statistical support for the clade subtending that branch (p<0.001 in the zero-branch-length test) and bootstrap support >75%. Scale bar indicates nucleotide substitutions per site.
2017-04-15T09:46:24.943Z
2009-08-01T00:00:00.000
{ "year": 2009, "sha1": "beb7d3b7a1c42545f5f0aab7cf505b1a62925ee6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3201/eid1508.090424", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7cda3904ab2d029deb002f999206c06604f7081f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
37232318
pes2o/s2orc
v3-fos-license
Pisa Syndrome Induced by Rapid Increase and High Dosage of Risperidone

PhD in Sciences, Department of Psychiatry, Faculty of Medicine, University of São Paulo (FMUSP), São Paulo SP, Brazil; Researcher, Institute of Psychiatry, Hospital das Clínicas, FMUSP. MSc and PhD in Sciences, Department of Psychiatry, FMUSP; Researcher, Institute of Psychiatry, Hospital das Clínicas, FMUSP; Clinical Director, Hospital João Evangelista. Associate Professor, Department of Psychiatry, FMUSP.

Pisa syndrome (PS) is a rare condition of acute or tardive dystonia characterized by an involuntary, sustained lateral flexion of the body with the head leaning to one side, creating a "leaning tower" posture. The essential symptoms for the diagnosis of PS are the presence of persistent dystonia of the trunk, with lateral flexion and mildly backward axial rotation of the trunk, the absence of dystonia in other regions of the body, a history of medication preceding or concurrent with the onset of dystonia, the absence of known causes of secondary dystonia, and a negative family history for dystonia (1). The referential items supporting the diagnosis of the syndrome are worsening of the postural abnormality when walking, indifference to the postural abnormality without distress (anosognosia), and improvement of the postural abnormality upon withdrawal of the causal agents (1). Different from other side effects related to antipsychotic treatment (2), there are putative risk factors described for PS, including previous prolonged treatment with typical antipsychotics, combined pharmacological treatment, female gender, old age, and the presence of an organic brain disorder (3). Most PS cases have been described as adverse effects of prolonged exposure to conventional antipsychotics; however, more recently, other drugs, including atypical antipsychotics, have been implicated in the pathophysiology of PS (4).

We describe the case of a patient with no classical risk factors for PS who developed the disorder induced by a rapid increase and high dosage of risperidone.

Case

An 18-year-old man was admitted to our inpatient unit because of a severe psychotic exacerbation. Since the diagnosis of DSM-IV hebephrenic schizophrenia two years earlier, he had been treated with risperidone, at a maximum of 4 mg/day, with good clinical control. However, two months before the admission, the patient discontinued his medication and began to present thought and behavioral disorganization, delusions, auditory hallucinations, agitation, and insomnia.
At the beginning of the admission, he was administered risperidone 4 mg/day and diazepam 30 mg/day for one month. As the clinical features were not improving with this medication, the dosage of risperidone was increased to 6 mg/day and diazepam was switched to clonazepam 6 mg/day with the aim of controlling the episodes of agitation and insomnia. After three weeks on this medication, risperidone was increased to 12 mg/day because of inadequate control of the psychotic symptoms. One week after the risperidone augmentation, an acute dystonic reaction (tonic flexion of trunk and head toward the left along with a slight backward axial rotation) was observed (Figure). Risperidone was immediately discontinued and adjunctive treatment with biperiden, an anticholinergic drug, was initiated. Biperiden was introduced at a dosage of 6 mg/day by oral administration. In addition, in the three days that followed the onset of PS, the patient received 20 mg of biperiden by intramuscular administration. PS completely disappeared within 3 days after risperidone discontinuation and complementary anticholinergic therapy. After that, risperidone was switched to olanzapine, up to a dosage of 10 mg/day, with a good antipsychotic response. On admission, the physical examination revealed no focal neurological deficits. No evidence of other extrapyramidal symptoms was found. The patient had no personal history of drug abuse, no history of head trauma or other neurological problems, and no family history of dystonia or other movement disorders. Secondary dystonias resulting from metabolic disorder, organic disorder, or infection were ruled out.

Discussion

A dysfunction of cerebral dopaminergic pathways, which are strategic in the regulation of axial muscle tone, has been suggested as a possible central factor in PS pathophysiology (5). Risperidone is a selective monoaminergic antagonist with a high affinity for dopaminergic D2 receptors. Blockade of D2 receptors by classical antipsychotics ameliorates the positive symptoms of schizophrenia. However, this blockade is considered responsible for the occurrence of extrapyramidal symptoms. At therapeutic dosages, risperidone's combined serotonin and dopamine antagonism is thought to be responsible for its effectiveness against positive and possibly negative symptoms of schizophrenia and for its lack of extrapyramidal side effects at dosages lower than 6 mg/day. The reported incidence of acute dystonia with risperidone is greater than with placebo at high dosages (16 mg/day), but no greater than with placebo at dosages lower than 6 mg/day (6). In clinical populations, risperidone has been associated with dosage-dependent induction of extrapyramidal adverse effects occurring in the upper dosage range (>6 mg/day) (6,7). Therefore, high dosages of risperidone may be associated with extrapyramidal side effects, such as dystonia (7). In the present report, such a phenomenon may have occurred, as the patient had received a high dosage of risperidone (12 mg/day) before the occurrence of the acute dystonia.

A rapid increase of a dopaminergic antipsychotic may also be involved in the onset of dystonia (7). A previous case of dystonia related to a rapid increase of risperidone dosage has been reported, although without PS manifestation (8). Moreover, other case reports of risperidone-induced PS have also been described (9); nevertheless, in the present case the patient had no history of the putative risk factors described for PS.
Once PS begins, the treatment remains empirical, which reflects the poor understanding of its underlying pathophysiology (10). The first-line treatment for PS has been the reduction in dosage or discontinuation of antipsychotics, while the second-line treatment has been the introduction of an anticholinergic medication, as PS is a side effect caused by central dopaminergic blockade (9).

In the follow-up of patients who presented PS with risperidone, substitution with another atypical antipsychotic that does not have a high affinity for dopaminergic D2 receptors, such as olanzapine, may provide an interesting alternative for treatment, as occurred in the present case report (9).

In summary, clinicians should be aware of rapid upward titration and high dosages of risperidone, because these conditions may precipitate PS even in patients without risk factors for the development of this adverse effect. Once the patient presents PS, the treatment may include the reduction in dosage or discontinuation of the antipsychotic drug, together with the introduction of an anticholinergic medication; during follow-up, drugs with low affinity for dopaminergic D2 receptors should be used.

Figure. Patient presenting tonic flexion of trunk and head toward the left along with a slight backward axial rotation.
2017-07-16T05:14:15.259Z
2008-12-01T00:00:00.000
{ "year": 2008, "sha1": "c788faa5de536fe67e198a9f19abd58a245f305b", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/anp/a/sbLcxSNDjZ5tSkF4HGNg5qj/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c788faa5de536fe67e198a9f19abd58a245f305b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
86403190
pes2o/s2orc
v3-fos-license
Estimation of Olfactory Sensitivity Using a Bayesian Adaptive Method

The ability to smell is crucial for most species as it enables the detection of environmental threats like smoke, fosters social interactions, and contributes to the sensory evaluation of food and eating behavior. The high prevalence of smell disturbances throughout the life span calls for a continuous effort to improve tools for quick and reliable assessment of olfactory function. Odor-dispensing pens, called Sniffin' Sticks, are an established method to deliver olfactory stimuli during diagnostic evaluation. We tested the suitability of a Bayesian adaptive algorithm (QUEST) to estimate olfactory sensitivity using Sniffin' Sticks by comparing QUEST sensitivity thresholds with those obtained using a procedure based on an established standard staircase protocol. Thresholds were measured twice with both procedures in two sessions (Test and Retest). Overall, both procedures exhibited considerable overlap, with QUEST displaying slightly higher test-retest correlations, less variability between measurements, and reduced testing duration. Notably, participants were more frequently presented with the highest concentration during QUEST, which may foster adaptation and habituation effects. We conclude that further research is required to better understand and optimize the procedure for assessment of olfactory performance.

Introduction

The appreciation of food involves all senses: sight, smell, taste, touch, and also hearing. While the sight of a cup of coffee may indicate its availability, it is typically its smell that makes it appealing and that triggers an appetite for most people. During consumption, the smell or aroma is perceived again retronasally and supported by its pleasant temperature and a bitter taste. These largely parallel sensations occur automatically and only raise awareness when one or more senses are disturbed. That said, the sense of smell has been shown to influence food choice and eating behavior [1], and its impairment has even been associated with a higher risk for diet-related diseases like diabetes [2]. Moreover, olfactory stimuli can invoke emotional states, are linked to memory storage and retrieval, and as such also serve as important cues for the rapid detection of potentially dangerous situations and threats (see e.g., [3,4]). Given that the estimated prevalence of smell impairment is 3.5% in the United States [5], continuous efforts are made toward an efficient and precise assessment of olfactory function.

The Sniffin' Sticks test suite (Burghart, Wedel, Germany; [6]) is an established tool in the assessment of olfactory function. It consists of three tests involving sets of impregnated felt-tip pens: odor detection threshold (T), odor discrimination (D), and odor identification (I). Each test produces numbers in the range from 1 to 16 (T) or from 0 to 16 (D and I) as a performance measure. Overall olfactory function is assessed by summing all three test results, resulting in the TDI score. Comparison of individual TDI scores to the comprehensive set of available normative data (e.g., [7,8]) facilitates the evaluation of an individual's olfactory function.

Participants

36 participants (32 women; median age: 29.5 years, age range: 19-61 years) completed the study. The influence of gender on olfactory performance has been investigated in previous studies.
The results typically showed no gender differences (e.g., [15], several hundred participants; [7], >3000 participants, no main effect) or only rather small differences with negligible diagnostic and real-world relevance (e.g., [8], >9000 participants). We therefore did not enforce a gender balance in our sample. Due to a technical error, the identification test data was not recorded for one participant (female, 26 years old). All participants were non-smokers and reported being healthy and not having suffered from an infectious rhinitis for at least two weeks before testing. The study conformed to the revised Declaration of Helsinki and was approved by the ethical board of the German Society of Psychology (DGPs).

Stimuli

Stimuli were so-called Sniffin' Sticks (Burghart, Wedel, Germany; [6]), felt-tip pens filled with an odorant. The Sniffin' Sticks test battery consists of three subtests: an odor threshold test, an odor discrimination test, and an odor identification test. The threshold test comprises 48 pens. There were 16 pens filled with different concentrations of 2-phenylethanol (rose-like smell) ranging from 4% to approx. 1.22 × 10^-4% (a geometric sequence with a common ratio of 2, so the first pen contained a 4% dilution, the second 2%, the third 1%, and so on), dissolved in 4% propylene glycol, an odorless solvent. Note that in this test, the 1st pen contained the highest and the 16th pen the lowest odorant concentration. The remaining 32 pens contained 4% propylene glycol and served as blanks. The pens were arranged in triplets such that each triplet contained one pen with odorant and two blanks. The discrimination test comprised 48 pens that were filled with 16 different odorants at supra-threshold concentrations. The pens were arranged in triplets such that two pens contained the same and one pen a different odorant. The identification test comprised 16 pens filled with different odorants at supra-threshold concentrations.

Experimental Sessions

Participants were invited for two experimental sessions, the Test and the Retest session for the odor threshold. To ensure similar testing conditions across sessions, participants were instructed to refrain from eating and drinking anything but water 30 min before visiting the laboratory. Further, both sessions were scheduled at approximately the same time of day, and took place with a median inter-session interval of 3.0 days (SD = 2.6, range: 0.9-8.9 days); only four participants had an inter-session interval of more than 7.0 days. In each session, olfactory detection thresholds were determined using two distinct algorithms, staircase and QUEST, described below. The order of algorithms was balanced across participants and kept constant for Test and Retest within each participant. Additionally, odor discrimination and odor identification ability were measured at the end of one session following the standard Sniffin' Sticks protocol (Burghart, Wedel, Germany).

Stimulus Presentation

Testing took place in a well-ventilated testing room and was performed by the same experimenter, who refrained from using any fragrant products (e.g., soap, lotion, perfume, etc.) and wore odorless cotton gloves when presenting the stimuli. At the beginning of each test session, participants were blindfolded. To present a stimulus, the experimenter removed the cap from the pen, held the tip of the pen in front of the participant's nose, approx. 2 cm from the nostrils, and asked the participant to take a sniff.
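To make the dilution series concrete, the following sketch computes the 16 concentrations implied by the description above (our illustration, not part of the original test documentation; the values follow directly from the stated 4% starting dilution and the common ratio of 2):

```python
# Sketch: the geometric dilution series of the threshold pens described above.
# Pen no. 1 holds the highest concentration (4%); each subsequent pen holds half.
concentrations = [4.0 / 2 ** (n - 1) for n in range(1, 17)]

print(concentrations[0])    # 4.0       -> pen no. 1 (highest concentration)
print(concentrations[15])   # ~1.22e-04 -> pen no. 16 (lowest concentration)
```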
For the threshold test, participants were blindfolded and informed that the odorant may be presented in very low concentrations, and that only one of the three pens presented in each trial contained the odorant, while the others contained the solvent exclusively. The task was to "indicate which of the three pens smells different from the others", and participants had to provide a response even when unsure. Participants were familiarized with the odorant by presenting pen no. 1 (highest concentration) before testing commenced. A similar procedure was used for the discrimination test: participants were blindfolded and presented with a triplet of pens containing clearly perceivable odorants. Each triplet consisted of two pens with the same and one pen with a different odorant. Again, participants were to indicate the pen that smelled different from the others. During threshold and discrimination testing, stimulus triplets were presented during each trial, which lasted approx. 30 s and included the presentation of three pens (approx. 3 s each) and a pause of 20 s. These tests yield a probability of 1/3 of guessing correctly. For the identification task, the blindfold was removed and participants smelled one pen at a time. They were to identify the odor by pointing to the matching word on a response sheet with four written response options. The interval between pens was approx. 30 s. The probability of guessing correctly in this task was 1/4.

Staircase

Following the standard protocol (as detailed in the test manual; see also [16]), the order of presentation within the triplets varied from trial to trial. In the first trial, the odor pen was presented first; in the second trial, it was presented between two blanks; and in the third, after two blanks. After the third trial, this sequence was repeated. We first determined the starting concentration. Beginning with the presentation of triplet no. 16 or 15 (balanced across participants), participants had to indicate which of the pens smelled different. Concentration was increased in steps of two (e.g., from pen 16 to 14) for each incorrect response. Once participants provided a correct response, the same triplet was presented again. If the response was incorrect, the concentration was increased again by two steps as before. However, if the triplet was correctly identified a second time, that dilution step served as the starting concentration. Contrary to the standard protocol, where testing would then continue without interruption, our participants were granted a short break of approx. 1 min before the actual threshold estimation started with the presentation of the triplet containing the starting concentration. The threshold was determined in a one-up/two-down staircase procedure: odor concentration was increased by one step after each incorrect response (one-up), and decreased by one step after two consecutive correct responses at the same concentration (two-down). This kind of staircase targets a threshold of 70.71% correct responses ([11]; but cf. [17], who found small deviations from this value). That is, if presented repeatedly with a stimulus at threshold intensity, participants would be able to correctly identify it in about 71 out of 100 cases. The probability of providing two consecutive correct responses purely by guessing is 1/3 × 1/3 = 1/9. The procedure finished after seven reversal points were reached. The final threshold estimate was the mean of the last four reversal concentrations.
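The one-up/two-down rule with the seven-reversal stopping criterion can be summarized in a short sketch (a minimal simulation for illustration, not the authors' implementation; get_response is a hypothetical callback that returns True when the participant correctly identifies the odor pen):

```python
def staircase_threshold(start_pen, get_response):
    """One-up/two-down staircase over pen numbers 1..16 (pen 1 = highest
    concentration). Raises the concentration (pen - 1) after each incorrect
    response, lowers it (pen + 1) after two consecutive correct responses,
    and stops after seven reversals; the threshold estimate is the mean of
    the last four reversal pen numbers, as in the protocol described above."""
    pen, streak, last_move, reversals = start_pen, 0, None, []
    while len(reversals) < 7:
        if get_response(pen):              # correct identification
            streak += 1
            if streak < 2:
                continue                   # present the same triplet again
            move = +1                      # two correct: lower concentration
        else:
            move = -1                      # incorrect: raise concentration
        streak = 0
        if last_move is not None and move != last_move:
            reversals.append(pen)          # direction change = reversal point
        last_move = move
        pen = min(16, max(1, pen + move))  # clamp to the available pens
    return sum(reversals[-4:]) / 4
```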
This procedure is referred to simply as staircase throughout this manuscript.

QUEST

QUEST requires setting parameters that describe the assumed psychometric function linking stimulus intensity and expected response behavior. We assumed a sigmoid psychometric function of the Weibull family, as proposed by [12] (albeit in a slightly different parametrization) and used for gustatory testing [13], with a slope β = 3.5, a lower asymptote γ = 1/3 (the chance of a correct response just by guessing), and a parameter λ = 0.01 to account for lapses (response errors due to momentary fluctuations of attention):

Ψ(x) = γ + (1 − γ − λ) · (1 − exp(−10^(β(x − T)))).

Here, the presented concentration is denoted as x, and the assumed threshold as T. This yielded a function extending from 0.33 to 0.99 in units of "proportion of correct responses". The granularity of the concentration grid was set to 0.01. All parameters of this function were constant, except for the threshold, which was the parameter of interest that was going to be estimated in the course of the procedure. The prior estimate of the threshold was a normal distribution with a standard deviation of 20, centered on the concentration of pen no. 7, which was used as the starting concentration. The algorithm was set to target the threshold at 80% correct responses, which is slightly higher than the threshold target in the staircase procedure, but had proven to produce good results both in pilot testing and in gustatory threshold estimation [13,14]. Unlike in the staircase procedure, where the order of pen presentation varied systematically from triplet to triplet, triplets were presented in random order during the QUEST procedure. Notably, QUEST updates its knowledge of the expected threshold after each response and proposes the concentration to present in the next trial such that it maximizes the expected information gain about the "true" threshold. As the set of concentrations was discrete and limited to 16, QUEST might propose concentrations other than those contained in the test set. In this case, the software selects the triplet with the concentration closest to the one proposed. In contrast to the staircase, where the concentration was always decreased or increased by a single step after the starting concentration had been determined, the step width was not fixed in QUEST. For example, QUEST might step up three concentrations in one trial, step down two in the next, and present the exact same concentration again in the following trial. Whenever the same concentration had been presented on two consecutive trials, the concentration for the next trial was decreased if both responses were correct, and increased if both responses were incorrect. QUEST might suggest presenting concentrations outside the range of available dilution steps. Therefore we set up the algorithm such that, whenever the presentation of a pen below 1 or above 16 was suggested, we would instead present pen no. 1 or 16, respectively. QUEST would be informed about the actually presented pen, and incorporate this information into the threshold estimate. Note, however, that final threshold estimates outside the concentration range could still occur occasionally, and needed to be dealt with accordingly; see the data cleaning paragraph in the next section for details. The procedure ended after 20 trials. The final threshold estimate is the mean of the posterior probability density function of the threshold parameter. We will refer to this procedure as "QUEST".
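A grid-based sketch of the QUEST logic follows (an illustration under simplifying assumptions: an abstract intensity axis where larger values mean a stronger stimulus, an arbitrary prior width and grid range, and next-trial selection by minimizing expected posterior entropy; the study's actual axis scaling, prior SD of 20, and software implementation are not reproduced here):

```python
import numpy as np

BETA, GAMMA, LAMBDA = 3.5, 1 / 3, 0.01        # slope, guess rate, lapse rate

def weibull(x, t):
    """P(correct) for intensity x given threshold t; spans 0.33..0.99."""
    return GAMMA + (1 - GAMMA - LAMBDA) * (1 - np.exp(-10.0 ** (BETA * (x - t))))

grid = np.arange(-10.0, 10.0, 0.01)           # candidate thresholds
posterior = np.exp(-0.5 * (grid / 2.0) ** 2)  # Gaussian prior on the threshold
posterior /= posterior.sum()

def quest_step(posterior, x, correct):
    """Update the posterior after one trial, then choose the next intensity
    that minimizes the expected entropy of the posterior (information gain)."""
    lik = weibull(x, grid) if correct else 1.0 - weibull(x, grid)
    posterior = posterior * lik
    posterior /= posterior.sum()
    best_x, best_h = None, np.inf
    for x_next in np.linspace(grid[0], grid[-1], 41):
        p_correct = (weibull(x_next, grid) * posterior).sum()
        h = 0.0
        for resp, p_resp in ((True, p_correct), (False, 1.0 - p_correct)):
            lik_r = weibull(x_next, grid) if resp else 1.0 - weibull(x_next, grid)
            post_r = posterior * lik_r
            post_r /= post_r.sum()
            h -= p_resp * (post_r * np.log(post_r + 1e-12)).sum()
        if h < best_h:
            best_x, best_h = x_next, h
    return posterior, best_x

posterior, next_x = quest_step(posterior, x=0.0, correct=True)
estimate = (grid * posterior).sum()           # posterior mean, as in the text
```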
Analysis

Odor Discrimination and Identification

The discrimination and identification tests comprised 16 trials each. For each test, the number of correct responses was summed up, resulting in a test score which can range from 0 to 16. Together with the staircase threshold, which yielded values from 1 to 16, the sum of all three test results formed a cumulative score: the TDI score.

Data Cleaning

When a participant reached one of the most extreme concentrations (i.e., pen no. 1 or 16) and provided a response that would, theoretically, require us to present a concentration outside the stimulus set, the staircase procedure cannot be safely assumed to yield a reliable threshold estimate anymore. For example, if a participant fails to identify the highest concentration (pen no. 1), the staircase procedure would accordingly demand to present a hypothetical pen no. 0, which obviously does not exist. Since our sole termination criterion was "seven reversals", we would repeatedly present pen no. 1 until a correct identification allowed the procedure to move up to pen no. 2 again. The resulting threshold estimate, then, would systematically overestimate this participant's sensitivity. Therefore we set the threshold values of staircase runs in which participants could not identify pen no. 1 at least once to T = 1 after the run was completed, following [7] (but cf. [16], who suggest setting the value to T = 0 instead). This was the case in five out of the 72 staircase threshold measurements (two during Test, three during Retest; five participants affected). Conversely, if a participant were to correctly identify the lowest concentration (pen no. 16), the staircase procedure would require the presentation of a hypothetical pen no. 17, in which case we would have assigned a threshold value of T = 16; however, this situation did not occur in the present study after the starting concentration had been determined. For QUEST, pen no. 1 was not correctly identified at least once in 12 of the 72 measurements, concerning 11 participants; no participant reached and correctly identified pen no. 16. QUEST yielded final threshold estimates T < 1 in 11 measurements (8 during Test, 3 during Retest; 10 participants affected). Similarly to the data cleaning procedure for the staircase, we assigned a threshold of T = 1 in these cases. Notably, this again concerned 3 of the 5 participants for whom we had assigned T = 1 in a staircase experiment.

Test-Retest Reliability

To establish test-retest reliability, we first compared the means of Test and Retest thresholds for each procedure. Q-Q plots and Shapiro-Wilk tests revealed that thresholds were not normally distributed for the QUEST Test session (W = 0.90, p < 0.01); we therefore compared the means using non-parametric Wilcoxon signed-rank tests. We then correlated Test and Retest threshold estimates via Spearman's rank correlation (Spearman's rho, denoted as ρ) to estimate the degree of monotonic relationship between measurements. Ordinary least squares (OLS) models were used to fit regression lines to provide a better understanding of the nature of the relationship between the threshold estimates (i.e., whether Test thresholds could predict Retest thresholds). Q-Q plots and Shapiro-Wilk tests showed that the regression residuals were normally distributed (all p > 0.05) and thus satisfied an important requirement for OLS regression.
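The data-cleaning rules described above amount to a simple clamping step (a sketch; function and variable names are ours):

```python
def clean_threshold(t_raw, pen1_ever_correct, procedure):
    """Apply the cleaning rules from the text: staircase runs in which pen
    no. 1 was never identified correctly are assigned T = 1 (following [7]),
    and QUEST estimates below the concentration range (T < 1) are likewise
    set to T = 1."""
    if procedure == "staircase" and not pen1_ever_correct:
        return 1.0
    if procedure == "quest" and t_raw < 1.0:
        return 1.0
    return t_raw
```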
Although correlation and regression analyses are widely used to assess test-retest reliability and to compare methods, it has been argued that these measures may in fact be inappropriate (see e.g., [18-20]). Instead, analyses that focus on the differences between measurements, not on their correlation, should be preferred. A possible approach is to calculate the mean difference d̄ and the standard deviation of the differences between two measurements to derive the limits of agreement, d̄ ± 1.96 × SD [18]. These limits correspond to the 95% confidence interval. This means that in 95 out of 100 comparisons, the difference between two measurements can be expected to fall into this range. Narrower limits of agreement indicate a better agreement between two measurements. The related repeatability coefficient (RC) was simply 1.96 × SD, and its interpretation was very similar to the limits of agreement: only 5% of absolute measurement differences will exceed this value, and a smaller RC indicates better agreement. (It should be noted that an alternative method for calculating the repeatability coefficient has been suggested, based on the within-participant standard deviation, s_w [20]. The results we obtained from these calculations were similar to those based on the standard deviation of the measurement differences. Because the latter are directly visualized in the Bland-Altman plot by the limits of agreement, i.e., mean difference ± 1.96 × SD, we opted to only report these values.) If the differences between two measurements are plotted over the mean of the measurements, and d̄ and the limits of agreement are added as horizontal lines, the resulting plot is called a Bland-Altman plot (sometimes also referred to as a Tukey mean difference plot). It can be used to quickly visually inspect how well measurements can be reproduced, specifically which systematic bias (d̄ ≠ 0) and which variability or "spread" of measurement differences to expect. Accordingly, we assessed the RC and limits of agreement, and produced Bland-Altman plots for both methods, staircase and QUEST, to gain more insight into the repeatability (or lack thereof) of measurements for each method. The use of these analyses requires the measurement differences to be normally distributed, which we confirmed using Q-Q plots; Shapiro-Wilk tests failed to reject the null hypothesis of normal distributions (all p > 0.05). Confidence intervals for the limits of agreement were calculated using the "exact paired" method [21]. Lastly, to test whether the duration of the inter-session interval might be a confounding factor in the threshold estimates, we also calculated the Spearman correlation between inter-session intervals and differences between Test and Retest thresholds.

Comparison between Procedures

To compare the threshold estimates across procedures, we averaged Test and Retest thresholds for each participant within a procedure, and, similarly to the analysis of reliability, compared the means with a Wilcoxon signed-rank test, followed by the calculation of Spearman's ρ and the fit of a regression line using an OLS model. The regression residuals were normally distributed, according to a Q-Q plot and a Shapiro-Wilk test (W = 0.96, p = 0.26), satisfying the normality assumption of errors on which OLS regression critically relies. Additionally, we estimated the 95% limits of agreement from the differences between the within-participant session means for the two procedures, and generated Bland-Altman plots.
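The agreement statistics described above reduce to a few lines (a sketch; a and b stand for paired arrays of Test and Retest thresholds):

```python
import numpy as np

def limits_of_agreement(a, b):
    """Mean difference, 95% limits of agreement, and repeatability
    coefficient (RC = 1.96 x SD of the differences) for paired measurements."""
    d = np.asarray(a, float) - np.asarray(b, float)
    mean_d = d.mean()
    rc = 1.96 * d.std(ddof=1)     # SD of the differences, times 1.96
    return mean_d, (mean_d - rc, mean_d + rc), rc
```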
The measurement differences were normally distributed, according to a Q-Q plot and a Shapiro-Wilk test (W = 0.96, p = 0.30). As in the investigation of test-retest reliability, we assessed confidence intervals of the limits of agreement via the "exact paired" method [21]. Because the limits of agreement derived from session means might actually be too narrow, as within-participant variability is removed by averaging measurements across sessions [20], we calculated adjusted limits of agreement from the variance of the between-subject differences, σ_d², which in turn can be calculated as σ_d² = s_d̄² + 0.5 s_xw² + 0.5 s_yw². Here, s_d̄² is the variance of the differences between the session means; and s_xw² and s_yw² are the within-participant variances of methods x and y, respectively (staircase and QUEST in our case). The limits of agreement can then be calculated as d̄ ± 1.96 × σ_d, with d̄ being the mean difference between the session means of both procedures. Again, the interpretation of these limits is straightforward: 95% of the differences between staircase and QUEST measurements can be expected to fall into this interval, and narrower limits indicate a better agreement across the measurement results produced by both procedures. Finally, we derived 95% confidence intervals for these limits ([20], Section 5.1, Equation (5.10)).

Results

Odor Discrimination and Identification

The average test score was 13.3 (SD = 1.5, range: 11-16; N = 35) for odor discrimination, and 13.0 (SD = 1.6, range: 11-16; N = 36) for odor identification. When summed with the staircase threshold estimates from the Test and Retest sessions, we observed TDI scores of 33.34 (SD = 3.8; range: 26.5-43) and 33.64 (SD = 3.8; range: 26.75-41.75), respectively. Individual as well as cumulative scores indicate a below-average ability to smell (roughly around the 25th percentile) in our sample compared to recent normative data from over 9000 subjects [8].

Starting Concentrations

The average starting concentration was pen no. 9.9 (SD = 4.2, range: 1-16) for the Test and 9.6 (SD = 4.1, range: 1-16) for the Retest session of the staircase. The average difference in starting concentrations between sessions was 4.9 (SD = 4.0, range: 0-15). In comparison, we used a slightly higher, fixed starting concentration of pen no. 7 for QUEST.

Test Duration

The average number of trials needed to complete the staircase measurements was 23.6 (SD = 4.8, range: 13-41), which translates to approx. 11.5 min and is 2 minutes longer than for QUEST, which per our parameters always lasted 9.5 minutes (20 trials). Test duration varied slightly between staircase sessions and was 24.4 trials (SD = 4.2, range: 16-34) for the Test and 22.9 trials (SD = 5.4, range: 13-41) for the Retest session. Please note that the number of trials and the testing duration for the staircase are based on the time required to reach seven reversal points after the starting concentration had been determined, thereby deviating from the "standard" procedure, which treats the starting concentration as the first reversal.

As already pointed out, correlation gives an indication of the strength of the monotonic relationship between values, but only provides limited information on their agreement. We therefore calculated the repeatability coefficient RC and created Bland-Altman plots to generate a better understanding of the measurement differences. The prediction of the RC is that two measurements (Test and Retest) will differ by the value of RC or less for 95% of participants.
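The adjusted limits of agreement described in the analysis above can be sketched as follows (x1, x2 are the two sessions of one method, y1, y2 of the other; the within-participant variances are estimated from the duplicate measurements):

```python
import numpy as np

def adjusted_limits(x1, x2, y1, y2):
    """Limits of agreement between two methods, each measured twice, using
    sigma_d^2 = s_dbar^2 + 0.5*s_xw^2 + 0.5*s_yw^2 as given in the text."""
    x1, x2, y1, y2 = (np.asarray(v, float) for v in (x1, x2, y1, y2))
    d_bar = (x1 + x2) / 2 - (y1 + y2) / 2   # per-participant session-mean differences
    s2_dbar = d_bar.var(ddof=1)
    s2_xw = ((x1 - x2) ** 2 / 2).mean()     # within-participant variance, method x
    s2_yw = ((y1 - y2) ** 2 / 2).mean()     # within-participant variance, method y
    sigma_d = np.sqrt(s2_dbar + 0.5 * s2_xw + 0.5 * s2_yw)
    m = d_bar.mean()
    return m - 1.96 * sigma_d, m + 1.96 * sigma_d
```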
We found that RC was about 16% smaller for QUEST than for the staircase (RC_Staircase = 6.44, RC_QUEST = 5.43), suggesting a slightly better agreement between Test and Retest measurements for the QUEST procedure. Accordingly, the Bland-Altman plot (Figure 2B) shows the corresponding limits of agreement ([3.46, 6.29]; 95% CIs in brackets). The mean of the differences between measurements was relatively small and deviated less than 1 T unit from zero (the "ideal" difference) for both methods (M_ΔT,Staircase = −0.35 [−1.43, 0.72]; M_ΔT,QUEST = −0.99 [−1.89, −0.08]). This systematic negative shift indicates that participants, on average, reached higher T units in the second session than in the first. The differences between Test and Retest measurements for three (staircase) and two participants (QUEST), respectively, fell outside their respective limits of agreement, which roughly corresponds to the expected proportion of 5% outliers (3/36 = 8.3%; 2/36 = 5.6%), demonstrating the appropriateness of the estimated limits. Considering the confidence intervals of the limits of agreement, an equal number of measurement differences (four) fell outside the predicted range for both procedures. To test whether the time between Test and Retest sessions might be linked to the observed differences between Test and Retest threshold estimates, we computed correlations between those measures. We found no relationship for either method (staircase: ρ(34) = −0.12, p = 0.50; QUEST: ρ(34) = 0.03, p = 0.85).

Figure 2. (A) Correlation between Test and Retest threshold estimates for the staircase and QUEST procedures. (B) Bland-Altman plots showing mean differences between Test and Retest, and limits of agreement corresponding to 95% confidence intervals (CIs) as mean ± 1.96 × SD. The shaded areas represent the 95% CIs of the mean and the limits of agreement. Each dot represents one participant.

Comparison between Procedures

Although the threshold estimates, averaged across sessions, were significantly higher for the staircase than for QUEST (M_Staircase = 7.0, SD_Staircase = 2.7; M_QUEST = 5.7, SD_QUEST = 3.3; W = 101.0, p < 0.001; Figure 3A), we found a strong correlation between the procedures (ρ(34) = 0.80, p < 0.001; Figure 3B). The regression slope was close to 1, providing an indication of agreement across procedures. The Bland-Altman plot based on the session means (Figure 3C) shows a systematic difference between both procedures; specifically, QUEST thresholds were, on average, 1.38 [0.78, 1.97] T units smaller than the staircase estimates (95% CIs in brackets). The limits of agreement reached from −2.20 [−3.37, −1.56] to 4.95 [4.31, 6.12], meaning the difference between the two procedures will fall into this range for 95% of measurements. For only one participant did the observed differences between staircase and QUEST fall outside the limits of agreement (1/36 = 2.8%); when considering the CIs of the limits, three participants fell outside the expected range (3/36 = 8.3%). The corrected limits of agreement, taking into account individual measurements (as opposed to session means only), were −4.20 [−23.6, 15.3] and 6.96 [−12.5, 26.4], which are substantially wider than the uncorrected limits. The large confidence intervals, which extend even beyond the concentration range, reflect the relatively large within-participant variability across sessions in both threshold procedures.
Discussion

In the present study we used a QUEST-based algorithm to estimate olfactory detection thresholds for 2-phenylethanol, with the aim of providing a reliable test result with reduced testing time, as had recently been demonstrated for taste thresholds [13]. The results were compared to a slightly modified version of the widely used testing protocol based on a one-up/two-down staircase procedure [6,7,9,15,16].

Test-retest reliability was assessed using multiple approaches. Comparison of Test and Retest thresholds revealed a small yet significant mean difference for QUEST: threshold estimates during Retest were higher than during Test, indicating an increase in participants' sensitivity. A similar effect was reported in a previous study [6]. However, with a mean difference of approx. 1 T unit or pen number, the practical relevance of this effect is debatable, even more so when considering the large variability of measurement results within individual participants. To acknowledge previous criticism of correlation analysis (which captures the association between measurements, but not their differences [18-20]), we calculated repeatability coefficients and generated Bland-Altman plots for the analysis of session differences. Repeatability was higher for QUEST than for the staircase; however, measurement results of both procedures varied considerably across sessions for many participants. This inter-session variability is further substantiated by the differences in starting concentrations assessed for the staircase, which varied by up to 15 pen numbers in the most extreme case. The effect was not universal: some participants performed better in the Test than in the Retest session, whereas for others performance dropped across sessions or remained almost unchanged. Since both sessions had been scheduled within a relatively short time period and all measurements had been performed by the same experimenter, the measurement variability can mostly be attributed to variability within the participants themselves.

The comparison of the staircase and QUEST procedures via the session means of each participant showed that the staircase yielded slightly higher pen numbers (i.e., lower thresholds) than QUEST. This was expected, as the procedures were assumed to converge at approx. 71% and 80% correct responses, respectively. We found a strong correlation between the session means of the procedures (ρ = 0.80), and regression analysis showed an almost perfect linear relationship, which some would interpret as a good agreement between QUEST and staircase results. The 95% limits of agreement, taking into account the within-participant variability, showed a large expected deviation between both procedures (range: QUEST thresholds almost 7 T units smaller or more than 4 T units greater than staircase results), with the corresponding CIs of those boundaries even exceeding the concentration range. This result is indicative of the large variability we found within participants in both procedures. The limits of agreement based on the within-participant session means were much narrower, as variability is greatly reduced through averaging.

A potential source of variability might be guessing. In fact, the probability of responding correctly merely by guessing is 1/3. In a series of simulations, it could be shown that with an increasing number of trials the frequency of correct guesses might become unacceptably high, potentially leading to increased variability in the threshold estimates [30].
The author determined that, for a staircase procedure like the one in our study, the expected proportion of such false-positive responses exceeds 5% with the 23rd trial. For our staircase experiments, the average number of trials was 23.6, and the procedure finished after 23 or more trials for 24 of the 36 participants in the Test and for 20 participants in the Retest session. Therefore, the large variability between Test and Retest threshold estimates in the staircase could, at least partially, be ascribed to correct guesses "contaminating" the procedure. However, QUEST, which always finished after 20 trials, had only slightly better test-retest reliability according to the repeatability coefficient, suggesting that the largest portion of test-retest variability in our investigations was probably not caused by (too) long trial sequences and the related false-positive responses alone.

Surprisingly, a number of participants were unable to correctly identify pen no. 1 on at least one occasion, and this effect was more pronounced during QUEST than during the staircase. It seems plausible that the variable step size used by QUEST made it possible to approach even the extreme concentration ranges quickly, whereas the staircase requires a longer sequence of incorrect responses to reach pen no. 1. Despite careful selection of healthy participants who reported no smell impairment, olfactory performance was lower than recently reported in a sample comprising over 9000 participants [8]. This coincidental finding highlights the need for comprehensive smell screening before enrollment. To what extent olfactory function contributed to the present results and limits their generalizability remains to be explored.

All QUEST runs completed after 20 trials. The procedure could be further optimized by introducing a dynamic stopping rule. For example, [13] set the algorithm to terminate once the threshold estimate had reached a certain degree of confidence. Such a rule can reduce testing time, as a run may finish in fewer than 20 trials, and should be considered in future studies. Although the reduction or omission of a minimum trial number has the potential to reduce testing time further, it first needs to be shown that the algorithm performs well under these conditions; most importantly, large-scale studies need to show whether such a reduced or faster protocol is appropriate for assessing odor sensitivity in participants with odor abilities at the extremes (particularly insensitive or highly sensitive participants).

Inspection of the data showed that some staircase runs had not fully converged although seven reversal points were reached. In these cases, participants exhibited a somewhat "fluctuating" response behavior (or threshold) that caused the procedure to move in the direction of higher concentrations throughout the experiment (see Figure A1 in the appendix and the supplementary data for an example). QUEST proved to behave more consistently, at least in some cases, by either converging to a threshold or by reaching pen no. 1, which would then sometimes not be identified correctly. These interesting differences between the procedures require further investigation to fully understand their cause and influence on threshold estimates and, ultimately, diagnostics.

Conclusions

The present study compared the reliability of olfactory threshold estimates obtained with two different algorithms: a one-up/two-down staircase and a QUEST-based procedure. The measurement results of both procedures showed considerable overlap.
QUEST thresholds were more stable across sessions than staircase thresholds, as indicated by a smaller variability of test-retest differences and a higher correlation between session estimates. QUEST offered a slightly reduced testing time, which may be further minimized through a variable stopping criterion. Yet QUEST also tended to present the highest concentration, pen no. 1, more quickly than the staircase, which may induce more rapid adaptation and habituation during the procedure and, eventually, produce biased results. Further research is needed to better understand the possible advantages and drawbacks of the QUEST procedure compared to the staircase testing protocol.

Data and Software Availability

The data analyzed in this paper, along with graphical representations of each individual threshold run, are available from https://doi.org/10.5281/zenodo.2548620. The authors provide a hosted service for running the presented experiments online at https://sensory-testing.org; the sources of this online implementation can be retrieved from https://github.com/hoechenberger/webtaste.
2019-01-31T11:57:48.287Z
2019-01-28T00:00:00.000
{ "year": 2019, "sha1": "56027ba2146008bb51bcd1dcd3605d5dc53118ed", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/11/6/1278/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "288810d5fa6c513886632d201356d0000914d2c8", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
1298906
pes2o/s2orc
v3-fos-license
Asymptotics of class numbers

A "simple trace formula" is used to derive an asymptotic result for class numbers of complex cubic orders.

Introduction

For an order O in a number field let h(O) denote its class number and R(O) its regulator. Proving a conjecture of C.F. Gauss, C.L. Siegel showed in [29] that

∑_{d(O) ≤ x} h(O) R(O) ~ (π² / (18 ζ(3))) x^{3/2}  as x → ∞,

where the sum ranges over the set of all real quadratic orders (i.e., orders in real quadratic fields) with discriminant d(O) bounded by x. For a long time it was believed to be impossible to separate the class number and the regulator. However, in 1981 P. Sarnak showed [28], using the trace formula, that

∑_{R(O) ≤ x} h(O) ~ e^{2x} / (2x)  as x → ∞,

the sum ranging over all real quadratic orders with regulator bounded by x. Sarnak established this result by identifying the regulators with lengths of closed geodesics of the modular curve H/SL_2(Z) (Theorem 3.1 there) and by using the prime geodesic theorem for this Riemann surface. Actually, Sarnak proved not this result but the analogue where h(O) is replaced by the class number in the narrower sense and R(O) by a "regulator in the narrower sense". But in Sarnak's proof the group SL_2(Z) can be replaced by PGL_2(Z), giving the above result. See also [11,32].

Our goal is to generalize Sarnak's result to number fields of higher degree. Such a generalization has resisted all efforts so far, since the trace formula is not yet in a state that would make it useful in spectral geometry. For instance, as yet a proof of the absolute convergence of the spectral side of the trace formula is outstanding. However, recent partial results by W. Müller are sufficient for the case treated in this paper.

We will now formulate the main theorem. Since there are several concepts of class numbers, we have to make clear which one we use. Let O be an order in a number field F. Let I(O) be the set of all finitely generated O-submodules of F. According to the Jordan-Zassenhaus Theorem [26], the set of isomorphism classes of modules in I(O) is finite; its cardinality is the class number h(O). A cubic field F (i.e., a number field of degree 3 over the rationals) is either totally real or has two complex and one real embedding, in which case we call it a complex cubic field. Let O be the set of isomorphism classes of orders in complex cubic fields. The following is our main result.

Our method is based on a new "simple trace formula". Such formulae have been used in the past by various authors, for instance by Deligne-Kazhdan or Kottwitz. They come about by plugging special test functions into Arthur's trace formula. The test functions are chosen such that many terms in the trace formula vanish. The simple trace formula of this paper will be such that the geometric side only consists of orbital integrals of globally elliptic elements, as opposed to locally elliptic elements, which is what the previous simple trace formulae reduced to.

In Section 1 the general form of the simple trace formula is given, in which the test functions are characterized by vanishing conditions. In Section 2, test functions are constructed explicitly by twisting with virtual characters. This facilitates the computation of orbital integrals and still leaves great freedom in the choice of test functions. The convergence of the spectral side for non-compactly supported test functions is discussed in Section 3. From this point on we restrict to the case SL_3. In order to separate orbital integrals of split rank one, twisted resolvents are used. The validity of the trace formula for these non-compactly supported functions is derived via a Casimir functional calculus in Sections 4 to 7.
The Prime Geodesic Theorem, which is our main result in a different guise, is given in Section 8.

In the light of Sarnak's result and the result of the present paper, one is tempted to formulate a conjecture about the growth rate of class numbers in number fields of a given type. We must however warn that our method does not support any speculation of this kind. The reason is that we do not count orders, but rather their units, which in the setting of reductive groups come about as globally elliptic elements. Only in the case when the rank of the unit group is one is it possible to draw conclusions about class numbers from the distribution of units. Thus one is limited to real quadratic fields (Sarnak), complex cubic fields (present paper) or purely imaginary fields of degree 4. In the latter case a new difficulty emerges: since the degree of the field extension is not a prime, elliptic elements can no longer be identified with order-units in number fields, so the method gives a different asymptotic altogether.

We derive a simple version of Arthur's trace formula by inserting functions with certain restrictive properties which guarantee the vanishing of the parabolic terms on the geometric side. The trace formula for SL(3, Z) has also been studied in [33], which unfortunately arrives at an incorrect formula due to a wrong handling of the truncation.

Let G be a linear algebraic Q-group. If E is a Q-algebra, any rational character χ of G defined over Q defines a homomorphism G(E) → GL_1(E). If E comes with an absolute value |·|, we define G(E)^1 to be the subgroup of all elements g such that |χ(g)| = 1 for all rational characters χ defined over Q. We will use this notation in the cases when E is R or the ring A of adeles of Q. One should be aware that G(R)^1 could also be defined with respect to characters defined over the field R, but this is not the point of view in the present paper.

From now on we denote by G a connected reductive linear algebraic group over Q. If P is a parabolic Q-subgroup of G with unipotent radical N, we have a Levi decomposition P = LN. Generally, we denote the group of real points of a linear algebraic Q-group by the corresponding roman letter, so that P = LN. However, if A is a maximal Q-split torus of L, we denote by A the connected component of the identity A(R)^0. One has decompositions L(A) = L(A)^1 A, L = MA (direct products) and P^1 = MN, where M = L^1. Let A_fin denote the subring of finite adeles; then we have direct product decompositions A = A_fin R and G(A) = G(A_fin) G. Fix a maximal compact subgroup K of G.

The geometric expansion of the trace formula is

J(f) = ∑_o J_o(f).

Here the sum runs over all classes o in G(Q) with respect to the following equivalence relation: two elements are called equivalent if the semisimple components in their Jordan decomposition are conjugate in G(Q). Further,

J_o(f) = ∫_{G(Q)\G(A)^1} k_o(x, x) dx

for certain functions k_o whose definition we will recall below. The sum and the integral converge if we replace k_o by its absolute value. The integrand in the definition of J_o(f) is given as

k_o(x, x) = ∑_P (−1)^{dim(A_P/A_G)} ∑_{δ ∈ P(Q)\G(Q)} K_{P,o}(δx, δx) τ̂_P(H(δx) − T),

where the sum runs over the standard parabolic Q-subgroups P = LN, for which we write A_P = A_L, and

K_{P,o}(x, y) = ∑_{γ ∈ L(Q) ∩ o} ∫_{N(A)} f(x^{−1} γ n y) dn.

All we need to know about the factor τ̂_P(H(δx) − T) at this point is that in the case P = G it is identically equal to 1. We call a function on G(A)^1 parabolically regular at the infinite place if it is supported on K_fin × G^1 for some compact open subgroup K_fin of G(A_fin) and vanishes on all G-conjugates of K_fin × P^1 for every parabolic Q-subgroup P ≠ G.
An element γ ∈ G(Q) is called Q-elliptic if it is not contained in any parabolic Q-subgroup other than G itself. This notion is clearly invariant under conjugation, and we say that a class o is Q-elliptic if some (hence any) of its elements is so. It is known that Q-elliptic elements are semisimple, so Q-elliptic classes o are just conjugacy classes in G(Q). In light of the above remarks the proposition is a consequence of the following lemma. Proof: First we show for any P ≠ G that f(x^{-1}qx) = 0 for q ∈ P(A)^1 and x ∈ G(A). By the assumption on the support of f we have only to consider q = q_fin q_∞ with x^{-1} q_fin x ∈ K_fin, i.e., q_fin ∈ x K_fin x^{-1} ∩ P(A), a compact subgroup of P(A). Any continuous quasicharacter with values in ]0, ∞[ will be trivial on that subgroup, hence q_fin ∈ P(A)^1. Since q was already in P(A)^1, it follows that q_∞ ∈ P(A)^1 ∩ P = P^1, and so f(x^{-1}qx) = 0 due to the assumption on f applied to the parabolic Q-subgroup P. If o is not Q-elliptic, then every γ ∈ o is contained in some parabolic Q-subgroup P ≠ G, and in view of P(Q) ⊂ P(A)^1 we have f(x^{-1}γx) = 0. Thus K_{G,o}(x, x) = 0. Lemma 1.2 and Proposition 1.1 follow. We will now rewrite, in a non-adelic language, the geometric side of our simple trace formula in a special case. Let G be a semisimple simply connected linear algebraic Q-group such that G = G(R) has no compact factors. Let Γ be a congruence subgroup of G(Q), i.e., assume that there exists an open compact subgroup K_Γ of G(A_fin) such that Γ = G(Q) ∩ (K_Γ × G). For y ∈ G we denote by G_y the centralizer of y in G. We define f = f_fin ⊗ f_∞, where f_fin is the characteristic function of K_Γ divided by the volume of that group with respect to the Haar measure of G(A_fin). Corollary 1.3 Under the above conditions, we have
$$\sum_{o} J_o(f) = \sum_{[\gamma]} \operatorname{vol}\big(\Gamma_\gamma \backslash G_\gamma\big)\, \mathcal{O}_\gamma(f_\infty),$$
where the sum on the right-hand side runs over the set of all conjugacy classes [γ] in the group Γ which consist of Q-elliptic elements. Proof: Consider the formula for J_o(f) given in Proposition 1.1 for a Q-elliptic class o. The integral can be taken over G(Q)\G(A)/K_Γ, because the integrand is right K_Γ-invariant. Under our assumptions on G, strong approximation [18] holds, i.e., the action of G by right translation on that double quotient is transitive and hence induces an isomorphism of G-spaces $\Gamma \backslash G \cong G(\mathbf{Q})\backslash G(\mathbb{A})/K_\Gamma$. This isomorphism identifies suitably normalized G-invariant measures on these two spaces with each other, and we get the corresponding integral over Γ\G. The characteristic function f_fin has the effect of restricting summation to o ∩ Γ. Note that J_o(|f|) < ∞, because |f_∞| can be bounded by a nonnegative function in C_c^∞(G). Now the equality of J_o(f) with the partial sum over [γ] ∈ o in the asserted formula follows by the familiar Fubini-type argument.

Test functions

In this section G is a semisimple real Lie group with finite center and finitely many connected components. We would like to use resolvent kernels as test functions in the trace formula. The convergence of the geometric side has been proved in [1] for compactly supported test functions only. Thus we are going to approximate resolvent kernels by compactly supported functions with the aid of the functional calculus of the Casimir operator. Let g_R denote the Lie algebra of G and g = g_R ⊗ C its complexification. Let B denote the Killing form. Let θ be the Cartan involution fixing K. The form ⟨X, Y⟩ = −B(θ(X), Y) is positive definite on g_R and induces a G-invariant Riemannian metric on G/K. Let dist(x, y) denote the corresponding distance function and write d(g) = dist(gK, eK) for g ∈ G. Let U(g) denote the universal enveloping algebra of g.
Every element X of U(g) gives rise to a left-invariant differential operator written h → h * X, and a right-invariant differential operator h → X * h, h ∈ C ∞ (G). Recall that for p > 0, the L p -Schwartz space C p (G) is defined as the space of all h ∈ C ∞ (G) such that, for every n > 0 and X, Y ∈ U(g), the seminorms are finite. Here Ξ is the basic spherical function, and it suffices for our present purposes to know that there exist r 1 > r 2 > 0 such that e −r 1 d(g) ≤ |Ξ(g)| ≤ e −r 2 d(g) . If we complete the space C p (G) with respect to the seminorms involving only derivatives up to order N, we obtain a space C p N (G), whose topology can be given by a Banach norm. For each τ ∈K, there is a subspace We also need the space H r N of even holomorphic functions φ on the strip {z ∈ C | | Im z| < r} extending continuously to the boundary and such that the norm |φ| r,N = sup Recall that a Schwartz function φ on R is called a Paley-Wiener function if its Fourier transformφ(x) = 1 2π R φ(y) e −ixy dy has compact support. For π ∈Ĝ and an irreducible unitary representation (τ, V τ ) of K, let P π,τ be the orthogonal projection defined on the space of π whose image is the τ -isotypical component. Let C be the Casimir operator of G. For π ∈Ĝ the Casimir C acts on π by a scalar π(C). for every π ∈Ĝ. The map H r N ′ → C p N (G, τ ) so defined is continuous. If φ is a Paley-Wiener function, then h φ,τ is compactly supported. (The factor 1 dim τ is put here in order to give π(h φ,τ ) a nice trace.) Proof: The uniqueness of h φ,τ is clear from the Plancherel theorem. In the case of an even Paley-Wiener function φ, the existence of h φ,τ ∈ C ∞ c (G) with the required properties (except for the bounds) has been proved in [10], Lemmas 2.9 and 2.11. For this one considers the G-homogeneous vector bundle E τ = G × V τ /K and identifies the space of smooth sections with (C ∞ (G) ⊗ V τ ) K . Then the operator D τ induced by −C − b is a generalized Laplacian in the sense of [5]. The operator φ √ D τ defined by functional calculus is an operator with smooth kernel x|φ √ D τ |y , and by the theory of hyperbolic equations ( [31], ch. IV) it follows that φ √ D τ has finite propagation speed (compare [8]). Identifying the sections of E τ with K-invariant functions as above, it follows that the G-equivariant operator 1 dim τ φ √ D τ acts as a convolution operator by a function h φ,τ . Now observe that by the estimates in [8] there exists a constant c > 0 such that, for every φ, This means that there is a constant c > 0 such that for every φ we have Since the subspace of even Paley-Wiener functions is dense in H r N , we may extend the map φ → h φ,τ by continuity provided we check that it is continuous with respect to the correct seminorms, and then the asserted formula for π(h φ,τ ) will remain valid. By moving the contour of integration in the formula for the Fourier transform, we get sup |φ(x)|e r|x| ≤ c|φ| r,2 for some c > 0. Thus it follows from the above estimate that for every p > 0 there exist r > 0, c > 0 such that In order to estimate the derivatives, recall from [34] that for every N there exist N ′ and functions µ, ν ∈ C N c (G) such that where ∆ is the Laplacian on G also occurring in Proposition 4.1. Together with standard properties of the functions Ξ and l this shows that there exists c > 0 such that |h| p,n,X,1 = |Xµ * ∆ N ′ h + Xν * h| p,n,1,1 ≤ c(|∆ N ′ h| p,n,1,1 + |h| p,n,1,1 ) for X of order N. We have a similar inequality for |h| p,n,1,Y and hence for |h| p,n,X,Y . 
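For reference, the seminorms manipulated in the estimates just completed have the standard Harish-Chandra form; under the conventions of this section they may be taken to be (a reconstruction, hedged accordingly)
$$|h|_{p,n,X,Y} \;=\; \sup_{g \in G}\, \big| (X * h * Y)(g) \big| \;\Xi(g)^{-2/p}\, \big(1 + d(g)\big)^{n},$$
and the characterizing property of the kernel $h_{\varphi,\tau}$, consistent with the remark about the factor $\frac{1}{\dim \tau}$ giving a nice trace, is
$$\pi(h_{\varphi,\tau}) \;=\; \frac{1}{\dim \tau}\; \varphi\Big( \sqrt{-\pi(C) - b} \Big)\, P_{\pi,\tau} \qquad \text{for every } \pi \in \hat{G}.$$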
To conclude, note that ∆ can be chosen as 2C_K − C, where C_K is the Casimir of K induced by the restriction of the Killing form of g; this allows us to deduce the required estimate.

Twisting by characters

A virtual representation of the group G is a Z/2Z-graded finite dimensional complex representation ψ = ψ^+ ⊕ ψ^−. We define the virtual trace and determinant as $\operatorname{tr}\psi(x) = \operatorname{tr}\psi^+(x) - \operatorname{tr}\psi^-(x)$ and $\det\psi(x) = \det\psi^+(x)\,\det\psi^-(x)^{-1}$. The function tr ψ(x) is then called a virtual character. Note that tr(ψ_1 ⊕ ψ_2) = tr ψ_1 + tr ψ_2 and tr(ψ_1 ⊗ ψ_2) = (tr ψ_1)(tr ψ_2). Thus the virtual characters form a ring, the character ring R(G). If G = G(R) for a semisimple Q-group as in the previous section, we call a function on G parabolically regular if it vanishes on xP^1x^{-1} for every parabolic Q-subgroup P ≠ G and every x ∈ G. Proposition 3.1 Let G be a connected semisimple group over Q. Then there is a virtual representation ψ of G such that tr ψ is nonzero and parabolically regular. Proof: Since characters are class functions, we may restrict attention to proper parabolic Q-subgroups P of G. We have the Langlands decomposition P = MAN, where P^1 = MN is defined with respect to the Q-structure as in section 1. Since the rank of M is smaller than the rank of G, the restriction map R(G) → R(M) is not injective. Let tr ψ_P be a nonzero element of its kernel. Since N is the unipotent radical of MN it follows that tr ψ_P(mn) = tr ψ_P(m) = 0 for all m ∈ M, n ∈ N. Set $\operatorname{tr}\psi = \prod_P \operatorname{tr}\psi_P$, where the product runs over the finite set of all conjugacy classes of proper parabolic Q-subgroups of G. Then the virtual character tr ψ is parabolically regular. Since tr ψ = ∏_P tr ψ_P and each tr ψ_P is a nonzero algebraic function on G, it follows that tr ψ is nonzero. Assume now that tr ψ is a parabolically regular virtual character. Let h ∈ C_c^∞(G) and set f_∞ = h tr ψ, and let f_fin be the characteristic function of K_Γ as in section 1. Lemma 3.2 The geometric side of the trace formula for the test function f = f_fin ⊗ f_∞ equals $\sum_{[\gamma]} \operatorname{vol}(\Gamma_\gamma\backslash G_\gamma)\, \operatorname{tr}\psi(\gamma)\, \mathcal{O}_\gamma(h)$, the sum running over the Q-elliptic conjugacy classes in Γ. Proof: Since tr ψ is a class function it follows that the orbital integral satisfies $\mathcal{O}_\gamma(h \operatorname{tr}\psi) = \operatorname{tr}\psi(\gamma)\, \mathcal{O}_\gamma(h)$. This simple form of the trace formula is quite advantageous since tr ψ is easy to compute, and concerning h ∈ C_c^∞(G) we have total freedom of choice. However, the problem remains that the spectral side of the trace formula does not simplify. To discuss the spectral side we will consider representations π ⊗ σ for π ∈ Ĝ and σ finite-dimensional. We endow σ with a K-invariant norm, so that these are admissible Hilbert representations. They are no longer bounded, and hence π ⊗ σ(h) is not defined for h ∈ L^1(G). However, the operator norm of σ(g) and hence that of π ⊗ σ(g) grows at most exponentially with d(g) = dist(gK, eK). Thus there exists p > 0 such that the integral defining π ⊗ σ(h) converges for all h ∈ C^p_0(G) and π ∈ Ĝ. Let us first prove two general facts. Proof: We compute the relevant trace as an iterated trace, where tr_1 and tr_2 are the traces on the first and second tensor factor. Let B(π) denote the Banach space of all bounded linear operators on the Hilbert space of π. Lemma 3.4 provides an operator-norm bound for the linear map given by the partial trace over the second tensor factor, where the partial inner product on the right is defined as a map from the Hilbert space of π ⊗ σ to that of π. The lemma follows. In order to understand the representation π ⊗ σ for an induced π the following lemma will be needed later. Lemma 3.5 Let P = MAN be a parabolic subgroup of a reductive group G, and let π = π_{ξ,ν} be induced from P^+ = M^+AN, where ξ is an irreducible admissible representation of M^+ and $\nu \in \mathfrak{a}^*_{\mathbf{C}}$. Let σ be a finite dimensional representation of G. Write $\sigma|_{M^+A} = \bigoplus_j \sigma_j \otimes \nu_j$ for the decomposition into irreducibles of the restriction to the reductive group M^+A.
Then, after reordering the σ j ⊗ν j if necessary, there is a G-stable filtration of π ⊗ σ, Since G = P + K, the representation π ξ,ν has a compact model on the Hilbert space Ind K K∩M + (ξ) which is independent of ν. In the compact model, this filtration does not depend on ν. Proof: Highest weight theory implies that there is a P + -stable filtration Let ξ ν denote the representation of P + given by ξ ν (man) = a ν ξ(m). The map Ξ given by is a G-isomorphism between the representation π ξ,ν ⊗ σ and the induced has the desired properties. To see that the filtration does not depend on ν in the compact model, recall that the compact model lives on the space of all f : Using the construction above it turns out that F j (π ⊗ σ) coincides with the space of all such f with where χ runs through conjugacy classes of pairs (M 0 , π 0 ) consisting of a Qrational Levi subgroup M 0 and its cuspidal automorphic representation π 0 , the sum being absolutely convergent. The particular terms have expansions where the sum runs over all Q-rational Levi subgroups M of G containing a fixed minimal one (which we take to be the subgroup A 0 of diagonal matrices) and, for each M, over all discrete automorphic representations π of M(A) 1 . Explicitly, Here twisted by ν. If one starts the induction with the subspace of the π-isotypical component spanned by certain residues of Eisenstein series coming from χ, one gets a subrepresentation which is denoted by ρ χ,π (P, ν). We let ρ χ,π (P, ν, f ) act in the space of ρ(P, ν) by composing it with the appropriate projector. Further, there is a meromorphic family of standard intertwining operators M Q|P (ν) between dense subspaces of ρ(P, ν) and ρ(Q, ν) defined by an integral for Re ν in a certain chamber. The operator M(P, s) is M sP|P (0) followed by translation with a representative of s in G(Q). And finally, M L (P, ν) is obtained from such intertwining operators by a limiting process. The decomposition in terms of χ is only there for technical reasons. In general, it is unknown whether the sum over M and π can be taken outside the sum over χ in order to obtain an expansion in terms of the distributions which would be given by expressions that are analogous to J χ,M,π (f ) but with ρ χ,π (P, ν) replaced by This is a problem of absolute convergence, hence of the finiteness of In [22], this problem was reduced to certain conditions on local intertwining operators, which are known to be satisfied in some cases. We will check below that those conditions are satisfied in the situation of interest to us. Thus, we specialize to the linear algebraic group G = SL 3 . Let G = G(R) be the group of real points. We fix maximal compact subgroups K = SO 3 ⊂ G and K p = SL 3 (Z p ) ⊂ SL 3 (Q p ) for all primes p, and we set K fin = p K p . Here we have fixed a K-invariant norm on the dual of the real Lie algebra of G and denoted by ∆ the corresponding element of the universal enveloping algebra. The superscript + indicates that the trace has been replaced by the trace norm. Proof: Our first assertion, which concerns absolute convergence, would follow from Theorem 0.2 of [22] if we could verify conditions 1) and 2 ′ ) of that theorem. Once this is done, our second assertion will be a byproduct of the proof. Indeed, in the course of proving Lemma 6.2 of [22], equation (6.15) was used to estimate the operator norm of ρ χ,π (P, ν, (1 + ∆) N f ) in terms of the L 1 -norm of (1 + ∆) N f . If we omit that step and consider only the terms with (M, π) ∈ Π, the asserted bound will follow. 
The aforementioned condition 1) is a uniform bound on the derivatives of the local intertwining operators R Q|P (π p , ν) Kp for all automorphic representations π = π ∞ ⊗ p π p of M(A) 1 , all primes p and all open compact subgroups K p of G(Q p ), where the subscript K p indicates restriction to the subspace of K p -fixed vectors of the representation induced from π p . In contrast to Theorem 0.2 of [22], our claim concerns only a fixed test function f fin which is biinvariant under a particular maximal compact subgroup K p , and hence we need only verify condition 1) for that group. However, K p as chosen above is hyperspecial, and R Q|P (π p , ν) Kp is the identity, so that the condition is automatically satisfied. Condition 2 ′ ) is a uniform bound on the derivatives of the local intertwining operators R Q|P (π ∞ , ν) τ for all π as above and all K-types τ , where the subscript τ indicates restriction to that K-type in the representation induced from π ∞ . Since we consider a fixed K-finite function f ∞ , we need only verify the condition for finitely many K-types, and the uniformity of the required bound in τ is no issue. However, the bound does have to be uniform in π ∞ , which still allows us to split the set of those representations π ∞ into a finite number of subsets and check the condition for each of them. For the subset of tempered representations, condition 2 ′ ) follows from results of Arthur (cf. [22], Proposition 6.4). For our group G = SL 3 , we have either M = A 0 or M = G or M ∼ = S(GL 2 × GL 1 ). In the first case, π ∞ is just a character of A 0 (R), hence tempered. In the second case M = G, the induced representation ρ π (G) coincides with π, and the intertwining operator is the identity, so that condition 2 ′ ) is trivially satisfied. This leaves us with the case of the intermediate Levi subgroups. The map g → (g, det g −1 ) is a Q-rational isomorphism from GL 2 to S(GL 2 × GL 1 ). Thereby we may identify M with GL 2 , K ∩ M with O 2 and K p ∩ M(Q p ) with GL 2 (Z p ). Thus, our only remaining concern are the automorphic representations π of M(A) 1 ∼ = GL 2 (A) 1 occurring in L 2 (M(Q)\M(A) 1 ) for which π ∞ is non-tempered and which have a K fin ∩ M(A)-fixed vector. For such a representation, π ∞ must occur in the space of L 2 -functions on where the last isomorphism of right GL 2 (R) 1 -spaces is due to the fact that Q has class number one. The superscript 1 refers to the subgroup of elements with determinant of absolute value 1. Since GL 2 (Z) contains elements with determinant −1, this quotient is isomorphic to SL 2 (Z)\ SL 2 (R) as an SL 2 (R)space. As π ∞ is non-tempered, its Casimir eigenvalue does not exceed 1/4 in the usual normalization. It follows from Roelcke's eigenvalue estimate [27] (cf. [16], ch. 11, Prop. 2.1) that π ∞ lies in the space of constants, so it must be the trivial representation. For a single representation π ∞ and K-type τ , the norm of the derivative of R Q|P (π ∞ , ν) τ is certainly bounded by a polynomial in ν for ν outside a sufficiently large compact subset Ω of the line ia * M , because the operator is a rational function of ν. However, the function in question is smooth and therefore bounded on Ω, so condition 2 ′ ) is satisfied. Orbital integrals We will continue to focus on the group G = SL 3 (R). For later use we will fix some notation. Let P 0 = M 0 A 0 N 0 be the minimal parabolic subgroup of all upper triangular matrices in G. We fix A 0 to be the group of all diagonal matrices with positive entries and determinant one. 
Let P 1 = M 1 A 1 N 1 be the parabolic subgroup of all matrices in G with last row of the form (0, 0, * ). We fix A 1 to be the group of all diagonal matrices of the form diag(a, a, a −2 ), a > 0. The group M 1 is isomorphic to the group GL 2 (R) 1 consisting of all real two by two matrices with determinant equal to ±1. For j = 0, 1 let ρ j ∈ a * 0 be the modular shift of P j . So by definition for a ∈ A 0 we have det(a|n j ) = a 2ρ j , where n j is the Lie algebra of N j . Further let ρ M 1 ∈ a * 0 be the modular shift of the parabolic P 0 ∩M 1 . Then by definition det(a|n 0 ∩ m 1 ) = a 2ρ M 1 . Note that ρ = ρ 1 + ρ M 1 . The Killing form B on the real Lie algebra g R = sl 3 (R) is given by B(X, Y ) = tr ad(X) ad(Y ) = 6 tr XY for X, Y ∈ g R . We will use the same letter for its complexification as a symmetric bilinear form on g as well as for the corresponding quadratic form B(X) = B(X, X). Let θ be the Cartan involution fixing K. Then θ(x) = t x −1 for x ∈ G and θ(X) = − t X for X ∈ g R . The Killing form gives a natural identification between a and its dual a * . Thus it also gives an invariant form on the latter space. Note that the sum ρ = ρ 1 + ρ M 1 of the last paragraph is orthogonal with respect to B. Therefore, B(ρ) = B(ρ 1 ) + B(ρ M 1 ). We are going to apply Proposition 2.1 with a special choice of the number b and the representation τ of K ∼ = SO 3 . Recall that for each k = 0, 1, 2, . . . there is an irreducible representation δ 2k of dimension 2k + 1 and that this exhausts the setK of irreducible representations of K up to equivalence. For a virtual K-representation τ = τ + − τ − we define h φ,τ = h φ,τ + − h φ,τ − . We choose the virtual representation The reason for this choice will become transparent in the next lemma. We want to describe the action of our kernel in representations π ξ,ν of the principal series of G. Here P = MAN is a parabolic subgroup, ξ a representation in the discrete series of M and ν ∈ a * . Following [19] and M + 0 = M 0 . A K-finite function f ∈ L 1 (G) is called a pseudo-cusp form if tr π(f ) = 0 for every π ∈Ĝ which is induced from the minimal parabolic P 0 . Let g = k ⊕ p be the Cartan decomposition of the Lie algebra g, i.e. p is the orthocomplement of k = Lie C (K) with respect to the Killing form. Note in addition that as a representation of the spin group Spin(B| p M 1 ), where S ± are the half-spin representations. Proof: Since π can be considered as induced from M + AN, Frobenius reciprocity implies that in terms of the virtual dimension of a Z/2Z-graded vector space. In case (i), we have K ∩ M 0 1 ∼ = SO 2 , and it is straightforward to check that If Λ ∈ b * is the infinitesimal character of ξ, then Λ + ν is the infinitesimal character of π, hence as ν is imaginary and orthogonal to Λ. Lemma 2.4 of [20] says that tr π(h φ ) vanishes unless B(Λ) = B(ρ M 1 ), so we get the asserted formula. For the proof of (ii), observe that a group whose characters are ξ i (diag(ε 1 , ε 2 , ε 3 )) = ε i , i = 1, 2, 3, together with the trivial character ξ 0 . It is clear that (e.g., by dimensional reasons, using the fact that the ξ i are conjugate under the normalizer of M 0 in K). This implies that dim Hom M 0 (ξ i , τ 0 ) = 0. Let g ∈ G = SL 3 (R) be regular. Then the centralizer G g is a maximal torus of G, so either it is conjugate to the torus of all diagonal elements of G or to H = A 1 B, where B ∼ = SO 2 is the compact maximal torus of M 1 . We say that g is of splitrank 2 in the former case and of splitrank 1 in the latter. 
If g is of split rank 1, then there are a_g b_g ∈ A_1 B and x ∈ G such that g = x a_g b_g x^{-1}. Here a_g is uniquely determined, and so is x a_g x^{-1}, the split part of g. For g ∈ G of split rank one with split part conjugate to exp X, we define its length by $l(g) = \sqrt{B(X)}$. Since θ acts by −1 on the Lie algebra of A_1, we have l(g) = d(a_g). We will follow the conventions of [15] about the normalization of Haar measures on G and its Lie subgroups. For g ∈ G, we use the notation D(g) = det(Id − Ad(g^{-1})|_{g/g_g}). Proposition 5.2 Let g ∈ G be regular, and let φ ∈ H^r_N, where r and N are such that h_φ ∈ C^1_0(G). If the split rank of g is 2 then O_g(h_φ) = 0. If the split rank of g is 1, then $\mathcal{O}_g(h_\varphi) = |D(g)|^{-1/2}\, \hat\varphi(l(g))$. Proof: If φ is a Paley-Wiener function, then h_φ ∈ C_c^∞(G) and in particular, h_φ ∈ C^2(G). Using Lemma 5.1 the proposition follows from Lemma 4.3 of [19]. For the general case one approximates φ by Paley-Wiener functions. Proposition 2.1 can be applied to the function $\varphi^N_\lambda(z) = (N-1)!\, (z^2 + \lambda)^{-N}$, in which case the operator $\varphi^N_\lambda(\sqrt{D_\tau})$ equals (N − 1)! times the resolvent (D_τ + λ)^{−N}. The corresponding convolution kernel will be denoted by $R^N_{\lambda,\tau}$. Thus, if |Im √−λ| and N are large enough (e.g., if N and λ > 0 are large enough), then R^N_{λ,τ} ∈ C^1_0(G) and, for all π ∈ Ĝ, the operator π(R^N_{λ,τ}) is given by the corresponding resolvent expression in π(C). Again, for a virtual representation τ = τ^+ − τ^− we set $R^N_\lambda = R^N_{\lambda,\tau^+} - R^N_{\lambda,\tau^-}$. Proposition 5.3 Let λ > 0 and N be large as above. If the split rank of g is 2 then O_g(R^N_λ) = 0. If the split rank of g is 1, then $\mathcal{O}_g(R^N_\lambda) = |D(g)|^{-1/2}\, \widehat{\varphi^N_\lambda}(l(g))$. These orbital integrals are real and positive. Proof: The convergence follows from the fact that R^N_λ ∈ C^2_0(G) for large λ and N. The formula from Prop. 5.2 can be specialised to φ = φ^N_λ. If λ is real, then $\widehat{\varphi^N_\lambda}$ is positive, because one can see by induction that it equals $e^{-\sqrt{\lambda}\,|x|}$ times a polynomial p_N of degree 2N − 1 with nonnegative coefficients.

Choice of the twisting character

For the group SL_3(R) we will now give an explicit example of a virtual character which is parabolically regular. For this let st : G → GL_3(C) denote the standard representation of G, and let η = S²(st) be its symmetric square. Then the dimension of η is 6. Consider the virtual representation $\psi = \bigoplus_k (-1)^k \Lambda^k \eta$. For x ∈ G we have tr ψ(x) = det(1 − η(x)). The group M_1 A_1 is the centralizer of A_1, so M_1 is isomorphic to the group of real two by two matrices of determinant ±1. Again the claim follows. The third parabolic P_2 is obtained from P_1 by reflection along the second diagonal. The lemma is now clear. We now consider the Cartan subgroup H_0 of diagonal matrices in G. Then its Lie algebra a_0, which equals the Lie algebra of the connected component A_0, is the Lie algebra of trace-zero diagonal matrices in g. Proof: For a, b, c ∈ C^× one computes det(1 − η(diag(a, b, c))) explicitly as a product over the weights of η. An element g ∈ G of split rank one is conjugate in SL_3(C) to the element diag(e^{r+iθ}, e^{r−iθ}, e^{−2r}), for which l(g) = 6|r|. The claim follows by inspection.

The geometric side

Let ψ be the virtual representation of G given in Section 6. Let φ be a Paley-Wiener function and define $f^\varphi = f^\varphi_\infty \otimes f_{\mathrm{fin}}$ with $f^\varphi_\infty = h_\varphi \operatorname{tr}\psi$, where f_fin is the characteristic function of $K_{\mathrm{fin}} = \prod_p \mathrm{SL}_3(\mathbf{Z}_p)$. Let E(Γ) denote the set of all conjugacy classes [γ] in Γ which are of split rank one. Note that, according to our definition, such γ are regular. Proposition 7.1 The geometric side of the trace formula for f^φ is
$$\sum_{[\gamma] \in E(\Gamma)} \operatorname{vol}(\Gamma_\gamma \backslash G_\gamma)\, \operatorname{tr}\psi(\gamma)\, |D(\gamma)|^{-1/2}\, \hat\varphi(l(\gamma)).$$
Proof: It follows from Lemma 6.1 that f^φ_∞ is parabolically regular, so by Corollary 1.3 the geometric side of the trace formula takes the form $\sum_{[\gamma]} \operatorname{vol}(\Gamma_\gamma\backslash G_\gamma)\, \mathcal{O}_\gamma(f^\varphi_\infty)$. The orbital integral of f^φ_∞ can be computed as $\mathcal{O}_\gamma(f^\varphi_\infty) = \operatorname{tr}\psi(\gamma)\, \mathcal{O}_\gamma(h_\varphi)$, since tr ψ is invariant under conjugation. It remains to show that the sum can be reduced to the regular classes of split rank one.
For this let γ ∈ Γ with tr ψ(γ) ≠ 0. Then, by the proof of Lemma 6.1, γ does not have ±1 as an eigenvalue. Lemma 7.2 Let γ ∈ SL_3(Z). Suppose γ does not have ±1 as an eigenvalue. Then γ is regular and the Q-subalgebra Q(γ) generated by γ is a cubic field equal to the centralizer of γ in Mat_3(Q). This field is complex iff γ has split rank 1. Proof: Suppose that γ has a rational eigenvalue ν. Since the characteristic polynomial is monic and has integer coefficients, ν is an algebraic integer. Being rational, ν must be an integer, and since γ^{-1} also has integer coefficients, ν = ±1. If we exclude this case, then the characteristic polynomial of γ is irreducible, hence its roots are distinct and Q(γ) is a cubic field. Complexification shows that the centralizer F of γ in Mat_3(Q) is three-dimensional and commutative. By comparison of degree we see that Q(γ) = F. If ν is an eigenvalue of γ, then the map which assigns to any element of F its eigenvalue in the ν-eigenspace of γ is an isomorphism of F onto Q(ν). Since γ has split rank one iff it has only one real eigenvalue, the Lemma follows. Proposition 7.3 For N, λ ≫ 0 the trace formula is valid for the resulting test function f^N_λ, and Proposition 7.1 remains valid. Proof: Given N′ and r > 0, choose N and λ sufficiently large so that φ^N_λ ∈ H^r_{N′} according to Proposition 5.3. We want to approximate φ^N_λ in this space by a sequence of Paley-Wiener functions φ_n such that $\hat\varphi_n$ does not change sign and tends to $\widehat{\varphi^N_\lambda}$ monotonically. Thus, let χ ∈ C_c^∞(R) be even, monotonically decreasing on R^+ and such that χ(x) = 1 for |x| ≤ 1. It is easy to check that the function φ_n whose Fourier transform is $\chi(x/n)\, \widehat{\varphi^N_\lambda}(x)$ does the job. Proposition 2.1 now implies that, given p > 0 and N″, we may choose N and λ such that h_{φ_n} converges to R^N_λ in C^p_{N″}(G). Since tr ψ(g) grows only exponentially with d(g), we see that, if p was small enough, the sequence h_{φ_n} tr ψ converges to R^N_λ tr ψ in C^1_{N″}(G). It follows from Theorem 4.1 that f_∞ ↦ J_spec(f_∞ ⊗ f_fin) is a continuous linear functional on C^1(G). It is clear that any continuous linear functional on C^1(G) extends to C^1_{N″}(G) for sufficiently large N″. Thus, if we set f_n = f^{φ_n}, then J_spec(f_n) → J_spec(f^N_λ) as n → ∞. The trace formula implies that J_geom(f_n) → J_spec(f^N_λ) as n → ∞ as well. By Proposition 5.3 and Lemma 6.3, all the terms on the geometric side of the trace formula for f_n have the same sign, and each term tends to the corresponding term for f^N_λ monotonically. Thus, we may pass to the limit n → ∞ on the geometric side by monotone convergence. An element γ of Γ = SL_3(Z) is called primitive if it is not of the form δ^n for any δ ∈ Γ and any natural number n ≠ 1. For every regular γ ∈ Γ there is a primitive γ_0 ∈ Γ such that γ = γ_0^μ for some μ ∈ N. If γ is of split rank two then γ_0 is uniquely determined. If γ is of split rank one then the split part of γ_0 is uniquely determined. The normalization of Haar measures chosen in [15] implies that for each regular γ ∈ Γ of split rank 1 we have vol(Γ_γ\G_γ) = l(γ_0). Corollary 7.4 The geometric side of the trace formula for f^N_λ equals
$$\sum_{[\gamma] \in E(\Gamma)} l(\gamma_0)\, \operatorname{tr}\psi(\gamma)\, |D(\gamma)|^{-1/2}\, \widehat{\varphi^N_\lambda}(l(\gamma)).$$

The spectral side

We have proved in Proposition 7.3 that the spectral side of the trace formula with the test function f^N_λ converges for sufficiently large positive λ and N. Now we want to show that it extends meromorphically as a function of λ to a sufficiently large subset of the complex plane.
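Before turning to the spectral side, the λ-dependence of the geometric terms in Corollary 7.4 can be made explicit in the simplest case. Under the Fourier normalization $\hat\varphi(x) = \frac{1}{2\pi}\int_{\mathbf{R}} \varphi(y)\, e^{-ixy}\, dy$ (an assumption, stated here to fix conventions), one computes for N = 1:
$$\varphi^1_\lambda(z) = \frac{1}{z^2 + \lambda}, \qquad \widehat{\varphi^1_\lambda}(x) = \frac{1}{2\sqrt{\lambda}}\, e^{-\sqrt{\lambda}\, |x|} \qquad (\lambda > 0),$$
so a class [γ] of split rank one contributes $l(\gamma_0)\, \operatorname{tr}\psi(\gamma)\, |D(\gamma)|^{-1/2}\, \frac{1}{2\sqrt{\lambda}}\, e^{-\sqrt{\lambda}\, l(\gamma)}$; differentiating in λ produces the higher resolvent powers and the polynomial factors p_N mentioned in the proof of Proposition 5.3.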
In the notation of section 4, J spec (f N λ ) is a sum of terms J M,π (f N λ ), where M is a Levi Q-subgroup of G and π is a square-integrable automorphic representation of M(A). The contributions with M = G are the easiest ones: For any prime p, we have π triv,p (f p ) = 1 due to our choice of f fin , while the factor at the infinite place can be computed using Lemma 3.3, giving We need the explicit result only for the trivial representation π triv of G(A). where σ runs through the irreducible representations of G occuring in ψ. The coefficients [ψ : σ] have been calculated in Lemma 6.2, so it remains to determine [σ| K : τ ] and σ(C). All of these numbers are unchanged if we replace σ by σ * . For each dominant weight λ occurring we determine the decomposition of W λ | K from its weights and compute Since B(ρ M 1 ) = 1 12 , the lemma follows from this. Let Π ∞ (τ 0 , ψ) be the set of all admissible irreducible representations η of G which are subquotients of π ⊗ ψ for some nontrivial π ∈Ĝ and such that η contains a K-type in τ 0 . Let This is a closed subset of C. Note that B(ρ 1 ) = 1 4 in our normalization. Let Ω(τ 0 , ψ) = C \ S(τ 0 , ψ). Proof: Let us write where Π consists of all pairs different from (G, π triv ). We want to apply Theorem 4.1 to show that the integral-series on the right-hand side converges normally for λ ∈ Ω(τ 0 , ψ) and hence represents a holomorphic function. Thus, we have to find a uniform bound on the operator norms of ρ π (P, ν, f ) for (π, M) = (G, π triv ), ν ∈ ia M , parabolics P with Levi component M and f = (∆ + 1) N f N λ , where λ runs through a compact subset of Ω(τ 0 , ψ). These operators are direct sums of copies of P(A fin ) (π fin , ν, f fin ), hence have the same operator norm as the latter. By our choice of f fin , the second factor is the projection onto the subspace of K fin -fixed vectors, hence of norm one. This leaves us with the norm of the factor at the infinite place. Thus, we focus attention on an irreducible component of Ind G P (π ∞ , ν), which is a nontrivial unitary representation of G. Our assertion will be a consequence of the following result, where we use the symbol π in a different sense for simplicity of notation. Lemma 8.3 There is a uniform bound on the operator norms of for all nontrivial π ∈Ĝ, all constituents σ of ψ, all constituents τ of τ 0 and λ in a compact subset of Ω(τ 0 , ψ). Proof: Recalling that ∆ = 2C K − C, we have Since R N λ is K-finite, here the operator C K can be estimated by a constant, so the second factor behaves like polynomial of degree N in π(C). Applying Lemma 3.4 to T = π ⊗ σ(R N λ,τ ), we get According to [30] every nontrivial π ∈Ĝ is a quotient of a representation which is parabolically induced from a unitary representation of a proper parabolic subgroup P = MAN, i.e., π = π ξ,ν , where ξ ∈M and ν ∈ ia * . Of course, it suffices to consider the standard parabolics P 0 , P 1 and P 2 . In the case of the maximal parabolics P 1 and P 2 , if ξ itself is parabolically induced from a unitary representation of a proper parabolic subgroup, we use induction in stages to regard π as induced from the minimal parabolic P 0 . In this way, the complementary series corresponds to nonunitary parameters ν ∈ a * with Re(ν) = tρ ∈ a * 0 , where 0 < t ≤ 1/2. Thus, let π = π ξ,ν be an induced representation. There is a natural Kstable grading Gr j of π ⊗ σ underlying the filtration from Lemma 3.5. The space Gr j is defined to be the set of all f : V → V ξ ⊗ V σ such that Let P j be the projection to Gr j . 
Then With respect to this grading the operator π ⊗ σ(−C − B(ρ 1 ) + λ) N P τ is a triangular matrix whose entries are polynomials in ν of degree ≤ 2N. The diagonal entries have the form with η being a subquotient of π ⊗ σ, and their leading term in ν is B(ν) N . Hence the inverse matrix is triangular and its entries are rational functions in ν which tend to zero as fast as B(ν) −N as ν → ∞. Moreover, these functions have no poles at points ν parametrising η ∈ Π ∞ (τ 0 , ψ) if λ ∈ Ω(τ 0 , ψ), as follows from the definition of the latter set. This implies that the norm π ⊗ σ(R N λ,τ ) times (1 + |π(C)|) N is bounded. The bound will depend on ξ, but for P being the minimal parabolic, the group M is finite and there are only finitely many ξ. For the maximal parabolics we may assume that ξ is not induced itself, so it is one-dimensional or a (limit of) discrete series representation. As there are only finitely many such ξ for which some ξ ⊗ σ j has a K ∩ M-type in τ | K∩M , we get a uniform bound on the operator norm in question, and the lemma follows. As noted above, Proposition 8.2 is thereby proved, too. Next we show that the set Ω(τ 0 , ψ), to which we have meromorphically continued the spectral side, is large enough for our goals, in particular, that it contains the main pole at λ = 9 4 . For the proof we will need two prerequisites. First, let S α be the set of all z ∈ C with | Re(z)| ≤ α, and let S 2 α = {z 2 | z ∈ S α }. A computation shows that Lemma 8.5 Let V R be a finite dimensional real vector space with a positive definite symmetric bilinear form B R . Let V , B be their complexifications. Then every v ∈ V can be written as For every v ∈ V and all β > 0, c ≥ 0 we have where α = β 2 + c. x + For the general case simply observe that S 2 β + c ⊂ S 2 α ⊂ S 2 α + c. We will also need some elementary facts about representations of SL 2 (R). For n ∈ Z, consider the character of the subgroup of upper triangular matrices which takes value sgn(a)a n on a matrix with upper left entry a, and let π n be the normalised induced representation of SL 2 (R). Then π n has a unique subrepresentation ξ + n (resp. ξ − n ) whose SO 2 -types are bounded only from below (resp. only from above). In fact, where ε r , r ∈ Z, are the characters of SO 2 . For n > 0 (resp. n = 0), the representations ξ ± n are irreducible and constitute the discrete series (resp. its limits). In the notation of [17], ξ ± n = D ± n+1 . Lemma 8.6 If ζ is a k +1-dimensional irreducible representation of SL 2 (R), then ξ ± n ⊗ ζ has a filtration whose subquotients are isomorphic to ξ ± n+m with |m| ≤ k and m ≡ k (mod 2). Proof: By Lemma 3.5, π n ⊗ ζ has a filtration with subquotients isomorphic to π n+m with |m| ≤ k and m ≡ k (mod 2). This induces a filtration on ξ + n ⊗ ζ of length at most k + 1 whose subquotients are subrepresentations of the subquotients of the previous filtration. Since the SO 2 -types ε r of ξ + n ⊗ ζ are bounded from below and have multiplicity k + 1 for large r. This identifies those subrepresentations of π n+m uniquely. The case of ξ − n ⊗ ζ is analogous. Let us start with the case that π is induced from the minimal parabolic P 0 = M 0 A 0 N 0 of all upper triangular matrices, where M 0 is the group of all diagonal matrices with entries ±1 and N 0 is the group of all upper triangular matrices with ones on the diagonal. We have In this case ν j runs through the weights of σ. For the principal series, we have ν ∈ a * 0 purely imaginary. 
Since we discuss the complementary series as induced from P 0 , we also have to consider ν ∈ a * 0 with Re(ν) = tρ, 0 < t ≤ 1 2 . In the same manner, we can at once handle the representations induced from the one-dimensional representations of P 1 or P 2 by embedding them into principal series for the parameter 1 2 ρ. Refering to Lemma 8.5, we see that it suffices to show B(tρ+ν j )− 1 12 ∈ S 2 β for 0 ≤ t ≤ 1 2 . On a Weyl orbit of weights ν j ∈ a * 0 , this function obtains its largest value for dominant weights. Those occurring in ψ are of the form ν j = aλ 1 + bλ 2 for 0 ≤ b ≤ a ≤ 3. The maximum is obtained for a = b = 3 and t = 1 2 , where we get B(3λ 1 + 3λ 2 + ρ/2) − 1 12 = 3 2 = β 2 . It remains to consider the case when π = π ξ,ν is induced from a maximal parabolic, ξ belongs to the (limit of) discrete series and ν is purely imaginary. Since all maximal parabolics are conjugate under the automorphism group of G, it suffices to consider P 1 = M 1 A 1 N 1 . Then M 1 ∼ = GL 2 (R) 1 has two connected components, and ξ = Ind M 1 M 0 1 (ξ + ), where ξ + is in the (limit of) discrete series of M 0 1 ∼ = SL 2 (R). Since conjugation by the nontrivial element of M 1 /M 0 1 switches holomorphic with antiholomorphic discrete series, we may assume that ξ + ∼ = ξ + n for some n ≥ 0 in the notation of Lemma 8.6. By induction in stages, we consider π as being induced from P 0 1 = M 0 1 A 1 N 1 , so η is a subquotient of π ξ + n ⊗ζ,ν+ν j for some irreducible constituent ζ of σ j , now denoting a representation of M 0 1 . The highest weight of ζ is of the form kρ M 1 ∈ a * M 1 , where k is a nonnegative integer because ρ M 1 = 1 2 (λ 1 − λ 2 ) happens to be the generator of the weight lattice of A M 1 . Now dim ζ = k + 1, and from Lemma 8.6 we conclude that η is a subquotient of π ξ + n+m ,ν+ν j for some |m| ≤ k. The condition η ∈ Π ∞ (τ 0 , ψ) means that η has a K-type in common with τ 0 , and by Frobenius reciprocity this implies that ξ + n+m has a K M 0 1 -type in common with τ 0 | K M 0 1 . Since the only constituents of τ 0 are δ 2k with |k| ≤ 2 and this imposes the restriction n + m ≤ 1. Remembering that n ≥ 0, we see that the infinitesimal character of ξ + n+m , which is (n + m)ρ M 1 , lies in the segment connecting ρ M 1 with the lowest weight ω j of σ j , hence is of the form uω j +tρ M 1 with |u| ≤ 1 and 0 ≤ t ≤ 1. But ±ω j +ν j is an a 0 -weight occurring in ψ, and so the infinitesimal character of η, which is (n + m)ρ M 1 + ν + ν j , can be written as ω + ν + tρ M 1 , where ω is in the convex hull of the weights occurring in ψ and 0 ≤ t ≤ 1. Thus η(C) + B(ρ 1 ) = B(ω + ν + tρ M 1 ) − B(ρ M 1 ). The prime geodesic theorem Note that if γ ∈ Γ is of splitrank one, then γ is not conjugate to its inverse γ −1 . So let E ± (Γ) be the set E(Γ) of conjugacy classes of split rank one modulo the equivalence relation [γ] ∼ [γ −1 ]. Let E ± 0 (Γ) be the subset of primitive classes. Theorem 9.1 (Prime Geodesic Theorem) For x → ∞ we have the asymptotic formula Our main result Theorem 0.1 can be deduced from Theorem 9.1 as follows. If [γ] ∈ E 0 (Γ), then by Lemma 7.2 the centralizer F γ of γ in Mat 3 (Q) is a complex cubic field and the set O γ = F γ ∩ Mat 3 (Z) is an order in F γ whose unit group is generated by γ. We claim that every order O occurs h(O) times in this way. The corresponding claim for a division algebra instead of Mat 3 (Q) is shown in [9], section 2. In that section the restriction to a division algebra was made to secure that the centralizer F γ would be a field. 
In the present paper this information is obtained from Lemma 7.2. Thus the proof goes through. Since γ is primitive, it is a generator of the unit group $\mathcal{O}_\gamma^\times = \pm\gamma^{\mathbf{Z}}$. Comparing the metric given by the Killing form with the measure which defines the regulator [23], one finds that l(γ) = 3R(O_γ). Thus Theorem 0.1 follows from the Prime Geodesic Theorem. Proof: For λ ≫ 0 let M_N(λ) be −1/2 times the geometric side of the trace formula for f^N_λ, which by Corollary 7.4 is a series over E(Γ). For Re(s) ≫ 0 we compute, formally at first, by applying the differential operator $D = \frac{\partial}{\partial s}\,\frac{1}{2s+1}$. We deduce from the proof of Proposition 5.3 that there exists C > 0 bounding the relevant terms for λ > 0, which shows that M_1(λ) is convergent for λ ≫ 0. From this and Propositions 8.2 and 8.4 we infer the claim on the analytic continuation. Since L(s) is a Dirichlet series with positive coefficients by Lemma 6.3 and Proposition 5.3, its analytic continuation also implies convergence. Proposition 9.2 is proven. The Prime Geodesic Theorem follows from the Proposition by the Wiener-Ikehara Theorem as in [7].
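The Wiener-Ikehara theorem invoked here can be recalled in the form relevant to such length series (a standard statement, included for convenience): if A is non-decreasing, the integral $L(s) = \int_1^\infty x^{-s}\, dA(x)$ converges for $\mathrm{Re}(s) > 1$, and $L(s) - \frac{c}{s-1}$ extends continuously to $\mathrm{Re}(s) \ge 1$, then
$$A(x) \;\sim\; c\, x \qquad (x \to \infty).$$
Applied, after the change of variables $x = e^{t}$, to the Dirichlet series L(s) with positive coefficients constructed above, this converts the analytic continuation of Proposition 9.2 into the counting asymptotic of Theorem 9.1.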
Existence of isoperimetric regions in non-compact Riemannian manifolds under Ricci or scalar curvature conditions

We prove existence of isoperimetric regions for every volume in non-compact Riemannian $n$-manifolds $(M,g)$, $n\geq 2$, having Ricci curvature $Ric_g\geq (n-1) k_0 g$ and being locally asymptotic to the simply connected space form of constant sectional curvature $k_0$; moreover in case $k_0=0$ we show that the isoperimetric regions are indecomposable. We also discuss some physically and geometrically relevant examples. Finally, under assumptions on the scalar curvature we prove existence of isoperimetric regions of small volume.

Introduction

If (M, g) is a compact Riemannian n-manifold, then standard techniques of geometric measure theory ensure existence of isoperimetric regions (roughly speaking, Ω ⊂ M is an isoperimetric region if its boundary has least area among the boundaries of regions having the same volume as Ω; for the precise notions see Section 2). In case M is non-compact the question of existence of isoperimetric regions is completely non-trivial, and the few known existence results are quite specific. A simple example where existence fails is the right hyperbolic paraboloid M_λ defined by the equation z = λxy: here there is no isoperimetric region for any value of the area (see [54]). More dramatically, it can happen that isoperimetric regions exist just for some values of the area (see [17], where a complete study of isoperimetry in the case of quadrics of revolution is performed). Nevertheless there are some cases when the existence of isoperimetric regions for every volume is known:

1. (M, g) is complete non-compact but its isometry group acts co-compactly (see [46], [43], or [29] in the context of sub-Riemannian contact manifolds).
2. (M, g) is connected complete non-compact but with finite volume (this is an easy consequence of Theorem 2.1 in [55]).
3. The non-compact, non-simply-connected surfaces constructed in [33].
4. In several cases when (M, g) is a cone, the isoperimetric regions exist for every volume and are characterized (see [47], [55]); for warped products see [11].
5. (M, g) is a complete plane with non-negative curvature (see [53]).

The reason for the non-existence of isoperimetric regions for a fixed volume v > 0 is explained clearly by Theorem 2.1 in [55] (recalled in Theorem 4.1): the lack of compactness in the variational problem is due to the fact that the minimizing sequences might split into a part converging nicely to an isoperimetric region and another part of positive volume going to infinity. The diverging part of the minimizing sequences was studied by the second author in [50] using the theory of C^{m,α}-pointed convergence of manifolds developed by Petersen [52] (see Section 2). In the present paper we adopt this second point of view. The main goal of the present work is to add, to the previous list, a class of manifolds admitting isoperimetric regions for all volumes. This is the content of the next theorem.

Theorem 1.1. Let (M^n, g) be an n-dimensional (n ≥ 2) complete Riemannian manifold such that
1. (M^n, g) is C^0-locally asymptotic to the simply connected n-dimensional space form of constant sectional curvature k_0 ≤ 0, i.e., for every diverging sequence of points p_j the sequence of pointed manifolds (M, g, p_j) converges in the C^0 topology to (M^n_{k_0}, x_0) (x_0 is any point in M^n_{k_0}),
2. Ric_g ≥ (n − 1)k_0 g,
3. V(B(p, 1)) ≥ v_0 > 0 for every p ∈ M.
Then for every v > 0 there exists an isoperimetric region Ω_v of volume v, i.e., V(Ω_v) = v and P(Ω_v) = I_M(v). Moreover if k_0 = 0 (i.e. Ric_g ≥ 0 and (M, g) is C^0-locally asymptotically Euclidean) then the isoperimetric regions are indecomposable.

Roughly speaking, the last sentence says that the isoperimetric regions are connected if k_0 = 0. For the precise notion of indecomposability see Section 2; see Definition 2.3 for the concept of C^0-pointed convergence of manifolds. To our knowledge, this is the first existence result valid for all volumes and all dimensions in the non-compact case under just geometric curvature assumptions and asymptotic conditions on the ambient manifold.

Remark 1.1. Notice that the class of manifolds satisfying the assumptions of Theorem 1.1 contains many geometrically and physically relevant examples: Eguchi-Hanson and more generally ALE gravitational instantons (these manifolds are the building blocks of the Euclidean quantum gravity theory of Hawking), asymptotically hyperbolic Einstein manifolds (these spaces play a crucial role in the AdS/CFT correspondence in quantum field theory) and Bryant type solitons (which are special but fundamental solutions to the Ricci flow). For a deeper discussion about these spaces see Section 5.

In order to prove Theorem 1.1 in Section 4, in Section 3 we prove some general properties of the isoperimetric regions and of the isoperimetric profile function of a non-compact Riemannian manifold. Using the results of Section 3 we are also able to perform a finer analysis of the minimizing sequences for the perimeter under the volume constraint in case the manifold has non-negative Ricci tensor, Ric ≥ 0: roughly speaking either they converge to an isoperimetric region or they diverge, but they cannot split into a converging and a diverging part. For the precise statement see Theorem 4.2. The previous existence theorem is based on assumptions on the Ricci curvature; actually, as the following theorem points out, if one is interested in the existence of isoperimetric regions of small volume it is enough to ask conditions on the scalar curvature.

Theorem 1.2. Let (M, g) be an n-dimensional (n ≥ 2) Riemannian manifold of C^{2,α}-bounded geometry and let S ∈ R. Suppose that (M, g) satisfies the following assumptions: 1. for every ε > 0 there exists a compact subset K_ε ⊂⊂ M such that the scalar curvature satisfies Scal_g ≤ S + ε on M \ K_ε, 2. there exists a point p̄ ∈ M with Scal_g(p̄) > S. Then there exists a small v_0 > 0 such that for any 0 < v ≤ v_0 there exists an isoperimetric region of volume v. Moreover such an isoperimetric region is a pseudo-bubble whose center of mass is a point p̄_v which converges in Hausdorff distance sense, as v → 0, to the set of points of global maximum of the scalar curvature Scal_g.

For the concept of C^{2,α}-bounded geometry see Definition 2.7; for the precise notions of pseudo-bubble and center of mass see Definitions 2.11 and 2.12. Theorem 1.2 is also interesting in connection with Theorem 1.1. Indeed, if the Riemannian manifold (M, g) satisfies the assumptions of Theorem 1.1 and moreover there exists a point p̄ ∈ M where Scal_g(p̄) > n(n − 1)k_0, then Theorem 1.1 ensures existence of isoperimetric regions for every volume, and Theorem 1.2 says that these isoperimetric regions, for small volumes, are pseudo-bubbles centered near the points of maximal scalar curvature. For the existence and the characterization of isoperimetric regions of large volume in manifolds which are asymptotically globally Euclidean see [28] (see also [26] and [27]).
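For later use, recall the explicit form of the model profile appearing in the hypotheses of Theorem 1.1 (a classical fact: geodesic balls are the unique isoperimetric regions of the simply connected space forms). In the Euclidean case k_0 = 0 one has
$$I_{\mathbf{R}^n}(v) \;=\; n\, \omega_n^{1/n}\, v^{\frac{n-1}{n}},$$
where ω_n denotes the volume of the unit ball of R^n, with equality attained exactly by round balls.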
The article is organized in the following way: in Section 2 we recall the notions and the known results used throughout the paper, in Section 3 we prove some general properties of the isoperimetric profile function of a non-compact Riemannian manifold, in Section 4 we prove the main theorems (we also give an alternative proof of Theorem 1.1 in the case k_0 = 0 using the second variation or using differential inequalities), and we conclude in Section 5 with a discussion of the examples of manifolds satisfying the assumptions of Theorem 1.1.

Acknowledgment The project started when the first author was a Ph.D. student at SISSA and the second author visited SISSA thanks to the support of the M.U.R.S.T., within the project B-IDEAS "Analysis and Beyond" directed by Prof. Andrea Malchiodi. The first author acknowledges also the support of the ERC project "GeMeThNES" directed by Prof. Luigi Ambrosio; part of this work was written while the second author was a post-doctoral fellow at the University of São Paulo supported by Fapesp grant 2010/15502-3; he also thanks IME-USP. The authors want to express their deep gratitude to Frank Morgan, Pierre Pansu, Manuel Ritoré and Cesar Rosales for stimulating conversations at the first stages of this project. They also thank Michael Deutsch for reading the final manuscript.

Notation and preliminaries

Let (M^n, g) be a smooth complete Riemannian n-manifold. The n-dimensional and k-dimensional Hausdorff measures of a set Ω ⊂ M will be denoted by V(Ω) and H^k(Ω), respectively. For any measurable set Ω ⊂ M we denote by P(Ω) the perimeter of Ω, defined by
$$P(\Omega) = \sup\Big\{ \int_\Omega \operatorname{div} X \, dV : \; X \text{ a smooth vector field with compact support in } M, \; |X|_\infty \le 1 \Big\},$$
where |X|_∞ is the sup-norm and div X is the divergence of X. A measurable subset Ω ⊂ M is of finite perimeter if P(Ω) < ∞, and we denote by τ_M the family of all finite perimeter subsets of M. A finite perimeter set Ω is said to be indecomposable if there do not exist disjoint non-empty finite perimeter sets Ω_1, Ω_2 of positive volume such that Ω = Ω_1 ∪ Ω_2 and P(Ω) = P(Ω_1) + P(Ω_2) (for more details see [2]). The isoperimetric profile of M is the function I_M : (0, V(M)) → [0, +∞) given by
$$I_M(v) = \inf\{ P(\Omega) : \; \Omega \in \tau_M, \; V(\Omega) = v \}.$$
If there exists a finite perimeter set Ω ∈ τ_M satisfying V(Ω) = v and I_M(v) = P(Ω), such an Ω will be called an isoperimetric region, and we say that I_M(v) is achieved. A minimizing sequence of sets of volume v is a sequence of finite perimeter sets {Ω_k}_{k∈N} such that V(Ω_k) = v for all k ∈ N and lim_{k→∞} P(Ω_k) = I_M(v). Recall that a sequence {Ω_k}_{k∈N} converges in the finite perimeter sense to a set Ω if χ_{Ω_k} → χ_Ω in L^1_loc(M) and lim_{k→∞} P(Ω_k) = P(Ω), where χ_{Ω_k} and χ_Ω denote the characteristic functions of Ω_k and Ω, respectively.

Of course existence of isoperimetric regions does not hold in general, but if an isoperimetric region Ω does exist, then the following classical regularity theorem holds (for the proof see [44]): the boundary ∂Ω splits as the disjoint union of a regular part ∂Ω_r and a singular part ∂Ω_s, and 1. for every p ∈ ∂Ω_r there exists a neighborhood U_p ⊂ M such that ∂Ω ∩ U_p is a smooth hypersurface of constant mean curvature; moreover the Hausdorff dimension of ∂Ω_s is less than or equal to n − 8, so in particular, if n < 8 then ∂Ω_s = ∅; 2. ∂Ω is orientable and ∂Ω_r is equipped with a smooth outward pointing unit normal vector field ν. This result was first obtained in the Euclidean setting by Gonzalez, Massari and Tamanini [31], who treated interior regularity, and by Grüter [34], who studied regularity near boundary points.
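In addition to the regularity theory just quoted, two standard facts about sets of finite perimeter are used repeatedly below (see e.g. [2]): lower semicontinuity of the perimeter,
$$\chi_{\Omega_k} \to \chi_\Omega \ \text{in}\ L^1_{loc}(M) \;\Longrightarrow\; P(\Omega) \le \liminf_{k \to \infty} P(\Omega_k),$$
and local compactness: any sequence with $\sup_k \big( V(\Omega_k) + P(\Omega_k) \big) < \infty$ admits a subsequence whose characteristic functions converge in $L^1_{loc}(M)$. On a non-compact manifold this compactness does not prevent volume from escaping to infinity, which is precisely the phenomenon quantified by Theorem 4.1 below.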
Morgan [45] generalized their results to the setting of Riemannian manifolds by using the paper of Almgren [1], which is Proposition 2.1. Remark 2.2. In case the manifold M n and the metric g are not smooth but regular enough there are still good regularity properties of isoperimetric regions. Indeed the standard interior Allard-type C 1,α regularity of (almost) minimizing boundaries away from a set of Hausdorff dimension at most 8 holds. This was shown by J. Taylor in [58] (this part of the discussion in her paper applies to n-dimensional manifolds). When the manifold is C 2 and the metric is Lipschitz, then this follows also from the work of R. Schoen and L. Simon [56] (for almost minimizing currents this was pointed out by B. White in [59] pag. 498). When the manifold is C 4 and the metric C 3 so that the Nash embedding Theorem provides an isometric embedding of (M, g) into a high dimensional Euclidean space, then this also follows directly from upon applying the Euclidean regularity theory as in [57]. Now, in order to state the generalized existence theorem of the second author (a tool used throughout the paper), we recall the basics of the theory of C m,α -pointed convergence of manifolds (for more details see [52]). ,α -manifold with the C m,α -metric g and let p ∈ M . A sequence of pointed smooth complete Riemannian n-manifolds is said to converge in the pointed C m,α -topology to the manifold (M, g, p), and we write ( Remark 2.4. Whitney proved (see for instance Theorem 2.9 in [37]) that if α is a C r differentiable structure on a topological manifold M , r ≥ 1, then for every r < s ≤ ∞ there exists a compatible C s differentiable structure β ⊂ α, and β is unique up to C s diffeomorphism. Therefore the assumption that M is a C m+1,α -manifold is somehow unnecessary, but we will keep it for coherence with the literature. Now let us recall the notions of bounded geometry and C m,α -bounded geometry. Definition 2.5. A complete Riemannian n-manifold (M, g) has bounded geometry if the following holds: 1. There exists a constant k ∈ R such that Ric g ≥ k(n − 1)g, The volume of unit balls is uniformly bounded below Remark 2.6. Notice that if (M, g) has positive injectivity radius, Inj M > 0, then the second condition above is satisfied. Indeed Croke proved (see Proposition 14 in [21] and the discussion at page 2 in [22]; see also [8] ) that there exists a constant C n (depending only on n = dimM ) such that if r ≤ InjM 2 then V ol(B(p, r)) ≥ C n r n for every p ∈ M . Definition 2.7. A complete Riemannian n-manifold (M, g) has C m,α -bounded geometry if it has bounded geometry and moreover the following holds: For every diverging sequence of points (p j ) j∈N there exists a subsequence (p j l ) l∈N and a pointed C m+1,α -manifold Now we recall the generalized existence theorem of the second author (Theorems 1 and 2 in [50]). Theorem 2.8. Let (M, g) be a Riemannian n-manifold with C 1,α -bounded geometry in the sense of Definition 2.7. Then for every volume v ∈]0, V (M )[ there are a finite number of limit manifolds at infinity (precisely the manifolds at infinity are C 2,α with C 1,α metric) such that their disjoint union with M contains an isoperimetric region of volume v and perimeter I M (v). More precisely for every volume The assumption about C 1,α bounded geometry was used in the proof of the previous theorem in [50] to ensure that the manifolds at infinity are at least C 2,α with C 1,α metric. 
Actually if one assumes a priori that the pointed C^0-limits are smooth Riemannian manifolds, then the same generalized existence theorem holds. This is because the C^0-convergence of the metric tensors ensures the convergence of the volume and of the perimeter (this is clear on smooth sets, so by approximation it holds on all finite perimeter sets). Recall also the following useful result, Theorem 2.10, for which we refer to Theorem 3 in [50]. Now we recall the notion of pseudo-bubble, which will be useful to study the existence of isoperimetric regions of small volume. Call U_pM the fiber over p of the unit tangent bundle (also called the sphere bundle) of the Riemannian manifold (M, g).

Definition 2.11. A pseudo-bubble is a hypersurface ΨB embedded in M such that there exist a point p ∈ M and a function w belonging to C^{2,α}(U_pM ≃ S^{n−1}, R) such that ΨB is the graph of w in normal polar coordinates centered at p, i.e. $\Psi B = \{ \exp_p(w(\theta)\,\theta) : \theta \in U_pM \}$.

Recall also the notion of Riemannian center of mass: to a compact hypersurface Σ ⊂ M one associates the energy $E_\Sigma(x) = \frac{1}{2}\int_\Sigma d(x, y)^2 \, d\mathcal{H}^{n-1}(y)$, and a center of mass of Σ is a minimum point of E_Σ (Definition 2.12). Notice that, since Σ is compact, by the Dominated Convergence Theorem the function E_Σ is continuous and coercive, hence the existence of a minimum is guaranteed. Notice also that although uniqueness of this minimum point does not hold in general, it does in the cases we are interested in, namely pseudo-bubbles of small diameter.

3 Some general properties of the isoperimetric profile valid for (possibly non-compact) manifolds of bounded geometry

Some classical properties of the isoperimetric profile for compact manifolds are also valid for non-compact manifolds (sometimes assuming bounded geometry); this section is devoted to proving some of them.

Proposition 3.1. If (M, g) has bounded geometry, then the isoperimetric profile I_M is continuous. Proof. See Corollary 1 in [50].

The following theorem is stated and proved in [46] (Theorem 3.4-3.5) in the case of a compact ambient manifold but, as was pointed out to the authors by C. Rosales, the same proof holds for manifolds which are merely complete.

Proposition 3.2. Let (M, g) be a smooth, complete, connected n-dimensional Riemannian manifold and assume the following lower bound on the Ricci curvature: Ric_g ≥ (n − 1)k_0 g for some k_0 ∈ R.

Now, using geometric differential inequalities, we are going to prove two useful properties of the isoperimetric profile of a manifold with C^{2,α}-bounded geometry. First recall that given a function f : (0, ∞) → R, a second order differential inequality for f can be understood in the weak (comparison) sense, i.e., by testing against smooth functions touching f from above. The following theorem for compact manifolds is due to Bayle (see [7] Theorem 2.2.1).

Theorem 3.3. Let (M^n, g) be a complete n-dimensional Riemannian manifold of C^{2,α}-bounded geometry with n ≥ 2. Let us assume that Ric_g ≥ (n − 1)k_0 g. Then the normalized isoperimetric profile $Y_{(M,g)} := I_M^{\frac{n}{n-1}}$ satisfies the following second order differential inequality (in the weak sense recalled above; cf. [7], Theorem 2.2.1):
$$Y_{(M,g)}'' \;\le\; -\, n\, k_0 \, Y_{(M,g)}^{\frac{2-n}{n}}. \qquad (3)$$

Corollary 3.4. If moreover k_0 = 0, i.e. Ric_g ≥ 0, then I_M is strictly concave, hence strictly subadditive, and every isoperimetric region in M is indecomposable.

Proof. Since in this case k_0 = 0, by the differential inequality (3) we get Y″_M ≤ 0, so the function Y_M is concave (see [7] Proposition B.2.1, p. 181). Now observe that $I_M = Y_{(M,g)}^{\frac{n-1}{n}}$; since the exponent is (n − 1)/n < 1, it follows that I_M is strictly concave. Of course a continuous strictly concave function on ]0, ∞[ which is null at 0 is strictly subadditive (for the simple proof see for example [7], Lemma B.1.4). Now let Ω_v be an isoperimetric region of volume v > 0. If by contradiction Ω_v = Ω_1 ∪ Ω_2 is a decomposition of Ω_v, say 0 < v_1 = V(Ω_1) and 0 < v_2 = V(Ω_2), then by the strict subadditivity of the isoperimetric profile we reach the contradiction
$$I_M(v) = P(\Omega_v) = P(\Omega_1) + P(\Omega_2) \ge I_M(v_1) + I_M(v_2) > I_M(v_1 + v_2) = I_M(v).$$

4 Existence and properties of isoperimetric regions

4.1 Proof of Theorem 1.1

Recall that the isoperimetric regions in the n-dimensional simply connected space form M^n_{k_0} of constant sectional curvature k_0 are metric balls (no matter where the center is).
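Combined with the strict subadditivity of the model profiles for k_0 ≤ 0, this recall already explains why at most one component at infinity can occur: merging two balls at infinity into a single ball of the total volume strictly decreases perimeter. In the hyperbolic plane, for instance, the profile is explicit (an elementary computation from P = 2π sinh r and A = 2π(cosh r − 1)):
$$I_{\mathbf{H}^2}(a) = \sqrt{a^2 + 4\pi a},$$
which is strictly concave and null at 0, hence strictly subadditive; the analogous statement holds for every $M^n_{k_0}$ with $k_0 \le 0$.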
Therefore it is clear that, applying the generalized existence Theorem 2.8 (recall also Remark 2.9) to a manifold (M^n, g) satisfying the assumptions of Theorem 1.1, there is at most one component of the generalized isoperimetric region D placed in the manifold M^n_{k_0} at infinity. More precisely, fixing a positive volume v > 0 and considering the generalized isoperimetric region D = D_1 ∪ D_∞ for the volume v given by Theorem 2.8, with D_1 ⊂ M and D_∞ ⊂ M^n_{k_0}, we have that D_∞ ⊂ M^n_{k_0} and D_1 ⊂ M are isoperimetric regions for their own volume (this is ensured by Theorem 2.10); in particular,

P(D_∞) = P_{M^n_{k_0}}(B_{M^n_{k_0}}(v_∞)),

where B_{M^n_{k_0}}(v_∞) is a metric ball in M^n_{k_0} of volume v_∞ := V_{M^n_{k_0}}(D_∞). If D_∞ = ∅ the conclusion follows, so we can assume that D_∞ ≠ ∅ and v_∞ > 0. Let us consider a metric ball B_M(v_∞) ⊂ M of volume v_∞ placed at positive distance from D_1 (this is possible thanks to (6) and the assumed asymptotic behaviour of (M, g)). By the comparison for the perimeter of geodesic balls established in the proof of Proposition 3.2 (formula (1) there), we have that

P_M(B_M(v_∞)) ≤ P_{M^n_{k_0}}(B_{M^n_{k_0}}(v_∞)) = P(D_∞).

Therefore, if we move all the volume v_∞ which stays in the manifold at infinity M^n_{k_0} into any metric ball contained in the original manifold M, we do not increase the perimeter:

I_M(v) = P(D_1) + P(D_∞) ≥ P(D_1) + P_M(B_M(v_∞)) = P_M(D_1 ∪ B_M(v_∞)),

where we used the perimeter decomposition of Theorem 2.8 for the first equality and the fact that D_1 and B_M(v_∞) are at positive distance for the final equality. Since V(D_1 ∪ B_M(v_∞)) = v, it follows that D_1 ∪ B_M(v_∞) is an isoperimetric region in M for the volume v. Since v > 0 was arbitrary, the theorem is proved. The indecomposability of the isoperimetric regions in case Ric_g ≥ 0 is ensured by Corollary 3.4. ✷

4.2 The case Ric_g ≥ 0

Since the asymptotically locally Euclidean manifolds with non-negative Ricci tensor are particularly interesting for the applications (see Section 5), in this subsection we give an alternative proof of Theorem 1.1 in this case. Moreover, if Ric_g ≥ 0, it is possible to do a finer analysis of the minimizing sequences: roughly speaking, either they converge to an isoperimetric region or they diverge, but they cannot split into a converging part and a diverging part.

An alternative proof of Theorem 1.1 in the case Ric_g ≥ 0

Let (M^n, g) satisfy the hypotheses of Theorem 1.1 with k_0 = 0, so that Ric_g ≥ 0 and the manifold is C^0-locally asymptotic to R^n. For a fixed v > 0, we want to show that there exists an isoperimetric region in M of volume v. Theorem 2.8 (recall also Remark 2.9) ensures the existence of a generalized isoperimetric region D = D_1 ∪ D_∞, where D_1 ⊂ M (resp. D_∞ ⊂ R^n) is an isoperimetric region in M (resp. in R^n) for its own volume v_1 (resp. v_∞). The structure of the proof is the following: first we show that D is connected, so either D = D_1 or D = D_∞; then we prove that it must be D = D_1. Let us start assuming dim(M) = n < 8, since in this case the proof is very short (later we will explain how to handle the general case). As D is an isoperimetric domain in M ∪ R^n, its boundary is a smooth stable CMC hypersurface of finite area. If by contradiction D_1 ≠ ∅ and D_∞ ≠ ∅, then 0 < P_M(D_1), P_{R^n}(D_∞) < ∞ and there exist c_1, c_∞ ∈ R\{0} such that

(8) c_1 P_M(D_1) = c_∞ P_{R^n}(D_∞).

Denote by ν_1 and ν_∞ the outward pointing unit normal vectors to ∂D_1 and ∂D_∞, and consider the variation of D composed by varying D_1 in the direction c_1 ν_1 and varying D_∞ in the direction −c_∞ ν_∞. Observe that (8) implies that this is an admissible variation (it has null mean value, so it is volume preserving to first order).
Since the first variation of the perimeter P of D with respect to null mean value deformations is null (recall that ∂D is a union of smooth hypersurfaces of constant mean curvature), it is natural to compute the second variation of P in the specified direction. The standard expression of the second variation of the area (see for example [5], Proposition 2.5) gives

δ²P(D) = −c_1² ∫_{∂D_1} (Ric_g(ν_1, ν_1) + σ_1²) − c_∞² ∫_{∂D_∞} (Ric_{R^n}(ν_∞, ν_∞) + σ_∞²),

where σ_1 (resp. σ_∞) is the norm of the second fundamental form of ∂D_1 (resp. ∂D_∞). Now observe that ∂D_∞ is an (n − 1)-dimensional Euclidean sphere of radius r_∞, so Ric_{R^n} ≡ 0 and

∫_{∂D_∞} σ_∞² = ((n − 1)/r_∞²) ω_{n−1} r_∞^{n−1} = (n − 1) ω_{n−1} r_∞^{n−3} > 0,

where ω_{n−1} is the perimeter of the unit sphere in R^n. Since we are assuming that Ric_g ≥ 0, we can conclude that

δ²P(D) ≤ −c_∞² (n − 1) ω_{n−1} r_∞^{n−3} < 0,

which contradicts the stability of ∂D.

An alternative proof of STEP 1: Since the enlarged manifold M ∪ R^n has non-negative Ricci curvature and C^0-bounded geometry, by Corollary 3.4 (notice that we asked C^{2,α}-bounded geometry just to ensure that the manifolds at infinity were smooth enough to carry the regularity of isoperimetric regions and to ensure that the lower bound on the Ricci tensor is preserved in the limit; both facts are clearly true if the limit manifolds are isometric to the Euclidean n-dimensional space) the isoperimetric regions are indecomposable, so either D = D_1 or D = D_∞. In the second case, a chain of comparisons — where we used (11) in the last equality — shows that a metric ball of volume v centered at p_0 is an isoperimetric region in volume v for every p_0 ∈ M, and the theorem follows by the arbitrariness of v > 0. We remark that in the latter case M is locally isometric to R^n.

Theorem 4.1. Let (M, g) be a complete Riemannian n-manifold of bounded geometry. Then, for every volume v > 0 and every minimizing sequence {Ω_k}_{k∈N} of finite perimeter sets of volume v, there exist a "converging part" {Ω^c_k}_{k∈N} and a "diverging part" {Ω^d_k}_{k∈N}, with Ω_k = Ω^c_k ∪ Ω^d_k and Ω^c_k ∩ Ω^d_k = ∅, such that the following hold:

1. V(Ω^c_k) + V(Ω^d_k) = v for every k ∈ N;
2. lim_{k→∞} [P(Ω^c_k) + P(Ω^d_k)] = lim_{k→∞} P(Ω_k) = I_M(v);
3. the sets Ω^d_k diverge, i.e., for every fixed p ∈ M one has d(p, Ω^d_k) → ∞ as k → ∞;
4. there exists a finite perimeter set Ω ⊂ M such that, passing to a subsequence {k_j}_{j∈N}, {Ω^c_{k_j}}_{j∈N} converges to Ω in the sense of finite perimeter sets; in particular, lim_{j→∞} P(Ω^c_{k_j}) = P(Ω) and lim_{j→∞} V(Ω^c_{k_j}) = V(Ω);
5. Ω is an isoperimetric region (possibly empty) for the volume it encloses.

The aim of the present section is to prove the following theorem, which says that if Ric_g ≥ 0 then for every v > 0 any minimizing sequence {Ω_k}_{k∈N} for the volume v cannot split into a convergent part {Ω^c_k}_{k∈N} and a divergent part {Ω^d_k}_{k∈N} such that lim inf_k V(Ω^c_k) > 0 and lim inf_k V(Ω^d_k) > 0; in other words, any minimizing sequence for the volume v > 0 either converges to an isoperimetric region of volume v or diverges, up to a part whose volume converges to zero and up to subsequences.

Theorem 4.2. Let (M^n, g) be as above with Ric_g ≥ 0, and let {Ω_k}_{k∈N} be a minimizing sequence for the volume v > 0. Then there exist a subsequence {k_j}_{j∈N} and a decomposition Ω_{k_j} = Ω^1_{k_j} ∪ Ω^2_{k_j} with lim_{j→∞} V(Ω^2_{k_j}) = 0. Moreover, either {Ω^1_{k_j}}_{j∈N} diverges or there exists an isoperimetric region Ω ⊂ M for the volume v such that {Ω^1_{k_j}}_{j∈N} converges to Ω in the sense of finite perimeter.

Proof. Applying Theorem 4.1 to the minimizing sequence {Ω_k}_{k∈N} we obtain the sequences of sets {Ω^c_k}_{k∈N} and {Ω^d_k}_{k∈N} given there; set v_1 := lim_{j→∞} V(Ω^c_{k_j}) and v_∞ := lim_{j→∞} V(Ω^d_{k_j}). The conclusion follows if we prove that either v_1 = v and v_∞ = 0, or v_1 = 0 and v_∞ = v. Assume by contradiction that both v_1 and v_∞ are strictly positive. Combining items 2, 4 and 5 of Theorem 4.1, we infer

I_M(v) = lim_{j→∞} [P(Ω^c_{k_j}) + P(Ω^d_{k_j})] = I_M(v_1) + lim_{j→∞} P(Ω^d_{k_j}).

Using the trivial inequality P(Ω^d_k) ≥ I_M(V(Ω^d_k)) we can continue the chain above, obtaining

I_M(v) ≥ I_M(v_1) + lim inf_{j→∞} I_M(V(Ω^d_{k_j})) = I_M(v_1) + I_M(v_∞),

where, in the last equality, we used that lim_{j→∞} V(Ω^d_{k_j}) = v_∞ together with the continuity of the isoperimetric profile ensured by Proposition 3.1. Since Ric_g ≥ 0, the strict subadditivity of I_M given by Corollary 3.4 yields I_M(v_1) + I_M(v_∞) > I_M(v_1 + v_∞) = I_M(v), a contradiction. ✷

Existence of isoperimetric regions of small volume under assumptions on the scalar curvature

In this section we prove Theorem 1.2, the existence of isoperimetric regions of small volumes in non-compact manifolds of any dimension under assumptions on the scalar curvature alone.
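The Euclidean-sphere computation used in the stability argument above can be spelled out explicitly (our worked version):

```latex
% Principal curvatures of a round sphere of radius r_infty in R^n are all
% equal to 1/r_infty, so the squared norm of the second fundamental form is
\[
\sigma_\infty^2 = \frac{n-1}{r_\infty^2},
\qquad
\int_{\partial D_\infty} \sigma_\infty^2
= \frac{n-1}{r_\infty^{2}}\;\omega_{n-1}\, r_\infty^{\,n-1}
= (n-1)\,\omega_{n-1}\, r_\infty^{\,n-3} \;>\; 0 ,
\]
% and since Ric_g >= 0 makes the contribution of the component in M
% non-positive, the second variation is strictly negative:
\[
\delta^2 P(D) \;\le\; -\,c_\infty^2\,(n-1)\,\omega_{n-1}\, r_\infty^{\,n-3} \;<\; 0 .
\]
```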
PROOF OF THEOREM 1.2: From Lemma 3.6 in [49], there exists a small v_0 > 0 such that for any 0 < v < v_0, the isoperimetric profile I_M(v) is achieved in the enlarged manifold M ∪ M_∞, where M_∞ is given by a compactness argument in the theory of pointed convergence of manifolds (see [49]; note that we have changed the notation a bit from that in the cited paper, where M_∞ may coincide with M, while here M denotes the original manifold and M_∞ denotes the manifold we are attaching at infinity in case a minimizing sequence is diverging). From Lemma 3.7, the minimizer is a pseudo-bubble (for the precise notion see Definition 2.11) ΨB_v contained either in M or in M_∞. We now show that ΨB_v must be contained in M, from which the theorem follows. Suppose for contradiction that ΨB_v ⊂ M_∞. Then the expansion of the isoperimetric profile I_{M_∞} for small volume (see formula (2) in Theorem 2 in [49]) is

(12) I_{M_∞}(v) = c_n v^{(n−1)/n} ( 1 − (S_∞ / (2n(n+2))) (v/ω_n)^{2/n} + o(v^{2/n}) ),

where c_n is the Euclidean isoperimetric constant, S_∞ := sup_{M_∞} Scal_{g_∞}, and ω_n is the volume of the n-dimensional ball of radius 1. Notice that since (M, g) has C^{2,α}-bounded geometry, the asymptotic bounds on the curvature of M are transferred to the C^{2,α}-limit manifold M_∞, so under our assumptions we have that S_∞ ≤ S. On the other hand, taking a point p̄ ∈ M where Scal_g(p̄) > S, the same computations show that on small geodesic balls B_{p̄,v} of volume v centered at p̄ we have

(13) P(B_{p̄,v}) = c_n v^{(n−1)/n} ( 1 − (Scal_g(p̄) / (2n(n+2))) (v/ω_n)^{2/n} + o(v^{2/n}) ).

Since Scal_g(p̄) > S ≥ S_∞, the combination of (12) and (13) gives, for v small enough, the contradiction

I_M(v) = I_{M_∞}(v) > P(B_{p̄,v}) ≥ I_M(v).

Finally, from Theorem 1 in [49], the isoperimetric regions of small fixed volume v are pseudo-bubbles with center of mass p̄_v converging in Hausdorff distance to the set of points of global maximum of the scalar curvature Scal_g as v → 0. ✷

ALE gravitational instantons

Gravitational instantons are complete, non-compact, Ricci-flat Riemannian 4-manifolds; the Asymptotically Locally Euclidean (ALE) ones are asymptotic at infinity to a quotient of R^4 by a finite group of isometries, with metric coefficients g_ij = δ_ij + O(r^{−4}) with appropriate decay in the derivatives of g_ij (in particular, these metrics are C^0 locally asymptotic, in the sense of Definition 2.3, to the Euclidean 4-dimensional space). The first example of such manifolds was discovered by Eguchi and Hanson in [24]; the authors, inspired by the discovery of self-dual instantons in Yang-Mills theory, found a self-dual ALE instanton metric. The Eguchi-Hanson example was then generalized by Gibbons and Hawking [30], who constructed for each integer k ≥ 2 a family of ALE 4-dimensional gravitational instantons depending on 3k − 6 parameters, which have self-dual curvature and are asymptotic to a quotient of R^4 by a cyclic group of order k; these "multi-Eguchi-Hanson" metrics constitute the building blocks of Euclidean quantum gravity theory (see [35], [36]) and were obtained also by Hitchin [38], who derived them through an application of Penrose's non-linear graviton construction. The ALE gravitational instantons were classified in 1989 by Kronheimer (see [40], [41]). For the reader's convenience, in order to give at least one explicit example, we briefly describe the Eguchi-Hanson metric following [25]. Let ds² = dt² + dx² + dy² + dz² be the Euclidean metric in R^4 and observe that the flat metric can be written in polar coordinates as

(14) ds² = dr² + r² (σ_x² + σ_y² + σ_z²),

where r² = t² + x² + y² + z² and σ_x, σ_y, σ_z are the standard left-invariant one-forms on the unit sphere S³ ≅ SU(2). Then the Eguchi-Hanson metric can be written, in the same normalization, as

(15) ds²_EH = (1 − (a/r)⁴)^{−1} dr² + r² (1 − (a/r)⁴) σ_z² + r² (σ_x² + σ_y²),

where a is a real constant. The metric is singular at r = a in R^4, but this singularity disappears if one identifies (t, x, y, z) ∼ (−t, −x, −y, −z), after which we obtain a smooth, geodesically complete, Ricci-flat metric on R^4/∼.
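A back-of-the-envelope check (ours, not the paper's) that (15) exhibits the ALE decay quoted at the beginning of this subsection: for r ≫ a,

```latex
% Expanding the coefficients of the Eguchi-Hanson metric for large r:
\[
\Big(1-\big(\tfrac{a}{r}\big)^4\Big)^{\pm 1}
= 1 \mp \big(\tfrac{a}{r}\big)^4 + O\big((a/r)^8\big),
\]
% so each coefficient of (15) differs from the corresponding coefficient of
% the flat metric (14) by terms of order (a/r)^4, i.e.
\[
g_{EH} = g_{\text{flat}} + O\big(r^{-4}\big) \qquad (r \to \infty),
\]
% which in particular gives the C^0 local asymptotic flatness, in the sense
% of Definition 2.3, required in order to apply Theorem 1.1.
```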
The global topology of the manifold is the following: near r = a the manifold has the topology of R² × S² (more precisely, at every point of S² there is an R² attached which shrinks to a point as r → a), while for large r the metric approaches the flat metric. Notice that because of the identification ∼, the boundary at infinity is not S³ but RP³ ≅ SO(3), of which S³ ≅ SU(2) is the double cover. So, as remarked before, the manifold is locally asymptotically Euclidean, but the global topology at infinity differs from that of R^4. For completeness let us also recall that the entire manifold just described can be seen as the cotangent bundle of the complex projective line CP¹ ≅ S².

Open Problem 1: By direct application of Theorem 1.1 to the Eguchi-Hanson space, we get existence of isoperimetric regions for every value of the volume. It is an interesting open problem to characterize such regions. Since the metric is radially symmetric, we expect that (at least for large volumes) the isoperimetric regions are the domains bounded by the 3-dimensional real projective spaces {r = const}.

Open Problem 2: Clearly Theorem 1.1 can be applied as well to the other, more general ALE gravitational instantons mentioned above; the description of the isoperimetric regions is again an interesting open problem. We expect that for large volumes they are normal graphs over quotients of large centered spheres.

We remark that the existence and description of isoperimetric regions is an important issue in general relativity. To name a few examples: D. Christodoulou and S.-T. Yau proved in [20] that the Hawking mass of isoperimetric spheres is non-negative (provided the scalar curvature of the ambient manifold is non-negative); H. Bray in [10] gave a proof of a special case of the Riemannian Penrose inequality using isoperimetric techniques; G. Huisken in [39] proposed a definition of mass using just isoperimetric concepts; H. Bray and F. Morgan in [11] characterized isoperimetric regions in certain spherically symmetric manifolds, in particular in Schwarzschild; M. Eichmair and J. Metzger in [26], [27] and [28] described the isoperimetric regions of large volume in initial data sets for the Einstein equations; J. Corvino, A. Gerek, M. Greenberg and B. Krummel also studied isoperimetric surfaces in general relativity.

Asymptotically Hyperbolic Einstein manifolds

In this subsection we discuss the importance and existence of Einstein manifolds which are locally C^0-asymptotic to a negatively curved space form (and hence satisfy the assumption of Theorem 1.1). Let M be the interior of a compact n-dimensional manifold M̄ with non-empty boundary ∂M; a complete metric g on M is C^{m,α} conformally compact if there is a defining function ρ on M̄ such that the conformally equivalent metric g̃ = ρ²g extends to a C^{m,α} metric on the compactification M̄. A defining function ρ is a smooth, non-negative function on M̄ with ρ^{−1}(0) = ∂M and dρ ≠ 0 on ∂M. The induced metric γ = g̃|_{∂M} is called the boundary metric associated to the compactification g̃. There are many possible defining functions, and hence many compactifications of a metric g, so only the conformal class [γ] of γ on ∂M is uniquely determined by (M, g). If the metric g is C² conformally compact and Einstein, normalized so that Ric_g = −(n − 1)g, then it is asymptotically hyperbolic in the sense that |K_g + 1| = O(ρ²), where K_g is the sectional curvature of g (see for example the Appendix in [4]).
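The model case of this definition (standard, recorded here by us for concreteness) is hyperbolic space itself, viewed as a conformally compact Einstein manifold:

```latex
% Hyperbolic space on the ball model, with an explicit defining function.
\[
M = B^n, \qquad
g_{\mathbb{H}^n} = \frac{4}{(1-|x|^2)^2}\,\delta, \qquad
\rho(x) = \frac{1-|x|^2}{2},
\]
% Then rho >= 0, rho^{-1}(0) = S^{n-1} = \partial M, d\rho \neq 0 there, and
\[
\tilde g = \rho^2\, g_{\mathbb{H}^n} = \delta
\ \ \text{extends smoothly to } \overline{B^n},
\qquad
\gamma = \tilde g\big|_{S^{n-1}} = g_{S^{n-1}} ,
\]
% with Ric = -(n-1) g, i.e. exactly the Einstein normalization above; the
% boundary metric is the round one, and its conformal class is the conformal
% infinity of hyperbolic space.
```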
The relationship with the hyperbolic space can be made even more explicit by constructing special coordinate charts near the boundary (see for example Chapter 3 in [42]). In recent years, interest in asymptotically hyperbolic Einstein metrics has risen dramatically, thanks also to their physical relevance. Indeed, the previously described notion of conformal infinity for a (pseudo-)Riemannian manifold was introduced by Penrose [51] in order to analyze the behaviour of gravitational energy in asymptotically flat spacetimes. More recently, asymptotically hyperbolic Einstein metrics have begun to play a central role in the "AdS/CFT correspondence" of quantum field theory: broadly speaking, the correspondence states the existence of a duality between gravitational theories (such as string theory or M-theory) on M and conformal field theories on the boundary at conformal infinity ∂M (see for example [3]).

Regarding the existence of such metrics, Graham and Lee [32] have proved that any metric γ near the standard metric γ_0 on S^{n−1}, in a sufficiently smooth topology, may be filled in with an asymptotically hyperbolic Einstein metric g on the n-ball B^n having prescribed boundary metric γ, and moreover such metrics have a conformal compactification with a certain degree of smoothness. More precisely, they prove that for any m ≥ 2 there is an open neighborhood U_{γ_0} of γ_0 in the space of C^{m,α} metrics on S^{n−1} such that any metric γ ∈ U_{γ_0} is the boundary metric of an asymptotically hyperbolic Einstein metric g on the n-ball B^n, i.e., γ = g̃|_{∂M}. Furthermore, the metric g is C^{n−2,α}-conformally compact for n > 4 and C^{1,α} for n = 4. Biquard [9] and Lee [42] independently extended this result to boundary metrics in an open C^{m,α}-neighborhood of the boundary metric γ_0 of an arbitrary non-degenerate asymptotically hyperbolic Einstein manifold (M, g). Anderson [4] gave other existence results using degree arguments, under the assumption that the boundary metric γ has positive scalar curvature.

Bryant soliton and its generalizations

Another class of Riemannian manifolds satisfying the assumptions of Theorem 1.1 is given by Ricci solitons of Bryant type. These metrics have non-negative Ricci curvature and are locally C^0-asymptotically Euclidean. R. Bryant in [15] proved that it is possible to find a function φ : R_+ → R such that the warped product metric g = dr² + φ(r)² g_{S^{n−1}} on R^n, where g_{S^{n−1}} is the standard metric on S^{n−1} and r = √((x¹)² + ... + (xⁿ)²) is the radial coordinate, is a complete metric with positive curvature operator (hence positive Ricci curvature), whose sectional curvatures decay at least inverse linearly in r (Bryant's proof is in dimension three, but analogous arguments give the general case; see for example Section 4.6 in [19]). This metric plays a crucial role in the analysis of the Ricci flow, being the only example in dimension three of a non-flat, κ-non-collapsed steady gradient Ricci soliton (see [12] and [13] for higher dimensions). Other soliton examples fitting our assumptions are given by Catino-Mazzieri in [16].
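To make the quoted curvature properties of such warped-product metrics concrete, we record the standard formulas (ours to state here, see any Riemannian geometry textbook):

```latex
% Sectional curvatures of g = dr^2 + phi(r)^2 g_{S^{n-1}}, with e_i, e_j
% tangent to the spheres {r = const}:
\[
K\big(\partial_r, e_i\big) = -\frac{\varphi''}{\varphi},
\qquad
K\big(e_i, e_j\big) = \frac{1-(\varphi')^2}{\varphi^2}.
\]
% Positive curvature thus corresponds to phi'' < 0 and |phi'| < 1, and the
% inverse-linear curvature decay quoted above is a statement about the decay
% of these two quantities along the Bryant profile phi.
```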
2012-10-01T20:34:46.000Z
2012-10-01T00:00:00.000
{ "year": 2012, "sha1": "f3819960642ce6e1e584bd4e96cd0a84fa9725fc", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1210.0567", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "756ff41ec71073a602540e41d678437c8780420e", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
254975817
pes2o/s2orc
v3-fos-license
Chromatin accessibility is a two-tier process regulated by transcription factor pioneering and enhancer activation

Chromatin accessibility is integral to the process by which transcription factors (TFs) read out cis-regulatory DNA sequences, but it is difficult to differentiate between TFs that drive accessibility and those that do not. Deep learning models that learn complex sequence rules provide an unprecedented opportunity to dissect this problem. Using zygotic genome activation in the Drosophila embryo as a model, we generated high-resolution TF binding and chromatin accessibility data, analyzed the data with interpretable deep learning, and performed genetic experiments for validation. We uncover a clear hierarchical relationship between the pioneer TF Zelda and the TFs involved in axis patterning. Zelda consistently pioneers chromatin accessibility proportional to motif affinity, while patterning TFs augment chromatin accessibility in sequence contexts in which they mediate enhancer activation. We conclude that chromatin accessibility occurs in two phases: one through pioneering, which makes enhancers accessible but not necessarily active, and a second when the correct combination of transcription factors leads to enhancer activation.

Introduction

Cellular transitions during embryonic development are driven by cis-regulatory DNA sequences, or enhancers, that instruct genes to become expressed at the right time and place. Each enhancer contains a distinct combination and arrangement of sequence recognition motifs for transcription factors (TFs) such that only a specific combination of TFs, present at the right time and place in development, can stimulate activation 1,2 . How exactly combinations of TFs read out the cis-regulatory code to foment enhancer activation is a fundamental question in biology. An important layer of the cis-regulatory code is chromatin accessibility 3 . Chromatin accessibility both informs and is impacted by the binding of TFs and thus is an integral part of the process by which enhancers become activated. Before activation, developmental enhancers are maintained in a state of intrinsically high nucleosome occupancy such that they are inaccessible to most TFs 4-8 . The first step towards activation is to make the enhancer accessible, which is accomplished by the so-called "pioneer" TFs. Pioneer TFs are typically expressed early during cellular transitions and can bind their motifs within nucleosomal DNA [9][10][11] . Once the chromatin is accessible, additional TFs may bind to and activate enhancers, leading to the expression of target genes. However, TFs frequently cooperate in modulating chromatin accessibility [12][13][14][15] , making it hard to differentiate between pioneer TFs and non-pioneer TFs, and raising the possibility that any TF may function as a pioneer TF [16][17][18] . Distinguishing between motifs of TFs that actively drive chromatin accessibility and those of TFs that follow it more passively is computationally challenging. A motif may be statistically overrepresented in accessible regions, but whether it facilitates chromatin accessibility or is present in these regions and subsequently contributes to enhancer activation once the region is already accessible is not clear. Identifying pioneer TFs experimentally is also challenging. In in vitro experiments, pioneer TFs have an affinity for nucleosomes and tend to be structurally capable of binding their motif on nucleosomal DNA [19][20][21][22] .
Thus, pioneers may read out nucleosomal DNA sequences differently than when binding to naked DNA 19,22,23 , but the general rules of these interactions are unknown. To distinguish pioneer TFs from non-pioneer TFs, one possibility is to model chromatin accessibility data in a high-resolution and quantitative fashion, while taking motif combinations and arrangements into account 18 . This approach is even more powerful when combined with interpretable convolutional neural networks (CNNs), which can learn complex DNA sequence rules embedded in the cis-regulatory code de novo 24 . In this learning paradigm, the CNN learns to predict the experimental data directly from genomic sequence, which allows it to learn motifs in their combinatorial context. These rules are general since the performance is evaluated based on a withheld subset of the data that the model does not train on. If the model can accurately predict these test data, the learned sequence rules are extracted from the model using interpretation tools 25 . This approach has been successfully used to predict ATAC-seq chromatin accessibility data [26][27][28][29][30] , revealing TF motifs predicted to contribute to chromatin accessibility in different experimental systems. However, since not all TFs and their binding motifs are known under these conditions, it is very difficult to evaluate whether the discovered motifs belong to known TFs with characterized properties 31 . Likewise, the models can predict synergistic effects between TF motifs 28,29 , but the exact rules and the underlying mechanisms are not known. This makes it very challenging to connect the rules extracted from deep learning models with known TF biology. To better leverage this approach, we set out to learn both TF binding data and chromatin accessibility data in the early Drosophila embryo, a well-studied model system with a wide range of data from classical genetics, biochemistry, and modern imaging experiments. Studying early embryogenesis has the added advantage that chromatin accessibility is established de novo as the zygotic genome is activated and the first gene expression programs are established along the anteroposterior and dorsoventral axes [32][33][34] . The TFs and enhancers involved in this process have been thoroughly characterized by molecular genetics 35 , making it an ideal system to test and validate the learned rules of a CNN model. The major driver of the Drosophila zygotic genome activation is the maternally-provided zinc-finger TF Zelda, which begins to bind one hour into development, during the embryo's eighth nuclear cycle 36,37 . From then on, Zelda binds the majority of its motifs genome-wide, which are highly enriched among developmental enhancers 36,38,39 . At these regions, Zelda binding is required for nucleosome depletion and increased chromatin accessibility 6,40,41 . This in turn facilitates the binding of patterning TFs, including the binding of the dorsoventral patterning TFs Dorsal 42,43 and Twist 44 , as well as the anteroposterior patterning TFs Bicoid [45][46][47] and Caudal 5 . Furthermore, in vitro experiments suggest that Zelda can bind in the presence of nucleosomes 19,48 . Taken together, Zelda has all the characteristics of a pioneer TF. While Zelda is a well-studied pioneer TF, whether it cooperates with other early-acting TFs in the embryo to induce chromatin accessibility is not known.
GAGA Factor (GAF) and CLAMP are additional pioneer TFs important for zygotic genome activation, but whether they synergize with Zelda is not clear, because they regulate largely distinct sets of regions from Zelda and tend to be more promoter-specific [49][50][51][52][53] . Patterning TFs, on the other hand, strongly overlap in binding with Zelda, but it is unknown whether they cooperate with Zelda and can function as pioneer TFs 36,38,39,54,55 . Bicoid has been reported to play a pioneering role at a subset of its bound regions 56 , but the sequence rules underlying this behavior have not been characterized. Likewise, whether other patterning TFs can increase chromatin accessibility is unknown. To learn DNA sequence rules at the highest possible resolution, we have previously developed a CNN called BPNet and applied it to high-resolution chromatin immunoprecipitation (ChIP-nexus) data in mouse embryonic stem cells 57,58 . BPNet directly predicts genomics data at base-resolution, allowing it to learn the precise rules by which TFs cooperate in binding in vivo. A modified BPNet model, ChromBPNet, has been applied to predict ATAC-seq data at base-resolution 28 , allowing us to use the BPNet approach for both data types. We generated high-resolution TF binding data and time-course chromatin accessibility measurements in the early Drosophila embryo and leveraged both the unique strengths of the CNN models and our ability to test and validate the learned rules experimentally. We uncovered a clear directional relationship in binding between Zelda and the patterning TFs and found that Zelda and the patterning TFs both increase chromatin accessibility. Through genetic experiments in Drosophila mutant strains, we found that Zelda and the patterning TFs increase accessibility through distinct modes. While Zelda acts as a bona fide pioneer TF, even at low-affinity motifs, the patterning TFs increase accessibility through transactivation. These results show that chromatin accessibility during zygotic genome activation follows complex sequence rules and is driven both by pioneers and transcriptional activators in distinct steps.

Neural networks predict Zelda's role in helping other transcription factors bind in the early Drosophila embryo

To determine the binding and cooperativity of TFs in the early embryo, we performed high-resolution ChIP-nexus experiments in staged embryos on the most well-studied TFs during early embryogenesis. We chose the two best-known pioneers, Zelda and GAF, the main dorsoventral patterning TFs Dorsal (Dl) and Twist (Twi), as well as the main anteroposterior patterning TFs Bicoid (Bcd) and Caudal (Cad) (Figure 1a). ChIP-nexus maps genome-wide TF binding footprints at base-resolution by virtue of a strand-specific exonuclease, and has previously uncovered TF cooperativity in vivo [57][58][59] . Replicates for each TF showed high concordance (Supplemental figure 1). We trained a BPNet model to predict the ChIP-nexus data from DNA sequence and interpreted the sequence rules as previously described 57 . This approach is uniquely suited to learn the sequence rules of TF binding and cooperativity because it models cis-regulatory sequences in their native genomic contexts and learns TF binding motifs in an inherently combinatorial way. Motifs that are mapped in genomic sequences are defined not just by a sequence match but also by a contribution score towards the binding predictions.
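To make the modeling setup concrete, here is a minimal BPNet-style architecture sketch (our illustration, not the authors' released code): dilated convolutions over one-hot DNA with, per TF task, a base-resolution "profile" head and a scalar "counts" head. All sizes are placeholder hyperparameters, not the ones used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, N_TASKS = 1000, 6   # six TFs were profiled by ChIP-nexus here

inp = layers.Input(shape=(SEQ_LEN, 4))                  # one-hot A,C,G,T
x = layers.Conv1D(64, 25, padding="same", activation="relu")(inp)
for i in range(1, 9):                                   # dilated residual stack
    conv = layers.Conv1D(64, 3, padding="same",
                         dilation_rate=2 ** i, activation="relu")(x)
    x = layers.Add()([x, conv])

outputs = []
for task in range(N_TASKS):
    # Profile head: per-base logits for the two ChIP-nexus strands.
    outputs.append(layers.Conv1D(2, 25, padding="same",
                                 name=f"profile_{task}")(x))
    # Counts head: one scalar log-count prediction per task.
    pooled = layers.GlobalAveragePooling1D()(x)
    outputs.append(layers.Dense(1, name=f"counts_{task}")(pooled))

model = tf.keras.Model(inp, outputs)
# As in the original BPNet work, profile heads would be trained with a
# multinomial loss over positions and counts heads with mean squared error
# on log total counts.
```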
To maximize the accuracy of the model's learned sequence rules, we optimized the model to achieve high prediction accuracy and confirmed the results through cross-validation (Supplemental figure 2). We next inspected the de novo learned motifs, represented either as a classic frequency-based position weight matrix (PWM) or as the novel contribution weight matrix (CWM), which is the model's extracted contribution of each base for TF binding. This confirmed that we discovered the known motifs for all BPNet-modeled TFs (Figure 1b) and that these motifs showed the expected sharp ChIP-nexus binding footprints from the bound TFs (Figure 1c). We also manually inspected well-studied enhancers to compare how the ChIP-nexus predictions matched the experimental data and to confirm that experimentally validated motifs were mapped accurately (Figure 1d, Supplemental figure 3). For example, we confirmed that the well-studied neuroectodermal sog shadow enhancer had the expected motifs for Zelda and Dorsal 42,43,60,61 and for Twist and Bicoid [62][63][64] . Since this enhancer is part of the withheld data set that was never seen by the model during training, this example highlights how the model correctly predicts TF binding from DNA sequence alone and that it did so by using the expected TF binding motifs (Figure 1d). We then extracted the rules of TF cooperativity from the model. We first measured the average contribution of each motif towards the binding of each TF (Figure 1e). As expected, all motifs strongly contributed towards their own TFs, but some motifs also contributed to the binding strength of other TFs, suggesting that there is binding cooperativity between TFs. Most prominently, the Zelda motif is predicted to be important for the binding of all other TFs (Figure 1e). This includes Bicoid, Caudal, Dorsal, and Twist, which have been shown in previous genetic experiments to depend on Zelda binding to its motif, which agrees with Zelda's established role as a pioneer TF 5,6,40,42,44,45 . In addition, BPNet predicts that Twist binding depends on the Dorsal motif. Dorsal and Twist have previously been reported to cooperate 61,65-68 , but our result suggests that this cooperativity is directional, i.e., the Dorsal motif is more important for Twist binding than the Twist motif is important for Dorsal binding. This is also reflected in the experimental ChIP-nexus average profiles, which show Twist accumulation over the Dorsal motif but not vice versa (Figure 1c). Interestingly, the motif for GAF did not strongly contribute to the binding of TFs other than GAF itself, even though GAF is known to promote chromatin accessibility 49,52,53,69,70 . To internally validate that BPNet learned different rules of cooperativity for Zelda and GAF, we used the trained model to predict TF binding when motif pairs are injected into randomized sequences (Figure 1f). For each TF motif, we measured the average fold-change increase in binding when a Zelda or GAF motif was added at a given distance (up to 400 bp). Consistent with our initial results, injecting a Zelda motif generally boosted the binding of all TFs, while the GAF motif only had a strong boosting effect on another GAF motif (Figure 1f). Notably, all observed cooperativity occurred when the motifs were spaced within nucleosome-range distances, consistent with an effect on nucleosomes.
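A sketch of this motif-injection analysis (our re-implementation outline; `model_predict_counts` is a hypothetical wrapper returning a TF's predicted total counts over a window for a one-hot sequence):

```python
import numpy as np

EYE, B2I = np.eye(4), {b: i for i, b in enumerate("ACGT")}

def one_hot(seq):
    return EYE[[B2I[b] for b in seq]]

def inject(background, motif_seq, pos):
    out = background.copy()
    out[pos:pos + len(motif_seq)] = one_hot(motif_seq)
    return out

def cooperativity_fold_change(model_predict_counts, motif_a, motif_b,
                              distance, seq_len=1000, n_bg=64, seed=0):
    """Predicted binding at motif_a with vs. without motif_b injected nearby,
    averaged over random background sequences."""
    rng = np.random.default_rng(seed)
    center = seq_len // 2
    solo, pair = [], []
    for _ in range(n_bg):
        bg = EYE[rng.integers(0, 4, seq_len)]       # random one-hot background
        a_only = inject(bg, motif_a, center)
        a_and_b = inject(a_only, motif_b, center + distance)
        solo.append(model_predict_counts(a_only))
        pair.append(model_predict_counts(a_and_b))
    return np.mean(pair) / np.mean(solo)
```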
Finally, we tested the derived cooperativity rules on known enhancers. We computationally mutated the sequence of each TF motif and predicted the effects on TF binding with BPNet. As expected, mutating Zelda motifs consistently had a strong effect on the binding of other TFs (Figure 1g; Supplemental figure 4). In contrast, the effects of mutating patterning TF motifs tended to be more enhancer-specific.

Figure 1. (a) ChIP-nexus was used to map the high-resolution, strand-specific binding of Zelda (Zld), GAGA factor (GAF), Bicoid (Bcd), Caudal (Cad), Dorsal (Dl), and Twist (Twi) in staged syncytial blastoderm embryos. These data were used to train a multi-task BPNet model that predicts TF binding from DNA sequence alone. (b) BPNet identified and mapped the known motifs for each TF. The position weight matrix (PWM) is a frequency-based motif representation, while the contribution weight matrix (CWM) is the novel BPNet motif representation, where base height reflects the importance for predicting TF binding. PWM and CWM motif representations are highly similar for all TFs. (c) Average TF binding footprints at all BPNet-mapped motifs from the experimentally generated ChIP-nexus data. Sharp binding footprints indicate that a motif is directly bound by a particular TF. Profiles are centered on motifs and binding signals are normalized (RPM). ChIP-nexus provides strand-specific information, with the positive strand represented by positive values and the negative strand represented by negative values. (d) Comparing experimentally generated TF binding with BPNet-predicted TF binding at the sog shadow enhancer illustrates BPNet's predictive accuracy. Each color is a different TF, where the top track is the experimental ChIP-nexus data, and the bottom track is the predicted binding. Motifs were identified and mapped by BPNet. This enhancer was withheld from BPNet during training, making it an ideal locus to test how well BPNet has learned the cis-regulatory rules that predict TF binding. (e) The counts contribution score for each motif was calculated and averaged for all mapped motifs for the binding of each TF. Darker colors indicate that a motif (y-axis) has a higher contribution to the binding of the associated TF (x-axis). The Zelda motif has a high contribution for the binding of all TFs, but not the reverse, indicating a hierarchical relationship. (f) The Zelda motif is predicted to boost the binding of all TFs, while the GAF motif boosts only GAF's binding. All TF motifs were injected into randomized sequences and their binding was predicted by BPNet when each motif was alone and when a Zelda motif was injected at a given distance, up to 400 bp away. The same procedure was repeated by injecting GAF motifs at distances up to 400 bp away from all other injected motifs. Fold-change binding enhancements were calculated from predicted TF binding in the presence and absence of the injected Zelda/GAF motif, for every distance between motifs (x-axis). (g) BPNet predicts TF binding at the wildtype (wt) sequence of the sog shadow enhancer and when individual motifs are computationally mutated. Shaded colors represent the wt predicted binding for each of the six TFs across the entire enhancer. Gray-filled profiles represent the predicted TF binding in response to mutating either a Zelda motif (left), Dorsal motif (middle), or Twist motif (right). Blue bars highlight the mutated motifs in each of the three predictions, while gray bars are all other mapped motifs across the enhancer. Mapped motifs are the same as those highlighted in Figure 1d. Mutating the Zelda motif reduced all TF binding across the enhancer, while mutating the Dorsal motif had a smaller but notable effect on TF binding.
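As a concrete illustration of the in silico motif mutagenesis used for Figure 1g (our sketch; `predict_profiles` is a hypothetical wrapper returning a dict of TF name to predicted binding profile for a one-hot sequence, and the coordinates are illustrative rather than the actual sog motif positions):

```python
import numpy as np

def mutate_span(onehot_seq, start, end, seed=1):
    rng = np.random.default_rng(seed)
    mutated = onehot_seq.copy()
    # Replace the motif span with random one-hot bases.
    mutated[start:end] = np.eye(4)[rng.integers(0, 4, end - start)]
    return mutated

def motif_mutation_effect(predict_profiles, onehot_seq, start, end):
    """Change in each TF's total predicted binding after scrambling one motif."""
    wt = predict_profiles(onehot_seq)
    mut = predict_profiles(mutate_span(onehot_seq, start, end))
    return {tf: float(mut[tf].sum() - wt[tf].sum()) for tf in wt}
```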
At the dpp enhancer, mutating Dorsal motifs affected Dorsal and Twist binding, as expected (Supplemental figure 4). However, at the sog shadow enhancer, mutating a Dorsal motif also had an effect on the binding of other TFs, including Bicoid (Figure 1g). Likewise, mutating a Twist motif not only affected Twist binding, but also had a weak effect on Dorsal binding. These results suggest more complex rules at some enhancers and raise the question of whether chromatin accessibility plays a role in the observed cooperativity. To understand the relationship between TF binding and chromatin accessibility, we performed ATAC-seq experiments 71,72 in a developmental time course of 30-minute intervals during the maternal-to-zygotic transition. This allowed us to measure how enhancers transition over time from a naturally closed state to a more accessible, primed state 50,73-77 . The first embryo collection (1-1.5 h after egg laying, AEL) covers the time when Zelda begins to bind throughout the genome in the 8th nuclear cycle 36 , as well as the earliest stages of embryonic patterning. During the second collection (1.5-2 h AEL), patterning TFs become active and the major burst of zygotic transcription begins and continues into the third (2-2.5 h AEL) and fourth (2.5-3 h AEL) collections 34,78 . All experiments were performed in triplicate, with highly correlated replicates (Supplemental figure 5). In agreement with previous studies, we find that genome-wide chromatin accessibility increases over the four time points 50 (Figure 2a).

The sequence rules for chromatin accessibility reveal motif-driven pioneer transcription factors

In order to understand the cis-regulatory sequence rules that guide these chromatin accessibility data, we used ChromBPNet, a variation of BPNet that predicts ATAC-seq data at the highest resolution 28,79 . Rather than training on whole fragment coverage, the model predicts the cut sites made by the Tn5 transposase, which more accurately represent accessibility (Figure 2b). Since the Tn5 transposase possesses a strong sequence bias in its cut position 80,81 , ChromBPNet is designed to remove this experimental bias by explicitly learning its sequence rules in a separate BPNet model trained on closed genomic regions (i.e., with low-count, non-peak ATAC-seq signal) (Figure 2b). Then a second ChromBPNet model learns how sequence influences the ATAC-seq accessible regions beyond the bias that is already captured by the frozen bias model (Supplemental figure 6ab). After training, the bias model is removed, and the second model is interpreted to extract the biologically relevant sequence rules that predict chromatin accessibility. We trained separate ChromBPNet models for each of the ATAC-seq time points, omitting regions with annotated promoters to ensure that the sequence rules learned were specific for enhancers, and not strongly driven by core promoter motifs. As with BPNet, we computed performance metrics, conducted hyperparameter tuning, and trained cross-validation models to validate that model training was successful (Supplemental figure 6c-e). To visually inspect ChromBPNet's predictions, we used the sog shadow enhancer as an example (Figure 2c; additional enhancers in Supplemental figure 7). The observed cut site coverage from the ATAC-seq data was spiky and without discernible footprints around the known motifs.
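A conceptual sketch of this two-model setup (our reading of the design, not the official ChromBPNet implementation): a frozen Tn5 bias model and a trainable sequence model, combined before the loss so that the trainable model only captures signal beyond Tn5 sequence preference. Both sub-models are assumed to output a profile-logit track and a scalar log-count.

```python
import tensorflow as tf
from tensorflow.keras import layers

def combined_model(bias_model, seq_model, seq_len=1000):
    bias_model.trainable = False           # pre-trained on non-peak regions
    inp = layers.Input(shape=(seq_len, 4))
    bias_prof, bias_logct = bias_model(inp)
    seq_prof, seq_logct = seq_model(inp)
    profile = layers.Add()([bias_prof, seq_prof])      # profile logits add
    # Total counts combine additively in count space, i.e. log-sum-exp of
    # the two log-count predictions.
    logct = tf.reduce_logsumexp(tf.stack([bias_logct, seq_logct], -1), -1)
    return tf.keras.Model(inp, [profile, logct])
# After training, interpreting seq_model alone yields bias-corrected rules.
```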
This pattern closely matched the cut site coverage predicted by the model, consistent with its high performance metrics (Supplemental figure 6c-e). After removing the bias, the predicted chromatin accessibility was more evenly distributed over the entire enhancer, suggesting that the Tn5 cut site bias was successfully removed (Figure 2c). As with BPNet, we extracted base-resolution contribution scores for all sequences and summarized the de novo learned motifs. The motifs for Zelda and GAF were robustly re-discovered at all four time points, consistent with them being pioneer TFs that open chromatin (Supplemental figure 6f). Additionally, the accessibility model discovered Caudal-like, Dorsal-like, and Twist-like motifs, which deviated from those learned by the TF binding model but nevertheless showed the expected ChIP-nexus binding footprints, confirming their identity (Supplemental figure 6f). We did not identify the Bicoid motif, which seems to contradict a previous study suggesting a role for Bicoid in chromatin accessibility; however, this role was context-dependent, and thus the underlying sequence rules were unclear 56 . We next critically evaluated whether the learned sequence rules were compatible with previous knowledge from known enhancers. When we inspected the contribution scores, we found that the Zelda motifs typically stood out with high scores, but some Dorsal and Caudal motifs also had contribution, confirming that these motifs were learned (Figure 2c, Supplemental figure 7). As another way of internally validating the importance of these motifs, we performed in silico mutagenesis (Figure 2d; Supplemental figure 8). As expected, mutating a Zelda motif in the sog shadow enhancer strongly reduced the predicted chromatin accessibility for all time points, but mutating a Dorsal motif also weakly reduced the predicted accessibility, especially at the later time points when patterning TFs bind most strongly 5,78 . Taken together, the interpretations agree with the TF binding model and our understanding of patterning TF binding dynamics in the Drosophila embryo, suggesting that it is a useful model to probe the rules of chromatin accessibility. We next set out to systematically compare the rules of binding with those of accessibility. We selected regions that are accessible and contain TF motifs mapped by the binding model, which ensures that the motifs are high-quality and unambiguously mapped to the TF through a direct sequence-to-binding relationship. We confirmed that the Zelda and GAF motif instances had high contribution to accessibility at all time points, while those of the patterning TFs had a much smaller contribution (Figure 2e). Similar effects were observed when we injected each TF motif into randomized sequences in silico (Supplemental figure 6g). Using these mapped motif instances, we then plotted the predicted contribution to accessibility as a function of the predicted binding contribution (Figure 2f). If the role of patterning TFs is more context-dependent than bona fide pioneer TFs, we would expect pioneer TFs to have a more consistent relationship between the TF's binding and the generated chromatin accessibility. Indeed, a correlation between total Zelda binding and chromatin accessibility has previously been reported 40,42 , but it is unknown how well this holds for individual motifs and how this compares to other TFs.
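A sketch of the per-motif comparison used for Figure 2f (our outline; `motifs` is an assumed DataFrame with columns 'pwm_score', 'binding_contrib' and 'accessibility_contrib'):

```python
import pandas as pd
from scipy.stats import pearsonr

def binding_vs_accessibility(motifs: pd.DataFrame):
    # Motif strength as the rank percentile of PWM match scores.
    strength = motifs["pwm_score"].rank(pct=True)
    # Correlation between the two models' contribution scores.
    r, p = pearsonr(motifs["binding_contrib"],
                    motifs["accessibility_contrib"])
    return motifs.assign(strength=strength), r, p
```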
Strikingly, we observed a strong correlation for both Zelda and GAF motifs between accessibility and binding contributions, in spite of these being learned by different models on different types of data (Figure 2f). Moreover, when we derive a simple score for motif strength (rank percentile of the PWM match scores), we see that binding and accessibility contributions increase as motif strength increases. This three-way association suggests that the accessibility generated by Zelda and GAF is motif-driven and not heavily reliant on the surrounding enhancer context, which agrees with the conventional model that pioneer TFs come first and mediate the initial step in enhancer activation. In contrast, when we plot the same correlations for the patterning TFs, we find much weaker relationships between TF binding and chromatin accessibility (Figure 2f). Here, stronger measures of motif strength are associated with stronger binding contribution but not accessibility contribution. One exception is Dorsal at the last time point, where we find an increased correlation between binding and accessibility contribution (Pearson correlation 0.32), as well as an association with motif strength. Notably, this occurs when Dorsal's binding has been reported to be strongest during development 5 . Likewise, Caudal also has a time point-specific correlation that is highest at the two latest time points, when its binding is also strongest 5 . For Twist and Bicoid motifs, the binding and accessibility contribution correlation is the poorest, consistent with the difficulty of the model discovering their canonical motif representations. Taken together, our binding and accessibility models suggest an operational definition of pioneer TFs in which pioneer TFs open chromatin in a motif-driven fashion, while other TFs may also play a role in increasing chromatin accessibility but do so in a more context-dependent manner.

Zelda's effect on opening chromatin extends to low-affinity motifs

The correlation between motif strength, TF binding, and ability to open chromatin implies that motifs of lower affinity can also pioneer chromatin accessibility but do so proportionally less than high-affinity motifs. This is surprising since pioneering is expected to occur through TF binding on nucleosomes, where sequence recognition is structurally more constrained than on naked DNA 10,19,20,[82][83][84] . Given previous evidence that pioneering events identified in vivo were associated with degenerate motifs 22,23 , we set out to validate the prediction that pioneering by Zelda can involve low-affinity motifs. We first examined whether the BPNet models correctly learned motif affinities from the Zelda ChIP-nexus binding data. We took all bound Zelda motifs mapped by BPNet and plotted their sequences ordered by contribution to Zelda binding (Figure 3a). The motif that contributed most to binding (sequence logo from the top quartile) was the canonical CAGGTAG motif, while low-affinity binding motifs (sequence logo from bottom quartile) included motifs where the last base was not a G (CAGGTAH), or the first base was a T (TAGGTAG). These results are consistent with the Zelda motif affinities determined previously by gel shift studies and mutant data [36][37][38]85,86 and correlate with the observed chromatin accessibility across these motifs (Supplemental figure 10a).
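A sketch of the quartile summaries behind Figure 3a (our outline; `motifs` is an assumed DataFrame with 'sequence' and 'contrib' columns):

```python
import pandas as pd

def contribution_quartile_pfms(motifs: pd.DataFrame):
    """Order motif instances by contribution and return position frequency
    matrices for the top and bottom quartiles, ready for logo rendering."""
    ranked = motifs.sort_values("contrib", ascending=False)
    q = len(ranked) // 4

    def pfm(seqs):
        chars = pd.DataFrame([list(s) for s in seqs])
        # Per-position base frequencies (rows: bases, columns: positions).
        return chars.apply(lambda col: col.value_counts(normalize=True)).fillna(0)

    return pfm(ranked.head(q)["sequence"]), pfm(ranked.tail(q)["sequence"])
```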
To more comprehensively test how well the BPNet models learned relative Zelda motif affinities, we performed in vitro protein binding microarray (PBM) experiments 87,88 for Zelda (Figure 3b). PBM-extracted affinities have been shown to correlate with Kd affinity measurements [89][90][91] . We calculated the median Z-score of the binding signal and its corresponding median E-score for all relevant Zelda motif heptads, as well as a negative control sequence (TATCGAT) used previously in gel shift experiments 38 . Strikingly, the simple BPNet-derived motif strength scores we used earlier closely matched the in vitro PBM binding signal (Figure 3b; Supplemental figure 10b). For example, both the experimental data and the BPNet-derived motif strength scores showed on average a three-fold difference in affinity between the CAGGTAG and TAGGTAG sequences. These results are consistent with the recent finding that accurate predictions of relative motif affinities can be extracted from a BPNet model trained on ChIP-nexus or ChIP-seq data 92,93 . Such relative motif affinities can be derived without using their motif representations by simply predicting TF binding on motif instances that are stripped from the surrounding genomic context. To test this, we "marginalized" each Zelda motif by injecting it into randomized sequences and measured the effects on binding and chromatin accessibility. The log-transformed measurements were very similar to our previous BPNet-derived motif strength scores, and closely matched the in vitro PBM binding Z-scores (Figure 3b). These results collectively confirm that the models have accurately learned relative Zelda binding affinities. Having confirmed that the BPNet and ChromBPNet models correctly learned Zelda motif affinities, we next performed experiments on Zelda-depleted embryos 6 to test whether low-affinity motifs contribute to accessibility in vivo. We confirmed that the zld- embryos had no detectable Zelda by immunostaining (Figure 3c) and performed ATAC-seq time-course experiments, with replicates that were highly correlated (Supplemental figure 9). Consistent with previous observations 40,56 , Zelda-bound regions showed a global decrease in accessibility compared to wildtype (p < 2e-16, Wilcoxon rank-sum test), while regions without a Zelda motif remained unchanged (Figure 3d; Supplemental figure 10c). We then asked whether individual low-affinity Zelda motifs by themselves influence chromatin accessibility. We selected regions with either a single high-affinity (CAGGTAG) or a single low-affinity (TAGGTAG) Zelda motif, with no other BPNet-mapped motif nearby. At regions with the high-affinity Zelda motif, a clear reduction in chromatin accessibility was observed in zld- embryos. This reduction became more prominent over time as these regions became more accessible in wildtype embryos (example in Figure 3e, left). At regions with the low-affinity TAGGTAG motifs, we observed the same effect but weaker (example in Figure 3e, middle). To quantify this difference, we selected the genomic regions with the 250 highest and lowest affinity Zelda motifs. To minimize confounding effects, these regions had no other mapped motifs nearby and did not overlap promoters. As expected, the regions with the high-affinity Zelda motifs had more Zelda binding in the ChIP-nexus data than those with the low-affinity motifs (Supplemental figure 10d).
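A sketch of this affinity-group selection (our outline; column names are assumed): single-Zelda-motif regions, away from promoters and other mapped motifs, ranked by PWM-match score.

```python
import pandas as pd

def affinity_groups(regions: pd.DataFrame, n=250):
    solo = regions[(regions["motif"] == "Zld")
                   & (regions["n_other_motifs"] == 0)
                   & (~regions["overlaps_promoter"])]
    ranked = solo.sort_values("pwm_score")
    return ranked.tail(n), ranked.head(n)   # (high-affinity, low-affinity)
```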
Figure 2. (a) Embryo staging facilitated the precise hand-sorting of different stages during embryo collections. Across ATAC-seq peaks there is a general increase in normalized ATAC-seq fragment coverage over time. (b) ChromBPNet is a modified BPNet deep learning model that predicts chromatin accessibility using DNA sequence as an input. ChromBPNet's architecture is similar to the BPNet architecture; however, training relies on the simultaneous use of two models. The first is a Tn5 bias model, which was pre-trained on closed and unbound genomic regions to explicitly learn only Tn5 sequence bias and is then frozen. The second is a standard, randomly-initialized BPNet model which learns the unbiased cis-regulatory information predictive for chromatin accessibility. Following model training, the Tn5 bias model is removed, and the unbiased model is interpreted free of Tn5 bias. (c) ChromBPNet accurately predicts chromatin accessibility information at the sog shadow enhancer during the last time point. The top two tracks represent experimentally generated ATAC-seq coverage, with the top being the conventional fragment coverage and the bottom being Tn5 cut site coverage. The third track is ChromBPNet's ATAC-seq cut site prediction at this time point. While it mirrors the observed cut site coverage very closely, this track contains Tn5 bias. The fourth track is ChromBPNet's prediction after removing Tn5 bias, which is more evenly distributed across the enhancer. The fifth track is the counts contribution scores for each base across the enhancer, which spike at BPNet-mapped motifs, particularly at Zelda motifs but also at Dorsal motifs. (d) ChromBPNet predicts chromatin accessibility at the wildtype (wt) sog shadow enhancer and in the presence of individual motif mutations across time. The same Zelda (left), Dorsal (middle), and Twist (right) motifs that were mutated previously (Figure 1g) are mutated here, and ChromBPNet predicted time course chromatin accessibility in response to those mutations. Mutation of the Zelda motif had the largest predicted effect on chromatin accessibility, while the Dorsal mutation is predicted to lower accessibility to a lesser extent and only at later time points. Shaded colors are the wt predicted accessibility for each time point, and the gray profiles are the predictions in response to motif mutation. Blue bars are mutated motifs, gray bars are all other motifs mapped to this enhancer. (e) Average counts contribution scores for each BPNet-mapped motif (y-axis) are shown for all time points (x-axis) to represent how important a particular motif is to the chromatin accessibility prediction across time. Pioneering motifs contribute to chromatin accessibility robustly at all time points, while patterning TF motifs have a lesser contribution that is limited to later time points. (f) Pioneer TF motifs show a clear three-way correlation between binding contribution, accessibility contribution, and motif strength. Patterning TFs show much weaker, time point-specific relationships, suggesting context-dependent behavior. For each bound and accessible motif for all TFs, the binding counts contribution scores (x-axis) and accessibility counts contribution scores (y-axis) are plotted. Motif strength was extracted from the trained BPNet model by ranking motifs for each TF by their match score to each TF's PWM and taking the rank percentile. Pearson correlation values (r) and coefficient of determination R² values were calculated. Red lines are shown for plots with an r > 0.3.
Using these regions, we found that the low-affinity Zelda motifs had on average a fivefold weaker effect on chromatin accessibility than the high-affinity Zelda motifs, while control regions with a single GAF motif were unchanged (Figure 3f; Supplemental figure 10e). These differences were very similar to those predicted by ChromBPNet upon mutating the Zelda motifs (Figure 3g). These results demonstrate that low-affinity Zelda motifs can promote accessibility, but to a lesser extent than high-affinity CAGGTAG motifs, and that the extent of chromatin opening correlates with the motif's affinity. Since the low-affinity Zelda motifs have a smaller effect on chromatin accessibility, we expected them to also have a weaker effect on promoting the binding of patterning TFs. To test this hypothesis, we performed in silico motif injections and measured the average predicted binding of each TF with and without the presence of different Zelda motif variants. For all TFs, the resulting fold-change binding enhancement was indeed higher for the high-affinity CAGGTAG motif than for the low-affinity TAGGTAG motif, but the latter still had a measurable effect (Figure 3h). Likewise, the accessibility model predicted that both high- and low-affinity Zelda motifs boosted the effect of patterning TF motifs on chromatin accessibility, but to a different extent (Supplemental figure 10f-g). These effects are consistent with the experimentally observed effect of low-affinity motifs on chromatin accessibility and corroborate the role of low-affinity Zelda motifs in opening chromatin and helping patterning TFs bind.

Patterning transcription factors contribute to chromatin accessibility

Thus far, the results suggest that patterning TFs do not have the same pioneering capabilities as Zelda, but could increase chromatin accessibility in some contexts, perhaps dependent on which other motifs are present within that region. To systematically investigate motif combinations, we used a "motif island" approach in which genomic regions are grouped according to their motif combinations. An island is initially defined as 200 bp centered on a motif, but if this region overlaps with another motif island, the islands get merged (Figure 4a). We then classified the motif islands by their motif combinations without taking motif number or order into account (islands provided in Supplemental file 2). These multi-motif islands are the size of typical enhancers 94 , with the majority of them being between 200 and 300 bp wide (Supplemental figure 12b). To better characterize enhancer states for different motif combinations, we used staged embryos and performed micrococcal nuclease digestion with sequencing (MNase-seq) and ChIP-seq experiments for the histone modifications H3K27ac and H3K4me1, with highly correlated replicates (Supplemental figure 11). We then analyzed the properties of each island combination (Figure 4b, individual examples in Figure 4c). The results are consistent with Zelda's role in pioneering, but also reveal the role of patterning TFs. Islands without a Zelda motif typically have very low accessibility and histone modifications, coupled with higher nucleosome occupancy. Islands that only have Zelda motifs and no other motif (Figure 4b, red box) show an increase in chromatin accessibility over time, with an effect proportional to the number of Zelda motifs (Supplemental figure 12d).
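A sketch of the motif-island construction described above (Figure 4a): a 200 bp window centered on each motif, with overlapping windows merged and each island labeled by the set of TF motifs it contains. The input format is our assumption.

```python
def build_islands(motifs, half_width=100):
    """motifs: iterable of (chrom, center, tf_name) tuples, any order."""
    islands = []
    for chrom, center, tf in sorted(motifs):
        start, end = center - half_width, center + half_width
        if islands and islands[-1][0] == chrom and start <= islands[-1][2]:
            c, s, e, tfs = islands[-1]               # overlapping: merge
            islands[-1] = (c, s, max(e, end), tfs | {tf})
        else:
            islands.append((chrom, start, end, {tf}))
    return islands

# Two nearby motifs merge into one Dl_Zld island:
print(build_islands([("chr2L", 1000, "Zld"), ("chr2L", 1120, "Dl")]))
```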
Overall, though, the effect is modest, and these islands have low levels of histone modifications and are not enriched for known developmental enhancers active in blastoderm embryos 73 . By contrast, the highest levels of enhancer accessibility are found at islands that also have motifs for patterning TFs and have the properties of active enhancers. Islands containing motifs for both Zelda and patterning TFs (e.g., Dorsal and Twist) show much higher levels of accessibility, nucleosome depletion, and histone modifications than Zelda-only islands. Interestingly, H3K4me1 correlates better with chromatin accessibility, while H3K27ac correlates better with activity (Figure 4b, Supplemental figure 12c). Taken together, these results suggest that it is the combination of Zelda motifs and patterning TF motifs that generates the highest levels of accessibility, which would explain why it has been challenging to causally link individual TFs such as Bicoid to increased levels of chromatin accessibility beyond those generated by pioneer TFs 56 . To detect the effect of patterning TFs on chromatin accessibility experimentally, we took advantage of our zld- ATAC-seq data. Since the patterning TFs require Zelda for binding, any effects that they have on chromatin accessibility should also be lost in zld- embryos, in addition to the loss of chromatin accessibility caused by Zelda depletion. Thus, we expect that depleting Zelda has a stronger effect on regions with motifs for both Zelda and patterning TFs compared to those with only Zelda motifs. This was indeed the case (Figure 4d). For example, islands with Zelda, Dorsal, and Twist motifs had a much more pronounced fold-change loss in accessibility than Zelda-only islands (p < 2.2e-16, Wilcoxon rank-sum test). These experimental results confirm a model by which high levels of chromatin accessibility are established in a hierarchical manner by a combination of motifs for the pioneer Zelda and the downstream patterning TFs.

Patterning transcription factors contribute to accessibility when mediating activation

Our results suggest that patterning TFs increase chromatin accessibility when their motifs are present in specific combinations that include Zelda motifs. Enhancers with such motif combinations also tend to be active enhancers, raising the question of whether enhancer activity and accessibility are directly functionally coupled. This would be consistent with previous observations that the highest levels of accessibility and TF binding are often found at active enhancers [73][74][75]77,95,96 . Alternatively, it is possible that the binding of patterning TFs also consistently contributes to the accessibility, but that their dependence on Zelda motifs for binding creates the requirement for motif combinations. The poor correlation between the binding of patterning TFs and their contribution to accessibility argues against this hypothesis (Figure 2f), but we cannot rule out that this is due to limitations of the ChromBPNet model.

Figure 3. The pioneer TF Zelda reads out motif affinity to drive chromatin accessibility. (a) BPNet binding contributions reflect the known Zelda motif affinities. All BPNet-mapped Zelda motifs were ordered by their counts contribution scores to Zelda binding, with the highest contribution motifs on top and the lowest contribution motifs on the bottom. Motif logos were generated for the highest and lowest contributing sequence quartiles. (b) Zelda motif affinities can be accurately extracted from the trained BPNet and ChromBPNet models.
Mapped Zelda motifs were separated by their heptad sequences, and the known Zelda heptads were extracted and ordered by their rank percentile of their PWM match scores (orange). The negative control, non-mapped TATCGAT heptad was included. Protein binding microarray (PBM) experiments were performed using the Zelda C-terminal region. 8-mers from PBM experiments were grouped based on their 7mer sequences, median Z-score (green) values were calculated for the 7-mers, and the 7-mers matching Zelda heptads were extracted. The effects of each Zelda heptad were marginalized from the effects of genomic background sequences using the trained BPNet (blue) and ChromBPNet (gold) models to extract model-determined motif affinities. The experimentally derived and model-derived Zelda motif affinities strongly correlate. (c) Zelda depleted embryos (zld -) show a clear reduction in the Zelda protein. Confocal images of nuclear cycle 14 wildtype (wt) and zldembryos were collected, maximum intensity projected, and processed using the same settings. (d) Chromatin accessibility is significantly reduced at ATAC-seq peaks containing mapped Zelda motifs. Differential chromatin accessibility between wt and zldembryos was calculated as the log2 fold change for each peak region using DESeq2. The median values of the four time points are shown. Peaks containing Zelda motifs are significantly different from control peaks without Zelda motifs (Wilcoxon rank-sum test, p < 2e-16). (e) Chromatin accessibility is reduced at high-and low-affinity Zelda motifs in zldembryos. Individual examples of normalized chromatin accessibility in wt (shaded profile) and zld -(black line) embryos are shown at a highaffinity Zelda motif (CAGGTAG, left) and a low-affinity Zelda motif (TAGGTAG, middle), with the GAF motif (right) as a control. No other BPNetmapped motifs are within these windows. (f) Average chromatin accessibility profiles at the 250 highest and lowest affinity Zelda motifs in wt and zldembryos show that low-affinity motifs facilitate Zelda's pioneering, but to a lesser extent than the high-affinity motifs. Islands that only contain a single Zelda motif were extracted and separated into high-and low-affinity categories based on the rank percentile of their PWM match scores (high = high affinity, low = low affinity), while 250 GAF motifs were seed-controlled, randomly selected. Motif logos were generated from these motif instances. The colored lines are the wt, normalized, ATAC-seq data, and dotted black lines are the same but in zldembryos, with profiles anchored on the Zelda motifs. Motifs mapping to promoters were excluded, as in ChromBPNet training. (g) ChromBPNet model predictions at the same highand low-affinity Zelda motifs as in Figure 3f. ChromBPNet predicted bias-corrected cut site coverage at the wt high-and low-affinity Zelda motif regions and when the Zelda motifs were computationally mutated. The similarity to Figure 3f shows that ChromBPNet has accurately learned the effects of Zelda motif affinity. (h) Low-affinity Zelda motifs are predicted to boost TF binding. TF motifs were injected into randomized sequences with either a high-affinity Zelda motif (CAGGTAG), low-affinity Zelda motif (TAGGTAG), or no Zelda motif injected at a given distance away for up to 200 bp, and TF binding was predicted (y-axis). 
The fold change binding enhancement averaged across the window was calculated using predicted TF binding at motifs with a high-or low-affinity Zelda motif injected nearby and predicted TF binding without a Zelda motif injected nearby. we cannot rule out that this is due to limitations of the ChromBPNet model. To distinguish whether patterning TFs mediate increased accessibility through their binding or through their effect on enhancer activity, we leveraged the strengths of Drosophila genetics to experimentally test the context-dependent role of Dorsal in chromatin accessibility. Dorsal is present in the early embryo as a ventral-todorsal nuclear concentration gradient that is set up by maternal Toll signaling on the ventral side. At high levels of nuclear Dorsal, the nuclei acquire mesodermal identity; at low levels of Dorsal, they acquire neuroectodermal identity; in the absence of Dorsal, they acquire dorsal ectodermal identity 61 (Figure 5a). The key to Dorsal's ability to specify three tissue types is its ability to function as a dual transcription factor that can activate mesoderm and neuroectoderm genes and repress dorsal ectoderm genes. This switch in function is possible because the repressed enhancers have Dorsal motifs that are flanked by low-affinity motifs for the repressor Capicua (Cic) 59,97-99 . ChIP-seq signal, and H3K4me1 ChIP-seq signal were calculated. ATAC-seq and MNase-seq coverage was calculated across a 250 bp window centered on the island, while the H3K27ac and H3K4me1 signals were calculated in a 1.5 kb window centered on the island since these marks are typically on the enhancer flanks. A list of enhancers active in 2-4 h AEL embryos was used to calculate an overlap percentage for each island type 73 . The red bar highlights islands that contain only Zelda motifs, and islands are ordered by total ACAT-seq signal. (c) Individual examples for Zld, Dl_Zld, and Dl_Twi_Zld islands. Colored bars indicate BPNet-mapped motifs (blue = Zld, magenta = Dl, green = Twi), and no other BPNet-mapped motifs are within these windows. (d) Chromatin accessibility is most significantly reduced at motif islands containing Zelda and patterning TF motifs. Differential accessibility between wt and zldembryos was calculated using DESeq2, shown for each island as median log2 fold change values from all time points. Island types that contain more than Zelda motifs show significantly more changes than those with Zelda motifs alone, e.g., the difference between Zld and Dl_Zld islands (p = 8.3e-11, Wilcoxon rank-sum test) and Zld and Dl_Twi_Zld islands (p < 2.22e-16, Wilcoxon rank-sum test). If Dorsal consistently contributes to chromatin accessibility by binding to target enhancers, we would expect that loss of Dorsal leads to decreased chromatin accessibility at all its target genes. To test this, we used gastrulation defective (gd 7 ) mutant embryos, which are defective in maternal Toll signaling and thus Dorsal remains cytoplasmic and inactive in the entire embryo. As a result, these embryos acquire entirely dorsal ectoderm fate 77,100-102 . After validating the gd 7 mutant embryos (Supplemental figure 14a), we performed ATAC-seq time course experiments, producing replicates that were highly correlated (Supplemental figure 13). Using DESeq2 103 , we analyzed the differential accessibility upon loss of Dorsal (gd 7 ) as compared to wildtype (last time point in Figure 5b, earlier times points in Supplemental figure 14b). 
When we examined known Dorsal target enhancers, we noticed a striking difference in accessibility between enhancers that are activated by Dorsal versus those that are repressed. Mesoderm enhancers (e.g., twi, sna) and neuroectoderm enhancers (e.g., sog, brk), which are activated by Dorsal, show significantly decreased accessibility upon loss of Dorsal (purples in Figure 5b). Conversely, the Dorsal-repressed enhancers do not show decreased accessibility and even show a slight increase, even though they lost Dorsal binding (orange in Figure 5b). These results suggest that Dorsal's ability to increase chromatin accessibility is tied to its role as a transcriptional activator. To confirm this effect more broadly and over time, we used a set of previously identified enhancers that have differential H3K27ac levels in gd⁷ mutant embryos and show appropriately regulated target genes nearby 102. We plotted the ATAC-seq signal for each time point and found that the mesoderm enhancers showed decreased chromatin accessibility in both zld⁻ and gd⁷ embryos (Figure 5c). Neuroectodermal enhancers activated by Dorsal show a similar loss in chromatin accessibility (Supplemental figure 14c). Dorsal ectoderm enhancers, on the other hand, also lose accessibility in zld⁻ embryos, but instead gain accessibility in gd⁷ embryos, where they gain activation (Figure 5d). This further corroborates that loss of Dorsal does not always lead to loss of accessibility at Dorsal-bound enhancers, but rather depends on whether Dorsal functions as an activator at these enhancers. One could argue that loss of Dorsal at dorsal ectoderm enhancers did not lead to a loss of accessibility because other TFs are bound to these regions in gd⁷ embryos. However, the effect was observed from the earliest time point on, when the primary mechanism of dorsoventral patterning occurs through Dorsal. Enhancers such as tld, zen, and dpp are well studied and known to be regulated by Dorsal repression with the help of Capicua. In gd⁷ embryos, these enhancers lose both Dorsal and Capicua binding and become de-repressed 59,98,99. Since we observe a subtle increase in chromatin accessibility, this suggests that chromatin accessibility is tied to enhancer activity, not Dorsal binding. To test this hypothesis more directly, we specifically manipulated the ability of Dorsal to repress without affecting its ability to activate. In cic⁶ mutant embryos, Capicua has a small deletion in its interaction domain (N2) with the corepressor Groucho and no longer functions as a repressor 59 (Figure 5e). As a result, Dorsal can still activate mesoderm and neuroectoderm enhancers, but it can no longer function as a repressor at dorsal ectodermal enhancers, where it is now expected to function as a weak activator 59. Thus, in cic⁶ embryos, the Dorsal-activated enhancers should be unchanged compared to wildtype, while enhancers normally repressed by Dorsal should have higher chromatin accessibility. Indeed, when we performed ATAC-seq experiments in cic⁶ mutant embryos (Supplemental figure 14d), we found that dorsal ectoderm enhancers showed statistically significant increased accessibility (Figure 5e, orange), while mesoderm and neuroectoderm enhancers not regulated by Capicua generally remained unchanged (Figure 5e, purples). These results demonstrate that the chromatin accessibility at Dorsal target enhancers depends on the activation state induced by Dorsal rather than the binding of Dorsal.
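The genotype comparisons behind Figure 5c-d rest on Wilcoxon rank-sum tests over per-enhancer coverage. A minimal sketch of such a test follows; the coverage arrays are simulated placeholders (only the mesoderm enhancer count, n = 416, comes from the text), not the study's data.

```python
# Hedged sketch: compare per-enhancer ATAC-seq coverage between genotypes with
# a Wilcoxon rank-sum test. Real inputs would be normalized fragment coverage
# over 1 kb windows at the curated enhancer set.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
wt_coverage  = rng.lognormal(mean=3.0, sigma=0.5, size=416)  # wt, mesoderm enhancers
gd7_coverage = rng.lognormal(mean=2.4, sigma=0.5, size=416)  # gd7, same enhancers

stat, pval = ranksums(wt_coverage, gd7_coverage)
print(f"Wilcoxon rank-sum: statistic={stat:.2f}, p={pval:.2e}")
```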
Interestingly, the results also suggest that repressors such as Capicua could decrease chromatin accessibility at their target enhancers. Enhancers that are repressed by Capicua independently of Dorsal through high-affinity Capicua motifs (e.g., hkb, tll, hb, and ind) 59,105-107 also increased in accessibility in cic⁶ mutant embryos, while control enhancers (e.g., cnc, oc, ems, and gt) 59 remained unchanged (Supplemental figure 14e, f). Whether Capicua directly decreases chromatin accessibility or whether it counteracts the activity of other TFs such as Bicoid and Caudal remains to be tested. In summary, our results suggest that chromatin accessibility levels depend on both pioneering and enhancer activation. Pioneering by Zelda consistently contributes to accessibility, while the effect of patterning TFs such as Dorsal is context-dependent. This is well illustrated at the Dorsal-repressed enhancer tld 59 and the Dorsal-activated sog shadow enhancer 42 (Figure 5f). In both cases, the chromatin accessibility is dramatically reduced in zld⁻ embryos due to the loss of pioneering (Figure 5f, second panel). Loss of Dorsal (gd⁷) led to a modest but significant decrease in accessibility across the Dorsal-activated enhancer, while the Dorsal-repressed enhancer showed little change (Figure 5f, third panel). Converting Dorsal from a repressor into an activator (cic⁶) caused a significant increase in chromatin accessibility across the Dorsal-repressed enhancer, while accessibility was essentially unchanged across the Dorsal-activated enhancer (Figure 5f, fourth panel). The same patterns were observed at other enhancers, including those for dpp and sna.

Figure 5. (a) In Dorsal-containing tissues (i.e., mesoderm and neuroectoderm), Dorsal is an activator of mesodermal and neuroectodermal target genes but a repressor of dorsal ectodermal genes. Dorsal repression occurs through a cooperative relationship with Capicua, whose low-affinity motifs flank Dorsal motifs in dorsal ectoderm target enhancers. Capicua binding at these regions depends on Dorsal, and it then recruits the corepressor Groucho to repress the dorsal ectoderm genes. (b) Chromatin accessibility is specifically reduced at Dorsal-activated enhancers but not at Dorsal-repressed enhancers in embryos lacking nuclear Dorsal. ATAC-seq time course experiments were performed in gd⁷ embryos, in which Dorsal is not activated and which thus represent entirely dorsal ectoderm. Differential accessibility was conducted between wt and gd⁷ embryos for all time points, and the MA plot for the 2.5-3 h AEL time point is shown. Red dots represent statistically significant differentially accessible ATAC-seq peaks (FDR = 0.05), and known dorsoventral enhancers are colored by the tissue type in which they are active. Chromatin accessibility is significantly reduced at Dorsal-activated enhancers. Dorsal-repressed enhancers do not lose accessibility in gd⁷ embryos. (c) Mesoderm enhancers lose chromatin accessibility in gd⁷ embryos. Normalized ATAC-seq fragment coverage from wt, zld⁻, and gd⁷ embryos was calculated at previously determined mesoderm enhancers (n = 416) 102 across a 1 kb window. Statistical significance was determined between wt and zld⁻ embryos and between wt and gd⁷ embryos using Wilcoxon rank-sum tests, where four asterisks indicate p < 0.0001. In gd⁷ embryos, mesoderm enhancers are inactive. (d) Dorsal ectoderm enhancers gain chromatin accessibility in gd⁷ embryos. The same analysis used in Figure 5c was performed at dorsal ectoderm enhancers (n = 380). In gd⁷ embryos, dorsal ectoderm enhancers are active. (e) Chromatin accessibility is increased at Dorsal-repressed enhancers upon gaining Dorsal activation. ATAC-seq experiments were performed in cic⁶ embryos, where Capicua's interactions with Groucho are abrogated, thus eliminating Dorsal-mediated repression and converting Dorsal into an activator at these enhancers. Differential accessibility analysis between wt and cic⁶ embryos was performed as in Figure 5b. Dorsal-repressed enhancers, which now gain Dorsal activation, show a significant increase in chromatin accessibility, while mesoderm and neuroectoderm enhancers are not differentially accessible.

Pioneering and enhancer activation do not simply differ because of different effect sizes, but rather appear to be distinct processes. While chromatin accessibility is more dramatically affected by the loss of Zelda than the loss of Dorsal, the inverse is true for the effect on gene expression. In the absence of Dorsal, the expression of sog is completely abolished 104, while in the absence of Zelda, sog expression is delayed and narrowed but still occurs with high concentrations of Dorsal 42,60. Thus, Zelda has a stronger effect on chromatin accessibility, while Dorsal has a stronger effect on activation, arguing that they involve functionally separable processes that both have effects on chromatin accessibility.

Discussion

Here, through combining TF binding data, chromatin accessibility data, deep learning models capable of learning both datasets independently of one another, and using classic Drosophila genetics as a validation tool, we asked how TFs mediate chromatin accessibility in the Drosophila embryo. We investigated whether the role of opening chromatin is restricted to TFs axiomatically classified as pioneers or if TFs more generally contribute to chromatin accessibility. We uncovered the cis-regulatory sequence rules and distinct underlying mechanisms of this process. Our results suggest a hierarchical two-tier model, where chromatin accessibility is established first through pioneering but is further increased during enhancer activation (Figure 6). Importantly, the sequence rules for chromatin accessibility during activation are distinct from those that mediate pioneering. Pioneers like Zelda are the first to bind to their motifs genome-wide and consistently bestow basal accessibility by reading out motif affinity, thereby creating a more permissive landscape for other TFs. In contrast, the patterning TFs require an already accessible state for their binding and increase chromatin accessibility in a context-dependent manner, since they only increase chromatin accessibility when mediating enhancer activation. For example, when Dorsal motifs are flanked by motifs for the repressor Capicua in dorsal ectoderm enhancers, no increase in chromatin accessibility is observed. These enhancers do, however, show an increase in chromatin accessibility when Capicua is mutated such that Dorsal can no longer repress and instead becomes an activator. This demonstrates that the increase in accessibility is not dependent on Dorsal binding per se but on the total effect that the TFs have on the activation of the enhancer, and thus is governed by the cis-regulatory rules of activation. This contrasts with Zelda, which consistently increases chromatin accessibility in the absence of enhancer activation. The functional separation between pioneering and activation is consistent with previous observations in the early Drosophila embryo.
Zelda unambiguously generates chromatin accessibility very early on, but is insufficient for the activation of most enhancers and functions together with patterning TFs during zygotic genome activation 40,76,108-110. At many enhancers, Zelda is not even strictly required for enhancer activation, since many patterning genes eventually become expressed in zld⁻ embryos 38. Zelda is, however, a strong potentiator of transcription 5,42,44,45,60. This suggests that Zelda's effect on chromatin accessibility is not required for activation but boosts the effect of activators. A similar potentiating effect of Zelda has been observed at the level of transcriptional bursting. Dorsal mainly affected the burst frequency, while Zelda had an additional effect on the burst size 108. These functional differences are consistent with pioneering and activation being physically separate processes. Zelda binds its motifs in the presence of nucleosomes 19,48, while Dorsal, Twist, Caudal, and Bicoid require accessible DNA for binding 5,6,42,44,45,47,60. Consistent with Zelda binding nucleosomes in vivo, Zelda has a broad binding footprint in ChIP-nexus data (Figure 1c), which could be mediated by indirect contacts to DNA through nucleosomes. In contrast, patterning TFs have sharper and narrower footprints, consistent with their binding primarily to accessible genomic DNA (Figure 1c). While Zelda could also bind to accessible regions, this may not occur to a large extent since Zelda binds to chromatin in a rapid and transient manner 46 and does not co-localize with Pol II or at sites of active transcription 46,109. Thus, pioneering appears to be the process associated with nucleosome removal, while enhancer activation occurs on accessible DNA. How could motifs mediate pioneering? Studies in vivo all point to a constant involvement of ATP-dependent chromatin remodeling 111,112, but how pioneer TFs recognize their motifs on nucleosomal DNA and interact with chromatin remodelers is not clear. TF binding to nucleosomes in vitro tends to be structurally restricted and can be preferred at certain positions on the nucleosome 19,20,82,83, but it is unclear whether these structural restrictions are relevant in vivo. Our finding that Zelda very precisely reads out motif affinity and commensurately increases chromatin accessibility is therefore remarkable. While we cannot rule out that the influence of nucleosome position was not learned by ChromBPNet, our results argue against a strong dependence on the motif's relative position on the nucleosome. This is also consistent with our previous study, where we did not find a preferred position for Zelda motifs on in vivo nucleosomes 6. Instead, our results argue that pioneer TFs recognize their motifs in vivo more efficiently than in vitro, perhaps aided by chromatin remodelers. How could enhancer activation depend on sequence context? Since enhancer activation depends on the motif combination and accessible DNA, we propose that it occurs through DNA-mediated hub formation (Figure 6). When DNA with a set of motifs becomes accessible and bound by TFs and co-factors, the DNA serves as a seed to induce surface condensation, which locally concentrates the proteins into hubs 113-115. In support of this model, hubs have been observed via imaging studies for multiple TFs in the early Drosophila embryo, including Zelda, Dorsal, and Bicoid 46,47,60,109.
Hubs containing either Dorsal or Bicoid were dependent on Zelda, which is consistent with DNA accessibility being a requisite for hub formation. Furthermore, Dorsal and Bicoid have been reported to recruit the co-factor Nej, the Drosophila CBP 104,116-118, which could promote hub formation. Lastly, if hubs regulate transcriptional bursting, this could explain why Dorsal and Zelda have different effects. Dorsal may determine the burst frequency by regulating the speed of hub formation on already accessible DNA, while Zelda also facilitates chromatin accessibility and thus may affect the burst size by providing more time and space for hub formation. While this hub model fits well with current data, it does not explain how activation increases the accessibility further. Further studies are needed to better understand the role of hubs. Our results suggest that the relationship between accessible DNA, TF binding, and enhancer activation is more complex than previously thought. Notably, we found that our deep learning models correctly identified the motif for the pioneer TF GAF as playing a strong role in chromatin accessibility, but our models also predicted that GAF does not play the same role as Zelda in helping other TFs bind. While GAF is predicted to boost its own binding, it does not seem to strongly promote the binding of the patterning TFs. One explanation for the difference may be the residence time on DNA. While Zelda binds DNA only transiently on the order of seconds 46, GAF multimerizes on DNA and remains on chromatin on the order of minutes 119-122. Such stable binding makes sense in light of GAF's role in 3D genome structure 122-126 and transcriptional memory 120,127,128. Thus, GAF could generate accessible chromatin, but by binding to the newly opened DNA itself, it could partially occlude the binding of additional TFs. These results suggest that an accessible region is not necessarily accessible to all other TFs and further highlight that accessibility is not always a perfect proxy for activation. A separate contribution of pioneering and enhancer activation towards chromatin accessibility likely applies to mammals. In mammals, the highest accessibility is typically also found at active enhancers 129-132, yet chromatin accessibility is often only a mediocre predictor for enhancer activity 133-135. Without TF binding data and prior knowledge, it can, however, be difficult to deduce from accessibility data alone whether a TF bestows chromatin accessibility as a pioneer TF, as an activator, or both 16,17,136. For example, in our later time points where Dorsal binding is highest, Dorsal more consistently promotes chromatin accessibility (Figure 2f), thus behaving more like a pioneer. It might therefore initially require a combined approach, which includes TF binding, chromatin accessibility, deep learning, and additional experiments, to better distinguish between the mammalian TFs that drive chromatin accessibility and those that follow it.

Supplemental files

Supplemental file 1: BPNet-identified and mapped motifs for Zelda, GAF, Bicoid, Caudal, Dorsal, and Twist. Motif coordinates come from the Drosophila melanogaster dm6 genome assembly.

Supplemental file 2: Motif islands, with provided coordinates aligned to the Drosophila melanogaster dm6 genome assembly. Islands were tested for overlaps with known active enhancers 73.
The normalized ATAC-seq signal, calculated across 250 bp centered on each island, is provided for wildtype, gd⁷, zld⁻, and cic⁶ embryos. Island types with fewer than 30 instances were excluded.

Data and code availability

The raw and processed data for ChIP-nexus, ChIP-seq, ATAC-seq, MNase-seq and protein binding microarray experiments are available from GEO under series accession number GSE218852. All code used to process and analyze the data can be accessed at https://github.com/zeitlingerlab/Brennan_Zelda_2023. The ChIP-nexus protocol and the data processing description can be found at https://research.stowers.org/zeitlingerlab/protocols.html. Trained BPNet and ChromBPNet models will be available at Zenodo and Kipoi following review. Original data, including microscopy images, can be accessed from the Stowers Original Data Repository at http://www.stowers.org/research/publications/libpb-2357.

Figure 6. Pioneering and enhancer activation increase chromatin accessibility. Chromatin accessibility at enhancers is established in a two-tier process that involves pioneering and activation. First, the pioneer Zelda bestows basal chromatin accessibility at enhancers, without activating them, by reading out its motif affinity on nucleosomal DNA. Zelda's pioneering is a consistent effect that is not dependent on the combination of motifs in the enhancer. The pioneering then allows the binding of patterning TFs such as Dorsal, which require an accessible state of the DNA to bind to their motifs. Hubs may then form on accessible regions when sufficient concentrations of patterning TFs bind and interact with each other and co-factors through multivalent weak interactions. In this way, hub formation is DNA-templated and facilitated by Zelda's global pioneering. Whether or not Zelda is also present in these hubs is unclear. During enhancer activation, chromatin accessibility is further increased, perhaps by hubs recruiting Nej, the Drosophila CBP, which mediates histone acetylation at enhancers. Since the TF hubs appear dynamic, they could leave the DNA and make the region more accessible.

Fly stocks

zld⁻ embryos were generated by crossing UAS-shRNA-zld females to MTD-Gal4 males as previously described 6 and tested for embryonic lethality 38 and Zelda depletion using immunostaining (Figure 3). Embryos lacking nuclear Dorsal were laid by gd⁷/gd⁷ mothers generated from a gd⁷/winscy, P{hs-hid}5 stock that was heat-shocked at the larval stage at 37°C for 1 hour on two consecutive days to eliminate heterozygous mothers 6. Loss of the hs-hid sequence was confirmed using PCR on genomic DNA extracted from heat-shock survivors. The cic⁶/TM3, Sb¹ stock was generated using CRISPR/Cas9 as previously described 59. cic⁶ embryos were collected from cic⁶/cic⁶ mothers identified by wt bristles and were confirmed to be embryonic lethal.

Embryo collections, fixation, and sorting

All embryos were collected from population cages using apple juice plates with yeast paste, following two pre-clearings as previously described 58,137. For ChIP-nexus, ChIP-seq, and MNase-seq experiments, embryos were collected for 1 h and aged for 2 h at 25°C, yielding collections of 2-3 h after egg laying (AEL). For ATAC-seq, embryos were collected in 30-minute windows and aged accordingly to generate the 1-1.5, 1.5-2, 2-2.5, and 2.5-3 h AEL time points. All embryos were dechorionated using 50% bleach for 2 minutes and rinsed thoroughly with water afterwards.
For ATAC-seq, embryos were hand-sorted based on morphology in ice-cold PBT immediately following dechorionation using an inverted contrasting microscope (Leica DMIL) as described 137. For ChIP-nexus, ChIP-seq, and MNase-seq, embryos were first fixed with 1.8% formaldehyde in heptane and embryo fix buffer (50 mM HEPES, 1 mM EDTA, 0.5 mM EGTA, 100 mM NaCl) while vortexing for 15 minutes. For ChIP-nexus and ChIP-seq, the vitelline membrane was removed using methanol/heptane, and embryos were stored in methanol at -20°C until use. For these experiments, embryos were rehydrated using PBT and sorted to remove out-of-stage embryos using either hand-sorting or cytometry (Copas Plus, macroparticle sorter, Union Biometrica). For MNase-seq, embryos were spun down at 500 x g, 4°C, for 1 minute, and fixation was quenched by adding 10 mL PBT-glycine (125 mM glycine in PBT) and vortexing for 2 minutes. Embryos were hand-sorted based on morphology in ice-cold PBT and then used in MNase-seq experiments.

ChIP-nexus and ChIP-seq experiments

For each ChIP, 10 µg of antibody was coupled to 50 µL of Protein A Dynabeads (Invitrogen) and incubated overnight at 4°C prior to ChIP. All ChIP-nexus experiments were performed using antibodies custom-generated by Genscript: Zelda (aa 1117-1327), Dorsal (aa 39-346), Twist (C-terminus), Bicoid (C-terminus), Caudal (aa 1-214), GAF (aa 1-382). ChIP-seq experiments were performed with the following commercially available antibodies: H3K27ac (Active motif, 39133) and H3K4me1 (Active motif, 39635). For all TFs, at least three biological replicates were performed using embryos from different collections. For ChIP-seq, at least two biological replicates were performed in the same way. Approximately 0.2-0.4 grams of fixed 2-3 h AEL embryos were used for all ChIP experiments. Chromatin extracts were prepared by douncing embryos in Lysis Buffer A1 (15 mM HEPES pH 7.5, 15 mM NaCl, 60 mM KCl, 4 mM MgCl2, 0.5% Triton X-100, 0.5 mM DTT (added fresh)), washing nuclei with ChIP Buffer A2 (15 mM HEPES pH 7.5, 140 mM NaCl, 1 mM EDTA, 0.5 mM EGTA, 1% Triton X-100, 0.5% N-lauroylsarcosine, 0.1% sodium deoxycholate, and 0.1% SDS), and sonicating with a Bioruptor Pico (Diagenode) for six cycles of 30 seconds on and 30 seconds off. ChIP-nexus library preparation was performed as previously described 58, except that the ChIP-nexus adapter mix contained four fixed barcodes and PCR library amplification was performed directly after circularization of the purified DNA fragments (without addition of the oligo and BamHI digestion). ChIP-seq was performed as previously described and included a whole cell extract (WCE) 68,77. Single-end sequencing was performed on an Illumina NextSeq 500 instrument (75 or 150 cycles). The full ChIP-nexus protocol can be found on the Zeitlinger lab website at https://research.stowers.org/zeitlingerlab/protocols.html.

MNase-seq experiments

For each MNase digestion, 100 hand-sorted 2-3 h AEL Drosophila embryos were used. Nuclei were extracted by douncing in PBS with 0.1% IGEPAL CA-630. The nuclei were harvested by centrifugation and resuspended gently in MNase Digestion Buffer (PBS with 0.1% Triton X-100 and 1 mM CaCl2). MNase digestion was performed with 100 U MNase (NEB, M0247S) for 30 minutes at 37°C. The reaction was stopped with 20 mM EGTA.
The nuclei were treated with 50 µg/ml RNase A (Thermo Scientific, EN0531) for 1 hour at 37°C and 1000 rpm, and subsequently incubated overnight at 65°C and 1000 rpm with 200 µg/ml Proteinase K (Invitrogen, 100005393) and 0.5% SDS for reverse crosslinking. DNA was extracted using phenol-chloroform (VWR, K169). Libraries were constructed from 10 ng purified DNA using the High Throughput Library Prep Kit from KAPA Biosystems (KK8234) according to the manufacturer's instructions. Three experimental replicates were performed. Paired-end sequencing was performed on an Illumina NextSeq 500 instrument (2x 75 bp cycles).

Antibody staining and microscopy experiments

Embryos were collected and aged to be 2-3 hours old, fixed with 1.8% formaldehyde, and stored in 100% methanol at -20°C prior to immunostaining. Embryo aliquots were rehydrated in an ethanol:PBT gradient and blocked for 30 minutes using the Roche Western blocking reagent (11921681001) and PBT. Primary antibody incubation occurred at 4°C overnight with a 1:200 antibody dilution in PBT/blocking reagent with the same Zelda, Dorsal, and Twist antibodies used for ChIP-nexus experiments. Embryos were then washed six times with PBT, blocked again, and incubated with a donkey anti-rabbit IgG Alexa Fluor 568 secondary antibody (Thermo Fisher, A10042), 1:500, at 4°C overnight. After eight washes with PBT, embryos were mounted with ProLong Gold Antifade Mountant with DAPI (Invitrogen, P36931). Images were acquired on a Zeiss LSM-780 laser scanning confocal microscope with a 32-channel GaAsP detector and a plan-apochromat 10x objective lens, N.A. 0.45, using the ZEN Black 2.3 SP1 software by Zeiss. The Alexa Fluor 568 track used a DPSS 561 nm laser excitation at 6.5%, and the DAPI track used a Diode 405 nm laser excitation at 6.0%. Images were collected using a frame size of 1024 x 1024, a zoom of 1.5, and a pixel dwell time of 3.15 µs. Confocal z-stacks were maximum intensity projected, and all image processing steps were performed using FIJI 139. All microscopy and processing settings were kept the same when comparing wt to zld⁻ or gd⁷ embryos.

Protein binding microarray experiments

For all PBM experiments, the C-terminal region of Zelda, which includes the four zinc fingers (#3-6) that are known to bind CAGGTAG motifs, was used 37,38. These zinc fingers were cloned into a T7-driven GST expression vector, pTH6838. The TF sample was expressed by using a PURExpress In Vitro Protein Synthesis Kit (New England BioLabs) and analyzed in duplicate on two different PBM arrays (HK and ME) with differing probe sequences. PBM laboratory methods, including data analysis, followed the procedures previously described 140,141. PBM data were generated with motifs derived using Top10AlignZ 88. Z-scores and E-scores were calculated for each 8-mer as previously described 87,88. Octamers were grouped together based on their heptad sequences while also considering reverse complements, and the median E-score and Z-score were calculated for each 7-mer. The heptad sequences matching BPNet-mapped Zelda motifs were then extracted, and the two PBM replicates were averaged for each Zelda motif.

ChIP-nexus data processing

ChIP-nexus single-end sequencing reads were preprocessed by trimming off fixed and random barcodes and reassigning them to FASTQ read names. ChIP-nexus adapter fragments were trimmed from the 3' end of the fragments using cutadapt (v.2.5 142). ChIP-nexus reads were aligned using bowtie2 (v.2.3.5.1 143) to the Drosophila melanogaster genome assembly dm6.
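The 8-mer-to-7-mer aggregation used in the PBM analysis above (group by heptad while considering reverse complements, then take medians) can be sketched as follows; the input dictionary and function names are hypothetical, not from the published pipeline.

```python
# Sketch: collapse PBM 8-mer Z-scores to per-7-mer medians, treating a 7-mer
# and its reverse complement as the same key. `eightmer_scores` maps
# 8-mer sequence -> Z-score (placeholder input).
from statistics import median

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    return seq.translate(COMP)[::-1]

def sevenmer_medians(eightmer_scores):
    groups = {}
    for kmer, z in eightmer_scores.items():
        for seven in (kmer[:-1], kmer[1:]):     # the two 7-mers within each 8-mer
            key = min(seven, revcomp(seven))    # canonical strand
            groups.setdefault(key, []).append(z)
    return {k: median(v) for k, v in groups.items()}
```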
Aligned ChIP-nexus BAM files were deduplicated based on unique fragment coordinates and barcode assignments. Normalized ChIP-nexus coverage was acquired through reads-per-million (RPM) normalization, where the ChIP-nexus sample coverage was scaled by the total number of reads divided by 10⁶. ChIP-nexus peaks were mapped using MACS2 (v.2.2.7.1 144) with parameters designed to resimulate the full fragment-length coverage rather than the single stop-base coverage (--keep-dup=all -f=BAM --shift=-75 --extsize=150). ChIP-nexus peaks were filtered for pairwise reproducibility using the Irreproducible Discovery Rate (IDR) framework (v.2.0.3 145). Peaks used for downstream analysis were selected from the largest pairwise comparison using the IDR framework.

ATAC-seq data processing

ATAC-seq paired-end sequencing reads were aligned using bowtie2 (v.2.3.5.1 143) to the Drosophila melanogaster genome assembly dm6. Aligned ATAC-seq BAM files were marked for duplicates using Picard (v.2.23.8 146) based on unique fragment coordinates, deduplicated, reoriented according to a Tn5 enzymatic cut correction of -4/+4 on fragment ends, filtered to contain fragment lengths no greater than 600 bp, and corrected for dovetailed reads. Normalized ATAC-seq coverage was acquired through reads-per-million (RPM) normalization, where the ATAC-seq sample coverage was scaled by the total number of reads divided by 10⁶, as performed previously 50,76. Cut-site ATAC-seq coverage was acquired by treating each of the fragment ends as a "cut event" and generating coverage based on only these "cut events". ATAC-seq peaks were mapped using MACS2 (v.2.2.7.1 144) with default paired-end parameters using ATAC-seq fragment coverage. ATAC-seq peaks were filtered for pairwise reproducibility using the Irreproducible Discovery Rate (IDR) framework (v.2.0.3 145). Peaks used for downstream analysis were selected from the largest pairwise comparison using the IDR framework.

MNase-seq data processing

MNase-seq paired-end sequencing reads were aligned using bowtie2 (v.2.3.5.1 143) to the Drosophila melanogaster genome assembly dm6. Aligned MNase-seq BAM files were deduplicated based on unique fragment coordinates and filtered to contain fragment lengths no greater than 600 bp. Normalized MNase-seq coverage was acquired through reads-per-million (RPM) normalization, where the MNase-seq sample coverage was scaled by the total number of reads divided by 10⁶.

BPNet model training and optimization

BPNet architecture and software were applied as previously described 57. Model inputs were 1000 bp genomic sequences centered on the ChIP-nexus peaks of TFs of interest. Model outputs were the predicted counts (total reads across each region) and predicted profile (coverage signal across each region) for Zelda, Dorsal, Twist, Caudal, Bicoid, and GAF ChIP-nexus experiments. 95,282 IDR-reproducible peaks from Zelda, Dorsal, Twist, Caudal, Bicoid, and GAF ChIP-nexus experiments were pooled and used as model inputs. Validation datasets were peaks located across chr2L (~18% of peaks), test datasets were peaks located across chrX (~19% of peaks), and peaks located across chrY and nonstandard chromosome contigs were excluded from analysis. The remaining regions were used for model training.
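The cut-site coverage and RPM normalization steps just described amount to a simple pileup. Below is a hedged Python sketch, assuming fragments are already deduplicated, Tn5-corrected (chrom, start, end) tuples; bounds checking and BED end-coordinate conventions are glossed over.

```python
# Sketch: treat each fragment end as one Tn5 "cut event", pile up coverage,
# then apply reads-per-million scaling (divide by total events / 1e6).
import numpy as np

def cut_site_coverage(fragments, chrom_sizes):
    cov = {c: np.zeros(l, dtype=np.float64) for c, l in chrom_sizes.items()}
    n_events = 0
    for chrom, start, end in fragments:
        for pos in (start, end):          # both fragment ends are cut events
            cov[chrom][pos] += 1
            n_events += 1
    scale = n_events / 1e6                # RPM normalization factor
    return {c: v / scale for c, v in cov.items()}

# Example with a toy genome (placeholder names and sizes):
coverage = cut_site_coverage([("chr2L", 100, 250)], {"chr2L": 1000})
```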
Hyper-parameters were optimized by selected testing of parameter values deviating from the default BPNet architecture (number of dilated convolutional layers, number of filters in each convolutional layer, filter length of the first convolutional layer, filter length of the deconvolutional layer, learning rate, and counts-to-profile loss balancing). Model optimality was assessed based on counts and profile performance of each task, with a focused emphasis on the Zelda task performance, as this was our key TF of interest. After optimization, the final BPNet model architecture contained 9 dilated convolutional layers, 256 filters in each convolutional layer, a filter length of 7 bp for both the input convolutional layer and output deconvolutional layer, a learning rate of 0.004, and a counts-to-profile weighting value (lambda) of 100. Final optimized model performance was assessed through comparing (1) area under the Precision-Recall Curves (auPRC) for profiles over different bins of resolution between observed ChIP-nexus profiles and predicted BPNet profiles (Supplemental figure 2a) and (2) counts correlations of observed ChIP-nexus signals to predicted BPNet signals for each TF (Supplemental figure 2b), as previously described 57. The auPRC values were benchmarked alongside replicate-replicate, observed-random, and observed-average observed profile comparisons to establish an in-context understanding of predicted profile accuracy. In order to test the stability of this optimized model architecture (fold 1), we trained two additional models with shuffled training, validation, and test sets (three-fold validation). The stability of the performance metrics as well as the stability of the returned downstream motif grammar was compared to the original optimized model training event (Supplemental figure 2c). All BPNet models were implemented and trained using Keras (v2.2.4 148), the TensorFlow1 backend (v.1.7 149), and the Adam optimizer 150. Training was performed using an NVIDIA® TITAN RTX GPU with CUDA v9.0 and cuDNN v7.0.5 drivers.

Motif extraction, motif curation, and motif island generation

DeepLIFT (v0.6.9.0, derived from the Kundaje Lab fork of DeepExplain (https://github.com/kundajelab/DeepExplain) 151) was applied to the trained BPNet model to generate the contribution of each base across a given input sequence to the predicted output counts and profile signals. Contribution scores for counts and profile outputs were generated for all 6 TF tasks. TF-MoDISco (v.0.5.3.0 152) was then applied across each TF separately. For each TF, regions of high counts contribution were identified, clustered based on within-group contribution and sequence similarity, and consolidated into motifs. The Zelda, Dorsal, Twist, Caudal, Bicoid, and GAF motifs were manually identified based on similarity to previous literature and validation of ChIP-nexus binding from the pertinent TF. Once motifs were characterized and confirmed, they were remapped back to their TF-specific peaks based on both Jaccard similarity to the TF-MoDISco contribution weight matrix (CWM) and sufficient total absolute contribution across the mapped motif. This mapping approach is previously described 57. However, as we were interested in lower-affinity motif representations than were previously identified by BPNet, mapping thresholds were lowered to mapping the motif if the CWM Jaccard similarity percentile was equal to or greater than 10% and if the total absolute contribution percentile was equal to or greater than 0.5%.
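For orientation, the sketch below renders a BPNet-style network with the stated hyperparameters (9 dilated convolutional layers, 256 filters, 7 bp input and output filters, learning rate 0.004, counts weight 100) in modern tf.keras. It is a simplified single-task illustration: the actual model is multi-task and uses a multinomial negative log-likelihood for profiles and MSE on log counts, which are replaced here by built-in stand-ins.

```python
# Simplified BPNet-style model: dilated residual conv stack with a per-base
# profile head and a scalar counts head. Losses are placeholders, not the
# exact losses used in the study.
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, N_FILTERS, N_DIL = 1000, 256, 9

inp = layers.Input(shape=(SEQ_LEN, 4))                 # one-hot DNA
x = layers.Conv1D(N_FILTERS, 7, padding="same", activation="relu")(inp)
for i in range(1, N_DIL + 1):
    conv = layers.Conv1D(N_FILTERS, 3, padding="same",
                         dilation_rate=2 ** i, activation="relu")(x)
    x = layers.Add()([x, conv])                        # residual connection

profile = layers.Conv1DTranspose(2, 7, padding="same", name="profile")(x)
counts = layers.Dense(2, name="counts")(layers.GlobalAveragePooling1D()(x))

model = tf.keras.Model(inp, [profile, counts])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.004),
              loss=["poisson", "mse"],                 # stand-in losses
              loss_weights=[1.0, 100.0])               # lambda = 100 on counts
```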
After mapping, motifs were filtered for redundant assignment of palindromic sequences and overlapping peaks. Mapped and bound motifs were next clustered into 'motif islands' based on their proximity. Each island initially starts as a 200 bp region centered on the motif and gets clustered and merged with another nearby motif island if they overlap. In this manner, islands get extended as long as there is a motif within less than 200 bp. In the end, the vast majority of islands are still between 200-400 bp in width (Supplemental figure 12b). Island types with fewer than 30 genomic instances were filtered out (Supplemental figure 12a). These island clusters were then grouped for downstream analysis.

ChromBPNet model training and optimization

ChromBPNet is a modification of BPNet, designed to explain the relationship between genomic sequence and base-resolution ATAC-seq cut-site coverage 28,79. ChromBPNet possesses a similar model architecture to BPNet, but the training process contains extra steps to account for the Tn5 sequence bias that influences the positions of the ATAC-seq cut sites. If the Tn5 sequence bias is not accounted for, the positional information of the cut sites cannot be reliably interpreted. ChromBPNet handles this during the training step by simultaneously passing sequence information through (1) a frozen, pre-trained model that has already learned Tn5 sequence bias and (2) an unfrozen, randomly initialized model that will learn the unbiased sequence rules associated with ATAC-seq cut-site coverage. During training, the sequence information passes through both of these models, and their respective outputs are added together; the combined output is used to compute the training loss. By adding the two model outputs, ChromBPNet evaluates both Tn5 sequence bias and the sequence rules of accessibility, which can be compared to the actual ATAC-seq cut-site coverage (which also possesses both of these features). After the training step has been completed, we remove the frozen Tn5 bias model and apply downstream interpretations only to the second model, which contains the unbiased sequence rules that explain accessibility coverage of ATAC-seq cut sites. To train the highest-quality set of models in the Drosophila genome, we trained a custom Tn5 bias model to represent the Tn5 sequence bias in our data. The Tn5 bias model architecture followed ChromBPNet defaults 79. The Tn5 bias model output was the pooled coverage of the 2.5-3 h ATAC-seq experiments. This time point was chosen for the bias model because it was the most likely time in which this model could have learned underlying sequence grammar of interest and therefore the most optimal to validate against. The Tn5 bias model inputs were genomic regions that met the following criteria: (1) closed (non-peak ATAC-seq regions across all time points), (2) unbound (non-peak ChIP-nexus regions across all TFs described above), (3) low-coverage regions (containing less than five times the cut sites as the lowest coverage 2.5-3 h ATAC-seq IDR-reproducible peak region), (4) 2114 bp in width, and (5) at least 750 bp away from an annotated fly TSS. These criteria were applied in order to ensure that Tn5 sequence bias was only learned at regions that were closed, inactive, and representative of noise-based cut-site coverage. After application of these criteria, the Tn5 bias model was trained on 2,326 training regions and 883 validation regions. Training, validation, and test regions were determined based on the chromosomes reported above for BPNet.
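The frozen-bias-plus-trainable-model arrangement described above can be sketched as follows; `bias_model` and `acc_model` are placeholders for two BPNet-style networks with matching output shapes, and the single combined output is a simplification of the real profile-plus-counts heads.

```python
# Sketch: during training, the frozen Tn5 bias model and the trainable
# accessibility model both see the sequence, and their outputs are summed so
# that the loss is computed against raw (biased) cut-site coverage.
import tensorflow as tf
from tensorflow.keras import layers

def combine_for_training(bias_model, acc_model, seq_len=2114):
    bias_model.trainable = False                  # freeze the Tn5 bias model
    inp = layers.Input(shape=(seq_len, 4))
    bias_out = bias_model(inp, training=False)    # learned Tn5 sequence bias
    acc_out = acc_model(inp)                      # unbiased accessibility rules
    combined = layers.Add()([bias_out, acc_out])  # summed (log-scale) outputs
    return tf.keras.Model(inp, combined)

# After training, interpretations use `acc_model` alone, which holds the
# bias-corrected sequence rules.
```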
In order to validate that the Tn5 bias model learned only Tn5 sequence bias and no other grammar rules, particularly motif-driven rules, we collected Tn5 counts and Tn5 profile contribution scores using the DeepSHAP implementation of DeepLIFT (https://github.com/kundajelab/shap) 151 and ran TF-MoDISco (v.0.5.16.0 152). For profile contribution, the Tn5 sequence bias was returned as multiple different logos (Supplemental figure 6b), but no motif consensus logos were returned. For counts contribution, neither Tn5 nor motif consensus logos were returned. This confirmed that our Tn5 bias model was only learning positional Tn5 sequence bias information. To follow up on this validation, we injected the sequences of likely canonical motifs into 256 genomic sequences from the test chromosome (chrX) and averaged the effects to confirm that the Tn5 bias model did not predict an increase in coverage magnitude (Supplemental figure 6a). After Tn5 bias model training, ChromBPNet architecture and software (https://github.com/kundajelab/chrombpnet) was applied as described 79. Model inputs were 2114 bp genomic sequences centered on IDR-reproducible ATAC-seq peaks. In order to fairly compare the results between the four ChromBPNet models for each developmental time point measured using ATAC-seq (1-1.5 h, 1.5-2 h, 2-2.5 h, 2.5-3 h), we sought to train each of the models with the pooled IDR-reproducible ATAC-seq peaks from every time point measured. Additionally, because we wished to characterize enhancer accessibility rules, we removed peaks that were within 750 bp of an annotated TSS, as we know that accessibility at promoters can be dictated by different sequence rules than at enhancers. After the time points were pooled and promoter-proximal peaks removed, 41,497 ATAC-seq peaks were included. In order to train more robust models, we also included curated non-peak regions (described above) sampled to 10% of the ATAC-seq peaks for training (4,150 non-peak regions). The inclusion of both peak and non-peak ATAC-seq regions allows the model to better differentiate between accessible and inaccessible sequences. In total, 45,647 regions were used as ChromBPNet model inputs. Validation datasets were peaks located across chr2L (~16% of peaks), test datasets were peaks located across chrX (~19% of peaks), and peaks located across chrY and nonstandard chromosome contigs were excluded from analysis. The remaining regions were used for model training. In addition to sharing peaks across different ChromBPNet models to maintain inter-model stability, we also sought to train each of the models with the same ChromBPNet architecture. For this, an optimization search was required, and we again decided to optimize on the pooled coverage of the 2.5-3 h ATAC-seq experiments through selected testing of parameter values deviating from the default ChromBPNet architecture (number of filters in each convolutional layer, filter length of the first convolutional layer, and filter length of the deconvolutional layer). Model optimality was assessed based on the counts and profile performance of the bias-removed predictions, as well as prioritizing model depth to avoid overdistribution of motif grammar within sequence representations. After optimization, the final ChromBPNet model architecture contained 128 filters in each convolutional layer, a filter length of 7 bp for the input convolutional layer, and 75 bp for the output deconvolutional layer.
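The chromosome-held-out splitting used for both BPNet and ChromBPNet can be sketched as below; the rule used here to detect nonstandard contigs is an illustrative assumption.

```python
# Sketch: chr2L peaks -> validation, chrX peaks -> test, chrY and nonstandard
# contigs dropped, everything else -> training. Peaks are (chrom, start, end).
def split_by_chromosome(peaks):
    splits = {"train": [], "val": [], "test": []}
    for peak in peaks:
        chrom = peak[0]
        if chrom == "chr2L":
            splits["val"].append(peak)
        elif chrom == "chrX":
            splits["test"].append(peak)
        elif chrom == "chrY" or not chrom.startswith("chr"):
            continue        # excluded: chrY and nonstandard contigs (assumed rule)
        else:
            splits["train"].append(peak)
    return splits
```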
We then trained ChromBPNet models on the pooled cut-site coverage of the four developmental time point ATAC-seq experiments (1-1.5 h, 1.5-2 h, 2-2.5 h, 2.5-3 h). Final optimized model performance was assessed through comparing (1) the ability of the model to differentiate peak and non-peak regions using the area under the receiver operating characteristic curve (ROC AUC) (Supplemental figure 6c), (2) counts correlations of observed ATAC-seq cut sites to ChromBPNet predictions (Supplemental figure 6d), and (3) profile prediction accuracy of observed ATAC-seq cut sites to ChromBPNet predictions using Jensen-Shannon distances benchmarked by randomly shuffled region profiles (Supplemental figure 6e). In order to test the stability of these different ChromBPNet models, we trained two additional models across each ATAC-seq time point with shuffled training, validation, and test sets (three-fold validation). The stability of the performance metrics as well as the stability of the returned downstream motif grammar was compared to the original optimized model training event (fold 1). All ChromBPNet models were implemented and trained using Keras (v2.5.0 148), the TensorFlow2 backend (v.2.5.1 149), and the Adam optimizer 150. Training was performed using an NVIDIA® TITAN RTX GPU with CUDA v11.0 and cuDNN v8.3.0 drivers.

ChromBPNet contribution score generation and validation

DeepLIFT (v0.6.13.0, derived from the Kundaje Lab fork of DeepSHAP (https://github.com/AvantiShri/shap) 151) was applied to the trained ChromBPNet model to generate the contribution of each base across a given input sequence to the predicted output counts and profile signals. Contribution scores for counts and profile outputs were generated for each trained ChromBPNet model across all time points (1-1.5 h, 1.5-2 h, 2-2.5 h, 2.5-3 h). TF-MoDISco (v.0.5.16.0 152) was then applied for each trained ChromBPNet model in order to identify regions of high counts contribution, cluster them based on within-group contribution and sequence similarity, and consolidate these clusters into motifs. Pertinent motifs (Zelda, GAF, Caudal, Twist-like, and Dorsal-like) were manually identified based on similarity to previous literature, and ChIP-nexus binding was measured across these accessibility-identified motifs to validate that they were indeed relevant binding sites that also contribute towards explaining the ChromBPNet models across the designated time points (Supplemental figure 6f).

Using binding and accessibility models to examine motif effects in silico

In order to internally measure the "marginalized" effects of motifs without the surrounding genomic context, we adopted an in silico approach by which we injected motifs into many seed-controlled randomized sequences and generated BPNet and ChromBPNet predictions of these sequences with and without the motifs. We used 64 randomized sequences for BPNet predictions and 512 for ChromBPNet predictions (accessibility predictions contain greater sequence complexity and therefore required more trials to establish stable predictions across randomly generated sequences), averaging predictions across each of these randomized sequence sets. After performing in silico injections of a single motif, we visualized the output profiles generated from randomized sequence alone or motif-injected sequences for the Tn5 bias model, ChromBPNet models, and BPNet across all TF motifs.
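A sketch of the injection procedure described above: a motif is placed at the center of randomized sequences, predictions are made with and without it, and the results are summarized in log space (as formalized in the next paragraph). `predict_counts` is a placeholder for a trained BPNet or ChromBPNet prediction function, not an actual API.

```python
# Sketch: marginalized motif effect via injection into random sequences.
import numpy as np

def marginalized_score(predict_counts, motif, n_trials=512, seq_len=2114, seed=0):
    rng = np.random.default_rng(seed)
    bases = np.array(list("ACGT"))
    deltas = []
    for _ in range(n_trials):
        seq = rng.choice(bases, size=seq_len)         # random background
        injected = seq.copy()
        center = seq_len // 2
        injected[center:center + len(motif)] = list(motif)
        c_motif = predict_counts("".join(injected)).sum()
        c_null = predict_counts("".join(seq)).sum()
        deltas.append(np.log2(c_motif) - np.log2(c_null))
    return float(np.mean(deltas))                     # average over trials
```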
It has been previously described that accurate predictions of relative motif affinities can be extracted from a BPNet model trained on ChIP-nexus data 92,93. We then summarized the "marginalized" effects of motifs above to compare how motif affinity changes Zelda's influence at the level of both binding and accessibility. After performing in silico injections of a single motif as described above, we summed the values of the output profiles generated from randomized sequence alone or motif-injected sequences for both ChromBPNet and BPNet. These sums were then subtracted in log space and referred to as "marginalized" scores, characterized as $s = \log_2(c_{motif}) - \log_2(c_{\varnothing})$, where $c_{motif}$ is the predicted sum of the counts when a motif is injected into the random sequence and $c_{\varnothing}$ is the predicted sum of the counts of the averaged random sequences without injections. These "marginalized" scores were computed for each Zelda motif variant for all ChromBPNet models and BPNet. In order to test the effects of motif pairs on cooperativity for binding and accessibility without surrounding genomic context, in silico motif interaction analysis was performed as described previously 57. In brief, this involved injecting two motif sequences (motif A and motif B) across motif pair distances (d) ranging up to 400 bp into random sequences. Binding predictions and accessibility predictions were measured in these different simulation scenarios from BPNet (where $h$ represents the sum of the counts predicted across a 200 bp window centered on motif A) and ChromBPNet (where $h$ represents the sum of the counts predicted across the entire 1000 bp window), respectively. We measured four different cases: (1) neither motif A nor motif B was injected into the sequence ($h_{\varnothing}$), (2) only motif A was injected into the sequence ($h_A$), (3) only motif B was injected into the sequence ($h_B$), and (4) motif A and motif B were both injected into the sequence at a designated distance ($h_{AB}$). These cases were measured and averaged across 64 trials for BPNet predictions and 512 trials for ChromBPNet predictions (accessibility predictions contain greater sequence complexity and therefore required more trials to establish stable predictions across randomly generated sequences). After all measurements were collected across all motif combinations and distances, then averaged across trials, the in silico motif pair cooperativity was calculated as the fold change $\frac{h_{AB} + p}{h_A + p}$, where $p$ is a pseudocount given by the 20th percentile quantile cutoff value for both binding and accessibility predictions across each window when motif A and motif B are present and when only motif A is present (cases 4 and 2, respectively, described above). The motif pairs considered were combinations of the highest-affinity representations of Zelda (CAGGTAG), Dorsal (GGGAAAACCC), Twist (AACACATGTT), Caudal (TTTTATGGCC), Bicoid (TTAATCC), and GAF (GAGAGAGAGAGAGAGAG). For both BPNet and all ChromBPNet models, these high-affinity motifs were also tested alongside an additional lower-affinity representation of Zelda (TAGGTAG) in a pairwise fashion with all other motifs to investigate Zelda's changing influence on other TFs based on motif affinity.

Using binding and accessibility models to examine motif effects in genomic sequences

In order to measure the in-context effects of a motif within its surrounding genomic sequence, we computationally generated genomic sequences with this motif's sequence mutated by randomly shuffling the bases that belong to this motif.
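Under the fold-change form reconstructed above, the pairwise cooperativity computation might look like the following; `predict_sum`, the sequence sets, and the pseudocount choice are illustrative assumptions rather than the published implementation.

```python
# Sketch: motif-pair cooperativity as (h_AB + p) / (h_A + p), where p is a
# pseudocount taken as the 20th percentile of the predicted counts.
import numpy as np

def cooperativity(predict_sum, seqs_A, seqs_AB, p_quantile=0.20):
    h_A = np.array([predict_sum(s) for s in seqs_A])    # motif A alone (case 2)
    h_AB = np.array([predict_sum(s) for s in seqs_AB])  # A and B together (case 4)
    p = np.quantile(np.concatenate([h_A, h_AB]), p_quantile)  # pseudocount
    return (h_AB.mean() + p) / (h_A.mean() + p)
```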
We generated 16 randomized mutation sequences per motif instance to establish mutation stability, averaging predictions across each of these randomized mutation sets. We performed this genomic perturbation for all mapped TF motifs across our curated set of genomic enhancers (described above) and visualized the output profiles generated for both BPNet and all ChromBPNet models. In order to summarize the accessibility effects of mutating high- and low-affinity Zelda motifs, the 250 highest- and lowest-affinity islands containing only Zelda motifs were identified. Using the procedure described above for all Zelda motifs in these genomic islands, accessibility profiles from unmodified island sequences and Zelda-mutated island sequences were predicted using the ChromBPNet models. After generating the profiles for each island, we summed the profiles into a single scalar value for WT sequences ($c_{WT}$) and Zelda-mutated sequences ($c_{mut}$). Relative accessibility effects of high- and low-affinity Zelda motifs were characterized by the log2 fold-change effect, represented as $\log_2(c_{mut}/c_{WT})$.

Differential chromatin accessibility analysis

To determine the differential chromatin accessibility between wt embryos and mutant zld⁻, gd⁷, and cic⁶ embryos, we used DESeq2 with default parameters and FDR = 0.05 103. Briefly, for each comparison between wt and mutant ATAC-seq data sets, we calculated ATAC-seq cut-site coverage at the same pooled IDR-reproducible ATAC-seq peaks from all time points that were used for ChromBPNet prior to promoter removal (see "ChromBPNet model training and optimization"). For all time points we used three replicates and built one DESeq2 model encompassing ATAC-seq counts from all time points. In order to compute the differential chromatin accessibility, we then used each DESeq2 model to conduct pairwise comparisons between wt and mutant conditions within each time point and computed the log2(mutant/wt) values. In this way, log2(mutant/wt) < 0 represents a loss of chromatin accessibility in the mutant and log2(mutant/wt) > 0 represents a gain, with p-adjusted < 0.05 loci highlighted. We performed this differential chromatin accessibility approach for all wt-to-mutant comparisons.

Enhancer collection

The bulk set of mesodermal and dorsal ectodermal enhancers used in this study were previously defined based on differential histone acetylation 102. More limited sets of validated neuroectodermal enhancers, as well as mesoderm and dorsal ectoderm enhancers, were collected from previous work 77,153. All anterior-posterior patterning enhancers were collected from earlier studies 74,75. Additional enhancer lists that were consulted include a list of active blastoderm enhancers 73 and REDfly 154.

Supplementary figures

Supplemental figure 1. ChIP-nexus replicates for all TFs are highly correlated. Pearson correlation values were determined for the three replicates of (a) Zelda, (b) GAF, (c) Bicoid, (d) Caudal, (e) Dorsal, and (f) Twist ChIP-nexus experiments. Coverage for each replicate was calculated across a 400 bp window centered on the MACS2-called peaks for each TF. Because ChIP-nexus provides strand-specific information, the absolute value of the counts from the negative strand, which would otherwise be negative, was taken and added to the counts across the positive strand to determine the total region counts for a given replicate.

Supplemental figure 12. (c) As in Figure 4b. Here, islands are ordered by H3K27ac signal.
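The in-context mutation procedure (shuffle only the motif's bases, average over the randomized shuffles, report the log2 fold change of summed predictions) can be sketched as follows; `predict_profile` is a placeholder for a trained ChromBPNet prediction function.

```python
# Sketch: log2 fold-change accessibility effect of scrambling one mapped motif
# within its genomic sequence, averaged over 16 shuffles for stability.
import numpy as np

def motif_mutation_log2fc(predict_profile, seq, motif_start, motif_len,
                          n_shuffles=16, seed=0):
    rng = np.random.default_rng(seed)
    c_wt = predict_profile(seq).sum()                  # unmodified sequence
    c_mut = []
    for _ in range(n_shuffles):
        s = list(seq)
        window = s[motif_start:motif_start + motif_len]
        rng.shuffle(window)                            # scramble motif bases only
        s[motif_start:motif_start + motif_len] = window
        c_mut.append(predict_profile("".join(s)).sum())
    return float(np.log2(np.mean(c_mut) / c_wt))       # log2(c_mut / c_WT)
```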
(d) Greater chromatin accessibility is associated with regions containing more mapped Zelda motifs. All Zelda-containing islands were collected and separated based on how many Zelda motifs they contained. The observed normalized ATAC-seq fragment coverage for each time point was calculated across a 250 bp window anchored on the island center. Statistical significance was determined using the Wilcoxon rank-sum test (* = p < 0.05; ** = p < 0.01; *** = p < 0.001; **** = p < 0.0001). These results show that more Zelda motifs across a genomic region correlate with increased chromatin accessibility. This is consistent with previous results showing higher levels of nucleosome depletion for more Zelda motifs 6.
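For reference, a group-wise comparison of coverage values like the one in this panel could be run as follows; the coverage arrays here are hypothetical placeholders, not the study's data.

```python
from itertools import combinations
from scipy.stats import ranksums

# Hypothetical normalized ATAC-seq coverage values, grouped by the number
# of mapped Zelda motifs in each island (placeholder numbers only).
coverage_by_motif_count = {
    1: [12.1, 9.8, 14.3, 11.0, 10.5],
    2: [18.5, 21.0, 16.7, 19.2, 17.8],
    3: [25.2, 28.9, 24.1, 26.6, 27.3],
}

# Pairwise Wilcoxon rank-sum tests between motif-count groups.
for (a, xa), (b, xb) in combinations(coverage_by_motif_count.items(), 2):
    stat, p = ranksums(xa, xb)
    print(f"{a} vs {b} motifs: statistic={stat:.2f}, p={p:.3g}")
```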
Experimental Study on the Mechanical Properties of Porcine Cartilage with Microdefect under Rolling Load

Objectives. To investigate the mechanical responses of articular cartilage with microdefects under rolling load and to identify the failure patterns. Methods. Rolling load was applied to porcine articular cartilage samples with rectangular notches of different depths. The displacement and strain near the notches were obtained by the noncontact digital image correlation technique. Results. The strain value and peak frequency around the notch increased; the maximum equivalent strain value was observed at both bottom corners of the notch; the equivalent strain value first increased and then decreased at the points in the superficial and middle layers with the increase of rolling velocity; the points in the deep layer were less affected by rolling velocity; the equivalent strain value of the points in the superficial layer first rose and then declined with the increase of defect depth, while a decreasing trend was found for the points in the middle and deep layers. Conclusions. The shear strain, which rose with the increase in defect depth, was the main factor in cartilage destruction. The cartilage tended to be destroyed first at the bottom corner of the defect. Rolling velocity showed significant effects on the superficial and middle layers. Cartilage had the ability to resist destruction.

Introduction

The articular cartilage is an important part of the human bone joints, playing critical roles in reducing vibration and protecting bone during the daily activities of the human body. Studies have suggested that the articular cartilage has a quite complex composition and specific material properties such as nonlinearity and viscoelasticity [1]. The articular cartilage may be damaged to varying degrees by excessive exercise or other sudden causes [2][3][4]. However, many patients present no symptoms in the early stage of cartilage damage, and if left to accumulate, the damage can evolve into osteoarthritis, which then affects daily life [5,6]. Because of the absence of blood vessels, the articular cartilage can rarely be repaired if damaged, which usually results in osteoarthritis. Therefore, the study of the mechanical properties of cartilage with defects has started drawing attention in the scientific literature. Repeated load deformation can cause fatigue wear, which alters the mechanical properties of the cartilage. Generally, the greater the surface roughness, the faster the wear. Delamination is the major form of damage for cartilage under a friction load [7]. Damage will change the mechanical properties of the cartilage [8], causing reduced rigidity [9,10] or increased permeability [11], for instance. A cyclic loading experiment on cartilage showed that cyclic loading at 7-17 MPa could lead to cartilage cell death but not to damage of the surface structure [12]. When α-chymotrypsin was used to label collagenous fibers, the cartilage was found to be deeply stained, and marked fibrosis could be observed under a cyclic load of 5 MPa applied for 20 minutes [13]. Gratz et al. [14] explored changes in the notch angle of full-thickness-defected cartilage under a compressive load. It was found that a closed injury was more likely to slip than an open one. The experimental results of Stok and Oloyede [15,16] showed that the crack propagation mechanism of cartilage cracks was quite different from the opening mode of traditional fracture mechanics.
Dabiri and Li [17] analyzed data from a knee joint model and found that osmotic pressure gradually decreased while shear strain increased as cartilage degradation progressed. Based on a 3-D ankle model, Hua et al. [18] observed that the peak stress values increased significantly as the articular cartilage defect area increased; joint function was remarkably affected when the defect diameter exceeded 11 mm in the distal tibial articular cartilage. Hosseini et al. [19] showed the interactions between the softening of the bone matrix and the damage of fibers. A numerical model also showed that the damage grew preferentially along the tangent direction of the fibers. Fibers play an important role in the cartilage construct [20]. The distribution mode of collagen fibers in the cartilage plays a key role in its mechanical properties. Based on the distribution and arrangement of the collagen fibers, the articular cartilage can be roughly divided into three layers [21]: superficial (tangential), middle (transitional), and deep (radial). It was reported that the displacement changes produced by different compression levels, loading velocities, and loading times gradually decreased from the surface to the deep layer [22], and that the superficial layer has a vital role in maintaining the biomechanical properties of the cartilage. Previous studies have focused on the analysis of the mechanical properties of intact cartilage; however, various degrees of articular cartilage damage can be observed even in the early stage of osteoarthritis [23]. It often takes more than ten years or even decades from the initial damage to the loss of active ability. Therefore, it appears especially important to conduct research on the mechanical properties of injured cartilage. According to the analysis of bone and joint mechanics, a rolling load is the major force acting on cartilage [1]. In the present study, the noncontact digital image correlation (DIC) technique [24,25] was used to study changes in the mechanical properties of the articular cartilage related to defect depth under rolling load, in order to summarize the mechanical properties of injured cartilage. This study may provide a reliable reference for the prevention and treatment of bone-joint diseases.

Sample Preparation

Fresh knee joint cartilage of the distal femoral end was obtained from a 6-month-old pig. Cartilage slices (length = 8 mm; height = 18 mm; thickness = 3 mm) were cut along the normal direction of the cartilage surface. The cartilage defects were made with machine tools. The thickness of the circular blade was 0.5 mm, and the blade rotated with the main axis. The cartilage sample was fixed on the knife rest. The depth of the notch was controlled by the feed of the knife rest, with a feed precision of 0.1 mm. Notches with a width of 0.5 mm and varying depths were prepared (Figure 1). Three different notch depths were obtained: 0.2 ± 0.02 mm, 0.5 ± 0.02 mm, and 0.7 ± 0.02 mm, which reached into the superficial, middle, and deep layers, respectively. For each notch depth, 6 samples were prepared, and iron oxide nanoparticles, which were treated as pixels, were embedded on the side surface of the slices. Figure 2(a) shows the experimental apparatus, which was composed of the mechanical loading system, the image acquisition system, the computer control system, and the image processing software.
Figure 2(b) shows the mechanical loading system, including the rolling control device and the compression quantity adjusting device. The rolling control device was driven by a stepping motor: the rotary motion was converted into linear motion through a screw and then transmitted through a connecting rod to drive the cylindrical roller, thus performing constant reciprocating rolling. The compression quantity adjusting device was regulated by the screws on both sides of the fixture portal frame. This system had a maximum rolling distance of 30 mm and a maximum rolling velocity of 10 mm/s. The diameter of the indenter was 40 mm, and its surface roughness was 0.05. A temperature-controlled liquid tank was implemented on this equipment to simulate the in vivo environment. The image acquisition system mainly consisted of a charge-coupled device (CCD) camera, which helped us obtain images with a 1376 × 1035 resolution. Images were then analyzed and postprocessed by image processing software, producing data about the displacement and strain of the mark points of the cartilage samples.

Experimental Methods

The cartilage samples were fixed on the fixture clamp of the portal frame and then placed in a saline tank. After that, the saline was heated to 37°C so as to reduce experimental errors. The indenter rolled onto the surface of the cartilage sample 50 times back and forth, with a compression quantity of 0.1 mm and rolling velocities set at 1 mm/s, 2 mm/s, 4 mm/s, and 6 mm/s, respectively. Images were acquired continuously by the CCD camera at a frequency of 2 frames/s (Figure 3). The displacement and strain fields were then obtained after image processing.

Results

In order to facilitate the analysis of stresses and strains near the notch, regions near the notch were divided by a uniform grid partition. The interval between two longitudinal lines was set at 0.125 mm. The selected horizontal lines were located 5%, 25%, 45%, 65%, and 85% of the way from the cartilage surface (Figure 4).

Effect of Defects on the Mechanical Properties of Cartilage

Iron oxide nanoparticles serving as mark points and pixels were embedded on the side surface of the sample before the experiments. The speckle image of the sample, including the mark points, in its load-free state was first acquired and used as the reference image. Continuous and instantaneous speckle images including the mark points were also obtained at the different stages of loading. Images from a random half cycle, in which the roller moved above the sample from left to right, were selected from all the acquired images. Using the computer to identify the mark points and pixels, the displacements of the mark points and pixels were calculated by comparing the coordinates in the current images with those in the reference image. The strain values were then obtained from the relationship between displacement and strain. Figure 5 shows the comparison between intact and injured (notch depth of 0.5 mm) cartilages under a compression quantity of 0.1 mm and a rolling velocity of 4 mm/s. The equivalent strain significantly decreased with the increase of the cartilage notch depth when the roller passed over the surface of the cartilage from left to right. The cartilage strain values could be changed by defects. Injured cartilage had a larger strain value as well as a higher frequency of strain peak values than the intact cartilage. At the A3 point, which (Figure 7(c)).
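To illustrate the displacement-to-strain step described above, here is a minimal Python sketch; it assumes the matched marker displacements have already been interpolated onto a regular grid, and it uses the small-strain tensor and a von Mises-type equivalent strain, which are standard choices but an assumption, since the paper does not spell out its exact formulas.

```python
import numpy as np

def strain_fields(u, v, dx, dy):
    """Small-strain components from 2-D displacement fields on a regular grid.
    u, v: horizontal/vertical displacement arrays (rows = y, cols = x);
    dx, dy: grid spacing in x and y."""
    du_dy, du_dx = np.gradient(u, dy, dx)   # axis 0 is y, axis 1 is x
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    exx = du_dx                              # normal strain in x
    eyy = dv_dy                              # normal strain in y
    exy = 0.5 * (du_dy + dv_dx)              # shear strain
    return exx, eyy, exy

def equivalent_strain(exx, eyy, exy):
    """Von Mises-type equivalent strain for a 2-D strain state
    (one common definition; assumed here for illustration)."""
    return np.sqrt(2.0 / 3.0 * (exx**2 + eyy**2 + 2.0 * exy**2))
```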
For the points at both sides of the notch (B3, C3, D3, A7, B7, and C7), the shear strain directions changed, and the values gradually decreased with the increase of the defect depth. Figure 7 shows changes in the equivalent strain across the varying defect depths at points A3, C3, and D3 under a compression quantity of 0.1 mm and a rolling velocity of 4 mm/s. For each point, changes in the strain followed a periodic pattern. For the points in the middle and deep layers, the equivalent strain value significantly decreased with the increase in the defect depth. For the points located near the superficial layer, the equivalent strain value was very small when the damage was relatively small, and the maximum equivalent strain values were observed at a notch depth of 0.5 mm. Figure 8 illustrates the fluctuation curves of the equivalent strain at A3, C3, and D3 across different rolling velocities, with a notch depth of 0.5 mm and with the roller moving from left to right. For point A3, the minimum peak value was reached at a rolling velocity of 2 mm/s. The peak value became larger at 6 mm/s, and a maximum of 0.30 was observed at 4 mm/s, which was 1.5 times the value measured at a rolling velocity of 2 mm/s. For point C3, a single distinct peak could be found in the equivalent strain. Peak values first increased and then decreased with the increase in rolling velocity. A maximum value of 0.16 was observed at a rolling velocity of 4 mm/s, which was 1.5 times the value recorded at 2 mm/s.

Effect of Rolling Velocity on the Mechanical Properties of Defect Cartilage

In contrast, the case was rather complicated for point D3. There were two peaks within each half rolling period, and the peak frequency increased with the rolling velocity. The first peak was located in the same position across the different rolling velocities, that is, when the indenter was right above the D3 point, while the second peak was located differently and decreased at velocities of 2 mm/s

Discussion

In this paper, we used the noncontact digital image correlation (DIC) technique to focus on the mechanical responses of cartilage with different defect depths under rolling load. As the distribution and content of the major components vary with depth, the fibers inside the cartilage can be divided into three parts: the superficial, middle, and deep layers (accounting for 5%, 45%, and 50% of the cartilage thickness, respectively). Collagen fibers in the superficial layer are densely distributed and parallel to the articular surface; this layer has the maximum moisture content, and deformation can easily occur in it under normal stress. Collagen fibers in the middle layer are irregularly arranged with large interspaces and crisscross at a certain angle with the joint surface; this layer has less moisture content and deforms slowly under normal stress. The deep layer fibers are nearly perpendicular to the articular surface and have the lowest water content and the smallest deformation under normal stress [26]. The method used in our study is noncontact, has low requirements for the experimental environment and light source, offers high measurement accuracy, and allows full-field measurement. It has thus been widely used in biomechanical studies [24,25]. Due to the high toughness and varying fiber distributions inside the cartilage, it was very difficult to obtain a smooth border when creating the defect, leading to certain errors in the measurement.
In this paper, the left side, which was relatively smooth, was chosen for further analysis in order to reduce errors. As can be seen from Figure 5, the peak values of the equivalent strain decreased gradually with the increase of the notch depth, which is consistent with the strain law obtained in intact cartilage experiments [22,27]. When the indenter moved along the cartilage surface from left to right, each point was subjected to cyclic loading, leading to cartilage fatigue failure. When a defect existed, the cycle frequency increased and the strain value of each point around the defect increased. These results are consistent with those reported by Gratz et al. [14] and are likely due to the stress concentrations produced by the notch. Timely repair of damaged cartilage can reduce the strain values around the notch [28]. When the notch was shallow, the maximum equivalent strain appeared at the bottom corner of the notch; with the increase of notch depth, the strain value rose quickly because the supporting structure became loose. Figure 6 showed that the shear strain increased significantly when the notch reached the deep layer, which is in agreement with the results of Dabiri and Li [17] obtained using a knee joint model. This means that the shear strain of the cartilage increased gradually with cartilage degradation. It can be inferred that the shear strain is the key factor in cartilage destruction and that this destruction begins at the bottom corner of the defect. Both positive and negative strain variations were observed at points C3 and D3 in our study. These points were located in the vicinity of 50% of the cartilage height, which may be the interface between the middle and the deep layers. The distribution of cartilage fibers may vary at this position, resulting in strain variation patterns different from those at other points. This result indicates that fiber distribution has an important influence on the mechanical properties of cartilage [29,30]. The interface between the middle and the deep layers will be easily destroyed when the notch deepens. Compared with points at other locations, the points near the superficial layer showed different responses to the defect depth (Figure 7). Indeed, points located in the superficial layer responded more strongly to a defect of moderate depth, while deep points were sensitive to a small defect. This is considered to be determined by the fiber structure and the viscoelastic properties of the cartilage. Besides, the superficial layer has a higher water content, a lower modulus of elasticity, and greater deformability. Accardi et al. [31] also confirmed that the fiber orientation of cartilage can resist shear failure, which indicates that cartilage has a self-protective function against certain damage. In this way, the destruction process can be significantly delayed. The rolling velocity has a certain impact on the cartilage strain. It was reported that the friction coefficient first increased and then decreased with the increase in rolling velocity [32][33][34], leading to a rising and then falling cartilage strain. This may be due to the cartilage's viscoelasticity, which ensures that water can be continuously extruded under loading, causing changes in the friction coefficient and the strain values. The main limitation is that the experimental model of the defect cartilage was a plane model using digital image correlation technology to investigate the mechanical properties of defect cartilage.
The difference between the experimental model and the cartilage in vivo is that the confining pressure conditions could not be considered. However, the authors believe that it is possible to obtain the confining pressure conditions of the experimental model by combining numerical simulation and experiment. The displacement field and stress-strain field of a model of the integral cartilage and femur could be obtained using the numerical simulation method; the plane model of the experiment could then be extracted from the integral numerical model, and boundary conditions could be applied to the cutting surface of the plane model based on the results for the integral cartilage and femur. The modified boundary conditions would decrease the error between the experimental strains and the strains of the numerical simulation. The confining pressure conditions obtained from the numerical simulation could then be used in the experimental defect cartilage model. In addition, the analyzed images were taken from a random half cycle in this paper. Due to the viscoelastic properties of the cartilage, its strain is correlated with the rolling number, which could result in experimental errors. Because of limitations on the positions from which cartilage samples could be taken, using experimental samples from different pig femoral cartilages could also introduce errors. The physical load of the knee joint involves rolling, sliding, and a combination of rolling and sliding. This paper focused on defect cartilage subjected to a single load type, namely rolling; future research will be carried out to understand the mechanical properties of defect cartilage under other loads.

Conclusion

In this paper, we used a noncontact DIC technique to measure the displacement and strain fields near the notch of defected cartilage under rolling load. Based on our study, we can conclude that cartilage damage may increase the strain values and strain peak frequency around the defect. The shear strain, which serves as the main factor causing cartilage destruction, increased with the increase in the defect depth. The cartilage would be destroyed first at the bottom corner of the defect, and when the defect reached a certain depth, failure might propagate along the interface between the middle and deep layers. The rolling velocity showed a significant effect on the superficial and middle layers. The equivalent strain first increased and then decreased with the increase in rolling velocity. Changes were not obvious in the deep layer except for the rising strain peak frequency. The special structure of the cartilage exhibited a self-protective function against destruction, which may slow down the destruction process. Our results can provide a basis for the clinical treatment of osteoarthritis and cartilage repair. They are also of great significance for the mechanical analysis of artificial cartilage.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding this work.
The experiences of currently and formerly incarcerated women in a time of pandemic: Implications for life-giving communities

This article assesses the adequacy of the church's responses to women currently and formerly in conflict with the law in the Philippines and offers feminist theological reflections on the need for gender- and culturally sensitive pastoral services for them in a time of pandemic. Drawing upon case studies and interviews, this paper examines the lived experiences and social worlds of women who currently occupy or formerly held the status of persons deprived of liberty. The researcher discusses the common themes and nuances in the issues and challenges they confront from behind bars and in free society, and their struggles for survival throughout the pandemic. This paper also examines their service needs and, in the case of those released from the penitentiary, the salient factors that contribute to the risk of recidivism. The researcher discusses the implications of the issues and service needs of justice-involved women for building life-giving communities.

Based on the recommendations made by the IATF, the government subsequently transitioned to less restrictive forms of quarantine, such as modified enhanced community quarantine (MECQ) and general community quarantine (GCQ), which local government units (LGUs) implemented accordingly, depending on the extent of COVID-19 cases in their respective jurisdictions (Al Jazeera 2020a; Calonzo 2020; CNN Philippines 2020a; Madarang 2020a; Romero, Medez & Chrisostomo 2020). That said, this did not lead to notable reductions in the number of new COVID-19 cases after the pandemic broke out (See 2021; Stormer 2020; Yee 2020). The Duterte administration also relied heavily on the military and the police to manage checkpoints, ensure order, and implement health protocols, even to the point of authorising the use of force against people tagged as "troublemakers" for protesting against the restrictions while under community quarantine (Amnesty International 2020; Hapal 2021; See 2020). The extended lockdown, which led to prolonged restrictions on people's mobility and employment, inevitably resulted in adverse economic impacts on the general population. Policies such as "no work, no pay" and restrictions on the operation of businesses beyond those providing essential services, among others, led to massive job losses and the displacement of economic opportunities, aggravated by limited or fragmented safety nets (Madarang 2020b; See 2020). As of August 2020, the unemployment rate in the Philippines had risen to 46 per cent, affecting 27.3 million people (Hapal 2021). While the effects of the pandemic on low-income communities and the informal sector have been discussed in media reports, academic research, and public debate, its consequences for other vulnerable populations have not been explored as much. In particular, the impact of the pandemic on individuals who currently occupy or formerly held the status of persons deprived of liberty (PDL or PsDL), the term for inmates in Philippine corrections parlance, has received minimal attention. Existing information about this population group is limited to the reports of the media and non-government organisations (CNN Philippines 2020a; Human Rights Watch 2020; International Committee of the Red Cross 2020).
First-hand accounts of prison life in the time of the pandemic - and in public discourse, in general - focus on men, while women are treated as an afterthought and thus receive scant attention (CNN Philippines 2020b; CNN Philippines 2020c; See 2020). This is consistent with the historic invisibility of justice-involved women 1 in academic and public discourse prior to the pandemic because of the focus on men in the criminal justice system, despite the advancements in feminist criminological research over the past 30 years (Chesney-Lind & Pasko 2004; Howe 1994; Nagel & Johnson 2004; Steffensemeier & Allan 2004; Thomas 2003). Research on the experiences of justice-involved women in the global south remains limited, due to the greater attention given to their counterparts in Western, industrialised nations (Ransley 1999; Veloso 2016; Veloso 2022; Villero 2006). Studies on the re-entry experiences and challenges of women formerly in conflict with the law are likewise limited (Davies & Cook 1999; Renzetti & Goodstein 2001; Richie 2001), as the literature has often focused on the experiences of men (Petersilia 2003). Information about the experiences and challenges of currently and formerly incarcerated women in the time of the pandemic is still emerging. It is thus crucial to examine the issues and service needs of justice-involved women and the impact of an unprecedented emergency such as the COVID-19 pandemic on their situation.

1 A "justice-involved" person pertains to an individual who has been involved with the justice system, such as through incarceration or being on parole. For the purposes of this paper, the term "justice-involved women" will be used interchangeably with "women in conflict with the law" or "women formerly in conflict with the law".

STATEMENT OF THE PROBLEM

This study assesses the adequacy of the church's responses to currently and formerly incarcerated women. It also offers feminist theological reflections on the need for gender- and culturally sensitive pastoral services for this marginalised population group. This study examines the social worlds and experiences of women who currently hold or once held the status of persons deprived of liberty in the Philippines. It assesses the impact of the COVID-19 pandemic on their situation. This paper also examines the role of the church in serving as an agent of restorative justice and in providing interventions that promote the dignity of women currently or formerly in conflict with the law. The researcher discusses the implications of the issues and service needs of currently and formerly incarcerated women for building life-giving communities. To assess the adequacy of the church's responses to women in conflict with the law and the need for gender- and culturally sensitive pastoral services for them, this research seeks to answer the following questions:

• What are the experiences of incarcerated women in a time of pandemic? How did the COVID-19 pandemic impact on their situation in prison?
• What are the re-entry experiences of formerly incarcerated women in a time of pandemic? How did the COVID-19 pandemic impact on their transition to free society?
• What are the service needs confronting justice-involved women?
• How can the church provide pastoral services for justice-involved women in a time of pandemic?
• What are the implications of the women's issues and service needs for building life-giving communities?

METHODOLOGY

This article uses case studies of women with the status of persons deprived of liberty.
The material for the case studies was culled from observations during regular visits to women deprived of liberty as part of the researcher's long-term volunteer work in prison, as well as from telephone calls with women deprived of liberty after the pandemic escalated. Meanwhile, the researcher conducted interviews with 12 formerly incarcerated women, who resided in Metro Manila or in provinces in the northern and southern Philippines. The researcher also engaged in field observation at the places of residence of some informants, whenever possible. Informed consent was obtained from the informants. The names of the informants are withheld to protect their privacy. The researcher incorporated principles from Catholic Social Teaching (CST) in examining the situation of currently and formerly incarcerated women and the implications thereof for the promotion of life-giving communities. This paper uses the lenses of feminist and intersectional theology. As such, it reflects the dialogue of theology with feminist theories, intersectionality theory, and other perspectives in gender studies and sociology.

SIGNIFICANCE OF THE STUDY

This study is intended to benefit currently and formerly incarcerated women by promoting awareness of their experiences and issues. This paper also seeks to benefit pastoral groups and civil society organisations that serve persons deprived of liberty, as well as individuals transitioning to independent living, as part of their ministries.

Profile of women in prison

For the purposes of the case study, the researcher focused on two women with the status of persons deprived of liberty at the Correctional Institution for Women (CIW), the main penitentiary for women, located in Mandaluyong City, Metro Manila, the capital of the Philippines. She selected these women on the basis of her regular contact with them as part of her long-term volunteer work in prison; she also had more contact with these women after the pandemic struck. Both women were in their early to mid-40s. In terms of their ethnic background, one woman identified with a Christianised ethnolinguistic group, particularly Waray, and the other identified with an Islamised ethnic group, particularly Maguindanaon. 2 One woman identified as a Muslim, while the other identified as a Born-Again Christian; that said, the latter woman had embraced Islam (balik-Islam or revert to Islam, in Filipino parlance) upon marrying a Muslim, but converted to Christianity three years prior to the pandemic. With respect to their civil status, one woman was married and the other widowed. Both women had children, be these biological or adopted. One woman had an adopted child, while the other had three biological children and three stepchildren. In terms of their educational attainment, one woman was a high-school graduate, while the other had completed some years of elementary school. Both women were enrolled in classes as part of the Alternative Learning System (ALS) of the prison.

2 In the Philippines, people are distinguished according to race, ethnicity, and religion. Race and ethnicity tend to overlap with religion because the Spanish and American colonial eras historically led to the institutionalisation of racial categories, namely Christian, Muslim, and indigenous peoples (IPs), depending on whether their ancestors had converted to Christianity and assimilated to colonialism, or had embraced Islam or maintained their indigenous religion and thus resisted colonialism.
Even people's ethnic heritage is associated with ethnic groups that had historically converted to Christianity or Islam or retained their ancestral religion. The stereotypes associated with being a Christian, a Muslim, or an indigenous person tend to overlap with racial and ethnic stereotypes. In addition, people's ethnic groups are also termed ethnolinguistic groups in the Philippine context, in that their ethnicity is distinguished by the language/s they speak (Berkley Center for Religion, Peace and World Affairs [2013]; Ty [2010:6-9]).

Both women were serving time in prison for drug charges. One woman had a daughter who was incarcerated at the same penitentiary, also for drug charges, although they were not co-defendants.

Demographic profile of formerly incarcerated women

The women who formerly held the status of persons deprived of liberty were aged between 33 and 61 years at the time of the interviews; the median age of the women was 44. In terms of ethnic background, half of the informants (six women) belonged to Christianised ethnolinguistic groups that are considered minorities in the nation. Of these, one woman identified as Ilocano, one as Ilonggo, two as Visayan, and two as Waray. Meanwhile, three women identified as Tagalog, a Christianised ethnolinguistic group associated with the dominant culture in the context of Philippine society. The remaining three women traced their roots to more than one racial and/or ethnic group. One woman identified as having Ilocano and Chinese ancestry. One woman was of Ilocano and Ilonggo descent, while another was of Ilocano and Tagalog descent (see Table 1). In terms of religion, all the women were part of Christian denominations, in that five women identified as Born-Again Christians, while seven women identified as Catholics (see Table 2). In terms of their educational attainment, one woman had completed elementary school, two women had completed some years of high school, and four women had completed some semesters or years of college education. Moreover, two women had completed two-year courses, such as a secretarial course and a course in hotel and restaurant services. Three women were college graduates; their majors included courses such as psychology, public administration, and tourism (see Table 3). With respect to their civil status, two women were single, five women were partnered, and two women were married. One woman was separated and two women were widowed (see Table 4). Seven of the 12 informants were mothers, who had between three and seven children; the average number of children was three. Of these, three women had children who were minors. In the case of two informants, their dependent children lived with them, although one also lived with relatives on occasion. One woman, who identified as a single parent, disclosed that her young children lived with their father's side of the family. Four women had no children at all. The researcher found that the majority of the formerly incarcerated informants ended up in prison for drug-related offences (6 women). A minority were convicted of other crimes, such as illegal recruitment (4 women), slight illegal detention 3 (1 woman), and child abuse and falsification of public documents (1 woman). Upon the appeal of their cases to the higher courts, two informants were eventually acquitted of the criminal charges of which they had been convicted at the lower court.
Five women insisted on their innocence and claimed to have been implicated in the offences of close associates or other significant networks through pathways of deception and betrayal (see Table 5). The women PsDL faced multiple vulnerabilities in prison, which the COVID-19 pandemic only exacerbated. Before the lockdown took effect, the researcher had visited CIW five times, between 13 January and 1 March 2020, and noted that low-income women, especially those without regular visitors, faced multiple difficulties in meeting even their basic needs in prison. The women included in this case study were no exception to this trend. Aside from performing odd jobs in prison, such as doing laundry for other PsDL with more resources and serving in a now-defunct organisation of PsDL who assisted prison staff on duty, these women often relied on donations from faith-based organisations, non-government organisations, and other benefactors. As persons deprived of liberty, these women immediately felt the impact of the lack of safety nets when the pandemic struck and President Rodrigo Duterte placed the whole of Luzon on lockdown (Al Jazeera 2020b). During the second week of March 2020, a few days before the Luzon-wide lockdown took effect, visits to the penitentiary were suspended for security and health reasons. Some duty-bearers had earlier been quoted generalising jails as "safe" and "COVID-free" (CNN Philippines 2020a). Yet the virus spread in both prisons and jails, and even claimed the lives of some PsDL (CNN Philippines 2020b; CNN Philippines 2020c). To their credit, the prison staff immediately conducted disinfection activities and continued to do so regularly. The prison administration also permitted the delivery of care packages and donations, subject to strict inspection and sanitation protocols. Yet the women still felt the consequences of having limited resources from behind bars, exacerbated by the suspension of visits and the limited number of donations that initially arrived. One of the women, who frequently contacted the researcher, gave regular updates about the turn of events and the impact of the pandemic on their situation. She once disclosed that even her toiletries were severely limited. She revealed: "Even shampoo here is hard to come by. I haven't used shampoo in a month." The other woman, who had been diagnosed with hypertension prior to the pandemic, disclosed that she held on to the medicine donated to her by a religious volunteer. Although the infirmary had medicine for hypertension, the said woman PDL was worried about its supplies running out. While the prison administration was able to obtain donations of face masks and alcohol, among other necessities, visible space constraints in prison made it difficult to observe the health protocols implemented in free society. The policies relating to "social distancing" or "physical distancing" are a case in point, in that these were hard to enforce, due to the congestion of the penitentiary, which is linked to the increased rate of women's incarceration, mainly on drug-related charges. Prior to the pandemic, it was common for two persons deprived of liberty in CIW to share a single bed. This meant that four people, instead of two, occupied a bunk bed. The same set-up continued, with the exception of PsDL who had been isolated for demonstrating COVID-19 symptoms or who had been placed in quarantine as a result of contracting the virus. Such living conditions carried an imminent risk of the spread of diseases such as COVID-19.
One of the women, who was an officer at her dormitory and who was also part of the aforementioned organisation that assisted corrections officers on duty, spoke at length about her fear of contracting COVID-19. During one of her telephone calls to the researcher, she shared that those who had a cough were immediately moved to another comparatively "distant" spot in their dormitory, although the limited space made it unrealistic to refrain from close contact; this trend was also reflected in one of the documentaries released about the correctional facility in 2020. Moreover, the woman related how she avoided being confined at the prison chapel, which had been allotted for any PDL showing COVID-19 symptoms. She shared that medicine for fever, such as paracetamol, was more difficult to obtain, due to the restrictions surrounding the pandemic. As such, she took it upon herself to exercise whenever she felt feverish, which she deemed preferable to being confined at the chapel, where she feared she would get even more ill upon exposure to other PsDL with fever, cough, and other COVID-19 symptoms. The other woman, who washed the clothes of other PsDL and did other odd jobs to earn a living in prison, contracted COVID-19. She also contacted the researcher and disclosed that she was among the women PsDL transferred to a quarantine facility called "Site Harry"; the facility, created to decongest penal and detention facilities, was housed at the New Bilibid Prison (NBP), the main penitentiary for men, located in Muntinlupa City, Metro Manila (International Committee of the Red Cross [ICRC] 2020). She stayed there for roughly one month. When asked about the treatment she received, she stated that she did not take any medicine, except for the maintenance medicine for hypertension given to her by a prison volunteer a few weeks before the pandemic broke out in 2020. Other than that, she said she simply rested and ate the food given to her during her time in recovery. She also disclosed that she got tired more easily and was often short of breath, even after she had been declared COVID-free and had returned to CIW. One of the women had a daughter in her early 20s, who was serving time at the same penitentiary. Prior to the pandemic, she had already taken it upon herself to support her daughter by providing for her food and other necessities in prison, as both of them did not have any visitors to rely on. It can be inferred that the pandemic caused more economic hardship for her and her daughter. The other woman likewise encountered greater economic difficulties caused by the greater limitations in the supply of donations. Although she continued to engage in odd jobs in prison, this was halted for some time when she contracted COVID-19.

Re-entry issues and experiences of formerly incarcerated women

Most of the formerly justice-involved women informants came from low-income and working-class communities, to which they returned after their release from prison. For some women, particularly those who got involved in drug-related crimes, their histories of gendered violence and abuse, poverty, and family problems led to their involvement in illegal activity. As formerly incarcerated women, the informants in this study encountered multiple challenges in their reintegration into free society. The COVID-19 pandemic only exacerbated their re-entry challenges.
Some of the re-entry challenges encountered by the women in this study included dealing with the stigma associated with being a former PDL, obtaining employment and/or education, accessing public transportation, finding affordable housing, and securing healthcare. It was not uncommon for the informants to have strained relations with their families, some of whom had even disowned them. Those who were acquitted of their criminal convictions were pressured to file claims with the Victims Compensation Program of the Department of Justice (DOJ) on the basis of their unjust imprisonment. That said, the maximum amount that could be given to claimants was Php 10,000, and there was no guarantee that they would immediately receive the funds in full; the failure to file a claim in a timely manner - that is, within a year of their release from prison - led to its forfeiture. This was the experience of one woman, who had been acquitted of drug charges but immediately left for her hometown in the southern Philippines upon her release from prison. She only learned about the Victims Compensation Program years after her release, at which point it was too late to file a claim. Another woman, who stayed at the houses of her siblings in Metro Manila and in Cavite province, was aware of this and claimed that she would file a claim within roughly one month after her release. Five informants, who had been released on parole, checked with their respective parole officers. One obtained an affidavit from the DOJ declaring that she was not among those mandated to "voluntarily surrender" to the nearest penal or detention facility. Yet this dilemma triggered immense anxiety on the part of the women. The fear of losing their jobs and/or dropping out of school, the trauma that they and their families had faced during their arrest, and the prospect of having no visitors if they were to report back to prison were among the concerns they faced. One informant did report back to CIW, even though her crime was a drug offence and she had served her sentence, because she feared being targeted as part of President Duterte's "shoot-to-kill" orders. As such, she experienced the restrictions associated with being a "returnee" - that is, a former PDL under the custody of a penal institution - for a prolonged period. She also contracted COVID-19 because she was still among the returnees who remained in CIW when the pandemic broke out, and it took several months for her to receive clearance to go home. Some informants faced tensions arising from the verbal instructions given by President Duterte. The women faced other persisting unmet needs, namely further education and skills training, drug rehabilitation, and family reunification and family-related responsibilities. Others faced challenges such as the lure of the underground economy and the risk of recidivism. For instance, one informant was pressured to return to illegal activity through the threat of blackmail by higher echelons of the underground economy. The women commonly faced intensified challenges relating to their survival as a result of the COVID-19 pandemic. This is related to nationwide trends regarding the difficulties in finding work, particularly in the informal labour market and the service sector. Indeed, some informants experienced unemployment or sporadic employment. Losing a job or a job offer also impacted on some informants.
One informant, who worked at a chocolate shop and as a research assistant to a professor, lost her job when knowledge of her background as a former PDL surfaced three months or so prior to the pandemic. Her partner, who had worked as a driver for an online shopping company, was laid off within less than two weeks after the lockdown took effect. The recession brought on by the pandemic made it immensely difficult for them to find work. Another informant, who had been released from prison six months or so prior to the pandemic, had started working as a nanny for her neighbour. Her neighbour's financial hardships led to her losing her job. Meanwhile, an informant who had a pending job offer in a nearby province had to forgo it due to travel bans. She was unemployed for several months before landing a job as a security guard at a school in her hometown. As for another informant, who once operated an internet café, government-imposed restrictions on the operation of non-essential businesses made it difficult to sustain her previous business. She and her partner later opened a food franchise, although its operation was also affected by government regulations tied to the level of community quarantine. Other informants, particularly those who were middle-aged or senior citizens, also experienced difficulties in finding work, aggravated by fears about their greater vulnerability to contracting COVID-19; as such, they relied on family members to support them and their daily needs. The informants who were employed still faced challenges. For instance, one informant, who worked in a business process outsourcing (BPO) company, encountered difficulties in purchasing an extra laptop that she could use to work from home, as her children used the laptop owned by her family for their online classes. Another woman, who was about to start working at a BPO before the pandemic struck, disclosed that she had to leave in less than a year due to high blood pressure and other health issues brought about, in large part, by the pandemic. Others, such as a woman working as a security guard, were constrained by the lack of access to public transportation, leading some to walk to their workplace at the height of the lockdown. Meanwhile, a student-informant experienced the disruptions impacting educational institutions when classes were suspended during her final semester prior to her graduation. Hunger and the lack of food security and resources were common experiences among the informants. For instance, one informant was threatened with arrest and later ordered to perform community service for stealing malunggay (horseradish) leaves from the tree of a neighbour who was a barangay (village) official. She insisted that she had intended to ask the barangay official if she could get some malunggay leaves from his tree, but no one was home when she went to his house. She admitted that her desperation drove her to steal, as she and her partner, who had lost his job due to the pandemic, had not eaten for two days. Lacking safety nets, they were often among the people hit hardest by the pandemic.

Service needs

Women in prison face multiple service needs as they continue to serve sentences in a time of pandemic. Aside from basic needs such as food, hygiene kits, alcohol, and face masks, they also confront unique issues stemming from the pandemic.
Some prison staff disclosed offhand that they needed people to provide seminars or motivational talks on such issues as anxiety, depression, anger management, and preparation for re-entry, among other topics, as the need for these was more pronounced in the time of the pandemic. Meanwhile, formerly incarcerated women encounter multiple challenges in their reintegration into society, such as returning to disenfranchised communities, obtaining employment, finding affordable housing, and accessing social services. They also face multiple unmet needs, namely further education, healthcare, drug rehabilitation, and family reunification. Counselling for unaddressed issues, such as gendered violence and abuse and drug addiction, is also crucial. As these issues most likely influenced their involvement in illegal activity, these festering concerns could impact on their ability to cope with other related challenges in their transition to independent living. The COVID-19 pandemic has intensified these issues and service needs. Comprehensive services are needed to address the issues of women who have served time behind bars, especially drug rehabilitation and unemployment, so as to counter the lure of the underground economy and recidivism, especially as the pandemic continues to drag on.

Of liminality and shared vulnerability in the time of the pandemic

As an unprecedented health and social emergency, the pandemic has magnified the vulnerability of justice-involved women, be they current or former persons deprived of liberty. The current situation of women in prison and of women working to rebuild their lives in free society reflects much liminality. This is evident in the disorientation and ambiguity that characterise this time of pandemic, the unlikelihood of returning to pre-pandemic behaviours and lifestyles despite the ongoing preparations for the transition to the "new normal", and the improbability of simply resuming former statuses and relationships in the event of release from prison. At the same time, these liminal spaces and situations present opportunities to rethink and recalibrate new ways of being human. The liminality of the situation of currently and formerly incarcerated women renders them vulnerable. At the same time, it exposes people's shared vulnerability, as exemplified by the instances in which duty-bearers such as prison staff, family members and other significant networks of current and former women PDL, and other members of their communities contracted COVID-19 and/or experienced multiple hardships and constraints associated with the pandemic. This sense of shared vulnerability compels concerned citizens, civil society, and the church and faith-based organisations to reflect upon new ways to cater to the needs of justice-involved women more effectively, especially in a period of metamorphosis. In this time of liminality, people's shared vulnerability can be channelled in an agentic way as a source of learning and as a catalyst for engaging in gender- and culturally sensitive responses and interventions that promote the dignity of currently and formerly incarcerated women and that foster compassionate, life-giving communities for their benefit.

THEOLOGICAL REFLECTIONS AND PASTORAL INTERVENTIONS

Visiting the imprisoned is one of the corporal works of mercy: "I was in prison and you visited me" (Matthew 25:36).
In this time of the pandemic, reaching out not only to people behind bars, but also to formerly incarcerated individuals - a population group of which current and former justice-involved women are part - is a need to which the church and faith-based organisations are compelled to respond. Some teachings that are part of Catholic Social Teaching are essential in building and promoting life-giving communities for justice-involved women. One of the themes of Catholic Social Teaching emphasises the life and dignity of the human person. The Catholic Church proclaims that human life is sacred and that the dignity of the human person is the foundation of a moral vision for society. This belief is the foundation of all the principles of our social teaching. We believe that every person is precious, that people are more important than things, and that the measure of every institution is whether it threatens or enhances the life and dignity of the human person (USCCB 2021). This viewpoint holds that each person is valuable, worthy, and deserving of dignity. As such, the moral fabric of society can be assessed based on the extent to which its institutions and its very structure promote the life and dignity of people, regardless of their social background. This teaching can be utilised to make the case for valuing and promoting the life and dignity of currently and formerly incarcerated women, to demonstrate that they matter, regardless of their legal transgressions. This resonates with biblical teachings that emphasise the avoidance of judging others. As part of one of the major social institutions that promote social stability, the church is thus compelled to be an instrument of solidarity in upholding the dignity of current and former justice-involved women, particularly in the time of the pandemic, which has exposed numerous social inequalities at their core. Another salient theme of Catholic Social Teaching focuses on the preferential option for the poor. A basic moral test is how our most vulnerable members are faring. In a society marred by deepening divisions between rich and poor, our tradition instructs us to put the needs of the poor and vulnerable people first (USCCB 2021). This teaching is very timely in terms of responding to the call to be of service to justice-involved women. Women in conflict with the law are among the poorest of the poor, in the sense of being deprived of freedom - an essential resource for human flourishing - and are thus as vulnerable as they can get. Formerly incarcerated women remain vulnerable, given the multiple barriers they confront in their readjustment to free society and the stigma they face due to their background. The church is thus compelled to be an instrument of compassion in the face of the multiple vulnerabilities that current and former justice-involved women confront, particularly in the time of a pandemic. Feminist theology recognises the unique role of gender in shaping women's circumstances and opportunities, including the lived experiences of women involved in the justice system. Intersectional theology recognises overlapping inequities relating to gender, race, social class, and other social positions, and the impact thereof on people's opportunities (Kim & Shaw 2018). The nexus of feminist theology and intersectional theology can be used to build life-giving communities in areas where these are especially needed, such as among women currently or formerly involved in the criminal justice system.
This entails recognising that women's conflicts with the law often occur because they are women, as a consequence of living in a society that grants them limited options or that compels them to exercise agency in ways that might lead to their involvement in illegal activity or implication in the offences of others. The first version of the creation story in the Book of Genesis underscores the equality of women and men: "So God created humankind in his image, in the image of God he created them; male and female he created them" (Genesis 1:27). The same passage can be used to emphasise the inherent dignity of women currently or formerly in conflict with the law. Meanwhile, a passage in St. Paul's Letter to the Galatians points to the abolition of social differences among people, so as to promote equality within the Christian faith: "There is no longer Jew nor Greek, there is no longer slave nor free, there is no longer male and female; for all of you are one in Christ Jesus" (Galatians 3:28). According to Schüssler Fiorenza (1983:213), this biblical passage asserts that distinctions based on social position, including gender, social class, and perhaps even legal background, are insignificant, given the presumed equality of all people in the faith. These teachings can be used to eradicate cultural practices that perpetuate gender inequality and other interrelated oppressions, and to empower women relegated to the margins of society not only because of their gender and other social locating factors, but also because of their history of legal transgressions. This need is especially vital in promoting a life-giving community as the pandemic and its related effects continue to drag on. Pastoral care for justice-involved women is a need that existed prior to the pandemic and that is strongly felt as the pandemic drags on. The impact of social isolation, multiple disruptions to one's living situation, the strain of limited resources and safety nets, and grief over the death of loved ones from COVID-19 and related complications are among the common concerns that people face as a result of the pandemic. Currently and formerly incarcerated women are not exempt from these issues. It is important for pastoral interventions to take a non-judgemental approach and to promote inclusivity by extending such services regardless of religious affiliation. It is hoped that this will pave the way for the expansion of church services to include groups of women who have traditionally been overlooked, such as those with histories of involvement in the criminal justice system. The link between the Catholic Church and state politics in Philippine society has been documented throughout history. This is evidenced by the mobilisation of the Catholic Church and other religious denominations in response to social and political issues (Bautista 2020; Buenaventura 2016). However, religious dynamics have also perpetuated patriarchal practices and views that impact on women. The entrenchment of faith in the cultural fabric and existing gender stereotypes that compel women to be morally upright and "pure", as exemplified by the Virgin Mary, among other factors, are likely to limit the responses that seek to ease the hardships experienced by justice-involved women, particularly in the time of a pandemic.
The stigma attached to currently and formerly incarcerated women could possibly account for the hesitation of communities of the faithful to get involved - with the exception of faith-based organisations and pastoral agents that have assisted them through various programmes. Given the restrictions on visiting prisons in the Philippines, the involvement of faith-based organisations over the past year has mainly taken the form of the delivery of donations for PsDL. However, some Catholic priests have continued to say Mass in prison, albeit in more confined settings. A possible area of intervention lies in pastoral services that religious and lay persons could provide virtually, such as counselling, motivational talks, online recollections, and the like. These activities entail close coordination and consultation with the prison administration and staff, given the need to balance internet connectivity issues with security concerns in prison.

Moreover, the church can serve as an agent in the promotion of restorative justice by taking part in civil society initiatives, particularly advocacy work and lobbying activities that highlight the need for more humane, rather than punitive, responses toward people who commit legal transgressions, particularly those with limited histories of illegal activity and those involved in non-violent and non-heinous crimes. Such initiatives should include and benefit women in the criminal justice system, in view of their more vulnerable social positions and relational responsibilities and the vicious cycle of criminality that entraps their families.

Another area of intervention lies in support services to aid soon-to-be-released or formerly incarcerated women in their transition to independent living. The Catholic Church could help provide comprehensive pastoral and social services such as transportation, housing, healthcare, livelihood assistance, counselling, and spiritual or religious activities (as needed) to former PsDL. The formation of support groups for formerly incarcerated women is also essential as a source of encouragement in their readjustment to free society. All these initiatives are helpful in building life-giving communities for justice-involved women.
2023-04-28T15:19:02.249Z
2023-04-26T00:00:00.000
{ "year": 2023, "sha1": "11c86c5a3f1a78a7a31df1a1eedc59f34dbe898e", "oa_license": "CCBY", "oa_url": "https://journals.ufs.ac.za/index.php/at/article/download/6559/4712", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e4b26182d708ac53d77c482c960ebef66d6b52d3", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [] }
244216676
pes2o/s2orc
v3-fos-license
Development of the Technology for Obtaining a Probiotic Fermented Milk Product Enriched with Magnesium and Whey Proteins

The rising prevalence of alimentary-dependent diseases is a problem of global scale, and measures to address it are included in the state programs of most developed countries, including the Russian Federation. Enrichment of mass-consumption food products with essential micronutrients is a modern, economically advantageous, effective and physiological way to improve the health of the population. The research is aimed at developing a technology for a probiotic fermented milk product enriched with a magnesium-containing whey protein concentrate (WPC-Mg) for the prevention of alimentary-dependent diseases. The experiments studied the effect of various doses of WPC-Mg on the main organoleptic, physicochemical and microbiological indicators of the enriched fermented milk product. As a result of the conducted research, it has been determined that the introduction of WPC-Mg into the milk base in the amount of 10% has a stimulating effect on the biochemical processes in the production of fermented milk drinks. It has been found that the structural and mechanical characteristics of WPC-Mg promote the formation of stronger intermolecular bonds in the fermented milk clot, which significantly improves the rheological characteristics of the product and makes the consistency of the drink similar to that of products with a high mass fraction of fat. Based on the experimental data obtained, a technology for obtaining a probiotic fermented milk product enriched with magnesium and whey proteins has been developed. The results open up broad perspectives for creating probiotic enriched products for functional and therapeutic nutrition.

INTRODUCTION

Modern physiologists increasingly rank magnesium among the priority micronutrients for the human organism (Trisvetova, 2012; Al Alawi et al., 2018; Glasdam et al., 2016; Guerrero-Romero et al., 2016; Sarrafzadegan et al., 2016; Farsinejad-Marj et al., 2016; Li et al., 2016; Kirkland et al., 2018; Joy et al., 2019). Being a necessary macroelement for cells and tissues, magnesium is involved in many physiological processes that ensure the normal vital activity of the organism: the synthesis of enzymes (ATP substrate, ADP, creatine kinase, hexokinase, etc.), direct activation of enzymes, regulation of cell membrane function (stabilization of cell membranes, cell adhesion, the transmembrane flow of electrolytes), antagonism with calcium (muscle contraction/relaxation, the release of neurotransmitters, the excitability of the specialized cardiac conduction system), and plastic processes (protein synthesis and catabolism, metabolism of nucleic acids and lipids, and mitochondria) (Al Alawi et al., 2018; Glasdam et al., 2016; Trisvetova, 2012). According to the WHO, magnesium deficiency takes one of the leading places among human pathologies caused by disorders of mineral metabolism (Glasdam et al., 2016; Al Alawi et al., 2018; Severino et al., 2019; Hernández-Becerra et al., 2020; Kirkland et al., 2018; Joy et al., 2019; Ince-Coskun & Ozdestan-Ocak, 2020). Like other macroelements, magnesium is obtained from food and water. The need for magnesium cannot always be satisfied through nutrition alone; in such cases, mineral supplements and magnesium-containing preparations are prescribed.
The effectiveness of drugs containing magnesium depends mainly on two factors: the amount of "elemental" magnesium in the compound and its bioavailability (the ability to be assimilated by the organism). High bioavailability is characteristic of chelated forms of magnesium - compounds of magnesium with amino acids (Glasdam et al., 2016; Al Alawi et al., 2018; Severino et al., 2019; Hernández-Becerra et al., 2020). The authors propose whey proteins as the source of amino acids for obtaining chelate complexes. By their amino acid composition, whey proteins are among the most valuable proteins of animal origin: they are sources of essential amino acids; exhibit immunomodulatory, antagonistic, and anticarcinogenic activity; and are responsible for transporting fat-soluble vitamins and microelements in the organism (Gordienko et al., 2015; Nechaev et al., 2007; Khramtsov, 2011). Whey proteins contain significant amounts of branched-chain amino acids and are physiologically beneficial: for example, consuming whey proteins in combination with strength training accelerates fat loss in humans (Wang et al., 2020; Lockwood et al., 2017). Besides, whey proteins are widely used for technological purposes, such as forming gels (Egan et al., 2014; Oztop, 2014), changing viscosity, and fat substitution (Akalın et al., 2008). Chelated complexes of magnesium with amino acids from whey proteins are obtained through thermal denaturation with the use of a magnesium salt as a coagulant, followed by fermentation of the protein mass by probiotic cultures (Shchekotova & Khamagaeva, 2017; Ince-Coskun & Ozdestan-Ocak, 2020). Studies performed in recent years have clearly shown that probiotics have a beneficial effect on the gut microbiota and mineral metabolism (Skrypnik & Suliburska, 2018). The microflora is involved in the metabolism of many micro- and macroelements, including magnesium (Skrypnik & Suliburska, 2018). Biotechnological processing of whey proteins with probiotic cultures will improve their functional properties after thermomagnesium precipitation. Normalization of the intestinal microflora will cause acidification of the medium in the large intestine and ensure better magnesium absorption (Glasdam et al., 2016; Skrypnik & Suliburska, 2018; Al Alawi et al., 2018; Hernández-Becerra et al., 2020). The literature data presented show that joint enrichment of dairy products with magnesium, WPC, and probiotic cultures will allow obtaining functional foods for various purposes, including the prevention of nutrition-related diseases. The work is aimed at developing a technology for obtaining a probiotic fermented milk product enriched with the magnesium-containing WPC.

MATERIAL AND METHODS

Experimental studies were performed at the Department of Technology of Dairy Products, Merchandising and Examination of Goods of the HE FSBEI East Siberia State University of Technology and Management (ESSUTM) in Ulan-Ude, Russia, during the period from May to December 2019. The objects of research at different stages were whole milk, fermented WPC-Mg, and the probiotic fermented milk product. Pure cultures of Propionibacterium freundenreichii subsp. freundenreichii AC-2585, obtained from the All-Russian Collection of Industrial Microorganisms of the Federal Institution "State Research Institute of Genetics and Selection of Industrial Microorganisms of the National Research Centre 'Kurchatov Institute'" (Russia), were used to obtain the probiotic starter culture. The fermented WPC-Mg was used for enriching the fermented milk drink.
Unclarified curd whey was used as the raw material for the production of the fermented WPC, which was obtained by thermal coagulation with the addition of magnesium salt as a coagulant, followed by fermentation of the proteins with propionic acid bacteria of the P. freundenreichii subsp. freundenreichii species and drying. Before introduction, WPC-Mg was preliminarily dissolved in a small amount of pasteurized milk cooled to (60-65) °C. The content of magnesium in the fermented WPC was 236 ± 0.7 mg/100 g, the mass fraction of moisture was 70-80%, and the mass fraction of protein was 13 ± 0.6%. Raw cow's milk was used for the production of the fermented milk product. The technological process for the production of the enriched fermented milk product included milk acceptance, purification, heating, normalization to a fat mass fraction of 2.5%, homogenization, pasteurization at (93 ± 2) °C for 15-20 s, cooling of the normalized mixture to (30 ± 2) °C, introduction of WPC-Mg, fermentation of the mixture with 5% starter culture until the acidity reached (70-90) °T, cooling and bottling. In order to exclude undesirable interactions among microorganisms, fermentation of the normalized mixture was carried out with an active starter based on the same probiotic culture that was used for the fermentation of WPC-Mg - P. freundenreichii subsp. freundenreichii AC-2585. The research scheme included: study of the effect of various doses of fermented WPC-Mg in the milk base on the fermentation process of the fermented milk product (assessment of titratable acidity and the number of viable cells of propionic acid bacteria); assessment of the organoleptic properties of the fermented milk product with different contents of WPC-Mg (taste, smell, color, consistency); study of the effect of WPC-Mg on the structural, mechanical and rheological characteristics of the fermented milk clots (assessment of dynamic viscosity, clot density, and degree of syneresis); establishment of the shelf life and quality indicators of the enriched fermented milk product; and development of the technology for a probiotic fermented milk product enriched with whey proteins and magnesium. When performing the experimental part of the work, standard and generally accepted methods of physicochemical, organoleptic and microbiological analysis were used. Organoleptic indicators were determined visually, as well as by smelling and tasting the product. The titratable acidity was determined by titration: the method was based on the neutralization of the acids contained in the product with sodium hydroxide solution in the presence of a phenolphthalein indicator. The rheological characteristics of the acid clots were determined on a Brookfield RVDV-II + Pro rotational viscometer (United States, Brookfield Engineering Labs. Inc., 2009). The clot density was determined by measuring the immersion depth of a plate of certain weight and area exerting pressure on the clot for 30-60 s (Krekker et al., 2016). A plate with a weight of 12.4 g and a base area of 1.6 cm² was used in the experiment. The clot density was calculated by the equation D = (0.5 · q · hc) / (d · hn), where D is the clot density (g/cm³), q is the load created by the plate (weight of the plate, g), hc is the clot height in the glass (mm), d is the plate base area (cm²), and hn is the plate immersion depth (mm).
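As a worked illustration, the equation above can be transcribed directly into Python. The plate parameters (12.4 g weight, 1.6 cm² base area) are taken from the text; the clot-height and immersion-depth readings below are hypothetical:

```python
def clot_density(q: float, hc: float, d: float, hn: float) -> float:
    """Clot density D = 0.5 * q * hc / (d * hn), with units as defined in the
    text: q in g, hc in mm, d in cm^2, hn in mm."""
    return 0.5 * q * hc / (d * hn)

# Plate used in the study: weight 12.4 g, base area 1.6 cm^2.
# hc = 40 mm (clot height) and hn = 5 mm (immersion depth) are hypothetical readings.
print(clot_density(q=12.4, hc=40.0, d=1.6, hn=5.0))  # 31.0
```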
The syneresis was determined by the filtration method, by measuring the amount of whey released during filtration of 100 cm³ of the broken clot through a paper filter for eight hours at room temperature. The mass fraction of magnesium was determined by capillary electrophoresis on a Kapel-105M device (Russia, St. Petersburg, Lumex-Marketing LLC, 2012). The method was based on sample dilution, followed by separation, identification and quantitative determination of the mass concentration of magnesium (mg/L) by capillary electrophoresis (Lumex, 2013). The mass fraction of fat in the enriched product was determined by the acid-butyrometric method, based on the separation of fat from the fermented milk product under the action of concentrated sulfuric acid and isoamyl alcohol, followed by centrifugation and measurement of the volume of released fat in the graduated part of the butyrometer (Gosstandart of the USSR, 1990). The mass fraction of protein was determined by the Kjeldahl method. The method was based on the mineralization of the analyzed product sample with concentrated sulfuric acid in the presence of a catalyst with the formation of ammonium sulfate, its conversion into ammonia, distillation of the latter into a boric acid solution, quantitative accounting of ammonia by the titrimetric method, and calculation of the mass fraction of protein in the analyzed sample (Rosstandart, 2018). The number of cells of propionic acid microorganisms was determined by the method of limiting dilutions (Rosstandart, 2014b). The method was based on sowing propionic acid bacteria at certain dilutions in (on) selective nutrient media for submerged cultivation, their cultivation at a temperature of (30 ± 1) °C for 48 hours with limited oxygen access, and subsequent quantitative calculation of the content of propionic acid bacteria in the product. Bacteria of the E. coli group were determined by the signs of growth in liquid Kessler medium (Rosstandart, 2014a). Yeasts and molds were determined by sowing the product on a solid nutrient medium (Sabouraud's agar) (Rosstandart, 2015). All experiments were carried out 3-5 times. The data obtained were processed on a personal computer in Microsoft Excel 14, with the calculation of arithmetic mean values and the corresponding errors (M ± m). The significance of differences between the compared indicators in the groups was assessed by Student's t-test; differences were considered statistically significant at P < 0.05. Graphical dependencies in the figures were presented after processing of the experimental data. Calculations, plotting and preparation of diagrams were performed using Microsoft Office 14 and Excel 14 applications on Windows 10.

RESULTS AND DISCUSSION

At the first stage of the studies, the authors studied the effect of fermented WPC-Mg in the milk base on the fermentation process of the fermented milk product. WPC-Mg was introduced into normalized milk after heat treatment in the amounts of 5, 10, and 15%. The ripening process was monitored by the increase in acidity (Fig. 1a) and the growth of propionic acid bacteria (Fig. 1b) in the product. During ripening, the authors tracked the time required for the acidity of the tested samples of the product to reach (70-90) °T. The reference was normalized milk with a fat mass fraction of 2.5%. An intensive increase in the titratable acidity upon the introduction of WPC-Mg at dosages of 10 and 15% allowed shortening the process by 2-4 hours compared to the reference (Fig. 1a).
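Referring back to the statistical treatment described in the methods (M ± m and Student's t-test at P < 0.05), a minimal sketch in Python with SciPy follows; the acidity readings used here are hypothetical, for illustration only:

```python
import numpy as np
from scipy import stats

# Hypothetical titratable-acidity readings (°T) after six hours of ripening
# for reference milk and milk with 10% WPC-Mg (each run 3-5 times in the study).
reference = np.array([62.0, 64.0, 61.0, 63.0])
wpc_mg_10 = np.array([78.0, 80.0, 77.0, 81.0])

for name, x in (("reference", reference), ("10% WPC-Mg", wpc_mg_10)):
    mean = x.mean()
    sem = x.std(ddof=1) / np.sqrt(x.size)  # error of the mean, as in M ± m
    print(f"{name}: {mean:.1f} ± {sem:.1f} °T")

t, p = stats.ttest_ind(reference, wpc_mg_10)  # Student's t-test
print(f"t = {t:.2f}, p = {p:.4f}, significant at P < 0.05: {p < 0.05}")
```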
Quantitative accounting of the probiotic cultures showed more intensive growth of the microorganisms in the milk with the addition of WPC-Mg (Fig. 1b). The number of viable cells of propionic acid bacteria in the experimental samples with WPC-Mg after six hours of cultivation was 2×10⁹-9×10⁹ CFU/cm³, which was two orders of magnitude greater than in the reference (Fig. 1b). The high biological value of the whey proteins probably created favorable conditions for the development of the propionic acid bacteria. The presence of lactose, peptides, and free amino acids in WPC-Mg enabled faster growth of the propionic acid bacteria during cultivation. Thus, summarizing the results obtained, it can be concluded that adding WPC-Mg to normalized milk intensifies the fermentation process and increases the number of probiotic microorganisms in the fermented milk product. This indicates the prebiotic properties of the fermented WPCs enriched with magnesium. It was noted that a significant increase in the titratable acidity and in the number of viable cells of propionic acid bacteria was observed after adding 10-15% of WPC-Mg to the normalized mixture. The joint presence of whey proteins and the macroelement in WPC-Mg contributes to a synergistic effect, enhancing their positive influence on the activity of propionic acid bacteria in the fermented milk product. This opens up wide opportunities for using WPC-Mg in fermented milk product technology, imparting synbiotic properties and a functional orientation to the product. According to the literature (Al Alawi et al., 2018; Glasdam et al., 2016; Guerrero-Romero et al., 2016; Sarrafzadegan et al., 2016), enrichment of a product with various magnesium-containing salts and additives may affect its organoleptic properties. In this regard, in further experiments, the organoleptic properties of the test samples with various contents of WPC-Mg were assessed (Table 1). The analysis of the data in Table 1 showed that the introduction of fermented WPC-Mg into the fermented milk drink affected the taste of the product: a bitter aftertaste appeared. After the introduction of 5-10% of WPC-Mg, this change in taste was barely noticeable and was not a defect; as the dose increased, the bitterness intensified, which significantly reduced the consumer properties of the fermented milk drink. This defect may be explained by the bitter taste of magnesium salts. It should be noted that in all studied samples, the introduction of fermented WPC-Mg improved the consistency of the products. This is especially important for producing fermented milk drinks with a low mass fraction of fat. The introduction of WPC-Mg gave the drink a consistency similar to that of products with a high mass fraction of fat, even without the use of stabilizing systems. An objective assessment of the consistency of fermented milk products was provided by the rheological properties, which are determined by the type of structure and the mechanical properties of the product. These properties are sensitive to changes in the chemical composition of the product, physical parameters, and processing conditions (Ababkova et al., 2016). In this regard, the next series of experiments was devoted to studying the effect of WPC-Mg on the structural, mechanical, and rheological properties of the fermented milk clots (Table 2).
The analysis of the data in Table 2 showed that the presence of WPC-Mg in the product contributed to the formation of stronger bonds between the structural elements of the fermented milk clot. This was confirmed by a 0.8-1.4-fold increase in the viscosity and a 1.1-1.6-fold increase in the strength of the resulting clots. With an increasing dose of introduced WPC-Mg, a decrease in the syneresis ability of the acid clots was observed compared to the reference (Table 2). In the studied samples, the degree of syneresis decreased from 67% to 40%. The high water-binding capacity of WPC-Mg is explained by the presence of amino acids whose hydrophilic groups adsorb water. Usually, hydration of native whey proteins is weak; however, the thermal denaturation during WPC-Mg production might have significantly increased this ability, which had a positive effect on the water-binding ability of the fermented milk clots of the product. It should be noted that the samples of fermented milk products enriched with WPC-Mg retained a uniform consistency and a high number of viable cells (10⁸-10⁹ CFU/cm³) during storage (for 10-12 days), in contrast to the reference sample. The homogeneity of the consistency of the samples with the WPC, compared to the reference, is explained by the stabilizing properties of whey proteins, which have a water-holding ability and improve the quality of the products and their storage life. The results obtained allow concluding that fermented WPC-Mg not only enriched the fermented milk drink with protein, easily digestible chelated magnesium, and probiotic cultures, but also intensified the production process, prolonged the shelf life of the product, and improved its structural and mechanical properties, which is especially important in the production of fermented milk drinks with low fat content. A comprehensive study of the organoleptic, physicochemical, and rheological parameters of the fermented milk drink made it possible to conclude that the sample with a 10% content of WPC-Mg had the best consumer properties. Within the study, a technology for the production of a probiotic fermented milk product enriched with magnesium and whey proteins was developed. The process envisaged milk acceptance, purification, normalization, homogenization, heat treatment and the introduction of WPC-Mg into normalized milk in the amount of 10%. This order of introduction was chosen because, when the fermented WPC is used, subsequent pasteurization of the mixture is not advisable (due to the death of the probiotic cultures), nor is homogenization, which could affect the structure of the whey concentrate after the components are mixed. This was followed by fermentation of the mixture with 5% starter culture, cooling and bottling. The ripening time according to the developed technology was only 6-7 hours. The enriched fermented milk product was characterized by good organoleptic properties and contained a high number of viable cells of propionic acid bacteria (10⁹ CFU/g). The qualitative characteristics of the developed fermented milk product enriched with WPC-Mg are shown in Table 3. The data in Table 3 show that the obtained fermented milk product had good organoleptic properties, contained a prophylactic dose of magnesium in an easily digestible form, and had a high protein content and a high number of viable cells of propionic acid bacteria.
Consumption of 0.25 liters of the developed product will satisfy 16-18% of the daily adult requirement for this macroelement, and consumption of 0.5 liters will satisfy 32-36%, respectively. These values are within the safe levels of product enrichment with magnesium (10-40%) recommended by leading nutritionists and physicians. At present, a number of effective dairy products enriched with WPC are available in our country and abroad (Khramtsov & Nesterenko, 2004; Lawrence, 1993; Lelievre, 1990; Cozzolino, 2003; Patocka, 2006; Smirnova et al., 2014; Lagrange et al., 2015; Henriques et al., 2012, 2017; Nastaj et al., 2020). Protein concentrates isolated by various expensive membrane methods prevail among the WPCs used. These concentrates, for all their advantages, have one significant drawback - high allergenic activity (Kattan et al., 2011; Botteman & Detzel, 2016; Vonk, 2017; Abbring, 2020). In this work, to enrich a fermented milk product, the authors propose to use WPC obtained by thermal coagulation with the addition of magnesium salt as a coagulant, followed by fermentation of the protein clots with probiotic cultures. Biotechnological processing of WPC-Mg using propionic acid bacteria enhances the functional properties of the protein concentrate obtained and reduces the allergenic effect of the whey proteins. It should be noted that the authors did not find data in the literature on the production of fermented WPC simultaneously enriched with probiotic cultures and any essential elements. Therefore, the use of fermented concentrates obtained by thermal coagulation with the addition of magnesium salt as a coagulant for the enrichment of dairy products is a relevant and cost-effective solution (Minj & Anand, 2020). The proposed biotechnological methods for obtaining a fermented milk product can shorten the production process and significantly improve the quality indicators of the product. The authors have demonstrated the stimulating effect of fermented WPC-Mg on the biochemical processes in the production of a fermented milk product: acid formation during fermentation, and improvement of the structural, mechanical and rheological characteristics and shelf life. The enriched product developed according to the proposed technology is of particular value in dietary nutrition. The introduction of probiotic fermented milk drinks enriched with magnesium and whey proteins into production, and their promotion on the market, will significantly expand the range of products for the prevention and correction of alimentary-dependent diseases, as well as allow implementing the principle of waste-free production at dairy enterprises and reducing environmental pollution resulting from the disposal of whey.

CONCLUSION

As a result of the studies, a new technology for producing a probiotic fermented milk drink has been developed, which has made it possible to obtain an enriched dairy product with functional properties. It has been found that the use of WPC-Mg in the production of the fermented milk drink not only enriches it with an easily digestible macroelement and whey proteins, but also intensifies the fermentation process and increases the number of probiotic microorganisms in the fermented milk product. This is evidence of the prebiotic properties of the fermented WPCs enriched with magnesium.
These properties open up wide opportunities for using WPC-Mg in fermented milk product technology, imparting synbiotic properties and a functional orientation to the product. The introduction of WPC-Mg into the milk base improves the structural and mechanical properties of the finished product: the density and viscosity of the fermented milk clots increase, and the syneresis slows down. This circumstance is of particular importance in the production of fermented milk drinks with low fat content, since it allows excluding or significantly reducing the amount of stabilizers and/or thickeners used in such cases. The use of the fermented WPC in the production of the fermented milk drink has allowed increasing the shelf life to 10-12 days without significant changes in the organoleptic, microbiological, and structural and mechanical properties, which increases the economic efficiency of the developed product.
2021-10-19T15:16:11.783Z
2021-09-25T00:00:00.000
{ "year": 2021, "sha1": "fd53a9a40d5f8690404441ed60cfa281799d629b", "oa_license": "CCBY", "oa_url": "https://www.lumex.ru/metodics/20ARU03.01.04-1.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "2155031f301f6a3e13c811766351f422bd99f79d", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
32043241
pes2o/s2orc
v3-fos-license
New species of Trophoniella from Shimoda, Japan (Annelida, Flabelligeridae)

Abstract

Trophoniella hephaistos sp. n. was collected from a tank irrigated with seawater pumped directly from Nabeta Bay, Japan. This species is distinguished from other Trophoniella by having dorsal tubercles, a tongue-shaped branchial plate, a tunic covered with large sediment grains dorsally and ventrally, eyes, and anchylosed neurohooks starting from chaetigers 17–20. This is the first record of Trophoniella from Japanese waters. An identification key to species of Trophoniella and sequences of four genes (COI, 16S, 18S, 28S) of this species are provided. A phylogenetic analysis based on the four genes was conducted to clarify the position of Trophoniella within Flabelligeridae.

Introduction

Trophoniella Hartman, 1959 belongs to the family Flabelligeridae and currently consists of 25 species and one undescribed species (Salazar-Vallejo 2012b). Trophoniella polychaetes live in sediments from shallow water to the deep sea in tropical or subtropical regions (Salazar-Vallejo 2012b). This genus is characterized by having anchylosed neurohooks with bidentate or bifid tips in the median or posterior chaetigers, a thick tunic, a tongue-shaped branchial lobe (except for Trophoniella enigmatica), and longitudinal rows of elongated single papillae along the body (Salazar-Vallejo 2012b). Trophoniella resembles Piromis and Pycnoderma in having a thick tunic, often with sediment grains, a tongue-shaped branchial lobe, and multiarticulated notochaetae. However, it is distinct from Piromis and Pycnoderma in having anchylosed neurohooks in the median or posterior chaetigers (Salazar-Vallejo 2011b). Phylogenetic analyses of Flabelligeridae have been conducted several times using morphological and molecular data sets (Burnette et al. 2005; Osborn and Rouse 2008; Salazar-Vallejo et al. 2008). A morphological analysis suggested that Trophoniella was similar to Piromis. However, the molecular data were unable to robustly resolve the phylogenetic position of Trophoniella; this is likely an artefact of limited taxon sampling within the genus. During benthos sampling in an aquarium at the Shimoda Marine Research Center (SMRC), University of Tsukuba, we collected an undescribed species of Trophoniella. Here, we describe Trophoniella hephaistos sp. n. and provide cytochrome c oxidase subunit I (COI), 16S ribosomal RNA (16S), 18S ribosomal RNA (18S), and 28S ribosomal RNA (28S) gene sequences to contribute to the DNA barcoding of the Flabelligeridae. A phylogenetic analysis was conducted using the four genes to clarify the relationships of Trophoniella within the family Flabelligeridae. To the best of our knowledge, this is the first report of Trophoniella from Japanese waters.

Material and methods

Worms were collected by hand from a tank (MF-5000S aquaculture system, Japan; 2.4 m in diameter and 1.1 m in depth) installed in the SMRC, University of Tsukuba, Shizuoka (34°40.045'N; 138°56.145'E) (Fig. 1). The tank contained sandy mud and seawater, and the worms lived between 0 and 30 cm below the sediment surface. Seawater in the tank was drawn only from Nabeta Bay, directly in front of the SMRC, from a depth of 3 m (location of the head gate: 34°39.950'N; 138°56.283'E). Several samplings were conducted in Nabeta Bay and other surrounding sites at depths between 2 and 386 m by the first author and members of the SMRC, but no individuals of Trophoniella were found outside the tank.
All the specimens were first anesthetized with menthol and then fixed and preserved in 70% ethanol. The anesthesia duration differed among samples. Preserved specimens were observed under stereoscopic MZ 16F (Leica, Germany) and E600 (Nikon, Japan) microscopes. All specimens were deposited in the National Museum of Nature and Science, Tokyo (NSMT), Japan.

Etymology. The worm is coated with sediment particles, resembling armor. Hephaistos (Ἥφαιστος) was the name of the ancient Greek god of blacksmiths, who forged the armor worn by Achilleus. Hephaistos is also spelled Hephaestus. The Japanese name is derived from the type locality (Shimoda), Japanese armor (Yoroi), and flabelligerids in Japanese (Habouki).

Distribution. This new species is currently known only from the tank at the type locality. The seawater in the tank was drawn only from Nabeta Bay, from a depth of 3 m, directly facing the SMRC. The natural habitat of this species remains unknown. Given the location of the head gate, T. hephaistos could be a shallow-water species. However, several sublittoral (~50-60 m) invertebrates have been collected from this tank (Dr. Hiroaki Nakano, pers. comm.). Additional sampling efforts in Nabeta Bay will clarify the natural habitat of this species.

Phylogenetic analysis. The final lengths of the aligned sequences were 669 bp (COI), 485 bp (16S), 1893 bp (18S), and 910 bp (28S). A bootstrap value of 98% in the ML analysis strongly supported the monophyly of Flabelligeridae, but the internal relationships of Flabelligeridae were not resolved (Fig. 6). The sister group of Trophoniella was Piromis, and a bootstrap value of 100% in the ML analysis demonstrated the monophyly of this clade (Fig. 6).

Remarks. Trophoniella hephaistos sp. n. resembles T. enigmatica Salazar-Vallejo, 2012 and Trophoniella indica (Fauvel, 1928) in having dorsal tubercles on the anterior chaetigers, a tunic covered with large sediment grains dorsally and ventrally, and anchylosed neurohooks starting from chaetiger 14 or more posteriorly. However, T. hephaistos is distinguished by the presence of anchylosed neurohooks starting from chaetigers 17-20, whereas those of T. enigmatica start from chaetiger 40, and those of T. indica from chaetiger 14. Additionally, T. enigmatica does not have a tongue-shaped branchial plate, and T. indica does not have eyes. The number of chaetigers in T. hephaistos is more than twice that of T. indica. Trophoniella hephaistos has dorsal body papillae in two longitudinal rows, whereas T. enigmatica has three and T. indica five. The phylogenetic analysis showed Trophoniella to be the closest relative of Piromis within Flabelligeridae, supported by a high bootstrap value (see Fig. 6). Our findings are consistent with previous morphological studies that indicated a close relationship between Trophoniella and Piromis based on shared characters such as the tongue-shaped lobe, multiarticulated notochaetae, and thick tunic (Salazar-Vallejo 2011b; Salazar-Vallejo et al. 2008).

Key to species of the genus Trophoniella. The key by Salazar-Vallejo (2012b) is amended with the addition of this new species at couplet 20.
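As a rough, illustrative companion to the phylogenetic workflow above: the maximum-likelihood analysis itself is not reproduced here, but a quick distance-based (neighbour-joining) tree can be built from a concatenated four-gene alignment with Biopython. NJ is only a stand-in for the ML method actually used, and the input file name is hypothetical:

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Concatenated COI + 16S + 18S + 28S alignment (669 + 485 + 1893 + 910 = 3957
# positions), one row per taxon; "concat_4genes.fasta" is a hypothetical file name.
alignment = AlignIO.read("concat_4genes.fasta", "fasta")

dm = DistanceCalculator("identity").get_distance(alignment)  # pairwise p-distances
tree = DistanceTreeConstructor().nj(dm)                      # neighbour-joining tree
Phylo.draw_ascii(tree)
```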
2018-04-03T06:02:05.313Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "c4ea7c5f632bef90b319ecd7c12d2cff6845b033", "oa_license": "CCBY", "oa_url": "https://zookeys.pensoft.net/article/8346/download/pdf/", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c4ea7c5f632bef90b319ecd7c12d2cff6845b033", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
257241918
pes2o/s2orc
v3-fos-license
A Comparative Study of Automated Deep Learning Segmentation Models for Prostate MRI

Simple Summary

Prostate cancer represents a highly prevalent form of cancer worldwide, with timely detection and treatment being crucial for achieving a high survival rate. Manual segmentation, the process of manually identifying different anatomical structures or tissues within an image, is still the most prevalent method. However, it is a time-consuming and subjective task constrained by the radiologists' expertise, which underpins the demand for automated segmentation methods. In this study, we conduct a comprehensive and rigorous comparison of multiple prevalent deep learning-based automatic segmentation models for the prostate gland and both the peripheral and transition zones, using multi-parametric MRI data.

Abstract

Prostate cancer is one of the most common forms of cancer globally, affecting roughly one in every eight men according to the American Cancer Society. Although the survival rate for prostate cancer is high, given the very high incidence rate there is an urgent need to improve and develop new clinical aid systems to help detect and treat prostate cancer in a timely manner. In this retrospective study, our contributions are twofold: First, we perform a comparative unified study of different commonly used segmentation models for prostate gland and zone (peripheral and transition) segmentation. Second, we present and evaluate an additional research question regarding the effectiveness of using an object detector as a pre-processing step to aid in the segmentation process. We perform a thorough evaluation of the deep learning models on two public datasets, where one is used for cross-validation and the other as an external test set. Overall, the results reveal that the choice of model is relatively inconsequential, as the majority produce non-significantly different scores, apart from nnU-Net, which consistently outperforms the others, and that the models trained on data cropped by the object detector often generalize better, despite performing worse during cross-validation.

Introduction

Prostate cancer is one of the most common cancer types in the world, affecting roughly one in every eight men according to the American Cancer Society. Despite its high survival rate (5-year relative rates of ≈100% for localized and regional disease, 30% for distant disease), it was the third most prominent cancer in 2020 [1]. Therefore, there is an urgent need to develop methods that may aid in early detection and better characterization of disease aggressiveness [2,3]. The latter would make it possible to avoid over-treatment in patients with non-aggressive disease and to intensify treatment in patients with aggressive disease. Manual segmentation is still the most common practice, a time-consuming task that is also limited by the subjectiveness of the radiologists' expertise, resulting in high interobserver variability. Most automatic segmentation processes developed for clinical applications are based on convolutional neural networks (CNNs), many being variations of the classic Unet architecture [4]. For prostate segmentation, the proposed models continue to improve, as shown by the mean Dice scores of the following results. Guo et al. [5] presented a two-step pipeline where a stacked sparse auto-encoder was used to learn deep features, and then a sparse patch matching method used those features to infer prostate likelihood, obtaining a Dice score of 87.8 ± 4.0. Milletari et al.
[6] presented Vnet, a volumetric adaptation of Unet, achieving a Dice score of 86.9 ± 3.3. Zhu et al. [7] used deep supervised layers on Unet with additional 1 × 1 convolutions, obtaining a Dice score of 88.5. Dai et al. [8] used the object detection and segmentation model Mask-RCNN, achieving Dice scores of 88 ± 4 and 64 ± 11 for the segmentation of the prostatic gland and intraprostatic lesions, respectively. Zavala-Romero et al. [9] presented a multistream 3D model that employed the three standard planes of an MRI volume as inputs and was trained with data from different scanners, achieving Dice scores of 90.5 ± 2.7 for the full gland and 79.9 ± 9.4 for the peripheral zone when using data from one scanner, and 89.2 ± 3.6 and 81.1 ± 7.9 when combining data from both scanners. Aldoj et al. [10] presented a Unet variation with two stacked dense blocks at each level of the downsampling and upsampling paths, obtaining Dice scores of 91.2 ± 0.8, 76.4 ± 2 and 89.2 ± 0.8 for the full gland, peripheral and central zones, respectively. Duran et al. [11] presented an attention Unet [12] variation, where an additional decoder was added to perform both gland segmentation and multi-class lesion segmentation, achieving a Dice score of 87.5 ± 1.3. Recently, Hung et al. [13] proposed a new take on skip connections, using a transformer to capture cross-slice information at multiple levels, with the main advantage being that this can be incorporated into most architectures, such as the nnU-Net model. As of late, more complex and novel architectures and paradigms have been adopted for biomedical imaging segmentation. When dealing with large amounts of data, vision transformer architectures have shown great promise for biomedical segmentation [14][15][16][17][18]. These vision transformer models have the advantage of using self-attention mechanisms, allowing them to learn complex patterns and to capture the global spatial dependencies of the images, which is highly advantageous for segmentation. As previously mentioned, since manual segmentation suffers from several limitations, most data end up being unlabeled. Self-supervised learning strategies have been used to leverage these unlabeled data to improve the performance of segmentation models by first pre-training them on pretext tasks [19][20][21]. Lastly, another paradigm that has recently been used for biomedical image segmentation is knowledge distillation, where models are first trained on a larger task (e.g., multi-centre data) and are then used to help train other models for more domain-specific tasks [22] (e.g., single-centre data). This technique has already been adopted by some researchers for prostate segmentation [23][24][25] and has been shown to improve overall model performance. One major problem with most of the previously mentioned Unet-based automatic prostate segmentation works is the inconsistency of their evaluation conditions, where the same model produces different results depending on the new architecture being proposed and what it needs to outperform, regardless of the correctness of the experimental settings. What we propose is not a new method but a comparative study of several of the most common Unet-based segmentation models, as well as some new variations, for prostate gland and zone segmentation.
Additionally, we present and evaluate another research question regarding the impact of using an object detector model as a pre-processing step to crop the MRI volumes around the prostate gland, reducing computational strain and improving segmentation quality by reducing the redundant area of the volumes without simply resizing the data. We examined this question by comparing the results of the different models on both full and cropped T2W MRI volumes, during cross-validation and on an additional external dataset.

Data

We used two publicly available datasets containing retrospective data: one to train both the object detection and the segmentation models (ProstateX), and the other to serve as an external test set to assess the quality of the segmentation models (Medical Decathlon prostate dataset). The ProstateX dataset (SPIE-AAPM-NCI PROSTATEx challenge, https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=23691656, accessed on 2 January 2023) is a collection of prostate MRI volumes that include T2W, DWI and ADC modalities. These volumes were obtained by the Prostate MR Reference Center - Radboud University Medical Centre (Radboudumc) in the Netherlands, using two Siemens 3T MR scanners (MAGNETOM Trio and Skyra). Regarding the acquisition of the images, the following description was provided by the challenge's organizers: "T2-weighted images were acquired using a turbo spin echo sequence and had a resolution of around 0.5 mm in plane and a slice thickness of 3.6 mm. The DWI series were acquired with a single-shot echo planar imaging sequence with a resolution of 2-mm in-plane and 3.6-mm slice thickness and with diffusion-encoding gradients in three directions. A total of three b-values were acquired (50, 400, and 800), and subsequently, the apparent diffusion coefficient (ADC) map was calculated by the scanner software. All images were acquired without an endorectal coil". Regarding the segmentations, the prostate gland segmentations were performed by a senior radiologist from the Champalimaud Foundation, while the transition and peripheral zone segmentations were obtained from the public dataset repository. We applied bias field correction, using N4ITK [26], to the T2W volumes, and all the masks were resampled to be in the same space and have the same orientation and spacing as the volumes. A total of 153 volumes were used for the gland segmentation, while for the peripheral and transition zone segmentation a total of 139 volumes was used. As an external test dataset, we used the publicly available prostate segmentation dataset from the Medical Segmentation Decathlon [27], which was acquired at the Radboud University Medical Centre (Radboudumc) in the Netherlands. This dataset consists of 32 MRI volumes of coregistered T2W and ADC modalities, along with segmentation masks with distinct classes for the transition and peripheral zones. We extracted the T2W volumes and, similarly to ProstateX, performed bias field correction and resampled the masks, and we made new masks by joining both classes to obtain a whole-gland prostate mask. The model of the scanner used to acquire these MRIs is not disclosed.

Prostate Detection

To prepare the data for object detection, the volumes were converted into 2D 16-bit PNG images (we chose 16-bit images to preserve the intensity depth while working in 2D), and the empty slices were discarded. The edge coordinates of the ground truth bounding boxes were obtained from the prostate gland masks.
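A minimal sketch of this step, converting a binary prostate-mask slice into a normalized (YOLO-style) bounding box; the function name and return convention are illustrative assumptions rather than the authors' exact code:

```python
import numpy as np

def mask_to_yolo_bbox(mask_slice: np.ndarray):
    """Return (x_center, y_center, width, height), all normalized to [0, 1],
    for a binary mask slice; return None if the slice contains no prostate."""
    ys, xs = np.nonzero(mask_slice)
    if xs.size == 0:
        return None
    h_img, w_img = mask_slice.shape
    x0, x1 = xs.min(), xs.max()   # leftmost / rightmost foreground columns
    y0, y1 = ys.min(), ys.max()   # topmost / bottommost foreground rows
    return ((x0 + x1) / 2 / w_img, (y0 + y1) / 2 / h_img,
            (x1 - x0 + 1) / w_img, (y1 - y0 + 1) / h_img)
```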
We used these images to train a variation of the YOLO-v4 object detection architecture [28] (open-source code available at https://github.com/ultralytics/yolov5, accessed on 2 January 2023) to locate the prostate gland and accurately draw a bounding box around it (Figure 3). After predicting the bounding boxes, we extracted the coordinates of the edges of the largest bounding box in each volume, added padding of 40 pixels in each direction, and cropped the entire volume with that box. By using the largest bounding box plus additional padding, we ensured that all slices contained the entire prostate gland and some additional area. Figure 1 provides a comparison between a standard and a cropped slice of a T2W volume.

T2W Pre-Processing and Augmentation

Before feeding the T2W volumes to the segmentation models, the following pre-processing techniques were applied: an orientation transformation to ensure the volumes were in RAS+ orientation; an intensity rescaling transformation to ensure the voxel intensities of the volumes were in [−1, 1]; a Z normalization transformation; and cropping of the images to a smaller 160 × 160 × 32 size. This last transformation was done to reduce the total unused area, making training less computationally expensive, and to ensure that the volumes had an appropriate number of slices for the segmentation models. It was only applied to the volumes that had not been previously cropped by the object detector. In addition, the following augmentation transformations were applied: random affine transformations, including rescaling, translation and rotation; random changes to the contrast of the volumes by raising their values to the power of γ, where γ = 0.5; and application of random MRI motion artifacts and random bias field artifacts. Examples of volumes after pre-processing and augmentation are shown in Figure 2.

Segmentation Models

In this study, we conducted an extensive analysis of various popular Unet-based models from the literature that either had a publicly available implementation or provided enough detail to be reproducible. These models utilized mechanisms falling into three distinct categories: dense blocks, recurrent connections, and attention mechanisms. The inclusion of a diverse range of mechanisms facilitated a thorough investigation of the utility of each mechanism, given that most Unet variations exhibit only minor differences from one another. Additionally, we introduced new networks that build upon the previously published models by incorporating additional combinations of mechanisms. This enabled us to evaluate the performance of these models against established ones, leading to more reliable and informative results. For the Unets, the standard convolution blocks in the downward path are composed of two convolution operators with kernel size 3 and stride 1, followed by a batch normalization, a ReLU activation function and, lastly, a max-pooling operation for dimensionality reduction. The convolution blocks on the upward path are composed of an upsampling operation with scale 2, to double the size of the previous input, and a convolution of kernel size 2 and stride 1, followed by a batch normalization and ReLU activation. For the dense Unets, the dense blocks are composed of four sub-blocks, each having two convolution operations with kernel size 1 and stride 1, where after each convolution there is a batch normalization and a ReLU activation function.
For the transition blocks, also used in the dense Unets, we perform an upsampling operation with scale 2 and a convolution with kernel size 3 and stride 1, followed by batch normalization and ReLU activation. Regarding the Dense-2 Unet, our implementation differs from the one presented in the original article; as that article does not include enough information to fully replicate the model, we chose the parameters for the dense blocks and convolution blocks to be similar to those of the remaining networks. The recurrent residual blocks are equal to the ones described in Zahangir et al. [30], and are composed of two residual operations and two convolution blocks, each having one convolution operation with kernel size 3 and stride 1, followed by a batch normalization and ReLU activation. The attention mechanisms are equal to those described in Oktay et al. [12], where we calculate W_g, W_x and ψ, each composed of a convolution of kernel size 1 and stride 1, and then compute σ(ψ(ReLU(W_g + W_x))). At the start of each Unet, a single convolution block, composed of a convolution of kernel size 3 and stride 1 followed by a batch normalization and ReLU activation, is applied to double the channels from 32 to 64, so that we end up with 1024 channels at the bottleneck of the Unets. Regarding the Vnet, SegResNet and highResNet, we used the models made available by the MONAI package [34]. We used the 3D full-resolution version of the nnU-Net, which is equal to the one described in [31] and publicly available at https://github.com/MIC-DKFZ/nnUNet, accessed on 2 January 2023. All other models were implemented in PyTorch [35]. All volumes were normalized to zero mean and unit standard deviation, resized to 256 × 256, and separated into 2D slices. The slices with no prostate were removed, leaving a total of 2801 images. These 2801 images were split into 70% training and 30% validation. To avoid data leakage, we ensured all slices belonging to a volume were present in only one of the sets. To set the initial parameters for the object detector model, we used the genetic-algorithm-based hyperparameter evolution included in the package, which ran for 100 generations with 90% mutation and 10% crossover probabilities. The fitness function used to evaluate this evolution is the weighted average between the mean average precision at a threshold of 0.5 (mAP@0.5), which contributed 10%, and the mean average precision over thresholds from 0.5 to 0.95 in steps of 0.05 (mAP@0.5:0.95), which contributed the remaining 90%. Then, after having set the initial values for the hyperparameters, the model was trained for 350 epochs.

Segmentation

The datasets were split with 5-fold cross-validation. All models except the nnU-Net were trained for a maximum of 1500 epochs, using early stopping with patience 30. The optimizer was Adam with decoupled weight decay (AdamW), with a starting learning rate of 1 × 10⁻⁴, cosine annealing learning rate decay, and weight decay of 4 × 10⁻⁵. For nnU-Net, we used the default parameters, training for 1000 epochs with the Ranger optimizer [36]. All models were trained using PyTorch Lightning. To assess the quality of the models, we use the mean Dice score (MDS) with 95% confidence interval (CI), the mean Hausdorff distance (MHD) and the mean surface distance (MSD). The Dice score is a widely used metric for segmentation tasks, and it measures the overlap between the ground truth and predicted masks. The higher the score, the better the segmentation.
The Hausdorff distance (HD) measures how far apart two point sets in two images are; in this case, how far a point in the predicted mask is from its nearest point in the ground truth mask, essentially indicating the largest segmentation error present in the predicted mask. Low values mean small errors. The surface distance (SD) measures the difference between the surface of the predicted mask and that of the ground truth mask. Low values mean small differences.

Loss Function

Initially, our loss function was the standard averaged sum of Dice (Equation (2)) and cross-entropy (Equation (1)), called Dice CE loss (Equation (3)). It produced good results for the mid regions of the prostate, but struggled with the small and irregular shapes present in both the apex and the base. Therefore, we included both Focal (Equation (4)) [37] and Tversky (Equation (5)) losses, as they mitigate this issue by focusing on the hard negative examples, reducing both false negatives and false positives. The final loss function, named Focal Tversky Dice Cross-entropy loss (FTDCEL) (Equation (6)), is an averaged sum of all the previously mentioned loss functions, all having the same weight (a minimal code sketch of this composite loss is given at the end of this section). All of these loss functions were implemented using the MONAI package [34].

Object Detection

To evaluate the object detection model, we followed the procedure described in Section 2.5.1 for four different variations of the YOLO model: small, medium, large and extra-large. Figure A1 shows the results obtained for the medium model, which is the one that produced the best results. As shown in Figure A1, the results obtained were very accurate, with high confidence values on all prostate sections, including the apex, achieving a Precision, Recall, mAP@0.5 and mAP@0.5:0.95 of 0.9709, 0.9534, 0.9653 and 0.6965, respectively. As for the loss values, which represent the error (meaning lower values are better), on the validation data this model obtained a bounding box loss (Box loss, a measure of how well the predicted bounding box covers the target) of 0.0227 and an object confidence loss (Obj loss, which measures the probability that the target exists inside the predicted bounding box) of 0.0034. Figure 3 (bottom row) illustrates a batch of the obtained results. As shown, the apex, base and middle areas of the prostate are properly detected with a high degree of confidence.

Gland Segmentation

Regarding the segmentation of the prostate gland (Table 1A and Figure 4), when working with the full volumes we can see that the nnU-Net is significantly better than all other models, by ≈2%, achieving a mean Dice score (MDS) of 0.9289 ± 0.0046. This is further corroborated by the boxplots, as we can see that most models show a high degree of dispersion, as opposed to the nnU-Net. A mean Hausdorff distance (MHD) of 5.7155 is considerably better than the ones obtained by the remaining models. It can also be observed that, despite small differences, there is no statistical difference between the Dice scores obtained by the other models, although it is discernible that some, such as daunet, d2unet, Vnet and especially highResNet, have substantially greater MHD values. From Figure 1 right we can see that nnU-Net is the model that generalizes best, achieving an MDS of 0.8678 on the external test set, a result far greater than most other models, along with an MHD of 10.0231, which is by far the smallest among all models.
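Referring back to the Loss Function subsection: a minimal sketch of the FTDCEL, assuming MONAI's loss implementations (which the text says were used) and a single-channel binary task. The equal-weight average of the Dice, cross-entropy, Focal and Tversky terms below is one reading of Equation (6), not the authors' exact code:

```python
import torch
from monai.losses import DiceLoss, FocalLoss, TverskyLoss

class FTDCELoss(torch.nn.Module):
    """Equal-weight average of Dice, cross-entropy, Focal and Tversky losses."""
    def __init__(self):
        super().__init__()
        self.dice = DiceLoss(sigmoid=True)
        self.ce = torch.nn.BCEWithLogitsLoss()  # binary cross-entropy on logits
        self.focal = FocalLoss()                # expects logits as input
        self.tversky = TverskyLoss(sigmoid=True)

    def forward(self, logits, target):
        # logits: raw network outputs, e.g. (B, 1, H, W, D); target: float binary masks
        return (self.dice(logits, target) + self.ce(logits, target)
                + self.focal(logits, target) + self.tversky(logits, target)) / 4.0
```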
When segmenting the prostate gland using the volumes cropped by the object detection model (Figure 1 left and Figure 4), the results are quite different. There is no single significantly best model; however, there are four models that under-perform and nine which have similar scores. The best, although not significantly better, Dice score is obtained by the d2aunet, with a value of 0.9158 ± 0.0068. Looking at the boxplots, we can see that the majority of the models, apart from the r2Net and highResNet, present a low degree of dispersion. Looking at the MHD and the average surface distance (ASD), we can further see that the choice of model is mostly inconsequential, with the exception of highResNet. These results still hold for the external test set (Figure 1 right), where we can see that all models, again with the exception of highResNet, have a very similar generalization capability, with an MDS of ≈0.85. Comparing the performance of the full and cropped volume models, looking at Table A1 it is possible to see that for 11 of the 13 models there is no significant difference, the only two exceptions being the Vnet, where cropping does in fact improve the Dice scores, and the nnU-Net, which shows the opposite effect, with worsened model performance. Looking at the boxplots in Figure 4, we can see that on all metrics the results show less dispersion, which is a good indication. However, when observing the difference between the results on the external test set, we can see that the vast majority of models do improve their generalization capability using cropped data, achieving higher MDS scores and lower MHD scores. Figure 5 shows a comparison between full volume and cropped volume segmentations of both nnU-Net and d2aunet, as they were the models with the highest Dice score in each task. In this case, despite having a lower MDS, the segmentations provided by the cropped models are arguably better than those provided by the full volumes, with the difference being explained by the difference in size when calculating this metric. In smaller volumes, a wrongly calculated pixel will have more impact on the calculation of the MDS than in larger volumes, which may explain why a model with a slightly lower MDS (0.014 difference) can provide more accurate segmentations (a small numerical illustration of this effect is given at the end of the Results).

Zone Segmentation

Starting with the transition zone (TZ) segmentation (Table 2A and Figure 6), we can see that the nnU-Net outperformed all the other models, in both the full volume and cropped volume modalities. On the full volume data, it achieved an MDS of 0.8760 ± 0.0099, which is ≈3% better than all other models, while on the cropped data it further increased the distance to the remaining models by achieving a mean Dice score of 0.8561 ± 0.0133, ≈7% better than the others. Regarding the maximum error, for the full and cropped volumes we obtained MHD values of 8.7049 and 10.2390, respectively, which are notably smaller than the errors obtained by the remaining models: approximately 6 and 4 mm smaller for the full and cropped volumes, respectively. However, when the models are evaluated on an external test set (Figure 2), we can see that these results do not hold. For the full volume data the nnU-Net falls short, dropping ≈15%, while most other variations of the Unet dropped far less, with the regular Unet being the model with the best MDS of 0.77, despite having a larger maximum error. On the cropped volumes, the nnU-Net remained the best performing model, with an MDS of 0.7540.
When comparing the performance of full and cropped volume models, looking at Tables A2 and 2, it is clear that the performance of all models drops significantly when using the cropped data. One interesting particularity to note is that while for the majority of the models the dispersion of the results on the cropped data is similar to the dispersion on the full data, for the d2aunet we can see that the dispersion of values for both the Hausdorff and surface distances is greatly reduced (Figure 6). Figure 7 shows a comparison between full volume and cropped volume segmentations of both nnU-Net and aunet, as they were the models with the highest Dice scores during cross-validation. Regarding the peripheral zone (PZ) segmentation (Figures 3 and 8), when working with the full volumes we can see that the nnU-Net outperformed all other models, achieving not only an MDS of 0.8029 ± 0.0063, ≈5% better than the remaining models, but also the smallest error, with an MHD of 9.8693, less than half of the average of most other models. When working with the cropped volumes, we can see that there is no statistically significant difference between any of the models. They all produce very similar values for all evaluated metrics (MDS, MHD, ASD). Looking at Figure 8, we can also see that the distribution and dispersion are very similar, with only a few outliers, such as the aunet when measuring the HD and the d2unet when measuring the SD. When testing the models on the external test set (Figure 3), we can see that when using the full volumes the results hold, with the nnU-Net still being the top performing model, with an MDS of 0.6835 and the lowest MHD of only 13.4527. For the cropped volumes, the d2unet was the best-performing model, with an MDS of 0.6387 and the lowest MHD of 16.3358. When comparing the performance of full and cropped volume models, looking at Tables A3 and 3, we can see that there is no statistical difference between using full or cropped data when segmenting this zone, apart from the nnU-Net, which, similarly to what was shown for the two previous zones, performs significantly worse. However, similarly to what was observable in the gland analysis, we can see that many of the models show small improvements on the external test set, most noticeably when examining the MHD values, showing again that cropping may improve the generalization capabilities. Figure 9 shows a comparison between full volume and cropped volume segmentations of both nnU-Net and highResNet, as they were the models with the highest Dice score during cross-validation. When looking at the full volume segmentations, we can observe a large difference between the nnU-Net and the highResNet. While the first produces smooth and well-defined segmentations, the latter shows clear signs of poorly defined edges and obvious noise in some cases. Discussion We conducted an extensive analysis of several commonly used segmentation models for prostate gland and zone segmentation on a unified pipeline with the same settings for all. In addition, we answered the research question regarding the effectiveness of using an object detector to crop the MRI volumes around the prostate gland to aid during the segmentation process. First, we trained an object detector model to perform bounding box detection on the prostate gland, in order to later crop the images.
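The statistical comparisons referred to throughout this section are not detailed in this excerpt; one common way to test whether two models differ significantly is a paired test on per-case Dice scores, sketched below with fabricated numbers (the test choice itself is an assumption, not taken from the paper):

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)
# Hypothetical per-case Dice scores of two models on the same 30 test cases.
dice_a = np.clip(rng.normal(0.93, 0.02, 30), 0.0, 1.0)
dice_b = np.clip(dice_a - rng.normal(0.02, 0.01, 30), 0.0, 1.0)

# Paired Wilcoxon signed-rank test (non-parametric, no normality assumption).
stat, p = wilcoxon(dice_a, dice_b)
print(f"statistic={stat:.1f}, p={p:.2e}")  # p < 0.05 -> significant difference
```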
We employed a variation of the Yolo-v4 model to detect the prostate on 2D slices, which provided very good results, with box and confidence validation losses of 0.0227 and 0.0034, respectively. These results show that it is fairly easy to train a robust prostate detection model, raising the hypothesis, for future work, of leveraging the learned local representations of such a model and using them to aid in a segmentation task. Using the results obtained from the object detection, we produced a new version of the ProstateX dataset where the volumes were cropped around the gland, reducing the overall size of the image and the computational power required. We then trained 13 different commonly used segmentation models, with some additional new variations, on both the full and cropped volumes. The models were subsequently validated on the prostate dataset from the medical imaging decathlon, both on the original and on a cropped version. When comparing the performance of the models on the full volume segmentation tasks, two main conclusions can be drawn. The first is that the nnU-Net is the overall best model, outperforming all others during cross-validation on the three different segmentation tasks, while also being the model that generalized best on the external test set on two out of three tasks. Interestingly, despite not being the worst performing model on the external transitional data, it was the one with the highest ASD value, meaning that while it did not make the largest mistakes, it did make the most mistakes out of all models. The second conclusion is that, excluding the nnU-Net, when choosing between the other models the decision is almost inconsequential. While SegResNet and Vnet produced results significantly worse than some other models, the remaining models show no significant differences between each other for the three segmentation tasks. When comparing the performance of the models on the cropped volume segmentation tasks, the results are not as clear as when using the full volumes. One common aspect among all three segmentation tasks is that the nnU-Net is either the best-performing model or at least one of the best. While for the transitional segmentation task the nnU-Net is the clear winner, for both the gland and peripheral segmentation tasks the choice of model is almost inconsequential, with the exception of only four models on the gland task, which perform significantly worse. Regarding the performance on the external test set, there is no clear indication of an overall better model. For the transition segmentation task the nnU-Net is clearly better than the other models, while for the remaining two tasks the results are very similar. Lastly, comparing the performance of the models when using the two types of data, it is observable that overall the model performance during cross-validation is either the same or statistically significantly worse. The clearest case is the nnU-Net, where cropping consistently hinders performance. This is most likely due to the way the nnU-Net processes the data, since it is based on a set of heuristics that are applied according to the characteristics of the data, characteristics that are changed when the volumes are cropped.
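As an illustration of the cropping step discussed above, the sketch below derives a single 3D crop from per-slice 2D detections. The aggregation rule (union of boxes over all detected slices plus a fixed margin) and the margin size are assumptions; the paper's exact cropping procedure is not given in this excerpt.

```python
import numpy as np

def crop_around_gland(volume, slice_boxes, margin=8):
    """Crop a (Z, Y, X) volume using per-slice detector boxes.

    slice_boxes: dict mapping slice index -> (xmin, ymin, xmax, ymax).
    """
    zs = sorted(slice_boxes)
    xmin = min(b[0] for b in slice_boxes.values()) - margin
    ymin = min(b[1] for b in slice_boxes.values()) - margin
    xmax = max(b[2] for b in slice_boxes.values()) + margin
    ymax = max(b[3] for b in slice_boxes.values()) + margin

    z0, z1 = max(zs[0], 0), min(zs[-1] + 1, volume.shape[0])
    y0, y1 = max(ymin, 0), min(ymax, volume.shape[1])
    x0, x1 = max(xmin, 0), min(xmax, volume.shape[2])
    return volume[z0:z1, y0:y1, x0:x1]

# Example with a dummy volume and two detected slices (hypothetical boxes):
vol = np.zeros((24, 384, 384), dtype=np.float32)
boxes = {10: (120, 130, 260, 270), 11: (118, 128, 262, 272)}
print(crop_around_gland(vol, boxes).shape)  # -> (2, 160, 160)
```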
However, in both the gland and peripheral zone segmentation tasks, it is also observable that the cropped models generalize better, despite their poorer performance during cross-validation, hinting that it is worthwhile to further explore this approach, ideally on a larger dataset, in order to rule out any effect of data quantity. Conclusions In this paper, we perform a comparative study using several of the latest and most common Unet-based segmentation models, together with some variations of them, for prostate gland and zone segmentation. We answer the research question regarding the impact of using an object detector model as a pre-processing step to crop the MRI volumes around the prostate gland. Regarding the comparison of the different architectures, it is clear that overall there are no statistically significant differences between the vast majority of models, regardless of the mechanisms they employ, apart from the nnU-Net model [31], which consistently outperforms all other models. Concerning the object detector, it is shown that, despite it being straightforward to train and obtain good results for prostate detection, overall the segmentation results are the same or statistically significantly worse.
Boerhaave Syndrome: An Unexpected Complication of Diabetic Ketoacidosis Boerhaave syndrome (BS) is a rare gastrointestinal condition related to esophageal rupture that carries a high mortality rate without prompt medical attention. BS is commonly associated with repeated episodes of severe retching, straining, or vomiting. Diabetic ketoacidosis (DKA), a serious acute complication of diabetes, is characterized in part by laboratory findings of profound hyperglycemia and ketoacidosis. Clinically, nausea and vomiting are seen commonly in DKA patients, which can often include repeated forceful retching, but are rarely associated with esophageal rupture. In this article, we describe a case of BS secondary to repeated episodes of emesis in the setting of DKA. Introduction Boerhaave syndrome (BS) can be characterized by full-thickness esophageal perforation induced by barogenic trauma [1]. The mechanism of barogenic trauma involves repeated pyloric closure with subsequent diaphragmatic contraction against a closed cricopharyngeus [1]. Repeated increases of intra-esophageal pressure can then result in esophageal perforation [2,3]. Esophageal perforation in BS occurs in the left posterolateral aspect of the distal intrathoracic esophagus in 90% of patients. However, rupture of the cervical and intra-abdominal esophagus may also occur [1,4]. Rupture of the esophagus is subsequently followed by contamination of the mediastinum and pleural cavities by gastric contents; mechanical movement of the chest during respiration will dissipate these substances, leading to greater mediastinal and pleural soilage [1]. In the absence of medical and surgical intervention, bacterial infection and mediastinal necrosis will result, eventually leading to sepsis and multiple organ dysfunction syndrome [4,5]. BS in the setting of diabetic ketoacidosis (DKA) has been previously reported in the medical literature, albeit exceedingly rarely. Upon review of the available literature, two prior case reports describe such a situation. In one of these, a 2013 report by Alkuja et al., an iatrogenic etiology of esophageal rupture could not be ruled out [6,7]. Case Presentation An otherwise healthy 22-year-old male presented to the emergency department due to nausea, vomiting, abdominal pain, and chest pain for the previous four days, with associated fatigue and polyuria. Emesis was noted to be non-bilious and non-bloody but seemingly constant. The patient reported that he could not tolerate anything by mouth during this time. The chest pain was described as substernal, non-radiating, constant, progressively worsening, and provoked by eating. The abdominal pain was described as diffuse, non-radiating, constant, stabbing in nature, and provoked by eating. Vital signs on arrival at the emergency department were significant for a heart rate of 108 beats per minute and a respiratory rate of 25 breaths per minute. Physical examination was remarkable for a lethargic male who appeared his stated age, with tachypnea, tachycardia, and a diffusely tender abdomen in all four quadrants. Chest X-ray on presentation was read as a normal exam. Laboratory studies demonstrated glucose of 1347 mg/dL, an anion gap of 50 mmol/L, and a white blood cell count of 26.3 k/cmm. Venous blood gas (VBG) revealed a pH of 7.149, pCO2 of 26 mmHg, and bicarbonate of 8.7 mmol/L, consistent with metabolic acidosis. Urinalysis (UA) showed 3+ glucose and 2+ ketones. Beta-hydroxybutyrate was over 8 mmol/L.
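For readers following the laboratory reasoning, the anion gap reported above is conventionally computed as AG = [Na+] - ([Cl-] + [HCO3-]). The patient's sodium and chloride values are not given in this report, but with the measured bicarbonate of 8.7 mmol/L, an anion gap of 50 mmol/L implies [Na+] - [Cl-] ≈ 58.7 mmol/L, a markedly elevated gap consistent with the accumulation of unmeasured ketoacid anions in DKA.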
Combined results of the VBG, UA, and comprehensive blood profile indicated that the patient was in DKA (Table 1). The patient was started on an intravenous (IV) insulin infusion and IV fluids and transferred to the intensive care unit. After transfer, the patient continued to complain of constant chest pain that was pleuritic. Computed tomography (CT) of the chest, abdomen, and pelvis with contrast was ordered due to concern for nonspecific pain and demonstrated anterior and posterior pneumomediastinum with diffuse thickening of the distal esophagus, confirming esophageal perforation consistent with BS (Figures 1, 2). The patient was then transferred to a tertiary care facility with a dedicated cardiothoracic surgeon on call and was lost to follow-up thereafter. FIGURE 1: Transverse-view computed tomography of the chest in abdomen window. Red arrow demonstrates thickening of the distal esophagus with perforation consistent with Boerhaave syndrome. Yellow arrow demonstrates pneumomediastinum secondary to esophageal perforation. FIGURE 2: Transverse-view computed tomography of the chest in lung window. Red arrow demonstrates esophageal thickening and perforation secondary to Boerhaave syndrome. Discussion BS remains a feared complication secondary to suddenly increased intra-esophageal pressure from retching with subsequent esophageal perforation. BS has an incidence rate of 3.1 per million and is fatal if left untreated [8]. BS is typically seen in middle-aged men between the ages of 50 and 70, and alcohol is usually involved [8]. BS usually occurs in patients without underlying esophageal disease. However, a subset of patients has esophageal rupture secondary to peptic ulcers, Barrett esophagus, eosinophilic esophagitis, pill esophagitis, and other causes of esophageal inflammation [2,3,8]. DKA has been associated with nausea, vomiting, and Mallory-Weiss tears in the extreme, but is seldom associated with BS. BS is typically caused by a sudden increase in intra-esophageal pressure with an associated increase in intrathoracic pressure. Leakage of esophageal contents may cause chemical mediastinitis, predisposing patients to further complications including infection, mediastinal necrosis, pneumomediastinum, and end-organ damage [5]. Signs of BS vary widely based on the anatomical location of the rupture. Most commonly, patients present with refractory, unremitting chest pain, crepitus on palpation, fever, dyspnea, tachycardia, tachypnea, cyanosis, and hypotension [2]. More proximal ruptures can present with neck pain, dysphagia, or dysphonia [9]. BS is usually diagnosed incidentally in patients being evaluated for chest pain. It should be suspected in patients who complain of severe neck, chest, or upper abdominal pain after an episode of severe retching and vomiting. Other causes of increased intrathoracic pressure, such as intubation, pneumothorax, and pleural effusions, may also result in BS. Subcutaneous emphysema may be noted on physical examination, and the diagnosis is established by CT scan with oral contrast. Delay in diagnosis is often associated with a higher risk of mortality and complications [10]. Myocardial infarction, pancreatitis, aortic aneurysm, peptic ulcer disease, pneumonia, or spontaneous pneumothorax can also cause acute-onset chest or upper abdominal pain. History, electrocardiogram, laboratory evaluation (e.g., cardiac biomarkers, D-dimer, pancreatic enzymes), diagnostic imaging (e.g.,
chest X-ray, abdominal ultrasound, CT chest/abdomen/pelvis), and physical examination are important to distinguish these etiologies from esophageal perforation. In addition, patients with Mallory-Weiss syndrome may have a similar, albeit less severe, presentation; however, there will be no evidence of subcutaneous, mediastinal, or peritoneal air on radiography or extravasation of esophageal contrast [11]. Perforations diagnosed within 12-24 hours have the best outcomes. There are three common treatment options: conservative, endoscopic, and surgical. Intensive care unit admission is recommended not only for patients with hemodynamic instability but also for patients with multiple comorbid conditions [12]. Additionally, all patients with esophageal perforation should avoid oral intake or nasogastric tube placement. IV broad-spectrum antibiotics, proton-pump inhibitors, and antiemetics should be initiated, and nutrition should typically be parenteral. Surgical consultation is warranted for all patients in case of further deterioration after medical or endoscopic management. Endoscopic management of esophageal perforation should be considered in patients who are not surgical candidates or who have extensive underlying comorbidities [13][14][15]. A multidisciplinary team of endoscopists and thoracic surgeons may work together to deploy fully covered esophageal stents, through-the-scope clips, over-the-scope clips, endoscopic suturing, or esophageal resection and diversion. The prognosis usually depends on the timing of diagnosis and treatment. Delayed diagnosis and treatment are usually associated with poor outcomes. Those who undergo diagnosis and surgery within 24 hours carry a survival rate of approximately 75%, but this can drop to 50% if diagnosis and treatment are delayed beyond 24 hours and to approximately 10% after 48 hours [16]. Conclusions BS related to DKA has rarely been described in the medical literature. With this rare association in mind, clinicians should add BS to the differential when faced with patients experiencing extreme chest pain in DKA. The severe retching seen in DKA typically manifests with Mallory-Weiss tears in extremis, but BS has also been identified as a possible, albeit rare, complication, as discussed in the above case. This case was unique in the sense that many patients have severe retching during DKA but do not develop BS. Our patient also did not fit the typical demographic that develops BS other than being male, as he was not middle-aged and did not use alcohol. In this case, we highlight the importance of timely radiologic imaging acquisition to provide a diagnosis for extreme chest pain in the setting of DKA. Also highlighted is the benefit of using CT of the chest and abdomen in DKA cases that present with chest pain and abdominal pain to rule in BS. Prompt CT will allow for appropriate surgical or endoscopic care to be delivered quickly, preventing the high morbidity and mortality associated with delay. We hope this case raises awareness among clinicians of BS as part of the differential diagnosis in patients with severe chest and abdominal pain after retching. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work.
Financial relationships: All authors have declared that they have no financial
ABDUSSHAMAD AL-PALEMBANI; HIS THOUGHTS AND MOVEMENTS IN THE SPREAD OF ISLAM IN INDONESIA This study aims to find the thoughts and movements carried out by Abdusshamad al-Palimbani. The method used in this research is a literature study, or library research, combining activities related to library data collection methods: reading, collecting, and processing research materials consisting of books, manuscripts, and hand scripts. The findings of this research showed that Abdusshamad al-Palimbani had broad ideas for building Muslim communities, especially in South Sumatra and generally in the archipelago, in the areas of Sufism, Tauhid, and the importance of defending the State. From the teachings of Abdusshamad al-Palimbani to his students and the works he left behind, his thought can be known. His thoughts were collected in various works: (a) Hidayatus Salikin fi Suluk Maslak al-Muttaqiin, (b) Siyarus Salikin ila Ibadat rabb al-alamin, (c) Thufah Al-Raghibin fi Bayan Haqiqat Faith Al-Mu'inin, (d) Nasihat al-Muslimin wa Tadzkirat al-Mu'minin fi Fadha'il Al-Jihad fi Sabilillah wa Karamat al-Mujahidin fi Sabilillah, (e) Zuhrat al-Murid fi Bayan Kalimat al-Tauhid, (f) Al-Urwat al-Wutsqa wa Silsilat Uli al-Ittiqa, (g) Ratib Abdus Shamad, (h) Zadd Al-Muttaqin fi Tauhid Rabb al-'Alamin. The spread of Islam is intended to encourage the community to always fill its days with remembrance and reverence: (a) protect yourself from the fire of hell; (b) Allah favors you, that God makes us among those who call on His name, He does not make us blameless; (c) be bright and open with the zikrullah light; (d) to him the windows of heaven were opened; (e) a gentle heart and khusyu'; (f) ten crimes abolished with one sentence of remembrance. INTRODUCTION Abdusshamad al-Palembani was a Moslem scholar who was very popular for his works; he was known as an intellectual Moslem who fought hard to attain knowledge. He learned from various famous teachers, not only from the two holy lands of Mecca and Medina but also from several Middle East countries. Abdusshamad al-Palembani became a teacher or lecturer at Haramain and actively wrote; his various works are still being studied for the pearls of knowledge they contain. From the many works that have been passed down, it can be seen that there are various scientific disciplines he mastered, as shown in Sairus Salikin, Hidayatus Salikin, and others. Abdusshamad al-Palembani was born in Palembang in 1150 H or 1737 AD. His father was named Abdur Rahman. Seen from other sources, Abdur Rahman turned out to be the son of Sheikh Abdul Jalil bin Abdul Wahab bin Ahmad Al-Mahdali, mufti of the Sultanate of Keddah 1710-1782. The life history of Sheikh Abdusshamad al-Palembani can be known from several sources, both from his own work in the book "Zuhratul Murid Fi Bayan Kalimatit Tauhid" and from the works of others (his students), especially from the book Faidhal Ihsani. In Zuhratul Murid Fi Bayan Kalimatit Tauhid, this figure wrote his name with his own hand, with the sentence: the lowly servant who is faqir to Allah Ta'ala, namely Abdusshamad bin Abdurrahman Jawi al-Palimbani (Abdusshamad al-Palimbani, 1339 H: 3). Likewise, in the book of Faidhal Ihsani written by one of his students, his name is also written: our head and our dignity, namely Sheikh Abdusshamad bin Abdurrahman al-Jawi al-Palimbani (Zen, t.t.: 50). Likewise, according to Azyumardi Azra, the name of this figure is Sayyid Abdusshamad bin Abdurrahman al-Jawi al-Palimbani (Azra, 1994: 245-246).
Abdusshamad al-Palimbani was born in the Kuto Cerancang Palace environment, one of the palaces of the Palembang Darussalam Sultanate, located in what are now the 17th and 20th Ilir regions. His father Abdurrahman served as head of the guards of the Kuto Cerancangan Palace (Zen, 1937: 17). He could not recognize the face of his mother because she died when he was one year old, as written in Faidhal Ihsani: "And it was that until he was a year old, then his mother passed away, so he was orphaned in rabbani very easily and please try to maintain it he" (Zen, 1937: 18). Abdusshamad al-Palimbani is known as a scholar who left works of thought in the form of books in large numbers. The subject of his studies is mainly in the field of Sufism. Besides that, Sheikh Abdusshamad also studied the issue of Tauhid and the importance of defending the country. He had a good command of Arabic, but did not forget the land of his birth; this is indicated by his works also being written in Malay. The complex ability to study many themes in religious contexts, as seen in his writings, demonstrates the aspect of "modernity" of thinking possessed by Syaikh Abdus Shamad Al-Palimbani. One of his works, namely Nasihat al-Muslimin wa Tadzkirat al-Mu'minin fi Fadha'il Al-Jihad fi Sabilillah wa Karamat al-Mujahidin fi Sabilillah, is a book that calls for the importance of jihad against the penetration of the West at that time. Abdussomad al-Palimbani studied with famous scholars, not only those in Mecca and Medina, but also in Egypt and Yemen; the student who truly struggled for knowledge finally revealed himself as a worldwide figure. Zen in Faidhal Ihsani explained about the teachers of Abdussomad al-Palimbani as follows: And it was the teacher of Sheikh radiallahu anhu that many were well-known, of all of them, with their advantages and piety, from the people of Makkah al-Mukarramah and the people of Medina al-Munawwarah and the Egyptians, who have knowledge fragrant with the scent of science, that is, the science of benefits for humans (Zen, 1937: 17). METHODOLOGIES The method used in this research is a literature study, or library research, combining activities related to library data collection methods: reading, collecting, and processing research materials consisting of books, manuscripts, and hand scripts. THOUGHTS OF ABDUSSHAMAD AL-PALIMBANI From the book "Manakib Faidhal Ihsani" come some sayings of Sheikh Abdus Somad al-Palimbani to his students and followers. These words contain advice addressed especially to the lovers of the thariqat. The first word consists of three things, namely: (1) the suggestion to always say the truth, (2) giving up all charity only to Allah Ta'ala, (3) not delaying purifying the heart. This first utterance in full reads: "Then half of the words of the radhiallahu anhu: make your clothes stand in all your words, so that he cries out in explaining the members who are zhahir in doing and affirming by you to Allah Ta'ala in all your ages, that he hastens in purifying the secret of the heart" (Zen, 1937: 33).
To achieve the degree of closeness to Allah, Sufis adorn themselves with some of the most praiseworthy qualities, among them shidq and ikhlash (truthfulness and sincerity). Abdusshamad al-Palimbani advised always wearing the clothing of truthfulness and sincerity: true in every word and deed, and especially true in worship. Greater shidq means not only being true in action or speech, but also true in intention, aspiration, and promise, and truth in maqam or position. True worship is included in the category of sincerity, for Allah Almighty commands that worship of Him be done sincerely, while sincerity is the true intention toward God. Shidq and ikhlash are thus among the most praiseworthy qualities worn by the Sufis. The second word is a statement of the position of a person who remembers God most. Abdusshamad al-Palimbani says that many people would regard such a person as a guardian (wali) of God, as in the word radhiallahu anhu wa ardhahu: "and when you see that mankind is growing from the remembrance (zikrullah) of Allah ta'ala then you know that it is the guardian of Allah ta'ala without a doubt" (Zen, 1937: 33-34). Remembrance in this conception is not only centered on verbal remembrance of several levels and set amounts to be read, but is also a remembrance of attitude and behavior, which always keeps the senses in check so as not to be contaminated by anything that is unlawful. The zikrullah reads: (1) la ilaha illa Allah, (2) Allah, Allah, (3) Hu, Hu, (4) Haqq, Haqq, Haqq, (5) Hayyun, Hayyun, Qohhar, Qohhar (Sairussalikin). In addition to such commemorations, the ratib should also be performed. Ratib is a kind of remembrance or ritual that is practiced regularly upon the completion of the 'Isha prayer on Friday night (Zen, 1937: 36). It can be said that the remembrance made by a guardian of God is a maximal remembrance according to the prescribed rules. Third, obey the teacher (Sheikh). This is included in Faidhal Ihsani in the following language: "and half of the word radhiallahu anhu: the beginning of half of the address of his true disciple and his Sheikh - that is, he left nothing that his teacher could afford. And yet to keep him in his sight and in his back is not to doubt in our hearts the teacher before him and behind him" (Zen, 1937: 34). The Sheikh or teacher has an important position in his order, not only as a leader who supervises his students in daily life and daily affairs - keeping them from deviating and falling into the habit of big and small sins, which he must immediately reprove - but also as a leader of high spiritual status; therefore the role cannot be assumed by just anyone, as it requires a strict level of attainment. According to Abu Bakr Atjeh, the Sheikh is a person who has attained haqiqat, a person who is fully immersed in Shari'a knowledge and who through the thariqat has reached a high maqam (station) (Abu Bakr Atjeh, 1965: 59). Abdusshamad al-Palimbani affirmed this view of the Sheikh or teacher. The fourth word, about the priority of the followers of al-Palimbani (closest to the Samaniyah), is: "And he said that I confess to those who obeyed our commands that this was true, believing that entering Allah Ta'ala would bring him into heaven. The beginning of our command is the wall from the fire of hell and its bounders" (Zen, 1937: 34).
The word of al-Palimbani has something in common with what his teacher Sheikh Muhammad Samman said about the priorities of the followers of the Sammaniyah, as it is written in the manuscripts of Sheikh Muhammad al-Samman al-Madani, even if in different words or languages: "And half from the words of Sheikh Muhammad Samman r.a.: that whoever eats our food is in Allah's remembrance, then he will go to heaven, and whoever enters our house or violates us, Allah Ta'ala forgives all his sins," so he said (al-Palimbani, 1331: 9). The fifth word, about a drink that cools the soul, is a symbolic expression of drinking water that can quench one's thirst: "dare you to protect yourself from the aura, then it is a clear drink" (Zen, 1937: 34). If the disciple (Abdusshamad al-Palimbani) uses the symbolic expression "clear drink" to describe the remembrance formula in the Sammani order, the master (Sheikh Muhammad Samman) uses the symbolic expression "food," as stated in Sheikh Muhammad Samman's manuscripts above. The word "food" is also interpreted as a remembrance formula taught in the Sammaniyah (Zulkifli, 2001: 52). The sixth word: his advice is to use the world as a gateway to the afterlife, because the world is a gift from God that is to be commended. And he says: "Do not despise the world, for it is from the great gift of Allah Ta'ala; if there is no world, there is no hereafter" (Zen, 1937: 35). It is interesting to note al-Palimbani's opinion of the world, since in public perception the Sufis are those who leave the world or hate the world. That perception is a misunderstanding, especially when looking at al-Palimbani's words above. His words are closely related to his views on Sufism. His seventh word was to study the books relating to the world of Sufism authored by the Sufis. The expression is as follows: And he says: "Make clear to you the books of the Sufis, and then he is beyond all knowledge and beyond the reach of the eyes" (Zen, 1937: 36). The eighth word covers many things, namely: (1) receiving knowledge from anyone who can draw one close to Allah Ta'ala, as long as they believe in the truth; (2) easing the path to Allah SWT; (3) paying attention to manners in the orders (Zen, 1937: 36). ABDUSSHAMAD AL-PALIMBANI'S MOVEMENT IN THE SPREAD OF ISLAM The movement in the spread of Islam by Abdusshamad al-Palimbani consisted of (1) an invitation to have good character, namely: sincere worship only for God, telling the truth, and purifying the heart; (2) an invitation to increase remembrance; (3) an invitation to respect teachers; (4) an invitation to be a person who practices the Thariqat; (5) an invitation to do wirid; (6) an invitation to utilize the world for the afterlife; (7) an invitation to learn Sufism; (8) an invitation to accept truths that can draw one closer to God and to pay attention to etiquette in the thariqat. First: worship with sincerity, which is a condition of the acceptance of a servant's deeds. There are three degrees of sincerity: (a) worship performed solely to achieve God's pleasure, not expecting heaven, nor for fear of the torments of hell; (b) worship performed for seeking reward and heaven or fearing His torment; (c) worship performed out of a desire for glory. The first is the highest degree of ikhlash, the second is the middle degree of sincerity, the third is the lowest degree. Apart from these three degrees are riya' and sum'ah (showing off good deeds with an interest in becoming famous).
Shidq, or telling the truth, and purifying the heart remain related to ikhlash, because sincerity in worship is difficult if a person does not have shidq and a clean nature. Therefore, character before God in the form of sincerity in deeds is very important as a purpose of da'wah. Secondly, it invites humans to multiply dhikr. Basically, dhikr is expected to produce spiritual clarity in a person, so that it affects attitude and behavior. There are several effects that can result, namely: 1. Softening a person's heart so that he tends to accept and follow the instructions (guidance) 2. Awakening awareness that God is the regulator 3. Improving the quality of worship 4. Protecting oneself from the temptations of the devil 5. Keeping oneself from committing ma'siat (Arsam, 2012: 115) It can be understood that the effect of remembrance on a person is part of the purpose of preaching, because it invites humans to always remember their Creator, so that their behavior is ordered according to the behavior of the pious. Third, inviting humans to have manners towards their teacher. In the book "Hidayatush Shalihin" written by Abdusshamad al-Palimbani, it is explained that there are eleven manners which a student must have. Among them are giving greetings, asking permission when asking, not looking left and right when sitting in front of the teacher, and standing up when the teacher stands (Abdusomad al-Palimbani, 2006: 171). Fourth, da'wah is intended to invite the mad'u to follow the tarekat, especially the Samaniyah thariqat. Thariqat is a way, instructions for performing an act of worship according to the prescribed teachings, exemplified by the Prophet Muhammad shollallahu alayhi wasallam, practiced by the companions and tabi'in, and handed down through the teachers in connected, linked chains. Teachers who provide guidance are usually called murshid. All guidance given by a teacher to his students in matters of worship is called thariqat. The important part of this practical guidance concerns matters relating to remembrance and its procedures. The Samaniyah Order is a thariqat attributed to Muhammad Samman, a famous tarekat teacher in Medina. Its remembrance is known by the name Ratib Saman (Aboebakar Atjeh, 1966: 338-340). Ratib Saman begins by chanting the name of Allah, the Most Gracious and Merciful, "Bismillahirrahmanirrahim," followed by the saying of forgiveness, asking Allah SWT for forgiveness. Then comes the recitation of surah al-Fatihah addressed to the prophet Rasulullah shollallahu alayhi wasallam, and the recitation of sholawat delivered to Rasulullah, to all the prophets and Apostles, to the angels, and to the sholih servants of Allah among the inhabitants of heaven and earth. Furthermore, an expression of prayer was given to God to give pleasure to the companions of the Prophet. O David! Be gentle in conversation and be modest in your dress. The fame of your name among the public will not be identical forever (with that which is obtained) in the hereafter (Ali Usman et al., 1975: 213). The spread of Islam is aimed at reminding people not to be like dogs fighting over rotten carcasses by making the world the destination of life. What is opposed is not the world as such; a worldly life lived with attention to Allah's laws will give good results in the hereafter. The world is a means given by God to humans to achieve happiness in the afterlife.
To achieve happiness, one needs to have a correct understanding of the world. That is why people need to be reminded of their duty in this world: to serve God, not the world. Seventh, studying Sufism. The purpose of da'wah according to Sheikh Abdusshamad al-Palimbani is to invite people to learn Sufism, because studying Sufism is fardhu 'ain. According to Sayyid Abdul Qadir Ali Idrus in the book "ad-Daaru as-Tsamin", as quoted by Abdusshamad al-Palimbani in Hidayatus Shalihin, the knowledge that is required as fardhu 'ain comprises three matters: first, the knowledge of monotheism, which is called the science of usuluddin; second, the science of syara', which is called the science of jurisprudence (ilmu fiqh); third, the inner science, which is called Sufism. As for the extent of fardhu 'ain in the inner science, or the knowledge of Sufism, it is knowing everything that will keep one's deeds from becoming corrupted, such as knowing what can cancel the reward of one's prayer, the reward of one's fast, and so on (Abdusshamad al-Palimbani, 2006: 5). Abdusshamad al-Palimbani also argued that if someone wants to succeed in the world and in the hereafter, then his life should be spent studying Sufism and practicing muthola'ah (close study) of it, because it instils so much fear of Allah (Abdusshamad al-Palimbani, 2006: 4). Eighth, the purpose of da'wah is to introduce adab (etiquette) in the thariqat. In everyday life there are some manners
The Impact of Dietary Habits on Sleep Deprivation and Glucose Control in School-Aged Children with Type 1 Diabetes: A Cross-Sectional Study Diet plays a crucial role in managing type 1 diabetes (T1DM). Background/Objectives: This study aimed to determine the impact of nutritional habits on sleep deprivation and glucose control in school-aged children with T1DM. Methods: In this cross-sectional study, nutritional habits and sleep deprivation were assessed in 100 school-aged children with T1DM, aged 7-13 years. The Dietary Habits Index and the Sleep Deprivation Scale for Children and Adolescents were used to evaluate nutritional habits and the level of sleep deprivation. Patients' sociodemographic and nutritional variables were collected through researcher-composed questionnaires. HbA1c levels over the past 6 months were obtained from the patient data system. Results: The study found a moderately strong positive correlation between the Dietary Habits Index score and HbA1c (p < 0.001), with 28% of the variation in HbA1c explained by changes in the Dietary Habits Index score. However, no correlation was found between the Dietary Habits Index score and the level of sleep deprivation. Conclusions: The nutritional habits of school-aged children with T1DM may affect glucose control and sleep deprivation. Therefore, it is important to educate children with T1DM on making healthy food choices to manage their condition effectively. Introduction Type 1 diabetes (T1DM) is an autoimmune disorder caused by the destruction of pancreatic beta cells and the resulting deficiency of insulin. In 2022, it was reported that out of 143,396 diabetics, 29,000 were aged <20 years [1]. According to the International Diabetes Federation Diabetes Atlas, there were 8.75 million people with type 1 diabetes globally in 2022. The peak incidence occurs at ages 5-9 years in girls and 10-14 years in boys [1]. Numerous environmental factors are significant triggers of T1DM, with diet and microbiota effects on inflammation being two primary contributors.
Diet is one of the major cornerstones in the management of diabetes. The primary goals of a healthy and balanced diet are to control weight, maintain normal blood glucose levels, prevent complications due to high or low blood glucose levels, and ensure proper growth. An appropriate eating pattern includes main meals and snacks at regular mealtimes [2]. Dietary recommendations for children with diabetes are the same as those for healthy children. According to the International Society for Pediatric and Adolescent Diabetes guidelines, the daily energy intake should be 45 to 50 per cent from carbohydrates, 30 to 35 per cent from fat (saturated fat < 10 per cent), and 15 to 20 per cent from protein [3]. However, some school-aged children with diabetes do not meet these nutritional requirements. A healthy, balanced diet includes consuming low glycemic index foods, reducing dietary cholesterol and saturated fat, and increasing the intake of fruits, vegetables, and whole grains [4]. In addition to macronutrients, micronutrients are crucial for a balanced diet, particularly during childhood, which is a critical phase for growth. Poor dietary choices can lead to health issues such as anemia, growth retardation, and anorexia due to vitamin and mineral deficiencies. A Mediterranean-style diet may be beneficial for managing diabetes and other inflammatory diseases [5][6][7]. Microbial diversity and the gut microbiota are other key factors in the development of diabetes. A healthy gut barrier prevents the entry of harmful substances into the bloodstream, reducing the risk of inflammation and immune system activation [8]. The gut microbiota interacts with diet and influences health outcomes [9]. Intestinal permeability and barrier dysfunction can trigger the onset and progression of T1DM, affecting the immune system and activating inflammation. Diet influences gut microbiota diversity, particularly in butyrate-producing communities [10]. Another factor that affects the health of diabetic children is sleep. Sleep is essential for improving attention, behavior, memory, emotional regulation, and overall quality of life. For school-aged children, the recommended average sleep duration is 9-12 h per day [11]. Poor sleep in childhood can predict future obesity, depression, and cardiovascular diseases [12,13]. Insufficient sleep or poor sleep quality can also worsen glucose control in diabetic patients, with higher HbA1c levels observed in those with inadequate sleep [14][15][16][17]. A review of the literature revealed no studies investigating the effect of dietary habits on sleep and HbA1c levels. Therefore, the aim of this cross-sectional study was to evaluate the impact of dietary habits on sleep deprivation and glycemic control in school-aged children with T1DM. Design and Setting This descriptive, correlational, and cross-sectional study was conducted between September 2023 and February 2024. Children aged 7-13 years who were diagnosed with type 1 diabetes and followed by Necmettin Erbakan University, Meram Medical Faculty, Department of Child Metabolism, were enrolled in the study.
The required sample size was calculated to be 68 participants, based on a regression analysis using the G*Power program, with a significance level of 0.05, 80% power, and a medium effect size for regression. To ensure robust results, 119 children who were being followed up in the clinic were initially included. However, 6 children were excluded due to missing records, and 13 were excluded because they did not fall within the specified age range. Consequently, the final analysis evaluated the results of 100 patients. All children included in the study had been previously diagnosed with type 1 diabetes. Variables and Measurements Sociodemographic variables, presence of any other disease, physical activity level, diabetes duration, frequency of blood glucose control, and related data were collected via a researcher-administered questionnaire (Supplementary File). Data were collected through face-to-face interviews. The weight and height of the children were measured at the hospital by the researchers. Body mass index (BMI) was calculated according to the WHO formula: weight (kg)/(height (m) × height (m)). The children's mean HbA1c level over the past 6 months was accessed from the patient data system at the hospital by one of the researchers, who is a pediatric endocrinologist. Additionally, the Sleep Deprivation Scale for Children and Adolescents and the Dietary Habits Index were administered to the school-aged children with type 1 diabetes. Sleep Deprivation Scale for Children and Adolescents Kandemir et al. (2021) developed a scale that aims to determine the sleeplessness level of children and adolescents [18]. The scale consists of 15 items on a Likert scale (ranging from "agree" to "disagree"). Scores on this scale range from 15 to 60 points. Higher scores indicate worse sleep deprivation. The analysis for sampling adequacy yielded a Kaiser-Meyer-Olkin (KMO) value of 0.94, and the Bartlett test result was χ2 = 1833.03 (p < 0.001). Exploratory factor analysis (EFA) indicated that the scale has a single-factor structure, explaining 54.48% of the variance, with an internal consistency reliability coefficient of 0.94. Confirmatory factor analysis (CFA) was then conducted to verify the one-dimensional structure. The CFA results showed that the Chi-Square/degrees of freedom ratio was 254.94/65 = 3.92; the RMSEA value was 0.07, and the RMR value was 0.027. The fit indices for the tested model were CFI = 0.94, GFI = 0.91, AGFI = 0.91, IFI = 0.96, NFI = 0.94, and TLI = 0.97. Cronbach's alpha internal consistency value was found to be 0.94. Dietary Habits Index Dietary habits were assessed using the 6-item Dietary Habits Index, which was developed by Demirezen E. (1999) and revised in a 2005 study [19,20]. The risk level of dietary habits was evaluated based on the total score obtained from the Dietary Habits Index. Scores on this index range from 0 to 24 points. According to the assessment criteria, a score of 0 indicates no nutritional risk, a score of 1 to 6 indicates a low nutritional risk, a score of 7 to 12 indicates a moderate nutritional risk, a score of 13 to 18 indicates a high nutritional risk, and a score of 19 to 24 indicates a very high nutritional risk. The Cronbach's alpha internal consistency value for this index was found to be 0.28.
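Both instruments above are summarized by Cronbach's alpha (0.94 and 0.28). For reference, the coefficient follows a standard formula and is straightforward to compute from an item-score matrix; the sketch below uses randomly generated responses purely as a placeholder for real questionnaire data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

# Placeholder: 100 respondents answering the 15-item sleep scale (1-4 Likert).
rng = np.random.default_rng(1)
responses = rng.integers(1, 5, size=(100, 15))
print(round(cronbach_alpha(responses), 3))  # near 0 for uncorrelated answers
```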
Data Analysis Data analysis was performed using the SPSS 24.0 program (SPSS, Chicago, IL, USA). Descriptive data were presented as frequency, arithmetic mean, minimum, maximum, standard deviation, and percentage. The effect of dietary habits on sleep deprivation and glucose control was evaluated using multiple regression analysis. Multicollinearity was examined using the variance inflation factor (VIF) and tolerance value. It was determined that the VIF values were less than 10 and the tolerance values were greater than 0.2, indicating no multicollinearity. The relationship between dietary habits, sleep deprivation, and glucose control was assessed using Pearson's correlation analysis. The level of statistical significance was set at p < 0.05. Ethical Considerations The study was conducted in accordance with the Declaration of Helsinki. Ethical approval was obtained from the Medicine Faculty Ethics Board of KTO Karatay University (Number: 2023/037, Date: 21 September 2023). Additionally, permission was obtained from the university where the study was conducted. Informed consent was obtained orally from all the children and in writing from their parents. Results In this study, 53% of the children were female, and the mean age was 10.14 ± 1.79 years. The mean weight was 40.01 ± 11.73 kg for males and 36.78 ± 10.14 kg for females. The mean height of the children was 140.79 ± 14.27 cm. Among the children, 26% had other health problems, with coeliac disease being the most prominent, diagnosed in 18 children. Regarding the children's diabetes characteristics, the mean duration of diabetes was 2.81 ± 2.01 years, and the mean age at diagnosis was 7.05 ± 2.21 years (Table 1). The mean score of the Dietary Habits Index among school-aged children with type 1 diabetes was 11.59 ± 3.36, indicating a moderate risk. It was found that 46 children fell into the medium-risk group, with scores between 7 and 12 (Table 2). The mean score on the Sleep Deprivation Scale for Children and Adolescents was 32.14 ± 11.46 (Table 2). There was a moderately strong positive correlation between the Dietary Habits Index score and HbA1c level. Additionally, a positive but weak correlation was found between the Dietary Habits Index score and the Sleep Deprivation Scale for Children and Adolescents score. No significant relationship was found between sleep disturbance and HbA1c (Table 3). The constant of the regression model was found to be 4.876, with a standardized regression coefficient of 0.528. This indicates that each one-point increase in the Dietary Habits Index score is associated with an average increase of 0.289 in HbA1c. Furthermore, 28% of the variation in HbA1c was explained by changes in the Dietary Habits Index score (Table 4). The constant of the second regression model was 19.907, with a standardized regression coefficient of 0.307. This indicates that each one-point increase in the Dietary Habits Index score is associated with an average increase of 1.057 in the Sleep Deprivation Scale score (Table 5).
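The analyses above were run in SPSS; for readers who want to reproduce the same steps, the sketch below shows the equivalent Pearson correlation and simple linear regression in Python. The dataframe and its column names are hypothetical stand-ins for the study data:

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

# Hypothetical stand-in for the study dataset.
df = pd.DataFrame({
    "dietary_habits_index": [11, 9, 14, 7, 16, 12, 10, 13],
    "hba1c": [7.9, 7.1, 8.8, 6.9, 9.4, 8.1, 7.4, 8.5],
})

# Pearson correlation between diet score and HbA1c, as in Table 3.
r, p = pearsonr(df["dietary_habits_index"], df["hba1c"])
print(f"r = {r:.3f}, p = {p:.4f}")

# Regression of HbA1c on the diet score; the fitted slope plays the role of
# the 0.289 unstandardized coefficient reported above.
X = sm.add_constant(df["dietary_habits_index"])
fit = sm.OLS(df["hba1c"], X).fit()
print(fit.params)      # intercept and slope
print(fit.rsquared)    # share of HbA1c variance explained
```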
Discussion More than 1.1 million children and adolescents worldwide are being followed up with a diagnosis of T1DM. More than 86,000 children under 15 years of age develop T1DM annually. A study conducted in 2013 found that the incidence in our country was 10.8 per 100,000 per year [21]. From the moment of diagnosis, children, adolescents, and their parents are responsible for intensive and complex diabetes management, which requires constant attention and effort [22]. Diabetes management involves regular blood glucose monitoring, multiple insulin injections, maintaining a healthy diet and activity level, adjusting insulin doses to suit life patterns, and regular hospital visits [22,23]. The goal of managing children with type 1 diabetes is to ensure continued growth and development while achieving optimal glycemic control. This approach protects the child from acute and chronic complications. Dietary habits play a crucial role in ensuring proper nutrition. Given this context, a cross-sectional study was designed to examine the impact of dietary habits on sleep deprivation and glucose control among school-aged children with T1DM. In this study, the mean score of the Dietary Habits Index indicated a moderate nutritional risk. One of the main findings is that the dietary habits of school-aged children with type 1 diabetes can significantly impact blood glucose control: a moderately strong, significant positive correlation was found between the Dietary Habits Index and HbA1c levels, with each one-point increase in the Dietary Habits Index score associated with an average HbA1c increase of 0.289. A study by Seckold et al. included 22 children aged 4.9 ± 1.3 years with an HbA1c of 6.4% ± 0.9% [24]. The study concluded that the quality of diet is a concern in young children with T1DM, characterized by excessive saturated fat and inadequate vegetable intake. Similarly, Tayyem et al. identified dietary patterns associated with glucose control in 107 children and adolescents with T1DM. The study revealed that only 25.7% of the participants had good glycemic control. Overall, three dietary patterns were identified: "High-Vegetables", "Unhealthy", and "High-Fruits". The "High-Vegetables" dietary pattern demonstrated a protective relationship in controlling HbA1c levels, particularly in the second and third tertiles [25]. The traditional Mediterranean diet is characterized by a high consumption of vegetables, fruits and nuts, legumes, and unprocessed cereals, and a low consumption of meat, meat products, and dairy products. A study conducted by Dominguez-Riscart et al. (2022) reported improved HbA1c levels in 97 individuals with type 1 diabetes who adhered optimally to the Mediterranean diet [26]. Similarly, a study aimed at examining changes in diet composition over 10 years in children and adolescents with type 1 diabetes involved 229 participants aged 6 to 16 years. The study found that the diet composition of these children and adolescents changed, leading to improved neurometabolic control [27]. Bodur et al. (2021) conducted a cross-sectional study of adolescents (10-19 years old) with type 1 diabetes, showing a weak and negative relationship between the diet quality scores of the male participants and their waist circumference and HbA1c levels (p < 0.05) [28]. Nansel et al.
(2016) investigated the association between dietary intake and several indicators of blood glucose control in 136 adolescents with type 1 diabetes participating in an educational nutrition intervention. They found that both the overall quality of the diet and the distribution of macronutrients were associated with more optimal glycemic control [29]. Conversely, Zhou et al. (2021) found that daily food intake did not affect glucose control in adults with type 1 diabetes, but C-peptide levels influenced glycemic variability independently of mean blood glucose [30]. Most studies have focused on individuals with type 2 diabetes, with comparatively less research examining sleep in those with type 1 diabetes [31]. The interaction between sleep parameters and type 1 diabetes is crucial, as blood glucose control plays a significant role [32]. In a systematic review of sleep and type 1 diabetes, adults with type 1 diabetes reported poorer sleep quality, but not shorter sleep duration, compared with adults without diabetes [17]. Another study found a significant relationship between increased HbA1c levels and the prevalence of sleep disorders, suggesting that sleep disorders can increase HbA1c levels and may be a risk factor for elevated HbA1c [33]. Jaser et al. conducted a study with 2- to 12-year-old participants with type 1 diabetes and found that 67% of children met the criteria for poor sleep quality. Sleep quality was associated with glycemic control but not with average glucose monitoring [34]. This study aimed to determine the effects of dietary habits on HbA1c levels and sleep deprivation in school-aged children with type 1 diabetes. It is known that sleep deprivation increases as the score obtained from the scale increases. A moderate and significant correlation was found between the Dietary Habits Index and the Sleep Deprivation Scale for Children and Adolescents (p < 0.05). The study revealed that an increase in the Dietary Habits Index score could lead to an average increase of 1.057 in the sleep deprivation score. The change in the Dietary Habits Index score accounted for 0.9% of the variation in the sleep deprivation score. Conversely, a study by Brandt et al. (2020) involving 20 people with type 1 diabetes found no association between sleep quality and the time spent in the target glucose range or above or below the target glucose range [35]. Similarly, Corrado et al. (2023) conducted a cross-sectional study with 120 adults with type 1 diabetes and found no differences in postprandial blood glucose levels between participants with poor and good sleep quality [36]. The study has certain limitations due to the nature of the sample and the methodology employed. Firstly, the study was designed as cross-sectional. Additionally, dietary habits were determined using a scale rather than food frequency records of actual food consumption. Despite these limitations, the significance of this study lies in assessing the effects of dietary habits on both glycemic control and sleep deprivation among school-aged children with type 1 diabetes.
Conclusions The data indicate that dietary habits affect not only HbA1c levels but also the degree of sleep deprivation in children with type 1 diabetes. It is important to remember that the dietary habits of school-age children are influenced by many environmental factors. Therefore, healthcare professionals, particularly dietitians, should regularly assess the dietary habits of children with type 1 diabetes and provide guidance on recommended dietary practices. In addition to nutritional advice from health professionals, web-based education or mobile applications may be recommended to provide both children and parents with up-to-date information on nutrition in type 1 diabetes. Further prospective, randomized, large-sample research is needed to confirm the potential therapeutic implications and longer-term outcomes. Additionally, the effects of dietary habits on sleep deprivation and glucose control in children of different age groups with type 1 diabetes should be examined in further studies. Table 2. Mean scores of the Dietary Habits Index and the Sleep Deprivation Scale for Children and Adolescents. * This score is valid for 97 participants; M, mean; SD, standard deviation. Table 3. The relationship between scale scores and HbA1c levels in school-aged children with type 1 diabetes. Table 4. The prediction of the Dietary Habits Index score on HbA1c level in school-aged children with type 1 diabetes. Table 5. The prediction of the Dietary Habits Index score on the Sleep Deprivation Scale for Children and Adolescents score in school-aged children with type 1 diabetes.
The control method of ropes slip of a mine winder with friction pulley

The reasons for the development of an emergency mode of rope slip along the friction pulley are considered. The process of power conversion is described, and a system of equations for the movement of the vehicles and the electrical values of the drive is composed. A joint simulation of the mechanical and electrical values in the electromechanical complex is performed.

Introduction
Cage and skip hoists are the main technological units that support the operation of underground mining enterprises. The safety and productivity of the enterprise depend on the reliable and trouble-free operation of the lifting equipment. Parts and units of mine winders are subjected to significant loads during operation, and these loads lead to gradual wear and eventual failure of the mechanism. Personnel access to rapidly wearing parts and to components of the fire alarm system is limited, which prevents continuous assessment of their condition. For this reason, faults and accidents are not detected in a timely manner, and their nature is often misidentified, which greatly increases production losses. Accordingly, timely and accurate determination of the location and nature of damage is an important production task [1].

Problem description
A skip hoist with a friction pulley includes a complex set of components and units [2,3]:
• the base part of the hoist (figure 1): friction pulley, deflector pulleys, braking devices, gearbox, and mine hoist control unit (HCU);
• hoist vessels (skips) with pull-type devices and ropes;
• loading devices (metering devices), the mine shaft with guides, and dump tracks;
• an electric DC twin-motor drive with a control circuit.

During the technological shift, the whole mine hoist complex is monitored from the control panel by the skip-hoist operator. In addition, circuits and devices are used to monitor individual components or emergency situations of the winder. Emergency monitoring is usually carried out by various kinds of sensors: over-hoisting (overwind) of the skip at the 1st and 2nd positions, the weight of the load in the skip, brake pad wear, overspeed, etc. The total number of sensors on the skip hoist is approximately twenty.

Emergency slip of the ropes along the traction sheave is typical for winders with a friction pulley. It is caused by a local reduction of the friction coefficient of the pulley lining and by dynamic phenomena in the rope in the friction zone of the lining-rope pair.

Figure 1. General view of a mine winder: 1 - deflector pulleys, 2 - friction pulley, 3 - gearbox, 4 - engines, 5 - air intake, 6 - rope, 7 - counterweight, 8 - foundation, 9 - control panel, 10 - braking device with two crank levers.

The safety of hoisting and the reliability of winder operation largely depend on the design and lining material of the driving pulleys. The lining provides the necessary adhesion of the hoisting ropes to the traction sheave, long (2-3 years) hoisting performance at a given pressure and temperature without rope slipping, and load balancing. Currently, the PP-45 plastic compound, a polyvinyl chloride plasticized with dioctyl phthalate, is mainly used as the lining of the drive and deflector pulleys of winders [4]. Paired with stranded ropes, this lining allows a design friction coefficient of 0.25 and a specific pressure of 2 MPa. The surface of the pulleys of domestic multi-rope winders is lined with bars of commercially available PP-45 plastic.
Lining pads are pressed against the shell of the pulley by fastening wedges. The cross-section of the lining blocks is standardized. The lining of rope-guide pulleys is re-grooved with a special device when the difference in the diameter of the rope grooves exceeds 0.5 mm. The main malfunctions of the traction sheaves are mainly related to wear of the lining and its fastening. The large number of bolt holes for attaching the lining weakens the shell and causes cracking. Unequal rope tension causes uneven wear of the lining. A disadvantage of the lining is that when heated (up to 100 °C) it softens and its friction coefficient decreases, while at low temperatures it loses its frictional properties. When the rope slides 20-30 cm along the lining, the upper layer of the lining begins to melt. Thus, even a minor disturbance of the normal mode can cause the rope to slip along the traction sheave, melting the lining and creating an emergency situation.

To control rope slip, schemes are used that compare the rotational speeds of the traction sheave and the deflecting pulleys; sometimes the skip speeds are additionally measured [5]. These schemes require a significant amount of additional equipment and are therefore not very reliable. Taking into account the size and mass of the vessels, the hoisting speed, and the depth of the shaft, the consequences of emergency rope slip over the pulley are serious. In this regard, timely and accurate detection of rope slip is an important production task. In a production environment, the method of energy evaluation of the drive performance [1] can be used for this purpose.

Results and discussion
To register an emergency, according to [1], information on the electrical values of the winder drive can be used. In the case of a DC drive, this information comprises the motor armature current, the excitation winding current, the motor armature voltage, and the rotation frequency [1,6]. Based on the results of [1,6], it is possible and appropriate to use information about the electrical values of the winder drive to detect the emergency situation of rope slip along the traction sheave. To apply this control method, it must be kept in mind that the operation of a skip hoist is accompanied by a continuous process of converting one type of energy into another. The electrical energy of the source (the electrical network) is converted into the energy of rotational motion on the motor shaft and is then used in the mechanism to perform useful work. According to the law of conservation of energy, and with allowance for losses, the amount of energy converted from one form to another must be the same over any time interval. The power values of the various processes are described by the following well-known expressions:
• power of the three-phase AC network:

P = √3 · U · I · cos φ,

where U is the line voltage, V; I is the phase current of the network, A; cos φ is the power factor;
• electrical power at the DC terminals of the converter and the armature circuit of the driving motor:

P = U · I,

where U is the voltage of the armature circuit, V; I is the current of the armature circuit, A;
• mechanical power on the motor shaft:

P = M · ω,

where M is the mechanical torque of the electric motor, N·m; ω is the angular frequency of shaft rotation, s⁻¹.
• total power of the moving and rotating parts of the mechanism:

P = (F_nat1 − F_nat2) · υ + J · ε · ω,

where F_nat1 and F_nat2 are the tension forces of the ropes, N; υ is the linear speed of the skips; ω is the frequency of motor shaft rotation; J is the reduced moment of inertia; ε is the angular acceleration of the drive.

Any emergency situation (including slippage of the rope on the friction pulley) leads to a deviation of the mechanical torque on the shaft of the hoist motors from the value that is normal for the given mode [6]. As a result, the electrical values of the winder drive change accordingly [7].

The design scheme for the joint modeling of mechanical and electrical values in the electromechanical complex is shown in figure 2, where m₁, m₂, m₃ are the masses of the rotating parts, vehicles, and ropes reduced to the movement of the ropes; x₁, x₂, x₃ are the displacements of the reduced masses; k and F_fr are the coefficient and force of friction of the rope against the lining; i_a and i_e are the armature and excitation currents of the electric motor; ρ is the coefficient of reduction of the motor dynamics to the movement of the ropes; c_e is the constructive constant of the electric motor; L_a, L_e, L_m are the inductances of the armature winding, the excitation winding, and the mutual inductance; r_a and r_e are the active resistances of the armature and excitation windings; U_a and U_e are the supply voltages of the armature and excitation circuits.

Simulation of the skip hoist operation in normal mode and in the emergency situation of rope slipping along the traction sheave was performed in the MATLAB environment. Emergency slippage of the rope was simulated by reducing the value of the rope-lining friction coefficient in equations (6). The results of modeling the movement of the vehicles in the shaft and the occurrence of rope slippage are shown in figure 3. As the friction coefficient decreases (rope slippage), a deviation from the normal value is recorded on the armature current diagram by the measuring means. The deviation from the normal current values on the current diagram gives information about the location of the fault and its severity (figure 3). With additional studies, most types of faults in the mechanical part of the unit can be identified, since the nature of the current deviation from the normal values is related to the type of fault in the mechanical part mentioned above.
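As a minimal sketch of this idea (not the authors' MATLAB model), the following Python fragment integrates a simplified separately excited DC motor coupled to a friction-pulley load and shows how a mid-run drop in the rope-lining friction coefficient propagates into the armature current, which is the deviation the energy-based method monitors. All numeric parameter values are assumptions chosen only to make the example run.

```python
# Minimal sketch, not the paper's model: a separately excited DC motor driving
# a friction-pulley load. A mid-run drop in the friction coefficient k reduces
# the torque the pulley can transmit, and the armature current deviates from
# its normal-mode trace -- the signature the control method looks for.
import numpy as np

Ua, ra, La = 440.0, 0.05, 0.01   # armature voltage (V), resistance (ohm), inductance (H)
ce_phi = 4.0                      # c_e * flux: back-EMF / torque constant
J = 1200.0                        # reduced moment of inertia (kg*m^2)
T_load = 9000.0                   # torque demanded by the vehicles (N*m)
k_design = 0.25                   # design friction coefficient of the lining

dt, t_end, t_slip = 1e-3, 15.0, 5.0
t = np.arange(0.0, t_end, dt)
ia = np.empty_like(t)             # armature current trace
w = np.empty_like(t)              # shaft angular speed trace
ia[0] = T_load / ce_phi           # start in normal-mode steady state
w[0] = (Ua - ra * ia[0]) / ce_phi

for n in range(1, len(t)):
    k = k_design if t[n] < t_slip else 0.5 * k_design   # local lining failure
    # Crude adhesion model: transmitted torque scales with the friction coefficient
    T_transmitted = (k / k_design) * T_load
    ia[n] = ia[n-1] + dt * (Ua - ra * ia[n-1] - ce_phi * w[n-1]) / La
    w[n] = w[n-1] + dt * (ce_phi * ia[n-1] - T_transmitted) / J

i_before = ia[int((t_slip - 0.1) / dt)]
print(f"armature current before slip: {i_before:.0f} A, at end of run: {ia[-1]:.0f} A")
```

With these assumed values, the current settles near 2250 A in normal mode and drifts toward roughly half of that after the slip begins, so a threshold on the deviation of the measured armature current from its normal-mode diagram flags the fault without any extra speed sensors.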
Review of hierarchical database access control for E-medicine systems

ABSTRACT Key management schemes for hierarchical access control enable users who have hierarchical relationships with each other to manage their secret keys efficiently. In these schemes, the users are divided into several groups, and each group has its own central authority. Each central authority is responsible for setting parameters and generating users' secret keys in a hierarchical structure, such that all users can efficiently derive their secret keys and dynamic access control problems can be solved. Several key management schemes compliant with Health Insurance Portability and Accountability Act (HIPAA) regulations were recently proposed for hierarchical access control in e-medicine systems. However, these schemes either are insecure or require a large amount of storage and heavy computations. Therefore, this study reviews and discusses hierarchical access control schemes with privacy/security regulations for medical record databases.

Introduction
Background
With the rapid development of the Internet, the medical records of hospitals and medical organizations are moving toward electronic medical information, and electronic medical records have become an important research topic in e-medicine systems. To protect the security of medical record data and patient privacy, a secure access control mechanism is essential. The electronization of medical information can reduce wasted administrative costs while increasing the quality and efficiency of medical care. The benefits brought about by the electronization of medical information have led governments around the world to invest substantial resources in building the relevant systems. Given the characteristics of medical service provision, there is in practice little choice in the collection and use of medical information and in the exercise of electronic consent. Therefore, because patients' control over their medical information is weaker than their control over general information, the subject's privacy must be maintained through rigorous information consent and confidentiality mechanisms. However, the organizational structure of personnel in medical institutions is large; if the authorization mechanism for managing personnel is poorly designed, data can be stolen and leaked, and the system can suffer from poor performance due to the huge amount of computation and storage space required. Efforts on medical information security include the establishment by the department of health of a medical electronic authentication mechanism based on public key cryptography to ensure the safety of the electronic operation of medical information. The department of health has gradually completed the establishment of the "Healthcare Certification Authority" and has begun to issue and use credential IC cards for medical institutions and medical personnel. The main purposes are to prevent the leakage of private or sensitive information generated when people seek medical treatment, to support the completion of relevant laws and regulations, and to actively plan related medical information applications such as electronic medical records.

The structure of hierarchical database access control
Hierarchical access control divides users into many groups, and users are divided into different security class (SC) sets according to their permissions, where SC = (SC1, SC2, …, SCN). The SCs in the hierarchy have a privilege order relationship.
When SCj ≤ SCi, the privileges of SCi are greater than those of SCj; SCi is called an ancestor of SCj, and SCj is called a successor of SCi. The relation is denoted (SCi, SCj) ∈ Ri,j. When SCj ≤ SCi and there exists no SCk with SCj ≤ SCk ≤ SCi, SCi is called the immediate predecessor of SCj, and SCj is called the immediate successor of SCi. The certification authority (CA) generates a suitable key and public parameters for each SC. A user only needs to store one secret key; the successor's key can then be derived using this secret key together with the public parameters, in order to access the files corresponding to the user's permissions. Thus, the problems of repeated key storage and difficult key management can be overcome [3-5].

Figure 1 illustrates the structure of hierarchical access control. SC denotes a security class, and the CA generates a key for each SC. SC1 has the highest permission and can use its own key, together with the public parameters, to derive the keys of the other SCs and access their files; an SC with lower permissions cannot derive the keys of SCs with higher permissions, which achieves the confidentiality property of data access.

Hierarchical access control mechanisms are divided into dependent-key and independent-key mechanisms. With dependent keys, deriving a subordinate key requires computing all of the keys in the SC interval from the user's own key and the public parameters (indirect key derivation); with independent keys, only one operation on the owned key and the parameters is needed (direct key derivation). For example, in Figure 1, when SC1 attempts to obtain the key of SC5 in the dependent-key manner, it first needs to compute the key of SC2, which lies between SC1 and SC5, and then use the computed key of SC2 to compute the key of SC5 [3].

The personnel organization structure of medical institutions is large, and personnel in different departments can access different information. In general, a hospital organization has few classes but many departments. Therefore, the management of a hospital organization with many departments focuses on:
1. using a small number of parameters to reduce the difficulty of management;
2. rapid generation and derivation of keys; and
3. dynamic updating and management of keys.

The remainder of this investigation is organized as follows. Section 2 reviews hierarchical access control schemes, including access control schemes compliant with privacy/security regulations and hierarchical database access control schemes. Section 3 provides a performance comparison of related works. Section 4 presents the analysis and discussion. Finally, Section 5 draws conclusions and outlines future work.

Access control schemes compliant with Health Insurance Portability and Accountability Act privacy/security regulations
In 1996, the United States passed the HIPAA Act, so that the privacy of patients' personal medical records became protected by law. In recent years, many studies have been presented on HIPAA-compliant access control. For example, in 2008, Lee and Lee [6] proposed a HIPAA-compliant electronic medical information system: a health-data-card-based electronic health-care plan in which patients use smart cards for secure storage and retrieval of protected health information (PHI) during treatment consultations. Symmetric encryption/decryption keys based on the health-care provider's session architecture are used for PHI data confidentiality.
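To make the dependent-key (indirect) derivation described in the structure section concrete, here is a minimal Python sketch of a hash-based hierarchy; the use of SHA-256 and the key construction are illustrative assumptions in the spirit of the hash-based schemes reviewed below, not the construction of any specific cited scheme.

```python
# Minimal sketch (illustrative, not any specific cited scheme): a dependent-key
# hierarchy in which each security class SC_j's key is derived from its
# immediate predecessor's key and a public per-class identifier, so an ancestor
# can walk down the hierarchy (indirect derivation) while a successor cannot go
# up, because the hash function is one-way.
import hashlib

def derive_child_key(parent_key: bytes, child_id: str) -> bytes:
    """K_child = H(K_parent || child_id); child_id plays the public-parameter role."""
    return hashlib.sha256(parent_key + child_id.encode()).digest()

# The CA sets up a toy chain SC1 -> SC2 -> SC5, as in the Figure 1 example.
k_sc1 = hashlib.sha256(b"CA master secret (assumption)").digest()
k_sc2 = derive_child_key(k_sc1, "SC2")
k_sc5 = derive_child_key(k_sc2, "SC5")

# Indirect derivation: SC1 reaches SC5's key by first recomputing SC2's key.
assert derive_child_key(derive_child_key(k_sc1, "SC2"), "SC5") == k_sc5

# SC5 stores only k_sc5; inverting SHA-256 to recover k_sc2 or k_sc1 is infeasible.
print("SC1 derived SC5's key via SC2:", k_sc5.hex()[:16], "...")
```

Independent-key schemes instead publish extra parameters so that any authorized ancestor can derive a descendant's key in a single operation, trading a larger public directory for faster derivation; that trade-off is exactly what the storage and computation comparisons in Section 3 quantify.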
A weakness of Lee and Lee's mechanism is that the smart card cannot be queried remotely through the network, and multiple queries of the patient's PHI cannot be performed simultaneously. Subsequently, Hu et al. [7] in 2010 and Huang and Liu [8] in 2011 enhanced the scheme of Lee and Lee [6] and developed better solutions. In 2010, Hu et al. [7] proposed an e-health system for HIPAA privacy and security regulations that uses a hybrid security mechanism based on public key infrastructure (PKI) and Medicare smart cards and provides access to PHI held on the Medical Center Server (MCS). Patient consent is not required during storage and retrieval; once the phase task is completed, the patient's PHI record is deleted, so the patient cannot obtain a copy of his or her PHI for subsequent treatment sessions, and the mechanism does not take into account the legal requirements [3] for patient-consent exceptions. Therefore, if an emergency occurs, it still cannot be handled correctly in accordance with HIPAA regulations. In 2011, Huang and Liu [8] proposed an efficient key management scheme compliant with HIPAA regulations. Their scheme was based on elliptic-curve cryptography (ECC) and facilitates interoperability between the applied cryptographic mechanisms. In 2014, Ray and Biswas [2] proposed a solution similar to the scheme of Hu et al. [7] to comply with HIPAA privacy/security regulations. Their scheme was developed using a public-key-encryption-based e-health system architecture, using contracts and smart cards with RSA signature technology to protect users' PHI data. This scheme addresses emergency inquiries and data sharing with external medical centers, but it does not provide user anonymity, prevent insider attacks, or safeguard data security. In 2014, Lee et al. [9] proposed using an N-degree Lagrange interpolating polynomial to effectively overcome the shortcomings of the schemes of Hu et al. [7] and of Huang and Liu [8] in key authorization. Their scheme stores the keys of the patients and of the MCS in their own smart cards; key generation requires the patient's key and a master key generated by a linear equation. However, if the patient's smart card is obtained by an attacker, the security of the key may be in doubt, threatening the confidentiality of the medical record information.

Hierarchical database access control schemes
Hierarchical database access control schemes can be classified into PKI-based schemes and schemes without PKI. The former use a public key cryptosystem in the process of key derivation, while the latter do not.

Public key infrastructure-based hierarchical database access control schemes
In 1983, Akl and Taylor [3] first proposed a key management scheme for hierarchical database access control, and many related schemes were subsequently proposed. These schemes still require a large amount of computation and huge storage space, and some are vulnerable to security threats. In addition, when the database hierarchy is complex, their efficiency gradually decreases, and dynamic management of the keys is not easy to carry out. In 2006, Jeng and Wang [4] proposed an efficient hierarchical access control key management mechanism based on polynomials and elliptic-curve public key cryptosystems to solve the hierarchical access control problem.
Each class in the hierarchy is allowed to select its own secret key. The problem of efficiently adding or deleting classes can be solved without regenerating the keys of all users in the hierarchy, as was required in previous schemes, and the scheme is much more efficient and flexible than previously proposed schemes. In 2008, Chung et al. [5] proposed a novel hierarchical access control key management scheme based on an elliptic-curve cryptosystem and a one-way hash function to solve dynamic access problems in a user hierarchy. In 2010, Nikooghadam et al. [10] proposed a hierarchical access control key management mechanism based on elliptic-curve encryption keys; although computing efficiency was improved, their scheme uses the elliptic-curve cryptosystem and still requires heavy computations. In 2012, Das et al. [11] pointed out that the schemes proposed by Jeng and Wang [4] and Chung et al. [5] had a key-leakage security problem, and they proposed an improved hierarchical access control key management mechanism to solve it. In 2012, Wu and Chen [12] pointed out that the scheme of Nikooghadam et al. [10] lacked a formal security analysis and used elliptic-curve encryption and decryption operations that are slower than symmetric encryption and decryption operations. Wu and Chen also developed a hybrid hierarchical access control scheme for electronic medical systems, adopting elliptic-curve and symmetric encryption/decryption systems to improve operational efficiency. Subsequently, Nikooghadam and Zakerolhosseini [13] found that the scheme of Wu and Chen could not effectively resist man-in-the-middle attacks. To address this problem, elliptic-curve signatures were used in their newly developed scheme; however, it requires many computational operations in the verification process.

Hierarchical database access control schemes without public key infrastructure
In 2013, Odelu et al. [14] proposed an efficient key management scheme for hierarchical access control in e-medicine systems. Their scheme uses symmetric encryption/decryption and hash functions, which greatly reduces the complexity of parameter storage and computation. Although the number of parameters is reduced in their scheme, many parameters are still required when the hierarchy is complex. In 2017, Chao et al. [15] proposed an improved hierarchical access control scheme based on the scheme of Odelu et al.

Performance Comparison
This section compares the performance of related schemes for hierarchical access control in terms of storage space complexity and computational complexity. Assume that there are N SCs in the hierarchy, forming the set SC = (SC1, SC2, …, SCN), and that each SCi has vi higher-authority SCs. Both keys and parameters are 128 bits in length. Table 1 lists the storage space comparison of related schemes for hierarchical access control, comparing the key and parameter space stored in the CA, the SCs, and the public directory, respectively. The storage space of the schemes of Odelu et al. [14] and Chao et al. [15] in the CA, the SCs, and the public directory is significantly lower than that of other related schemes, and these schemes do not generate a large number of parameters when the hierarchy is complex. Table 2 shows the comparison of related schemes for hierarchical access control in terms of computational complexity, which is the sum of the computational complexity of the key generation stage and the key derivation stage.
Comparison of computational complexity
The key generation stage is the computation time required by the CA to generate parameters and keys for each SC, and the key derivation stage is the time it takes for each SC to derive all keys within its own authority. T_MUL denotes the time required to perform a multiplication operation; T_EC-ADD denotes the time required to perform a point addition on an elliptic curve; T_EC-MUL denotes the time required to perform a point multiplication on an elliptic curve; T_SHA1 denotes the time required to perform a hash operation; T_AES denotes the time required to perform a symmetric encryption/decryption; T_XOR denotes the time required to perform an exclusive-OR operation; T_ADD denotes the time required to perform an addition operation. The schemes of Odelu et al. [14] and Chao et al. [15] use AES symmetric encryption/decryption and hash operations, which greatly reduces the computation time compared with previous related works using elliptic curves.

Discussion
From the performance comparisons of related schemes in Section 3, the schemes of Odelu et al. and Chao et al., developed using symmetric encryption/decryption and hash operations, are more efficient than related works in terms of storage space complexity and computational complexity. Based on the studies reviewed in this article, most current lightweight authentication schemes are mainly based on key exchange and agreement. Few studies discuss key management and database access control, let alone schemes applicable to electronic medical records and health-care records under security/privacy regulations. In addition, most studies related to database access control are complex in structure and require heavy computations. Some studies may have security problems, including failure to ensure the confidentiality of medical records, failure to achieve the integrity of medical records, and noncompliance with security/privacy regulations that require keys to be controlled under patient authorization.

Conclusion and future work
This study divides hierarchical access control schemes into access control schemes compliant with HIPAA privacy/security regulations and hierarchical database access control schemes. The hierarchical database access control schemes are further classified into PKI-based schemes and schemes without PKI. A hierarchical database access control scheme without PKI is more computationally efficient and preserves the security requirements. Future work includes other lightweight operations, such as physical unclonable functions (PUFs), and the development of privacy/security-compliant database access control schemes for e-health-care systems.

Financial support and sponsorship
This research was funded by the Ministry of Science and Technology of the Republic of China, grant number MOST 110-2221-E-320-005-MY2.

Conflicts of interest
There are no conflicts of interest.
Geometric Characteristics and Mass-Volume-Area Properties of Haricot Beans (Phaseolus vulgaris L.): Effect of Variety

ABSTRACT The geometric characteristics and mass-volume-area properties of haricot beans are essential for the design of equipment for harvesting, handling, drying, storing, dehulling, processing, and packaging. This study was carried out to determine the effect of variety on the geometric characteristics and mass-volume-area properties of four improved haricot bean varieties. The moisture content, 1000-seed mass, and true density of the beans varied significantly (p < .05), in the ranges of 9-11.28%, 199.9-529.93 g, and 1127.52-1212.40 kg/m3, respectively. The dimensional properties of the improved haricot beans differed significantly (p < .05) among the varieties, indicating that these would require some variation in processing equipment design. Hydration capacity varied significantly, from 0.14 to 0.36 g/seed, among the improved haricot bean varieties, and the hydration index also displayed significant differences among the varieties. Significant differences were likewise observed in hydration coefficient and swelling capacity, which varied from 1.71% to 1.77% and from 0.28 to 0.81 mL/seed, respectively.

INTRODUCTION
Common beans are the most broadly grown legume species in the world and the third most significant grain legume after soybean (Glycine max (L.) Merr.) and peanut (Arachis hypogea L.). Common beans have considerable potential, now and in the future, to contribute to nutrition and food security [1]. Haricot beans (Phaseolus vulgaris L.) are legumes that are widely consumed due to their high nutritional value, delicious taste, and ease of preparation. In the East and Great Lakes regions of Africa, haricot beans play an important role in human nutrition [2]. They are a rich protein source and are recognized as "the poor man's meat", containing nearly 2-3 times more protein than cereals [3]. They are also an important contributor of fiber, prebiotics, B vitamins, and other micronutrients in the human diet [4,5]. The haricot bean (Phaseolus vulgaris L.) has been an export crop for Ethiopia for more than 50 years [6]. A wide range of haricot bean types are grown in Ethiopia, including mottled, red, white, and black varieties. The most commercial varieties are pure red and pure white beans, and they are becoming the most commonly grown types with increasing market demand. A continuous increase in the area and volume of production in the country has been noticed due to the growing local and export market demand for these crops [7]. Nevertheless, although Ethiopia produces a large amount of haricot beans on a global scale, postharvest handling is still inefficient and mostly done by hand. Therefore, information on the geometric characteristics and mass-volume-area properties of haricot beans is necessary to handle them mechanically. As a result, there is an urgent need to investigate the geometric characteristics and mass-volume-area properties of Ethiopia's improved haricot bean varieties. Information on the physical properties of common beans is important in the design of equipment used for processing, transportation, sorting, separation, and storage. Furthermore, these properties are required during the processing and handling of agricultural materials to set the operational parameters of the equipment for efficient operation.
[8,9] For instance, the size and shape of foods are important physical characteristics used in screening, grading, and quality control [10]. Data on the angle of repose, volume, density, and porosity are also important for the design of processing and storage systems for particulate materials, for determining the power required for pumping, and for the modeling and design of various heat and mass transfer processes, such as drying, frying, baking, heating, cooling, and extrusion [10]. The functionality of raw materials is a combination of properties that determine product quality and process effectiveness. These properties are relevant to the mechanization of processing to increase utilization as a food resource. Hence, knowledge of the geometric characteristics and mass-volume-area properties of haricot beans is needed. Thus, the objective of this study was to explore the geometric characteristics and mass-volume-area properties of improved haricot beans and their dependence on variety, which can assist in the design of handling, processing, and packaging machinery for haricot bean production.

Sample
Four improved varieties of haricot bean (Phaseolus vulgaris L.), namely SER 119, SER 125, SAB 632, and Awash 2, were obtained from the Awash Melkassa Agricultural Research Institute of Ethiopia from February to March 2020 (Figure 1). These haricot bean varieties were chosen because they have been shown to have a high production percentage and disease tolerance, ripen within a short period, and are easy to adopt in Ethiopia. The sample seeds were cleaned of foreign materials, such as dust, stones, dirt, immature seeds, damaged seeds, and other impurities, by manual picking, and the healthy selected seeds were kept at 5 °C in an airtight plastic vessel for further analysis. Before each test, the seeds were allowed to warm to room temperature. Throughout the tests and experiments, sample selection was randomized.

Moisture content
Before oven drying, each of the cleaned and selected seed samples was weighed using an electronic balance with an accuracy of 0.001 g (Mettler Toledo ML303T/00, China). Using small trays, all samples were placed in an oven at a temperature of 103 °C, as per ISO 665:2020 [11], and weighed repeatedly, after cooling in a desiccator, until constant mass was reached. The moisture content was then calculated using equation (1):

mc_d = ((W_w − W_d) / W_d) × 100 (1)

where W_d is the dried bean weight, W_w is the wet bean weight (total mass), and mc_d is the moisture content (dry basis) in percent.

Thousand seed weight
The 1000-seed weight was determined using a digital electronic balance (Mettler Toledo ML303T/00, China) with an accuracy of 0.001 g, following the procedure described by Sharma et al. [12] with some modification. To assess the 1000-seed weight, 1000 randomly selected haricot bean seeds were counted and weighed. The reported value is the mean of three replications.

Bean mass
The mass of the improved haricot beans was determined using a precision electronic balance reading to an accuracy of 0.01 g.

Bulk density, true density, and porosity
The bulk density, true density, and porosity of the haricot beans were determined using the method of Sharma et al.
[12] In brief, the bulk density was determined by filling a 500 mL circular container with seed from a height of 150 mm, to create a tapping effect in the container that mimics the settling effect during storage at a constant rate, and then weighing the contents with a digital electronic balance with an accuracy of 0.001 g. No manual compaction was done for any seed variety. The bulk density ρ_b was calculated as the ratio of the mass of the beans to the volume of the cylinder:

ρ_b = M_s / V_c (2)

where V_c is the volume of the cylinder (m3) and M_s is the mass of the seed (kg).

The true density of the haricot beans was determined using the toluene (C7H8) displacement method. The true density was found as the average of the ratio of the seed mass to the volume of toluene displaced by the seeds. The volume of toluene displaced was found by immersing a weighed quantity of haricot seed in the toluene. The true density was then calculated from the obtained values using the formula:

ρ_t = M / (V_2 − V_1) (3)

where M is the mass of the seeds (kg), V_1 is the initial volume (m3), and V_2 is the final volume (m3).

The porosity of the haricot beans was determined using the following equation:

ε = (1 − ρ_b / ρ_t) × 100 (4)

where ε is the porosity (%), ρ_b is the bulk density (kg/m3), and ρ_t is the true density (kg/m3).

Angle of repose
The angle of repose of the sample was determined by filling a topless and bottomless cylinder (10 cm diameter and 15 cm height) placed on a flat surface with the bean seeds and allowing the seeds to overflow and form a cone in its natural rest position. The angle of repose was calculated using the formula given by Aviara et al. [13]:

θ = tan⁻¹(h / r) (5)

where θ is the angle of repose in degrees, and h and r are the height and radius of the cone, respectively.

Color measurement
The color of the haricot beans was measured with a precision colorimeter (3NH Technology Co., Ltd., China). The color readings were displayed in L*, a*, b* format, where L* represents the lightness/darkness dimension; positive and negative a* values indicate redness and greenness, respectively; and b* indicates yellowness for positive values and blueness for negative values. The color measurement was repeated seven times.

Dimensional properties
The three principal axial dimensions (length (L), width (W), and thickness (T)) of the haricot beans were measured using the method of Sahin and Sumnu [10]. The dimensions of 100 randomly selected haricot beans from each variety were measured using a digital vernier caliper (TA, M5 0-300 mm, China) with 0.01 mm precision. The arithmetic mean diameter (D_a), geometric mean diameter (D_g), square mean diameter (D_s), and equivalent mean diameter (D_e) of the haricot beans were determined using the following equations [14]:

D_a = (L + W + T) / 3 (6)
D_g = (L · W · T)^(1/3) (7)
D_s = (L·W + W·T + T·L)^(1/2) (8)
D_e = (D_a + D_g + D_s) / 3 (9)

The volume (V) and surface area (S) of the haricot beans were determined using the equations adopted by Baryeh and Mangope [15]:

V = π B² L² / (6 (2L − B)) (10)
S = π B L² / (2L − B) (11)

where B = (W·T)^(0.5), and L, W, and T are the length, width, and thickness of the seeds in mm.

Sphericity, aspect ratio, flakiness ratio, and percent roundness
The sphericity and the aspect ratio of the haricot beans were calculated using equations (12) and (13), as per the method of Wani et al. [16]:

Φ = (D_g / L) × 100 (12)
R_a = W / L (13)

The flakiness ratio (R_f) of the haricot bean seed was determined using the following equation [17]:

R_f = T / W (14)

where Φ is the sphericity, R_a is the aspect ratio, R_f is the flakiness ratio, and L, W, and D_g are the length, width, and geometric mean diameter of the haricot bean seeds, respectively.

The percent roundness R_p was calculated as follows [18]:

R_p = (A_p / A_c) × 100 (15)

where A_p is the projected area of the seed in mm2 and A_c is the area of the minimum circumscribing circle in mm2. The projected area of the seed was measured by an image analysis method. The area of the minimum circumscribing circle was determined by taking the largest axial dimension of the seed in its natural rest position (the length of the seed) as the diameter of the circle. The process was repeated for 20 randomly selected seeds, and the average was taken as the representative value of roundness.
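As a numerical check of the dimensional relations above, the following Python sketch computes the derived properties from the three axial dimensions. The inputs are the mean dimensions reported for the SAB 632 variety in the results; the mean diameters and the volume reproduce the extreme values reported there to within rounding, while the surface area follows the cited Baryeh-Mangope expression and may differ from the tabled figures.

```python
# Derived geometric properties of a seed from its axial dimensions (mm),
# using equations (6)-(14) above.
from math import pi

def seed_geometry(L: float, W: float, T: float) -> dict:
    Da = (L + W + T) / 3                      # arithmetic mean diameter, eq. (6)
    Dg = (L * W * T) ** (1 / 3)               # geometric mean diameter, eq. (7)
    Ds = (L * W + W * T + T * L) ** 0.5       # square mean diameter, eq. (8)
    De = (Da + Dg + Ds) / 3                   # equivalent mean diameter, eq. (9)
    B = (W * T) ** 0.5
    V = pi * B**2 * L**2 / (6 * (2 * L - B))  # volume, eq. (10)
    S = pi * B * L**2 / (2 * L - B)           # surface area, eq. (11)
    return {
        "Da (mm)": Da, "Dg (mm)": Dg, "Ds (mm)": Ds, "De (mm)": De,
        "sphericity (%)": 100 * Dg / L,       # eq. (12)
        "aspect ratio": W / L,                # eq. (13)
        "flakiness ratio": T / W,             # eq. (14)
        "volume (mm^3)": V, "surface area (mm^2)": S,
    }

# Mean axial dimensions reported for SAB 632: L = 13.09, W = 8.41, T = 7.01 mm
for name, value in seed_geometry(13.09, 8.41, 7.01).items():
    print(f"{name}: {value:.2f}")
```

Running this gives Da = 9.50 mm, Dg = 9.17 mm, Ds = 16.15 mm, De = 11.61 mm, and a volume of about 286 mm3, matching the upper ends of the ranges reported in the results section.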
Functional properties
Hydration and swelling capacity: the hydration and swelling capacity of the haricot beans were determined using the methods of Shimelis and Rakshit [2] and Kaur and Singh [19].
Hydration and swelling index: the hydration and swelling indices were evaluated using the method of Shimelis and Rakshit [2].
Hydration and swelling coefficients: the percentage increase in the mass of haricot bean seeds soaked in distilled water for 24 hours was used to measure the hydration coefficient [2]. The swelling coefficient was calculated as the percentage of the volume of the bean seeds after soaking divided by the volume before soaking [2].

Statistical analysis
The results are presented as the mean and standard deviation (SD). The data were subjected to one-way analysis of variance (ANOVA) using the Statistical Package for the Social Sciences (SPSS version 20). Significant differences between the means were determined with the Tukey test at p < .05.

RESULTS AND DISCUSSION
Table 1 shows the effect of variety on the physical properties of the improved haricot bean varieties. According to the results, the moisture content ranged from 9% to 11.28% on a dry basis; the highest was recorded for the SER 125 variety and the lowest for the SER 119 variety. There were no significant differences between the SAB 632, SER 119, and Awash 2 varieties, but these three were significantly (p < .05) different from the SER 125 variety. The present moisture content range is within those reported by Shimelis and Rakshit [2] for haricot beans (9.08 to 11.00 g/100 g, d.b.) and by Tunde-Akintunde et al. [20] for soybean (6.25% to 11.60% d.b.). The moisture content of the seed can indicate its storage stability as well as the ease of the dehulling process [12]. For food researchers and processors, the amount of water present in agricultural products is extremely important, as it helps determine certain phases of adaptation and resistance to processing, such as drying, bagging, storing, cooking, and even consumption.

The seed mass of the improved haricot beans varied from 0.20 to 0.51 g, with the highest seed mass observed in the SAB 632 variety. The haricot bean seed is heavier than soybean seed, reported as 0.11 to 0.18 g by Tunde-Akintunde et al. [20], and the mass of the SAB 632 variety was in line with the observations of Palilo et al. [21] for the common bean Wanja variety grown in Tanzania (0.50 g). According to the bean size classification adopted by De Barros and Prudencio [22], the haricot beans studied are considered small, except the beans of the SAB 632 variety, which are classified as big. Regarding the 1000-seed weight, the results showed significant (p < .05) differences between the haricot bean varieties, although the SER 119 and SER 125 varieties were not significantly different from each other. The highest 1000-seed weight (529.93 g) was observed in SAB 632 and the lowest (199.90 g) in the Awash 2 variety.
Data on 1000-seed weight are a significant factor in the design of equipment for cleaning, separation, conveying, and elevating unit operations [13]. They can also be used to estimate the overall bulk mass of haricot bean seeds during bulk handling. There were significant (p < .05) differences in the true density values of the haricot bean seed varieties, which ranged from 1127.52 to 1212.40 kg/m3. Data on true haricot bean seed density are used to design seed separation and cleaning processes. There was a significant (p < .05) difference in bulk density between the haricot bean varieties, although the SAB 632, SER 119, and SER 125 varieties did not differ significantly from one another. The Awash 2 variety had the highest bulk density (958.2 kg/m3). The bulk densities observed were higher than those reported by Altuntas and Demirtola [23] for legume seeds, such as kidney bean (Phaseolus vulgaris), pea (Pisum sativum), and black-eyed pea (Vigna sinensis), which were between 426.26 and 503.72 kg/m3 measured at different moisture contents. Bulk density is an important parameter in determining packaging and storage requirements for agricultural materials [10]. It is also used in practice in heat transfer calculations involving thermal properties, in identifying the Reynolds number of materials, and in predicting the pressures on storage structures [13].

The values of porosity and angle of repose were statistically the same for all varieties. The values are lower than those reported for Indian kidney bean cultivars, 33.6% to 37.5% and 15.20° to 18.67° [16], and 35° to 40° for common beans grown in Tanzania [21]. Seeds with low porosity take a long time to dry, while seeds with higher porosity allow greater aeration and water vapor diffusion during the drying process. The angle of repose of the haricot beans measured in the present study is higher than the 6.09° to 8.40° reported for soybean [20]. Porosity refers to the percentage of space in bulk seeds that is not filled by seeds. It is useful for calculating the rate of aeration, cooling, drying, and heating, as well as for designing heat exchangers and other similar bean-handling equipment [24]. The angle of repose is important when designing hopper openings, storage bin sidewall slopes, and chutes for bulk seed transport, and it is especially useful when estimating the quantity of granular material that can be stored in inclined or flat storages [9,24].

Color measurement
Table 2 shows the effect of variety on the color of the improved haricot bean varieties. The L* values, which indicate the lightness of the samples, differed significantly (p < .05) among the varieties. The highest L* value (80.91) was recorded for the Awash 2 variety. L* values of different dry bean varieties ranging from 28.823 to 73.937 were reported by Shimelis and Rakshit [2]. Skin color and brightness are among the most significant quality parameters of the common bean. The a* and b* values, which indicate red/green and yellow/blue color, respectively, of the improved haricot beans showed significant (p < .05) differences due to variety; however, the SAB 632, SER 119, and SER 125 varieties were not significantly different from one another in their a* and b* values. These findings are similar to the ranges reported (1.693 to 14.390 and 5.710 to 25.393 for a* and b* values, respectively) by Shimelis and Rakshit [2] for improved dry bean (Phaseolus vulgaris L.) varieties grown in Ethiopia.
Color values of L*, a*, and b* in the ranges of 33.31-38.90, 3.43-8.58, and 1.88-7.32, respectively, have been reported for Indian kidney bean cultivars [16]. Red beans are favored by Ethiopians, as they provide an attractive red color when cooked with other cereals and legumes [2].

Dimensional properties
Table 3 shows the effect of variety on the dimensional properties of the improved haricot bean varieties. The dimensional properties of the improved haricot beans differed significantly (p < .05) among the varieties, indicating that these would require some variation in processing equipment design. The average length of the improved haricot beans ranged from 11.12 to 13.09 mm, while the corresponding width ranged from 6.23 to 8.41 mm. Comparisons in terms of length and width indicate that the SAB 632 variety is longer and wider than the SER 119, SER 125, and Awash 2 varieties and is within the ranges of 13.71-18.32 mm (length) and 7.61-8.97 mm (width) reported by Palilo et al. [21] for common beans cultivated in Tanzania. The thickness of the improved haricot beans was found to be between 4.79 and 7.01 mm; the highest was recorded for the SAB 632 variety and the lowest for the SER 125 variety. Wani et al. [16] reported length, width, and thickness in the ranges of 11.45-16.45 mm, 6.65-7.80 mm, and 4.70-6.13 mm, respectively, for Indian kidney bean cultivars. The arithmetic and geometric mean diameters of the improved haricot beans ranged from 6.76 to 9.50 mm and from 6.60 to 9.16 mm, respectively, these values being lower than the length and width and higher than the thickness. The equivalent and square mean diameters of the four improved haricot beans were 8.31-11.60 mm and 11.57-16.14 mm, respectively. SAB 632 showed the highest and Awash 2 the lowest arithmetic, geometric, equivalent, and square mean diameters. The equivalent diameter of Indian kidney bean cultivars has been reported to vary from 7.31 to 9.24 mm [16]. The geometric mean diameter is useful for estimating the projected area of a particle moving in the turbulent or near-turbulent region of an air stream, which is a useful parameter in the design of systems for separating the seeds from extraneous materials [25].

The improved haricot beans have sphericity and roundness in the ranges of 60.2-75.4% and 67.33-75.18%, respectively. The results showed that the aspect ratio, flakiness ratio, projected area, surface area, and volume of the improved haricot beans ranged between 0.54 and 0.73, 0.76 and 0.83, 47.08 and 97.54 mm2, 731.95 and 1726.29 mm2, and 112.99 and 287.72 mm3, respectively. The sphericity, aspect ratio, seed volume, and surface area of Indian kidney bean cultivars have been reported to vary from 52.13% to 63.08%, 0.40 to 0.61, 113.83 to 223.96 mm3, and 137.84 to 224.18 mm2, respectively [16]. The nearer the sphericity is to 1.0, the greater the tendency to roll about any of the three axes, and the closer the ratio of thickness to width is to 1.0, the greater the tendency to rotate about the major axis [26]. This propensity to either roll or slide is very important in the design of hoppers and dehulling equipment for the seed, since the flattest seeds slide more easily than spherical seeds, which roll on structural surfaces [27].

Functional properties
The effect of variety on the functional properties of the four improved haricot bean varieties is presented in Table 4. Hydration capacity varied significantly, from 0.14 to 0.36 g/seed, among the improved haricot bean varieties.
SAB 632 had the highest hydration capacity, whereas Awash 2 had the lowest. Shimelis and Rakshit [2] reported hydration capacities in the range of 0.081 to 0.194 g/seed for different dry bean varieties. The hydration index also displayed significant differences among the varieties, varying from 0.71 to 0.77. SER 125 had the maximum hydration index, followed by SER 119, SAB 632, and Awash 2. The hydration capacity and hydration index of some Indian kidney bean cultivars have been reported to vary between 0.12 and 0.42 g/seed and between 0.48 and 0.93, respectively [16]. Significant differences were also observed in hydration coefficient and swelling capacity among the varieties, which varied from 1.71% to 1.77% and from 0.28 to 0.81 mL/seed, respectively. SAB 632 showed the highest swelling capacity, while the lowest was found in Awash 2 among the improved haricot bean varieties. A similar trend was reported by Wani et al. [16] for some Indian kidney bean cultivars. The swelling index and swelling coefficient did not show significant differences among the improved haricot bean varieties.

CONCLUSION
The effect of variety on the geometric characteristics and mass-volume-area properties of improved haricot beans was reported, and the following conclusions were drawn from this investigation. The moisture content, seed weight, 1000-seed mass, and true and bulk density differed significantly among the varieties. The effect of variety on the dimensional properties, such as the length, width, thickness, and arithmetic and geometric mean diameters of the haricot beans, was significant (p < .05), indicating that these would require some variation in processing equipment design. In addition, the results showed that the aspect ratio, flakiness ratio, projected area, surface area, and volume of the improved haricot beans ranged between 0.54 and 0.73, 0.76 and 0.83, 47.08 and 97.54 mm2, 731.95 and 1726.29 mm2, and 112.99 and 287.72 mm3, respectively. Hydration capacity varied significantly, from 0.14 to 0.36 g/seed, among the improved haricot bean varieties, and the hydration index also displayed significant differences among the varieties. In conclusion, this paper deals with the geometric characteristics and mass-volume-area properties of improved haricot beans, enlarging the knowledge about these varieties and providing useful data for their post-harvest handling and further industrial processing. Further studies should be conducted to explore the moisture-dependent geometric characteristics and mass-volume-area properties of these improved haricot bean varieties.
Opium addiction in patients with coronary artery disease: a grounded theory study

Background: There are widespread misconceptions about the positive effects of opium on coronary artery disease (CAD). Thus, we performed a study to explore the process of opium addiction and its contributing factors among CAD patients using a grounded theory approach. Methods: The sample comprised 30 addicted CAD patients and their family members, physicians, nurses, and friends. Purposive and theoretical sampling were employed, and semi-structured interviews were conducted. Coding and constant comparative analysis techniques were applied as proposed by Strauss and Corbin (1998). Results: The core category was "Fighting for Survival", comprising three main themes, namely "the gateway", "blowing into the fire", and "getting stuck in the mud". Conclusion: Increasing knowledge about the adverse effects of opium on the cardiovascular system would reinforce prevention and rehabilitation measures. Involving patients' family members in addiction prevention and rehabilitation programs and referring patients to specialized rehabilitation centres could help patients quit opium. Healthcare providers (HCPs) should pay attention to the effects of opium consumption among CAD patients, and nursing care must be holistic in nature. Although opium is stigmatised in Iran, HCPs must treat addicted CAD patients the same as other patients. Nursing students must be aware of the negative effects of illegal drugs on CAD patients and of the misconceptions regarding their supposed positive effects. Any misconceptions must be probed and clarified. Rehabilitation centres must be supervised by cardiologists and HCPs.

Introduction
In 2011, the United Nations Office on Drugs and Crime estimated that 12-21 million (3-5%) of 15-64-year-old people worldwide were opium users. Most users live in countries, such as Iran and Turkey, that are located on the opium trafficking route from Afghanistan to the Balkans and eastern and central Europe (1). Ray et al. (2006) reported that 69 per 100,000 people living in rural areas of northern Iran are opium users (2). In addition to geographic location, certain misconceptions also contribute significantly to the high prevalence of opium consumption in Iran. Iranians, particularly villagers, deeply believe in the therapeutic effects of opium in the treatment of problems such as headache, toothache, earache, sexual impotence, etc. (3). It is believed that Iranian traditional medicine practitioners have propagated this belief; consequently, the use of opium as a painkiller has become increasingly prevalent among Iranians (4). These beliefs have contributed to the outbreak of opium addiction in Iran. Opium consumption is particularly prevalent among several patient groups, including those with heart problems. Cardiac patients generally use opium to manage their heart problems. However, the evidence shows that opium not only has no protective effect on the heart (5,6) but can also exert adverse effects on it and on the central and autonomic nervous systems (7). The most prevalent adverse effects of opium on the cardiovascular and respiratory systems are tachycardia, bradycardia, and orthostatic hypotension (7). Shirani et al. (2010) reported that opium consumption does not slow down the atherosclerosis of the carotid arteries in opium-addicted patients; they referred to opium consumption as a risk factor for coronary artery disease (CAD) (8).
As opium contains morphine, and morphine in turn inhibits pain pathways, opium consumption can obscure the presence of an acute heart attack. Consequently, an opium-addicted patient with an acute heart attack may lose the golden time between the onset of symptoms and the administration of thrombolytic therapy (9). Studies have shown that CAD is one of the major leading causes of death among opium users, second only to accidents (1,10). The process of becoming dependent on addictive drugs is influenced by different personal, cultural, and social factors; accordingly, different people may become addicted in different ways (11). It is widely accepted that people's tendency towards drug abuse is so complex that it cannot be thoroughly understood through a single theory (12); nevertheless, unfolding this process using the grounded theory approach can provide valuable information about the contributing demographic, situational, contextual, cultural, and psychosocial factors. Most studies on opium addiction in CAD have been conducted using quantitative approaches, which cannot provide a deep understanding of complex phenomena such as addiction. Moreover, as the phenomenon of addiction is closely related to demographic, situational, contextual, cultural, and psychosocial factors, investigation of the issue in different societies seems crucial. The aim of this study was to explore the process of opium addiction and its contributing factors in patients with CAD.

Design
This was a grounded theory study conducted in 2012-2013. The grounded theory approach is useful for exploring the processes of human experiences and the conditions and contexts in which these processes take place (13).

Participants
The study population consisted of opium-addicted patients with CAD who were hospitalized in a post-coronary care unit affiliated with a public heart research center in Tehran, Iran. The inclusion criteria were the ability to speak Persian and at least a one-year history of opium consumption. We started the sampling process by employing the purposive sampling method to recruit a sample with maximum variation in terms of variables such as gender, marital and employment status, and the length and manner of opium consumption. We then employed the theoretical sampling method to explore the dimensional range of the emerging concepts and categories. The sampling process was pursued until saturation. Consequently, we recruited a sample of 18 opium-addicted patients with CAD, three non-opium-addicted patients with CAD, two physicians, three nurses, two family members, and two friends of the patients.

Data collection
We collected the data by conducting semi-structured face-to-face and telephone interviews and by making detailed field notes. Interviews were arranged according to the participants' preferences and conducted in a quiet room. Three participants were interviewed twice; their first-round interviews were face-to-face, while the second rounds were by telephone. The basic questions of the interviews included: "How did you start using opium?" and "What factors affected your opium consumption?" Subsequently, we also employed probing questions to delve into the participants' experiences. All interviews were recorded using a digital sound recorder and ranged in length from 30 to 40 minutes.

Data analysis
Data collection and data analysis were conducted concurrently.
To analyze the data, we employed the coding and constant comparative analysis techniques proposed by Strauss and Corbin (1998). Accordingly, immediately after each interview, we transcribed it verbatim. The process of analysis consisted of open coding, axial coding, and selective coding. The generated codes were compared with each other and categorized according to their similarities and differences. Categories, in turn, were also compared and merged with each other or divided into other categories. Finally, we identified the core category and linked it with the other categories using selective coding.

Ethical considerations A university-affiliated Institutional Review Board and Ethics Committee approved the study. The aim and process of the study were explained to the participants. We assured the participants that participation in, and withdrawal from, the study was voluntary. Moreover, we guaranteed the confidentiality of the participants' personal information. Finally, we asked the participants to read and sign the informed consent form of the study.

Rigor Generally, there are four criteria, namely credibility, confirmability, dependability, and transferability, for maintaining the rigor of a qualitative study (14). Accordingly, we employed techniques such as prolonged engagement in the study setting, thick description of the study process and findings, member- and peer-checking, and constant comparative analysis.

Results In total, we conducted 30 interviews with subjects who had used opium for 4.5±1.7 years on average (Table 1). In the open coding, we initially generated 739 codes, which were abstracted and reduced to 65 codes. The codes were categorized into three main categories: 'the gateway', 'blowing into the fire', and 'getting stuck in the mud'. The core category of the study was 'Fighting for Survival'. Below, we explain these themes and categories (Table 2).

Gateway This theme reflected the causal and contextual conditions that motivated starting opium consumption. Its two subthemes were 'volition' and 'compulsion'.

Volition Our participants referred to instances such as peer pressure, the presence of an addicted family member, easy access to opium, and employment in certain occupations as conditions facilitating opium consumption. A participant blamed his addicted friends and stated, "Access to opium was very easy. I could get opium just by taking a brief walk to the street and making simple contact with the suppliers. Opium is cheaper to obtain than cigarettes". Another participant who had two addicted brothers mentioned that "the first time I tried opium was with my brothers".

Compulsion On the other hand, some of our participants mentioned that they involuntarily began to use opium. Most of these participants believed that the false cultural belief about the positive effects of opium on cardiovascular disease and sexual impotence was an important leading cause of opium consumption. A participant who had voluntarily chosen to consume opium mentioned, "I saw the miracle of opium with my own eyes; since the beginning of the disease, it is opium that has empowered me and prevented me from disability. Whenever I have chest pain, opium has its pain-relieving effects". On the other hand, some of our participants did not believe in the positive effects of opium on cardiovascular disease. A non-addicted patient with CAD stated, "I believe that opium cannot eliminate or even relieve my heart problem.
These [beliefs about the positive effects of opium on heart problems] are all autosuggestion. One should consider the fact that opium may ruin one's life. I believe that opium has no benefit except annihilation". Another important factor contributing to opium consumption was peer pressure, which was in turn derived from the widespread misconceptions about the positive effects of opium. A participant who, under peer influence, had begun consuming opium to manage his heart problem mentioned, "My friend and I were working in a small clothing factory. My friend continuously recommended that if I wanted to get rid of my heart problem and save my life, I had better consume opium". On the other hand, peers also played an important role in addicted patients' decisions to quit opium and in opium consumption relapse. An addicted friend of an addicted patient stated, "I saw with my own eyes that my friend experienced a heart attack immediately after quitting opium and died after several days. Consequently, I am afraid of quitting. I fear becoming disabled and confined at home". In addition to peer influence, personal childhood experiences of the short-term positive effects of opium were another determining factor contributing to opium consumption. A patient who deeply believed in the positive effects of opium on heart disease mentioned, "My mother also had a heart problem. As long as she smoked opium, she had no problem. Opium was effective in reducing her blood pressure. [Moreover,] when I had toothache, my grandmother gave me some opium dissolved in water and I got rid of the pain. I have seen the miracle of opium since childhood". Participants who had voluntarily chosen to consume opium had fewer problems with the social stigma of opium consumption. In other words, they had a less negative attitude towards addiction and believed that opium could save their lives. A patient who had voluntarily chosen to consume opium three years earlier mentioned, "I think of opium as a medicine…".

Blowing into the fire The second theme of the study was 'blowing into the fire'. This theme stood for the intervening conditions for becoming addicted to opium. Its two subthemes were 'perceiving opium consumption as a real threat' and 'experiencing conflict between craving to quit and compulsion to continue consumption'.

Perceiving opium consumption as a real threat Although our participants firmly held misconceptions about the effects of opium on CAD, they believed that opium consumption does not conform to established social norms. They suffered from many personal, family, and social problems as a result of addiction. Whether they consumed opium voluntarily or under compulsion, our addicted participants had finally come to perceive opium consumption as a real threat to their health and life. Seeking information from lay and professional people was the consequence of such perceived threat. A patient with a 5-year history of opium consumption, whose wife had twice applied for divorce, stated, "When I found that I was becoming addicted to opium, I felt extremely ashamed before my family. Most of my budget was being spent on opium. I feel uncertain about my future and life".
Experiencing conflict between craving to quit and compulsion to continue consumption Our addicted participants perceived opium consumption as a real threat to their health and life and hence wanted to quit; however, because of physical and psychological dependence on opium, a lack of information about the adverse effects of opium on the cardiovascular system, and a lack of information about how to access addiction rehabilitation centers, they feared quitting opium. A patient with a 4-year history of opium consumption described this fear. Besides psychological disorders, opium consumption had resulted in the loss of family, community, and healthcare providers' support. The main reason family members and healthcare providers were reluctant to support addicted patients in overcoming their addiction was the fear of withdrawal symptoms and life-threatening cardiac complications. The family, community, and healthcare providers' indifference towards addicted patients was like blowing into a fire that had already engulfed them. A participating cardiologist said, "We ask patients to quit smoking cigarettes because quitting causes no problem [for a cardiac patient]. However, we do not recommend that patients, particularly those with a history of heart attack or unstable angina, quit opium…on the other hand, we cannot allow these patients to smoke opium in the coronary care unit. Instead, we recommend that they consume it by swallowing. Of course, after hemodynamic stability has been achieved and after the blocked arteries have been opened by coronary artery bypass grafting or stenting, we ask them to quit opium consumption after hospital discharge". In addition to the fear of withdrawal symptoms, the social stigma of addiction also played an important role in healthcare providers' reluctance to consider addicted patients' educational needs. A practicing staff nurse mentioned, "I prefer to spend my time caring for a non-addicted patient. To tell you the truth, I do not feel comfortable with addicted patients. I believe that addicted patients place a burden on the community".

Getting stuck in the mud The third theme of the study was 'getting stuck in the mud'. The consequence of the first two themes, i.e. the causal and contextual as well as the intervening conditions, was having no choice but to continue consuming opium. We coded this condition as getting stuck in the mud. This theme consisted of two subthemes: 'accepting the reality of addiction' and 'resorting to defense mechanisms'.

Accepting the reality of addiction Our participating addicted patients referred to dependence on opium, fear of withdrawal symptoms and life-threatening cardiac complications, and receiving no support from family and healthcare providers as the main reasons for having no motivation to quit opium. Consequently, they were hopelessly compelled to accept the reality of addiction. An addicted patient stated, "I've been consuming opium for a while. I would like to quit, but I am afraid; I fear experiencing a heart attack and becoming permanently disabled as a result of quitting opium".

Resorting to defense mechanisms Receiving others' confirmation had left our addicted participants with no choice but to accept opium consumption as the only available treatment option for their heart disease.
While attempting to resolve the conflict between craving to quit and compulsion to continue consumption, our addicted participants had found themselves in a complicated situation, unable to make any changes and with little chance of survival. They believed that finding themselves unable to escape addiction had exacerbated the pain of becoming socially isolated. Consequently, they felt compelled to resort to defense mechanisms to escape the blame and torment of opium consumption. The most commonly used defense mechanisms were projection and rationalization.

Storyline Factors such as false cultural beliefs, childhood experiences and memories of opium, and peer pressure predispose patients with CAD to begin using opium to manage their chest pain and prevent disability. At first, they generally assume that they can quit opium whenever they want. However, after a while, they find themselves addicted to opium. The adverse physical, mental, psychological, and financial problems and the social stigma of opium consumption cause patients to perceive opium consumption as a real threat to their health and to their personal and family life. Seeking information from lay and professional people is the consequence of this perceived threat. Factors such as the dominance of misconceptions about the positive effects of opium on CAD, healthcare providers' indifference towards these patients' concerns and needs, fear of withdrawal symptoms and life-threatening cardiac complications, fear of death, and the loss of family, community, and healthcare providers' support make patients unable to quit opium. Consequently, they are left with no choice but to continue using opium. The ultimate fate is addiction. In brief, the findings of this study show that patients come to see continued opioid use as the only way to save their lives. At the same time, they want to quit because of the personal, family, and social problems caused by opium and because drug use violates social norms; however, a lack of information about the adverse effects of opium, not knowing how to find rehabilitation centers, and receiving incomplete and poor advice from unreliable people frighten them away from rehabilitation. In addition, healthcare providers tolerate this behavior because of their fear of cardiac complications, do not pay enough attention to patients' concerns, and do not provide complete services, such as informing patients about rehabilitation and support centers. This is like blowing into the fire that is already burning the patient. The patient remains torn: he wants to quit, but his body's dependence never lets him, and in the end he cannot change his condition (Fig. 1).

Discussion The aim of this study was to explore the process of opium addiction and its contributing factors in patients with CAD. Our findings revealed that the complex nature of CAD, as well as false cultural beliefs about the positive effects of opium on cardiovascular disease and sexual impotence, were the most important factors contributing to opium consumption among patients with CAD. The false belief about the positive effects of opium on longevity and on the prevention of heart problems and heart attack played an important role in our patients' tendency towards opium consumption. The study findings also revealed that the sedating and pain-relieving effects of opium, as well as its short-term positive effects on sexual impotence, were the main reasons for opium use.
Previous studies have also demonstrated that the most common reasons for experimenting with opium were diabetes mellitus, hypertension, sexual impotence, chronic pain (15) and cardiovascular diseases (16,17). Farahani et al (2008) argued that Iranians' misconceptions about the pain-relieving, fatigue-reducing, and energizing effects of opium are the major barriers to patient and public education about addiction. The main sources of these misconceptions are childhood experiences. Peer pressure to consume opium for the management of heart disease and sexual impotence is also derived from these misconceptions. Sediq-Sarvestani et al referred to common misconceptions as the main reason for opium use in Iran. Consequently, identifying the socio-contextual factors contributing to opium consumption is a fundamental prerequisite for planning addiction prevention and opium quitting programs (18). It is noteworthy that narcotics have some short-term therapeutic effects; however, these effects gradually diminish and the consumer becomes increasingly dependent on them. Abdollahi et al reported that opium can only temporarily relieve problems such as chest pain and sexual impotence (19). According to Azimzade-Sarvar et al, long-term use of opium can exert negative effects on sexual potency and chest pain perception. Moreover, long-term and high-dose use of opium is potentially life-threatening and can result in death (6). However, despite the adverse effects of opium on the cardiovascular system (7,8), our addicted participants and their family members were reluctant to quit opium because of the fear of withdrawal symptoms, life-threatening cardiac complications, and death. Lack of information is an important factor contributing to such fears. Therefore, it seems essential to develop patient and public educational programs aimed at promoting public awareness about opium (20). We also found that our participants began to use opium to treat their sexual impotence. Sexual impotence among patients with CAD is secondary to the fear of experiencing a heart attack during sexual activity. Most patients, out of modesty and embarrassment, are reluctant to consult healthcare professionals about their sexual problems. Instead, they prefer to overcome their sexual problems by taking over-the-counter remedies and by consulting friends and lay people. Viagra (sildenafil citrate) and narcotics are two over-the-counter treatment options widely used by patients with CAD. However, the therapeutic effects of these treatments gradually decrease and, hence, patients progressively increase their daily consumption. This unsafe practice puts them at great risk of developing life-threatening complications. Again, developing patient and public education programs to increase public awareness and correct common misconceptions about opium seems clearly essential (16,20). Among the other factors contributing to starting opium use were personal childhood experiences of the short-term positive effects of opium in alleviating physical and psychological problems. Childhood experiences can strongly reinforce misconceptions held later in life. Jafari et al also reported that good childhood experiences and memories of narcotics can lead to accepting narcotic use as a social norm in adulthood.
Another finding of the study was that the addicted participants considered family members' and healthcare providers' indifference towards their opium consumption to be an important factor contributing to their reluctance to quit opium. This finding is in line with the findings of Jafari et al (15). According to Kyngäs, education is a key component of providing care to patients with chronic conditions; accordingly, providing education and taking a holistic approach to addicted patients' needs, expectations, and preferences are important for addiction prevention and management (21). We also found that healthcare providers' recommendations about quitting opium, or about changing the way of consuming it, influenced patients' decisions to continue using it. Jafari et al reported that traditional beliefs, as well as information provided by peers, family members, and healthcare providers, are the key factors in patients' decisions to quit opium (4). Our participants also frequently referred to the importance of healthcare providers' role in changing patients' high-risk behaviors. They believed that healthcare providers are the most competent authorities for providing drug-related education. Moreover, they mentioned that during hospitalization they are free from the severe mental distress they experience outside the hospital. Consequently, they considered the hospital to be the best place for receiving education about opium. Farahani et al also emphasized healthcare providers' substantial role in providing education to addicted patients and in changing their attitudes towards opium (16). Our participants believed that addiction carries a social stigma. By contrast, participants in Jafari's study considered opium consumption a normal habit of daily life and opium a recreational drug suitable for entertaining guests. Since opium, compared with heroin and opium residue, causes less serious problems, they did not consider opium consumption an illegal activity or a source of guilt (4). Regarding addiction as a social stigma can result in social isolation, decreased self-confidence, disillusionment, and reluctance to follow drug-prevention and rehabilitation programs. Consequently, patients may develop the serious long-term problem of addiction (22,23). On the other hand, we found that the social stigma of addiction played an important role in healthcare providers' indifference towards addicted patients' needs and concerns. Consequently, promoting healthcare providers' awareness of addiction treatment strategies, as well as changing their attitudes towards addiction and addicted patients, can help remove the stigma of addiction. Correcting public impressions of addiction can, in turn, change healthcare providers' attitudes towards addiction (24). We found that the social stigma of addiction, as well as patients' lack of information about the complications and management of addiction, had resulted in their perceiving opium consumption as a real threat. However, strong cultural beliefs about the miracles of opium, peer pressure, and healthcare providers' indifference towards patients' needs and concerns had brought patients into a major conflict between craving to quit and compulsion to continue consumption. Shaw (2002) reported a similar finding. This conflict is associated with feelings of depression, despair, and loneliness, and ultimately puts patients at greater risk of entrenched addiction (25).
Conclusion Different mental and physical conditions, as well as psychosocial factors, contribute to beginning and continuing opium use among patients with CAD. The widespread misconception about the positive effects of opium on heart problems is among the most important of these factors. Consequently, patient and public education can help correct such misconceptions and hence prevent the adverse physical, mental, psychological, and financial problems associated with opium consumption. Another important factor contributing to opium consumption and reluctance to quit is the fear of withdrawal symptoms and life-threatening cardiac complications. This fear originates mainly from patients' and their family members' lack of knowledge about the adverse effects of opium on the cardiovascular system. Consequently, patient and public education, as well as referring patients to specialized addiction rehabilitation centers, can help patients quit opium altogether. Moreover, as addicted patients and their family members readily accept healthcare providers' recommendations and instructions, healthcare professionals should place a high priority on patient education when providing care to these patients.

Limitations We ascertained our participants' addiction only by asking them the simple question, 'Do you consume opium?' Consequently, we were unable to confirm the validity of their responses. Another limitation of the study was that we had no access to addicted female patients.
Development of mesenchymal subtype gene signature for clinical application in gastric cancer

Previously, in the Asian Cancer Research Group (ACRG) project, we defined four distinct molecular subtypes in gastric cancer (GC). Mesenchymal (microsatellite stable with epithelial-to-mesenchymal transition phenotype, MSS/EMT) tumors showed the worst prognosis among all the subtypes. To develop a gene signature for predicting mesenchymal subtype GC, we conducted gene expression profiling using a NanoString assay in 70 ACRG specimens. The gene signature was validated in an independent set obtained from the prospective Adjuvant chemoRadioTherapy In Stomach Tumor (ARTIST) trial. The association between the mesenchymal subtype and survival was investigated. After a cross-platform concordance test performed in the 70 ACRG specimens, a 71-gene MSS/EMT signature was obtained. In the validation set, the gene signature predicted that 20 of 73 (27%) patients had mesenchymal tumors. Patients with the mesenchymal subtype had diffuse GC, poorly-differentiated or signet ring cell carcinoma, and were microsatellite stable. The estimated hazard ratio for survival in patients with mesenchymal GC compared to those with non-mesenchymal tumors was 2.262 (95% confidence interval, 1.410 to 3.636; P=0.001). The survival difference remained significant when the subtypes were analyzed according to clinical prognostic parameters. This study suggested that the NanoString-based 71-gene signature for the mesenchymal subtype is a strong predictor of outcome in patients with GC.

INTRODUCTION Gastric cancer (GC) is one of the most frequently occurring malignancies worldwide and the third-leading cause of cancer death [1]. Most GC patients present with advanced stage disease and the overall prognosis remains very poor. Clinical trials involving novel targeted agents have demonstrated little success as palliative treatment for GC, with the exceptions of trastuzumab in patients with human epidermal growth factor receptor 2 (HER2)-positive tumors [2], and ramucirumab as a second-line treatment [3,4]. Possible explanations for the lack of improvement in survival include the fact that GC is a heterogeneous disease, with substantial differences in its aggressiveness and responsiveness to therapy, and that its clinical outcome and prognosis in the individual patient do not always conform to the published data [5]. Subtypes with different prognoses and different responses to cancer therapy, if found, may help ensure that patients receive the best possible treatment, thereby avoiding unnecessary treatment and associated toxicities, to eventually improve overall outcomes.
Beyond the well-known morphological subtypes of GC [6], distinct molecularly defined subtypes have most recently emerged in GC [6][7][8][9][10]. The Asian Cancer Research Group (ACRG) was founded as a non-profit consortium of the pharmaceutical industry, academic medical centers, and sequencing companies to characterize GC subtypes. Molecular classification by the ACRG demonstrated that there are four subtypes: 1) GC with microsatellite instability (MSI); 2) microsatellite stable (MSS) GC with an epithelial-to-mesenchymal transition (EMT) phenotype; 3) GC with a p53 signature (expressing CDKN1A and MDM2); and 4) tumors without the p53 signature. The most striking finding of this analysis was that the MSS/EMT subtype showed a significantly higher recurrence rate, a higher probability of peritoneal seeding as the first site of recurrence, a younger age at diagnosis, and extremely poor survival compared to the other subtypes [8]. The survival curve consistently declines over 5 years because of disease recurrence leading to death. Hence, more aggressive treatment should be developed for this subset of GC to improve survival.

In order to make a gene expression profiling-based molecular classification more clinically applicable, we developed a gene signature system involving NanoString-based targeted expression profiling to: 1) investigate the concordance rate between gene expression levels measured by conventional versus targeted gene expression profiling using the NanoString assay for the mesenchymal MSS/EMT subtype in 70 randomly selected samples from the ACRG; 2) define cross-platform concordance with the nCounter assay for the MSS/EMT signature; 3) test the mesenchymal NanoString assay in 70 ACRG samples with known molecular subtypes; and 4) validate the mesenchymal gene signature in 73 samples obtained from the prospective phase III Adjuvant chemoRadioTherapy In Stomach Tumor (ARTIST) trial [11,12].

Development of mesenchymal subtype signature A total of 143 tumor specimens were analyzed: 70 and 73 patients from the ACRG and the ARTIST cohort, respectively. As expected, the ARTIST patients were younger and had earlier stage disease than those in the ACRG cohort (Table 1). The study design is outlined in Figure 1. In brief, we began with a cross-platform concordance test using the 70 ACRG tissue specimens profiled by NanoString targeted gene expression. After refining the final gene set, concordance was tested between the subtypes classified by Affymetrix and the mesenchymal subtype called by NanoString. As shown in Figure 2, 60 genes of the EMT/MSS gene signature were upregulated, whereas 11 genes were downregulated, revealing a high correlation between the two platforms. Finally, the mesenchymal subtype in the ARTIST cohort was evaluated to determine whether the gene set could predict the clinical features of MSS/EMT. We chose quartile-based cutoffs (top quartile) for each dataset (0.325 for the ARTIST and 0.14 for the ACRG).
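Although the published analyses were carried out in Matlab and R (see Statistical analysis), the quartile-based classification described above can be illustrated with a short sketch. This is a minimal illustration, not the authors' code; the DataFrame layout and function name are hypothetical, and the 0.325/0.14 cutoffs in the text correspond to the cohort-specific 75th percentile computed here.

```python
import pandas as pd

def classify_mesenchymal(expr: pd.DataFrame, signature_genes: list) -> pd.Series:
    """expr: samples x genes table of log10-normalized NanoString values."""
    # Signature score: average expression of the signature genes per sample.
    score = expr[signature_genes].mean(axis=1)
    # Cohort-specific cutoff: the top quartile of the score distribution.
    cutoff = score.quantile(0.75)
    return (score >= cutoff).map({True: "mesenchymal", False: "non-mesenchymal"})
```

Whether a sample exactly at the cutoff falls in the mesenchymal group is not specified in the paper, so the inclusive comparison here is an arbitrary choice.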
Next, we tested the 71-gene EMT/MSS signature in the ACRG cohort with known molecular subtypes determined by the conventional Affymetrix method. The concordance rate between the two platforms was very high: among the 70 ACRG samples, only two samples that had previously been categorized as mesenchymal subtype on the Affymetrix platform were classified as non-mesenchymal subtype by NanoString (Table 2). There were 16 MSS/EMT, 20 MSI, 23 P53 active/MSS, and 11 P53 inactive/MSS subtypes included in the cohort. Of the 16 MSS/EMT samples, 14 (88%) were identified as mesenchymal subtype by NanoString. Of note, the two NanoString non-mesenchymal but MSS/EMT tumors were of the signet ring cell subtype (ACRG #42, #47). Histologic review revealed that the #42 sample subjected to ACRG analysis was obtained from the serosal side, whereas the NanoString specimen contained tumor from the gastric mucosa. Similarly, the ACRG #47 tumor contained a mixture of signet ring cell carcinoma and moderately-differentiated tubular adenocarcinoma. All samples from the MSI, P53 active/MSS, and P53 inactive/MSS ACRG subtypes were categorized as non-mesenchymal, with 100% concordance based on our scoring system.

Validation of mesenchymal subtype in the ARTIST cohort In order to validate the mesenchymal subtype, we tested the gene set in 73 samples from the ARTIST cohort. Using the top quartile of the 71-gene mesenchymal signature, 20 of 73 patients were predicted to have mesenchymal subtype tumors. The proportion of the mesenchymal subtype, which was equivalent to MSS/EMT, was within our previously reported range. As shown in Figure 3A, patients with the mesenchymal subtype had significantly worse survival than those with the non-mesenchymal subtype in the ARTIST cohort (P=0.019).

When the two datasets were combined, the comparison of clinical characteristics between the mesenchymal and non-mesenchymal subtypes revealed that GC patients with mesenchymal tumors were more likely to have diffuse type disease, GC involving the whole stomach, poorly-differentiated or signet ring cell carcinoma, and MSI-low disease (Table 3). Overall survival was significantly shorter in the mesenchymal subtype (hazard ratio [HR], 2.262; 95% confidence interval [CI], 1.410 to 3.636; P=0.001; Figure 3B). In regression analysis with clinical characteristics as covariates, only the mesenchymal subtype (HR, 2.045; 95% CI, 1.205 to 3.472; P=0.008) was independently related to shorter survival. To investigate whether interactions between these clinical characteristics were related to this probability, a stepwise Cox model was used. Again, only the mesenchymal subtype was significantly associated with survival.
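The Cox model behind these hazard ratios can be reproduced in outline with the lifelines package. The sketch below runs on synthetic data, not the study dataset: the cohort size, mesenchymal fraction, and the planted hazard ratio of 2.26 are taken from the numbers reported above, while everything else (column names, baseline hazard, censoring) is an illustrative assumption.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 73  # size of the ARTIST validation cohort
mesenchymal = (rng.random(n) < 0.27).astype(int)  # ~27% mesenchymal, as reported

# Toy exponential survival times: the mesenchymal group gets ~2.26x the
# baseline hazard, mirroring the HR reported in the paper.
hazard = 0.02 * np.where(mesenchymal == 1, 2.26, 1.0)
time = rng.exponential(1.0 / hazard)
censor = rng.exponential(80, size=n)              # toy administrative censoring
observed = (time <= censor).astype(int)           # 1 = death observed, 0 = censored
duration = np.minimum(time, censor)

df = pd.DataFrame({"duration": duration, "event": observed,
                   "mesenchymal": mesenchymal})
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()  # exp(coef) for 'mesenchymal' estimates the hazard ratio
```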
DISCUSSION Because of the distinct clinicopathologic features of the MSS/EMT subtype in GC, it is considered clinically meaningful to stratify GC subtypes based on genomic or transcriptional aberrations. According to our previous study [8], patients with the MSS/EMT subtype have a more aggressive natural history, including a high recurrence rate, a predilection for peritoneal seeding as the first site of recurrence, a younger age at diagnosis, and extremely poor survival. Hence, we hypothesized that treatment strategies and/or clinical trial designs for this particular subset of GC patients should be handled differently. Likewise, for a successful GC clinical trial involving specific molecularly targeted agents, it may be crucial to account for the mesenchymal subtype to enhance treatment outcomes. In addition, in this era of immunotargeted therapy, stratification according to EMT may be increasingly important in terms of tumor immune infiltrates or responsiveness to immune checkpoint inhibitors [13].

The use of accurate molecular biomarkers to stratify patients with GC may lead not only to personalized treatment, but also to potential reductions in healthcare costs. Recently, a growing body of evidence has supported 4 main molecular subtypes of GC distinguished by gene expression profiling [6][7][8][9][10]. Although the use of tumor biomarkers has been proposed for decades, the discovery of specific genetic or protein biomarkers has been fundamentally complex because of the technical nature of comprehensive expression platforms, limitations in multiplex clinical assay development and, most importantly, an incomplete understanding of tumor biology. Most clinical specimens are formalin-fixed paraffin-embedded (FFPE) tissues, particularly from cancer patients, and extensive RNA sequencing may not be feasible in clinically available specimens. We previously demonstrated that targeted profiling by the NanoString nCounter assay is a feasible and reliable method that can be readily used with FFPE specimens [14][15][16]. Importantly, in the present study, we successfully constructed a gene signature derived from conventional gene expression profiling and cross-validated it in an independent GC cohort. The concordance rate between NanoString and conventional gene expression profiling for identifying the MSS/EMT subtype was extremely high: only 2 discordant cases were found among the 70 specimens.
The identified mesenchymal subtype showed aggressive tumor behaviors such as diffuse type disease, GC involving the whole stomach, poorly-differentiated or signet ring cell carcinoma, MSI-low status, and significantly shorter survival. These distinct molecular and clinical features indicate that the mesenchymal subtype arises from different transformed stem or progenitor cells, with distinct biologic properties. Previous studies suggested that substantial improvement in the treatment of GC can be achieved by using individualized therapy strategies [17], including the identification of genetic alterations and the study of the molecular biology of therapeutic agents. Recently, antibodies directed against immune checkpoint proteins have shown therapeutic efficacy in a number of cancer types [18]. In limited feasibility studies [19], immunotargeted therapy has also shown promising antitumor activity in GC. The efficacy of these immune checkpoint blockades varies among different tumor types, and an increased understanding of these differences may enhance the efficacy of this treatment modality. Attention is now focused on the identification of predictive biomarkers to select patients for immunotargeted therapy, although currently no single immunologic or tumoral characteristic has been found to solely determine response to an immunotherapeutic agent. One potential biomarker is an inflamed tumor phenotype [20], as a non-inflamed tumor microenvironment may predict resistance to immunotargeted therapy. EMT, or the mesenchymal subtype, is highly associated with an inflammatory tumor microenvironment, independent of tumor mutation burden [13].

Interestingly, two MSS/EMT tumors had non-mesenchymal NanoString genotypes, likely because of intratumoral heterogeneity. Given that molecular tumor status is generally determined from a small fraction of the primary tumor, heterogeneity may limit treatment decisions based on a single biomarker test [21]. From a practical perspective, careful selection of the most poorly-differentiated area for RNA extraction would make it unlikely that this intratumoral heterogeneity, when present, will lead to incorrect results. Another limitation of the present study is the potential for ethnic differences among GC patients. It is well known that significant geographic variation in GC incidence exists, with the highest rates reported in East Asian countries including Korea, and survival outcomes also differ considerably between Western and Asian countries. This discrepancy may be related to different diagnostic or treatment policies, and to different tumor biology [22]. The different patterns of GC between Western and Asian countries are quite apparent, and thus our results warrant validation in different ethnic groups. However, our main focus has been the identification of a distinct mesenchymal GC subtype with very poor prognosis, and it is clear that the detection of molecular subtypes may enable the stratification of high-risk patients and the development of the most appropriate treatment. Potential biological differences between the subtypes may suggest different therapeutic approaches with different molecular targets.

MATERIALS AND METHODS The ACRG cohort consisted of 300 primary GC specimens that were procured at the time of curative or palliative gastrectomy at Samsung Medical Center (SMC, Seoul, Korea) between 2004 and 2007, and frozen at -80°C as previously reported [8]. The study protocol was reviewed and approved by the SMC Institutional Review Board (IRB No.
2010-12-088). All participating subjects provided written informed consent after being informed about the purpose and investigational nature of the study. Cases were selected based on the following criteria: histologically confirmed adenocarcinoma arising from the stomach; surgical resection of primary GC; age 18 years or older; and complete pathological, surgical, treatment and survival follow-up data. Primary GC tissues were used for genomic analysis. Of the 300 patients, 70 tumor specimens were randomly selected based on the availability of tissue specimens. For validation, we selected 73 patients from the ARTIST trial [11], a phase III trial comparing adjuvant chemotherapy with chemoradiotherapy in 458 GC patients, in whom tissue specimens were available and sufficient for RNA extraction. In both cohorts, all tumor specimens were prepared from the primary surgical specimen. Clinical characteristics of the patients are listed in Table 1. All patients were of Korean ethnicity.

RNA preparation Hematoxylin and eosin staining was performed on one tumor section per patient and the tumors were reviewed by a pathologist (KMK) for tumor purity. Samples containing <50% tumor were discarded from the study. The tumor component was macro-dissected from 2 x 5 μm formalin-fixed paraffin-embedded (FFPE) tissue sections or fresh frozen samples, and RNA was extracted using the RNeasy FFPE Extraction kit or QIAamp DNA Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. Sample RNA was quantified using the Qubit 2.0 Fluorometer with the Broad Range RNA kit using the standard protocol. Samples containing <20 ng/μl total RNA were not tested in the NanoString assay. Where available, more tissue for these samples was ordered and re-extracted, and samples containing 20 ng/μl or more were then tested in the NanoString assay.

Gene expression profiling: Affymetrix microarray For training the gene-selection algorithm for the signature, we used the previously published dataset (accessed via https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE62254); RNA was extracted from tumors according to the manufacturer's protocol (Affymetrix, Santa Clara, CA, USA) [8]. We used the Affymetrix Human Genome U133plus 2.0 Array for gene expression profiling and processed the raw files using standard Affymetrix software, including RMA normalization.

Gene expression profiling: NanoString In the NanoString assay, we included 584 genes that were previously published to define the 4 subtypes, including 15 housekeeping and 14 technical control genes. The NanoString assays were performed following the standard protocol 'Setting up 12 nCounter Assays (MAN-C0003-03, 2008-2013)'. Hybridization incubations were performed for between 17 and 18 h. Cartridges were either read immediately or stored in the dark (in aluminum foil) at 4°C until reading. All cartridges were read within 2 days of preparation on the AZ GEN2 Digital Analyzer station with high resolution selected. Data were processed using nCounter PanCancer pathways (Supplementary Table 1), and were normalized by dividing the raw counts by the geometric mean of the manufacturer-defined housekeeping genes and transforming to a log10 scale.
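The normalization step just described reduces to a few lines of code. The following is a minimal sketch, not the authors' pipeline; the table layout, the pseudocount remark, and the housekeeping gene names in the usage comment are hypothetical.

```python
import numpy as np
import pandas as pd

def normalize_nanostring(raw: pd.DataFrame, housekeeping: list) -> pd.DataFrame:
    """raw: samples x genes table of raw nCounter counts (assumed > 0;
    a small pseudocount would be needed if zero counts occur)."""
    # Per-sample geometric mean of the housekeeping genes.
    geo_mean = np.exp(np.log(raw[housekeeping]).mean(axis=1))
    # Divide every gene's counts by that per-sample factor, then take log10.
    return np.log10(raw.div(geo_mean, axis=0))

# Hypothetical usage:
# counts = pd.read_csv("ncounter_counts.csv", index_col=0)
# normed = normalize_nanostring(counts, housekeeping=["ACTB", "GAPDH", "TUBB"])
```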
Gene expression cross-platform concordance filter For each gene, we calculated the correlation between the gene expression level on the NanoString platform and on the microarray platform in the training set (n=70). Following inspection of the distribution of correlations (Supplementary Figure 1), we chose a correlation cutoff of 0.4 to select genes that were concordant between the two platforms. The genes remaining in the signature are listed in Supplementary Table 2. The original up (UP) and down (DN) arms of the EMT signature were previously defined [23]. UP/DN refers to up-/down-regulation of genes at pre-defined significance levels in a panel of solid tumor cell lines classified as epithelial or mesenchymal using the levels of CDH1 and VIM.

Gene signature analysis We calculated the mesenchymal signature on the NanoString platform using the average of the genes in our previously defined GC mesenchymal signature [8], down-selected to genes present on the NanoString platform and with cross-platform concordance as defined in the previous section.

Statistical analysis The primary endpoint of the present study was the identification and validation of a mesenchymal gene signature in GC. The secondary endpoint was survival, defined as the time between the date of surgery and the date of death. Survival data were updated at the time of the analyses (May 2016) and analyzed using a Cox regression model. Baseline characteristics were compared using the chi-square or Fisher's exact test. We used Spearman correlation for pairwise correlations between continuous variables. The significance level was set at alpha=0.05. All analyses were performed using either the Matlab package including the Statistics toolbox (Mathworks, Natick, MA, USA) or R for Windows, v2.15 (R Core Team, Vienna, Austria; http://www.Rproject.org).

CONCLUSION In the present study, we evaluated a gene signature of GC for the mesenchymal subtype using targeted NanoString gene expression, and validated the findings in an independent GC patient cohort. We found a 71-gene signature for mesenchymal GC with a high concordance rate. Because GC is considered a heterogeneous disease, it appears unlikely that one genomic and/or transcriptomal change will be uniformly defined. Therefore, a panel of biomarkers (i.e., a gene signature) may enable more accurate prediction than a single biomarker. The results of the present study support the use of gene expression profiling analyses for the stratification of GC patients. Our results also provide further insight into the molecular heterogeneity of GC, and set the foundation for more detailed investigations leading to the identification of patient subsets for novel, individualized therapy.

Figure 1: Study design to explore and validate the gene signature for the mesenchymal subtype. EMT, epithelial-to-mesenchymal transition; ACRG, Asian Cancer Research Group; ARTIST, Adjuvant chemoRadiotherapy In Stomach Tumor.

Figure 2: Concordance test between subtypes classified by Affymetrix gene expression profiling and the mesenchymal subtype by NanoString.
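As a worked illustration of the cross-platform concordance filter described in the methods above, the per-gene correlation can be computed as below. This is a sketch under the assumption of matched sample ordering and shared gene identifiers; the paper does not state which correlation method was used for this filter, so Spearman is chosen here following the statistical analysis section.

```python
import pandas as pd

def concordant_genes(nanostring: pd.DataFrame, microarray: pd.DataFrame,
                     cutoff: float = 0.4) -> list:
    """Both inputs: samples x genes tables over the same matched training samples."""
    shared = nanostring.columns.intersection(microarray.columns)
    keep = []
    for gene in shared:
        # Correlation of this gene's expression across the matched samples.
        r = nanostring[gene].corr(microarray[gene], method="spearman")
        if r >= cutoff:
            keep.append(gene)
    return keep
```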
Healthcare financing, decentralization and regional health planning: federal transfers and the healthcare networks in Minas Gerais, Brazil

Decrees 4279/10 and 7508/11 established norms to guide health policy, with impacts on the funding of medium- and high-complexity hospital and outpatient care. To verify the effects on the consolidation of care networks in Minas Gerais, we performed an analytical-descriptive study of National Health Fund transfers from 2006 to 2014. We observed decentralization of responsibilities, accompanied by resources and innovative financing mechanisms, resulting in the expansion of the care network model. The federal government's definitions suggest a reduction of autonomy and a limitation of regional solutions.

Introduction A framework for a new social order in Brazil, the Unified Health System (SUS) represents the replacement of a contributory and centralized healthcare model with one of a redistributive, universalist and egalitarian character. The full realization of these constitutional social rights requires the configuration of a complex institutional structure capable of making citizenship concrete. Thus, one of the major challenges is the building of a national health system capable of simultaneously addressing the heterogeneity of regional needs and reducing existing inequalities 1.

Financing, decentralization and regionalization strategies form a triad of analysis that leads to reflections on the advances in the consolidation of the SUS. Decentralization, because, in a federalist context, the definitions of responsibilities and the tools of articulation between entities are crucial for the operationalization of policies. Financing, since there is no guarantee that decentralization of responsibilities, by itself, will promote, in an efficient and responsible manner, universal access to equitable levels of health care; a consolidated institutional arrangement is required that, while respecting the different revenue-raising capacities of the entities, can facilitate the triple (federal, state and municipal) commitment to financing the system. Regionalization, because financing, even at satisfactory levels and in fair proportions between entities, cannot overcome the barriers created by the deep-seated inequalities that mark the Brazilian case unless it is guided by redistributive allocation criteria and regionally-based spatial planning.

There is a synergistic relationship between this triad and the normative and institutional configurations of the SUS. The Federal Constitution establishes that public health actions and services must form a regionalized and hierarchical network, constituting a unified and decentralized system financed by the three federated spheres. These precepts are established in Law Nº 8.080/90 and, later, in the Basic Operational Standards and Health Care Operational Standards, demonstrating the normative effort to elaborate a national proposal for healthcare regionalization, with the definition of decentralized responsibilities and shared planning, management and financing tools.

However, given its political-institutional, structural and conjunctural distance from subnational realities and its inability to reallocate resources and to induce increases in public health expenditure, this proposal limited the regional project to the logic of service supply and the definition of healthcare and financial flows, which has reinforced health inequalities and competitiveness among federated entities 2.
In 2006, the Pact for Health was established to strengthen the decentralized management of the SUS and cooperative intergovernmental relations. The Pact innovated by recognizing the political dimension of regionalization and decentralization and by proposing agreement and coordination among managers toward greater coherence in the organization, funding and management of the system. However, because it did not significantly modify planning tools, with the exception of the creation of funding blocks and of monitoring and evaluation indicators, the Pact did not achieve the expected improvements in the shared management of the SUS 2.

Among the most recent attempts to overcome the intense fragmentation and improve the political-institutional functioning of the SUS are the publication of Ministerial Ordinance Nº 4.279/10 and Decree Nº 7.508/11. The first defines the guidelines for the structuring of the Health Care Network (RAS), which aims to promote the systemic integration of health actions and services, ensuring the provision of continuous, comprehensive, responsible, humanized and quality care 3. The Decree deals with the organization of the system, health planning, health care and interfederative articulation 4. Both highlight the need to consolidate the health region as a privileged arena for the induction and integration of policies, the expansion of cooperative intergovernmental financing and the structuring of thematic networks aimed at ensuring comprehensive access to the system. This paper discusses the triad of financing, decentralization and regionalization, based on the guidelines for the RAS implementation process, with reference to a case study of Minas Gerais. We intend to verify whether the criteria used by the federal government to transfer resources to subnational entities for the funding of hospital and outpatient care have advanced in relation to the guidelines proposed by Ministerial Ordinance Nº 4.279/10 and Decree Nº 7.508/11.

Methodology This is an analytical-descriptive study based on data on federal transfers for the funding of medium- and high-complexity hospital and outpatient care of the SUS of Minas Gerais (SUS/MG) from 2006 to 2014, taking Ordinance Nº 4.279/10 as the starting point for the elaboration of new financing criteria for the operationalization of the RAS.

We collected SUS/MG data on federal transfers from the National Health Fund (FNS) website. We made thorough consultations by action/service/strategy of the MAC funding block for the 853 municipalities of the state, for the nine years under analysis, on a cash basis. We consolidated the files in a single database, and the annual values, after being checked against data provided by the Ministry of Health on the website of the Strategic Management Support Room, were organized according to the Expanded Health Regions of Minas Gerais, established by the most recent version of the PDR-SUS/MG 5. This planning tool organizes the territory of Minas Gerais at three levels: municipal, micro-regional and macro-regional. The macro-regional level concentrates in a hub the services offering high-complexity and special medium-complexity care for a group of municipalities; it is therefore the setting in which comprehensive care is achieved and, thus, the focus of this work. When the PDR-SUS/MG was adjusted to the terms of Decree Nº 7.508/11, macro-regional territories became known as Expanded Health Regions.
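As an illustration of the consolidation and deflation steps described in this and the following paragraphs, the pipeline could look like the pandas sketch below. The file names, column layout and IPCA index table are hypothetical stand-ins, not the authors' actual data structures.

```python
import glob
import pandas as pd

# 1) Consolidate the per-municipality FNS extracts into a single table.
#    Assumed columns: municipality, month (YYYY-MM), action, value (nominal BRL).
frames = [pd.read_csv(path) for path in glob.glob("fns_extracts/*.csv")]
transfers = pd.concat(frames, ignore_index=True)

# 2) Attach each municipality's Expanded Health Region from a PDR-SUS/MG lookup.
regions = pd.read_csv("pdr_mg_regions.csv")  # columns: municipality, expanded_region
transfers = transfers.merge(regions, on="municipality", how="left")

# 3) Deflate nominal values to Dec/2015 using the accumulated IPCA index of the
#    month of each transfer: real = nominal * idx(2015-12) / idx(month).
ipca = pd.read_csv("ipca_index.csv").set_index("month")["index"]
transfers["value_dec2015"] = (
    transfers["value"] * ipca.loc["2015-12"] / transfers["month"].map(ipca)
)

# 4) Annual totals by Expanded Health Region.
transfers["year"] = transfers["month"].str[:4]
annual = transfers.groupby(["expanded_region", "year"])["value_dec2015"].sum()
print(annual.head())
```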
The first step of the analysis comprised the characterization of the Expanded Regions. We collected the following information: the territorial extension and number of municipalities of each Expanded Health Region, available from the PDR-SUS/MG 5; population data, as per estimates provided by the Brazilian Institute of Geography and Statistics (IBGE); the number of health establishments by type of provider, according to the National Registry of Health Establishments (CNES); data on Resolubility in high-complexity hospital care, an indicator calculated by the Minas Gerais State Health Secretariat (SES-MG) that measures the proportion of the population's outpatient and/or hospital care demand that is met within the expanded region of residence, relative to the list of services expected at this level of care; and data from the national typology of health regions, available on the website of the Policy, Planning and Management of Health Care Regions and Networks 6.

Then, we analyzed the behavior of the federal transfers carried out under the MAC funding block. In this stage, we adjusted the collected values to December 2015 using the Broad National Consumer Price Index (IPCA/IBGE), based on the accumulated index number of the month of each transfer. The analyses considered both the resource management (state or municipal) and the transfer component, namely the Strategic Actions and Compensation Fund (FAEC) and the Medium and High Outpatient and Hospital Complexity Financial Limit (MAC). In view of the focus of this work, the MAC component was organized by action/service/strategy into two categories: the MAC Limit, which includes resources that remunerate production according to the pay-per-procedure logic; and Care Networks, which comprises resources earmarked as incentives for the thematic networks.

Finally, to understand the behavior of federal transfers in relation to the guidelines established by Ordinance Nº 4.279/10 and Decree Nº 7.508/11, we developed a detailed analysis of the care network resources in the Expanded Health Regions. In this stage, the first step was to understand the financing policy of each priority network, identifying the types of incentives, the number of ministerial ordinances that incorporate resources for each state network, and the amounts transferred. Then, using the example of the Emergency Care Network, we studied the effective allocation of the transferred funds in the territory. In Minas Gerais, most municipalities do not manage their own providers; consequently, a significant portion of federal funds is transferred to the State Health Fund (FES/MG) without objective identification of the beneficiary. Thus, we consulted all ordinances that incorporate resources into this network in the Medium and High Complexity Financial Limits Control System (SISMAC) and verified their actual allocation in the Integrated Agreed Program of Minas Gerais (PPI-MG) to identify the creditors of state-managed amounts.

Results and discussion The PDR-SUS/MG 5 organizes the 853 municipalities of the state into 77 Health Regions, which in turn make up 13 Expanded Health Regions. Table 1 addresses some key aspects for understanding the reality of these regions.
While corresponding to less than 10% of the territory of Minas Gerais, the Central Expanded Region comprises the second largest number of municipalities (103), concentrates 31.3% of the population of the state and has the highest population density (111.5 inhabitants/km2). Although it covers the smallest number of municipalities (23) and has the smallest resident population, only 1.4%, the Jequitinhonha Region is second only to the Northwest Region in having the lowest population density (8.7 inhabitants/km2). These data highlight an interesting aspect of the state regionalization process. The PDR/SUS-MG rests on four fundamental principles: comprehensiveness, economies of scale and scope, accessibility and geographic contiguity. Given the recognized regional inequalities, this instrument defines that, in case of conflict between access and scale, the latter principle must prevail. By cross-referencing indicators of socioeconomic situation and health service supply, the national typology classifies Health Regions into five categories: Group 1 features low socioeconomic development and low supply of services, and Group 5, high socioeconomic development and high service supply 6. It should be noted that, in Minas Gerais, the Expanded Regions are marked by diverse settings and by the predominance of medium socioeconomic development and average service supply (Group 3). While nine of the 13 Expanded Regions cover at least one region in Group 1, only five comprise regions classified in the best-performing category.

Regarding health establishments, we observed that the Central region concentrates almost a third of the state's establishments (35,670), which reflects its reference role for the whole state. Analysis by type of provider indicates that Jequitinhonha, along with the West and North regions, shows the highest percentages of public providers: 53%, 59% and 48%, respectively. These regions have historically been subject to greater state intervention because of their lower capacity to provide services and the difficulty of retaining professionals. These findings reinforce the importance of an analysis based on the proposed triad, considering that the decentralization promoted by Brazilian health policy, without regional integration and with a weak public supply of higher-complexity services and the presence of large healthcare gaps, enabled a growing private supply 7, financed both by the State in the form of tax waivers and by everyone through payments for plans and insurance.

On the other hand, the Resolubility indicator confirms the regional discrepancy in terms of health outcomes as well. Again, Jequitinhonha stands out with the worst performance: less than half of the high-complexity hospital care demand of its residents was met within the Expanded Region itself in 2014. This result is not surprising given the poor supply structure already observed. Similar performance occurred in the West, which points to problems in the supply and management of the network. This setting suggests that transfer criteria based solely on the pay-per-service logic, with financial limits derived from population parameters built from historical series, a model adopted since the 1980s and still in force, will not be enough to reverse the situation in these regions.
Understanding regionalization as a technical-political process, conditioned by supply capacity, healthcare financing, the distribution of power and the relations established among the various stakeholders throughout the territories 8, we observed that Ordinance Nº 4.279/10 and Decree Nº 7.508/11 face great challenges in Minas Gerais due to the diverse regional realities.

Studying the behavior of federal transfers, considering the federal government's fiscal hegemony and its important redistributive role in the system, is fundamental for signaling alternatives that promote a more balanced organization of the SUS and reduce regional inequalities.

The analysis of the federal financing of the MAC funding block reveals that, in the period 2006-2014, 75,803 bank transfers were made from the National Health Fund to the State and Municipal Funds, of which 372 (0.5%) were canceled due to a non-existent bank address (96%), an incompatible beneficiary (2.2%), cash not withdrawn within 7 days due to a missing list (1.6%), or cancellation by the manager after transfer to the bank (0.2%). In gross amounts updated to December 2015, the transfers effected in the period totaled USD$ 9,456,912,095.50. Of this amount, USD$ 1,274,498,952.82 (13.5%) was deducted at source by the FNS as a result of payroll loans, Credit Assignment Terms, University Hospitals/Ebserh, PROSUS, CONASS, CONASEMS, among others. Thus, the net amount transferred for medium- and high-complexity costs in the state was USD$ 8,182,413,142.52 (86.5%).

Graphic 1 shows the behavior of the federal transfers to the MAC block under the SUS/MG over the nine-year period, by resource management and component category.

There is a tendency toward increasing resources for medium- and high-complexity hospital and outpatient funding, except in 2013, which was down from the previous year in real terms, although nominal values indicate an increase of USD$ 47,657,837.90 compared to 2012. As stated by Ugá et al. 9, while constant (that is, deflated) values indicate increased federal health expenditure, the GDP-related fraction shows a trend towards stabilization or reduction of the federal government's contribution. This decrease in federal participation in health financing has been significant throughout the SUS consolidation process, falling from 72% of public health spending in the 1980s to just over 45% in 2010, which is worrying given the strong dependence of the subnational spheres in the context of decentralization.

From the viewpoint of funds decentralization, the institutional framework of the SUS defines two management modalities for municipalities: full management of basic care, in which it is incumbent upon the state to assume the management of medium- and high-complexity outpatient and hospital providers; and full system management, which gives the municipal manager autonomy to manage actions related to the promotion, protection and recovery of health in its own territory, with funds transferred directly from the National Health Fund to the Municipal Health Fund 9.
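A sketch of how the municipal-management share reported in the next paragraph could be derived from the consolidated table of the earlier sketch follows; the 'fund_type' column is a hypothetical addition indicating whether a transfer went to a Municipal Health Fund or to the FES/MG.

```python
import pandas as pd

# Consolidated transfers as in the earlier sketch, plus a hypothetical
# 'fund_type' column with values 'municipal' or 'state'.
transfers = pd.read_csv("transfers_consolidated.csv")  # year, fund_type, value_dec2015

by_fund = transfers.pivot_table(index="year", columns="fund_type",
                                values="value_dec2015", aggfunc="sum")
municipal_pct = by_fund["municipal"] / by_fund.sum(axis=1) * 100
print(municipal_pct)  # the paper reports ~70% in 2006 rising to 75.1% in 2014
```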
Regarding this aspect, we can observe that 71.8% of the amount transferred in the period was decentralized directly to municipalities, with a consequent increase in municipal autonomy, from 70% in 2006 to 75.1% in 2014. It is noteworthy that, in 2006, only 59 municipalities managed their service providers, reaching 84 in December 2014. Currently, 122 municipalities have full system management, which means that the state is still responsible for managing the providers of 86% of the municipalities of Minas Gerais, accumulating the responsibilities of coordinator of the system in its territory, leader of the regionalization process, co-partner in financing and executor of funds transferred by the Federal Government.

Regarding the analysis by component, we observed that the MAC Limit concentrated the largest volume of resources in all years, comprising 78.97% of the total transferred in the period. The level of funds allocated to the FAEC remained stable, varying from 13% to 19% of the annual total. The funds allocated to the financing of care networks suggest the implementation of measures by the federal government to adapt the financing of medium and high complexity ambulatory and hospital services to the precepts of Ordinance Nº 4.279/10 and Decree Nº 7.508/11. In 2010, the year of publication of the Ordinance, resources identified by the FNS as specific to thematic care networks amounted to approximately 1.4% of the yearly total, a level very similar to the four years prior to the norm. In 2011, this percentage remained stable (1.3%), which may indicate a period of preparation of new allocation criteria aligned with RAS guidelines; in 2012, this proportion rose to 7.5%, more than doubling in 2013 (16.5%) and reaching almost 20% of the total resources of the block in 2014, indicating that these tools were developed as the model matured within national health policy.

These findings confirm efforts to overcome the payment-per-procedure model established in the 1980s and in force to this day, known to induce a fragmented and inefficient production of care. The options found point to the adoption of two major types of allocation criteria: financial incentives, which feature the search for improved quality of care provided to the user, with funds transfers linked to goals and with preset payments; and general services budgeting, characterized by periodic transfers of an annual amount set programmatically which, although formally calculated based on the production expected for the period, thus giving greater predictability of expenditure to the manager and of revenue to the service provider, is not earmarked to the effective production of the expected services 10.

These results reinforce the vision of Santos and Luiz 11, who argue that the Ministry of Health has used federal transfer criteria to induce policies, among them the structuring of care networks. They also clarify that the amounts transferred have proved insufficient for the implementation of the RAS in all the states, which, in addition to compromising the national policy, has overwhelmed states and municipalities, mainly from 2014, against a backdrop of budget constraints, whether through the lack of adjustment of costing amounts or the lack of transfers for services already provided for in action plans. Placing the Brazilian reality in dialogue with international findings, Cashin et al.
12 highlight that allocation tools and methods of transfer to providers, especially those for medium and high complexity hospital and outpatient services, have a major impact on the volume and quality of the services offered. Hence, a growing number of transfer criteria and tools that seek to align payment incentives with healthcare system goals has been observed. The authors highlight that these initiatives, dating back to experiments adopted by private enterprise in the United States at the beginning of the 1990s, are being developed in a wide variety of countries, mentioning not just Brazil but also the United Kingdom, Germany, China, India and even low-income countries like Rwanda.

Table 2 breaks down the resources by component and Expanded Health Region, showing the amounts decentralized to the municipalities covered in the years 2006, 2010 and 2014. Funds transferred to the FES/MG appear in specific rows. The Central Expanded Region concentrates most of the resources, regardless of component. In the SUS/MG, it was decided that the Psychosocial Care Centers (CAPS), the Dental Specialties Centers (CEO), the Emergency Care Units (UPA) and the Regional Dental Prosthesis Laboratories (LRPD), financed by general budgeting, would have their management decentralized to the municipality, even when the latter does not have full management of the municipal system. Thus, in the state, the care networks component by itself already shows a more decentralized character, which explains the fact that Jequitinhonha shows the greatest variation in decentralized funds in the period following Ordinance Nº 4.279/10 (1017%), although it had no municipality under full system management at the time. In order to better understand the specific allocation tools and criteria of the MAC component, we analyzed the ministerial ordinances that allocate funds to priority care networks in the SUS/MG. Table 3 shows the characteristics of these networks, such as the number of ordinances that incorporate resources, the types of incentives established for each network, the total amounts transferred and the number of beneficiary Expanded Health Regions in the years 2006, 2010 and 2014. Funds allocated to the Cancer Control Network were not included in the detailing, since the three ordinances that incorporate funds do so under the payment-per-procedure logic and are programmed in the PPI/MG not as an incentive but as increases in hospital and outpatient production ceilings.

Two aspects draw our attention from the management viewpoint. First, the large number of funds incorporation ordinances (262), which indicates that the financing of networks and their expansion across the territory took place gradually. This is confirmed by the analysis of the number of Expanded Regions covered per year.
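To illustrate how such a tally can be produced from a transfer ledger, the sketch below groups ordinance records by network and year. This is a hypothetical illustration rather than the authors' actual pipeline: the input file and the column names (network, year, ordinance, amount_usd) are assumptions.

```python
import pandas as pd

# Hypothetical ledger of FNS incorporation ordinances (columns are illustrative only).
ledger = pd.read_csv("fns_mac_ordinances.csv")  # network, year, ordinance, amount_usd

summary = (
    ledger.groupby(["network", "year"])
          .agg(n_ordinances=("ordinance", "nunique"),  # ordinances per network and year
               total_usd=("amount_usd", "sum"))        # total amount incorporated
          .reset_index()
)
print(summary)
```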
The second aspect relates to the multiplicity of incentives established for the different networks. Since these incentives follow different financing logics and their transfer is often linked to performance measured by a network-specific list of indicators, they demand the formalization of several contractual tools, making the relationship between managers and providers more complex. The variety of incentives also points to another important issue: federal transfers of funds with preset allocation tend to compromise the autonomy of subnational entities, since they do not allow implementation according to locoregional needs. Thus, although the financing tools established since the advent of Ordinance Nº 4.279/2010 have moved beyond purely population-based transfer criteria, the way the process has been conducted may reduce the principle of decentralization to a mere deconcentration of resources.

Again on this aspect, we must consider that, since federal funds are crucial sources of funding for the SUS, their volume should be high and their allocation balanced. This could take the form of a general redistribution proposal guided by priority-setting criteria consistent with the intended model of care and implemented through automatic transfers not earmarked to established programs 13, which could reduce the clash between revenue collection, autonomy and cooperation.

Regarding the volume of funds, 61.5% was allocated to the Emergency Care Network (RUE), much more than to the Mental Health Network, the second largest beneficiary of decentralized funds (12.2%). The Care Network for People with Disabilities received the least, with only 2% of the total funds decentralized in the nine years under analysis.

In order to verify whether the criteria adopted by the national network deployment policy have contributed to reducing regional inequalities, a case study of the Emergency Care Network (RUE) was developed in which, based on the analysis of ministerial ordinances and CIB-SUS/MG deliberations, the final beneficiaries of the funds transferred to the FES/MG within the scope of this network were identified for the years 2006, 2010 and 2014.

Figure 1 illustrates the development of RUE-related transfers by Expanded Health Region, considering the final destination of the resource, regardless of management, with per capita values in Brazilian Reais (R$) highlighted. Of the total funds transferred to the network, only 0.5% had no identified destination, either because the funds were still macroallocated in the PPI/MG or because it was not possible to identify which ministerial ordinance the transfer referred to. The maps show the process of expansion and consolidation of the RUE in the territory of Minas Gerais. In 2006, when the care network policy proposal had not yet been structured, only six Expanded Regions received incentives for emergency care, specifically for the costing of SAMUs. In 2010, this incentive policy for the structuring of the SAMU already covered nine regions. With the enactment of Ordinance Nº 2.395/11, which sets out the RUE's guidelines, the types of incentives began to diversify from 2012, so that by 2014 all of the Expanded Regions were receiving some kind of incentive.

An upward trend in the volume of funds transferred for the implementation of the RUE in the state is evident. While, in 2006, USD$ 6,582,804.51 were transferred, of which 61.7% were destined for the Central region, in 2010 the amount transferred rose to USD$ 9,654,195.80, now with a more deconcentrated distribution across the territory: 38% allocated to the Central region, 35% to the North, which showed the highest per capita amount (USD$ 2.73), and the remainder ranging from 7% to 2% across the other seven Expanded Regions covered, with the lowest per capita value observed in the South region (USD$ 0.11). In 2014,
the total amount transferred was USD$ 112,146,785.71, more than eleven times greater than in the year of enactment of Ordinance Nº 4.279/10, with all per capita values showing an increase compared to 2010, reflecting federal government efforts to operationalize the guidelines proposed in the regulations. Also in this year, we note that, although all the Expanded Regions received incentives from the RUE, resources were again concentrated in the Central region (64%), with the North at 10% and the remaining 26% distributed among the other 11 Expanded Regions.

Among the final considerations of the case study of Minas Gerais, we believe that Ordinance Nº 4.279/10 and Decree Nº 7.508/11 have made possible both a significant input of resources and innovative funding tools, which has contributed to wider implementation of the care network model in the various regions of the state. In light of the financing-decentralization-regionalization triad, we conclude that municipal managers are gradually assuming a set of new responsibilities, whether in the contractualization of services, the agreement of indicators, the execution of resources or the mediation of conflicts among the various stakeholders involved in the RAS consolidation process. However, despite this increased autonomy, the federal government's impositions regarding access to the financial resources that increase the financing of medium and high complexity hospital and outpatient care are still evident. The growing volume of transfers made through multiple incentives predefined by the Ministry of Health is a concern to the extent that it makes the role of the subnational spheres more complex and limits their allocation possibilities according to locoregional specificities. Debates on resource allocation tools should be expanded, bringing to the surface not only quantitative transfer criteria but also questions pertinent to the dilemmas of SUS dynamics, such as autonomy versus liabilities versus collection capacity versus operational capacity.

In this context, it is clear that the expected results for each Expanded Health Region will only be achieved through the strengthening and maturation of these interfederative relationships, so that efforts converge to reduce inequalities and effectively guarantee constitutional rights.

Collaborations. LMC Moreira worked on the conception and design of the study, data analysis and interpretation, writing of the paper and approval of the version to be published. F Ferré worked on the conception and design of the study, interpretation of data analysis, critical review and approval of the version to be published. EIG Andrade worked on the conception and design of the study, data analysis and interpretation, critical review and approval of the version to be published.
Figure 1. Development of RUE-related transfers by Expanded Health Region - Minas Gerais - 2006, 2010 and 2014. Source: own elaboration based on data provided by the FNS, ministerial ordinances and CIB-SUS/MG deliberations.

Table 1. Characterization of the Expanded Health Regions by population, territorial extension, socioeconomic and health conditions, type of healthcare providers and resolubility - Minas Gerais.

Table 2. Amounts transferred by Expanded Health Region and component per year, in millions of US$ - Minas Gerais - 2006, 2010 and 2014. * Total amounts include Cancer Control Network amounts.

Table 3. Federal funds transferred, ministerial ordinances and types of incentives, by network - Minas Gerais - 2006, 2010 and 2014.
2017-09-23T05:15:04.414Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "e45fa98842805431b20fe7ad2ac86c1191a15aad", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/csc/v22n4/en_1413-8123-csc-22-04-1245.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e45fa98842805431b20fe7ad2ac86c1191a15aad", "s2fieldsofstudy": [ "Economics", "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
210158123
pes2o/s2orc
v3-fos-license
Regulation of Gdf5 expression in joint remodelling, repair and osteoarthritis

Growth and Differentiation Factor 5 (GDF5) is a key risk locus for osteoarthritis (OA). However, little is known regarding regulation of Gdf5 expression following joint tissue damage. Here, we employed Gdf5-LacZ reporter mouse lines to assess the spatiotemporal activity of Gdf5 regulatory sequences in experimental OA following destabilisation of the medial meniscus (DMM) and after acute cartilage injury and repair. Gdf5 expression was upregulated in articular cartilage post-DMM, and was increased in human OA cartilage as determined by immunohistochemistry and microarray analysis. Gdf5 expression was also upregulated during cartilage repair in mice and was switched on in injured synovium in prospective areas of cartilage formation, where it inversely correlated with expression of the transcriptional co-factor Yes-associated protein (Yap). Indeed, overexpression of Yap suppressed Gdf5 expression in chondroprogenitors in vitro. Gdf5 expression in both mouse injury models required regulatory sequence downstream of Gdf5 coding exons. Our findings suggest that Gdf5 upregulation in articular cartilage and synovium is a generic response to knee injury that is dependent on downstream regulatory sequence and in progenitors is associated with chondrogenic specification. We propose a role for Gdf5 in tissue remodelling and repair after injury, which may partly underpin its association with OA risk.

Recent studies using mice harbouring BAC transgenes have revealed a conserved cis-regulatory architecture for GDF5 between humans and mice 19,22-24. Regulatory sequences that control Gdf5 expression in developing and adult joints are distributed over a hundred kilobases, including regions both upstream and downstream of its coding exons 22. While Gdf5 expression in the developing knee is driven by both upstream and downstream regulatory sequences, in adulthood only downstream regulatory regions are used 19,22, suggesting that the genomic sequences regulating continued expression of Gdf5/GDF5 in the adult knee during homeostasis may be distinct. Of note, these downstream regions harbour a number of genetic risk variants for knee OA 3. In this study, we used BAC Gdf5-LacZ reporter mice 19,22 to map Gdf5 expression during adult knee joint tissue remodelling associated with OA development or acute cartilage injury and repair, and to determine whether a differential regulation of Gdf5 expression is associated with such events.

Methods. Mice. All methods were carried out in accordance with relevant guidelines and regulations. All animal experimental protocols were approved by the UK Home Office and the Animal Welfare and Ethical Review Committee of the University of Aberdeen. Two Gdf5 BAC transgenic mouse lines were used 19,22,23. They both harbour a BAC transgene containing mouse Gdf5 with an IRES-LacZ cassette in the 3'UTR. Gdf5UP-LacZ mice contain a modified BAC extending 110 kb upstream to 30 kb downstream of Gdf5 coding exons, which includes a conserved regulatory region adjacent to the promoter upstream of the Gdf5 coding exons. Gdf5DOWN-LacZ mice contain a modified BAC extending a further 109 kb downstream, which includes additional regulatory regions downstream of the Gdf5 coding exons. Both lines were maintained as heterozygotes on an FVB background. Gdf5-CreER mice 13 were provided by Dr.
Elazar Zelzer (Weizmann Institute of Science, Israel) and crossed with Cre-inducible tdTomato (tdTom) reporter mice (Jackson Laboratory; B6.Cg-Gt(ROSA)26Sor tm14(CAG-tdTomato)Hze /J) 25. Mice were group-housed in conventional cages on a 12:12 light-dark cycle, in a temperature-controlled room with water and food ad libitum and environmental enrichment provided. Tamoxifen (Sigma) dissolved in corn oil was administered by gavage at 6 weeks of age (180 mg/kg daily for 5 days), or to the pregnant dam at E11.5 (120 mg/kg), E13.5 (160 mg/kg) and E15.5 (160 mg/kg), and embryos were collected following euthanasia of the pregnant dam at E19.0.

Surgical procedures. Male mice, 11-12 weeks old, underwent surgical unilateral destabilisation of the medial meniscus (DMM) on the left knee 26, while the right knee served as internal control, and mice were euthanised 2 or 8 weeks later. Female mice, 9-11 weeks old, underwent surgery to induce unilateral joint surface injury by medial parapatellar arthrotomy as previously described 14, and were euthanised 6-7 days or 4 weeks later. For all surgeries, isoflurane inhalation anaesthesia was used, and mice received a subcutaneous injection of 0.1 mg/kg Vetergesic (containing 0.3 mg/ml Buprenorphine) on the day of surgery and the following day. Mice were kept group-housed.

X-gal staining. Whole-mount staining with X-gal to detect β-galactosidase (β-gal) activity was performed as described 27, with modifications. Limbs were fixed in 4% PFA for 2 h at 4 °C, washed 3x in wash buffer (0.1 M phosphate buffer supplemented with 2 mM MgCl2, 0.01% sodium deoxycholate and 0.02% Igepal), stained with 0.75 mg/ml X-gal in staining solution (wash buffer supplemented with 4 mM potassium ferrocyanide, 4 mM potassium ferricyanide and 20 mM Tris buffer, pH 7.4) for 6 days at room temperature, then washed 3x in PBS. Limbs from wild-type mice were stained as controls.

Human tissue collection. All human cartilage samples were obtained after informed consent and in accordance with the relevant guidelines and regulations, with approval from the NHS Grampian Biorepository Tissue Bank Committee. OA samples were obtained from five patients (47 to 79 years old, all female) undergoing knee arthroplasty. Normal samples were obtained from five joints (two knee joints, 1st metatarsophalangeal joint, ankle joint, talo-calcaneal joint) donated by three patients (40 to 59 years old, two males, one female) undergoing excision or amputation surgery for tumours unrelated to the joint sampled.

Histology and immunohistochemistry. Samples were fixed in 4% PFA at 4 °C and decalcified in 10% EDTA in PBS. Samples were embedded and sectioned as described 14. Sections were stained with Nuclear Fast Red (Vector Laboratories, UK) to stain nuclei, or with safranin-O (Sigma) to stain glycosaminoglycans in the cartilage matrix red, with fast green (Sigma) counterstain, following standard protocols. TRAP staining to detect osteoclasts was carried out using a TRAP staining kit (Sigma). Immunohistochemistry was performed as described 28,29 using antibodies listed in Supplementary Table 1. Collagen type II was detected following enzyme-based antigen retrieval with 1.5 mg/ml porcine pepsin (Sigma) for 45 min at 37 °C. Yap and GDF5 were detected following antigen retrieval for 4 hours at 80 °C in antigen unmasking citrate buffer solution (pH 6, Vector Laboratories, UK).
Stained sections were imaged using a Zeiss Axioscan Z1 slide scanner (Carl Zeiss Ltd, UK), a Zeiss Axioskop 40 (Zeiss) with Progress XT Core 5 colour digital camera and ProgRes CapturePro 2.9.0.1 software (JenOptik, Germany), or a 710 META Laser-Scanning Confocal Microscope with ZEN software (Zeiss), and analysed using ZEN2 (blue edition, Carl Zeiss Ltd). Cartilage damage of the tibial plateau was assessed using the Osteoarthritis Research Society International (OARSI) scoring system 30.

Quantification of X-gal staining. Colour deconvolution was applied to images of X-gal-stained sections to remove the Nuclear Fast Red counterstaining, using ImageJ with the Fiji package and the Colour Deconvolution plugin (Dr. Gabriel Landini, University of Birmingham, UK), based on published methods 31. All images were acquired with the same magnification, resolution and light settings. The number, size and staining intensity of X-gal-stained chondrocytes in the tibial cartilage were then determined by creating a binary image using thresholding and watershedding, and analysing particles by redirecting measurements to matching greyscale images. Four sections per sample were analysed. Total X-gal staining was calculated by multiplying the number and staining intensity of X-gal-stained chondrocytes.

Primary cell isolation and in vitro chondrogenesis. Cells were isolated from Gdf5 BAC mouse knees as described 14. Chondrogenesis was induced in high-cell-density pellet culture (2.5-3 × 10⁵ cells) with 10 ng/ml TGFβ1 (Gibco) or 300 ng/ml BMP-2 (Prospec) for 21 days, as described 14. Pellets were fixed in 4% PFA for 15 min, X-gal-stained for 4 h and post-fixed for 15 min, cryoprocessed, sectioned and stained with Toluidine Blue or Nuclear Fast Red.

Overexpression and knockdown experiments. C3H10T1/2 cells (American Type Culture Collection, USA) were retrovirally transduced to express wildtype or constitutively active YAP1, as described 32. Cells were seeded in monolayer (15,000/cm²), transduced the next day, and RNA extracted 2 days later. Alternatively, transduced cells were seeded in high-cell-density micromass culture (4 × 10⁵ cells) in chemically-defined serum-free medium (high-glucose DMEM with glutamine, supplemented with 50 μg/ml ascorbic acid, 1 mg/ml recombinant human insulin, 0.55 mg/ml transferrin, 0.5 μg/ml sodium selenite, 50 mg/ml BSA and 470 µg/ml linoleic acid) 32, and the next day RNA was extracted. For knockdown experiments, cells were seeded at 42,000/cm² and transfected the next day with DsiRNA (Supplementary Table 2) (Integrated DNA Technologies, USA) using Mirus TransIT-X2 reagent (Mirus Bio LLC, USA). The following day, cells were seeded in micromass culture (2.5-3 × 10⁵ cells) and cultured under chondrogenic conditions by treatment with 300 ng/ml BMP-2, as described 32. After 4 days, RNA was extracted for analysis of gene expression.

Gene expression analysis. Total RNA was extracted using TRIzol reagent (Invitrogen, Paisley, UK) according to standard protocols, and RNA was quantified using a NanoDrop ND-1000 spectrophotometer (Labtech, Uckfield, UK). cDNA was synthesised using random hexamer primers and SuperScript Reverse Transcriptase (Invitrogen), according to the manufacturer's instructions. Quantitative PCR (qPCR) was performed with a Roche LightCycler 480 using SYBR Green Master (Roche), according to standard protocols. Expression of genes of interest was normalised to expression of Hprt1. Primer sequences are listed in Supplementary Table 3.
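As a concrete illustration of the normalisation step just described, a relative expression value can be derived from raw Ct values. The snippet below is a minimal sketch assuming the common 2^(-ΔCt) formulation; the text states normalisation to Hprt1 but does not spell out the exact formula.

```python
# Minimal sketch of qPCR normalisation to the Hprt1 reference gene,
# assuming the common 2^(-dCt) method (the exact formula is not stated in the text).
def relative_expression(ct_target: float, ct_hprt1: float) -> float:
    delta_ct = ct_target - ct_hprt1   # cycles by which the target lags the reference
    return 2 ** (-delta_ct)

# Example: a target amplifying 3 cycles later than Hprt1 gives 2^-3 = 0.125.
print(relative_expression(ct_target=25.0, ct_hprt1=22.0))  # 0.125
```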
Statistical analysis. Microarray data were analysed using Bioconductor (the affy package for pre-processing and normalization, and limma for statistical comparison of expression levels, using a false discovery rate of 5%). Principal component analysis was performed using the prcomp function in R. All other data were analysed using GraphPad Prism v5 and SigmaPlot v13. A p-value ≤ 0.05 was considered statistically significant. For comparison of two groups, a two-tailed t-test was used. For comparison of ≥3 groups, one-way or two-way ANOVA with Holm-Sidak post-test was used. Data following a lognormal distribution were log-transformed for statistical testing. N-numbers and data points on graphs represent individual mice, patients, or in vitro experiments, with horizontal lines indicating the mean.

Results. Gdf5 expression in OA. To investigate Gdf5 expression in experimentally induced OA, we used two Gdf5-LacZ reporter mouse lines 22. Gdf5UP-LacZ mice contain a BAC extending 110 kb upstream to 30 kb downstream of Gdf5 coding exons, which includes a conserved regulatory region adjacent to the promoter upstream of the Gdf5 coding exons. Gdf5DOWN-LacZ mice contain a BAC extending a further 109 kb downstream, which includes additional regulatory regions downstream of the Gdf5 coding exons that are not present in the Gdf5UP-LacZ BAC. Both BACs were modified to contain an IRES-LacZ cassette in the 3′UTR of the Gdf5 gene; thus, LacZ expression is indicative of the activity of the Gdf5 regulatory regions contained within the BAC 22. While both mouse lines express LacZ in the knee during development 19,22, only Gdf5DOWN-LacZ mice express LacZ in the knee in adulthood (Supplementary Fig. 1) 19. The Gdf5DOWN-LacZ BAC is also able to rescue the knee phenotype in bp mice, indicating that it contains the regulatory regions necessary for adequate expression in the knee 19. Here, we found that the LacZ expression pattern in Gdf5DOWN-LacZ adult knees resembled the tdTom labelling pattern in knees from adult mice with a knock-in of CreER at the endogenous Gdf5 locus 13 crossed with Cre-inducible tdTom reporter mice 25 shortly after tamoxifen induction (Supplementary Fig. 2A,B). TdTom labelling was sparse, likely due to inefficient Cre recombination, as observed in embryos (Supplementary Fig. 2C-E) 13. Nonetheless, these data support LacZ expression in knees from adult Gdf5DOWN-LacZ mice as reflecting transcriptional activity of endogenous Gdf5. We analysed LacZ expression in the knees of Gdf5-LacZ mice after DMM (Fig. 1A,B). In Gdf5DOWN-LacZ mice, increased LacZ expression was observed in medial compartment articular cartilage at 2 weeks, particularly in areas with early signs of damage, as shown by loss of Safranin O staining, which marks proteoglycans in the cartilage extracellular matrix (Fig. 1A). Quantification showed an increase in both the number of LacZ-expressing chondrocytes and the average X-gal staining intensity per chondrocyte (Fig. 1C), resulting in significantly higher overall LacZ expression in the medial tibial plateau cartilage of DMM knees. At 8 weeks after DMM, LacZ expression persisted in articular cartilage of Gdf5DOWN-LacZ mice but was less pronounced and undetectable in areas of severe damage (Fig. 1A). In Gdf5UP-LacZ mice, no LacZ expression was detectable in the cartilage at either time-point (Fig. 1A). These data indicate that Gdf5 downstream regulatory elements are activated in articular chondrocytes in the early phase of OA.
For clinical relevance, we analysed data from published microarrays of human cartilage from knees of normal donors and OA patients 33. GDF5 expression was upregulated in the cartilage of OA patients (Fig. 3A), alongside increased expression of cartilage degrading proteins known to be upregulated in OA (MMP13, ADAMTS5) (Fig. 3B). GDF5 expression correlated with expression of SOX11 and WNT9A (Fig. 3B,C), known upstream regulators of Gdf5 expression during development 34-36, indicating that these factors may also modulate GDF5 expression in human articular cartilage during OA. Immunohistochemistry for GDF5 on articular cartilage samples from a distinct cohort of OA patients and controls confirmed GDF5 was upregulated in OA cartilage (Fig. 3D and Supplementary Fig. 3).

Gdf5 expression following joint surface injury. To investigate Gdf5 expression during cartilage repair, we analysed LacZ expression in the Gdf5-LacZ transgenic mice 4 weeks after joint surface injury. In Gdf5DOWN-LacZ mice, chondrocytes in the repair tissue strongly expressed LacZ. We also detected prominent LacZ expression in chondrocytes in the native cartilage immediately adjacent to the repair tissue (Fig. 4A). In contrast, no staining was observed in repaired cartilage in Gdf5UP-LacZ mice (Fig. 4A). In support of these findings, while undetectable in monolayer culture, LacZ expression was detected in MSCs isolated from the knees of Gdf5DOWN-LacZ mice following chondrogenic differentiation in pellet culture, but not in chondrogenic pellets of Gdf5UP-LacZ MSCs (Fig. 4B). These data indicate upregulation of Gdf5 expression, mediated by downstream regulatory regions, during articular cartilage repair. Since LacZ was switched on in Gdf5DOWN-LacZ MSCs during chondrogenesis, we next analysed the synovium, which contains stem/progenitor cells that can undergo chondrogenic differentiation following injury and are postulated to repair injured cartilage 14,28,37. LacZ was not detectable in synovium during homeostasis in either model (Supplementary Fig. 1). One week after joint surface injury, the synovium was hyperplastic, as expected 28,38. In the synovium on the lateral side of the knee, not incised during surgery, LacZ remained undetectable in both mouse lines at both time-points (Fig. 5A and data not shown), indicating that Gdf5 expression is not switched on in synovium in response to cartilage injury. However, in synovium on the medial side, which was incised during surgery, small clusters of LacZ-expressing cells with a fibroblast-like morphology were detected in Gdf5DOWN-LacZ mice (Fig. 5B), and such cells persisted at 4 weeks after injury (Fig. 5C). They were predominantly localized near surgical sutures, where fibroblast-like cells that stained strongly for β-gal were observed around small clusters of LacZ-expressing chondrocytes embedded in a matrix containing collagen type II (Fig. 5D). Thus, as in DMM mice, Gdf5 expression is upregulated in synovium in areas of prospective cartilage formation, suggesting a role for Gdf5 in chondrogenic specification and differentiation.

Yap suppresses Gdf5 expression in chondroprogenitors. We previously reported that Yap is upregulated in synovium after joint surface injury and is required for the local expansion of Gdf5-lineage MSCs and their recruitment to the cartilage defect 14, whereas Yap prevents chondrogenic differentiation 32.
Here, we compared expression of LacZ and Yap in Gdf5DOWN-LacZ mouse knees after joint surface injury and observed areas in synovium where Yap and LacZ showed an inverse expression pattern, with cells that expressed LacZ showing diminished Yap compared to surrounding cells (Fig. 5E). We hypothesized that high Yap activity during cell proliferation inhibits chondrogenic differentiation, as reported 32, by actively suppressing chondrogenic factors including Gdf5. Hence, we determined the effect of overexpression of Yap on Gdf5 expression in high-cell-density cultures using murine C3H10T1/2 MSCs. After one day of high-cell-density micromass culture, Gdf5 expression was upregulated approximately 20-fold when compared to cells in monolayer (Fig. 6A), as previously reported with human synovial MSCs 39. Strikingly, overexpression of YAP1 prevented the upregulation of Gdf5 in micromass (Fig. 6A). In contrast, YAP1 overexpression failed to prevent the upregulation of Wnt9a, known to be upstream of Gdf5 35, even when cells were transduced to express constitutively active YAP1 S127A (Fig. 6B). Conversely, knockdown of Yap in C3H10T1/2 MSCs in micromass increased Gdf5 expression, an effect that was synergistically enhanced by concomitant knockdown of the paralog of Yap, Transcriptional Co-Activator with PDZ-binding motif (Taz) (Fig. 6C-E). Wnt9a expression was not similarly modulated by Yap and Taz knockdown (Fig. 6F). Altogether, these data identify Yap as a negative regulator of Gdf5 expression in chondrogenic MSCs, and indicate that Yap acts downstream of Wnt9a, possibly by directly modulating the activity of one or more transcription factors acting on Gdf5 cis-regulatory elements.

Discussion. Allelic variants at the GDF5 locus have been linked to OA risk, suggesting GDF5 plays important roles in joint maintenance throughout life. Expression of Gdf5 in adult articular cartilage has been reported in mice 40 and humans 5,41, with upregulation in OA 41. Little was known regarding Gdf5 expression in response to acute joint surface defects, which can progress to OA in the absence of repair 42, or during the different stages of OA. Here, we show Gdf5 expression in remodelling joint tissues, using two BAC LacZ reporter mouse strains harbouring distinct yet partially overlapping regions of the Gdf5 locus 19,22. After joint surface injury, Gdf5 was highly expressed in chondrocytes both inside the newly formed cartilage repair tissue and in the adjacent stressed cartilage. Similarly, Gdf5 was upregulated in cartilage during early-stage OA, particularly in areas of initial damage, and was detected in forming chondrophytes. Given the known chondrogenic activity of Gdf5 12,43, our findings implicate a role for Gdf5 in new cartilage formation following injurious events in adulthood, possibly representing an attempt to repair joint damage. During late-stage OA, areas of advanced cartilage damage displayed markedly reduced LacZ staining, in line with previous studies reporting decreased Gdf5 expression in extensively damaged cartilage in mice with inflammatory or degenerative arthritis 34,40. These data support a role for Gdf5 in the maintenance and repair of articular cartilage in adult life, and provide a rationale for the administration of exogenous Gdf5 to aid cartilage repair in OA treatment 44.
We show that Gdf5 expression after injury and during OA is dependent on DNA sequence more than 30 kb downstream from the Gdf5 coding region. This downstream sequence contains joint-specific regulatory elements 22, and is both capable of, and necessary for, rescuing the bp knee phenotype in mice 19,22-24. Importantly, it harbours many common risk variants for OA, of which several reside in known enhancers. Our findings indicate that such downstream variants may confer OA risk partly through modulating Gdf5 expression in the adult knee in response to injurious events, thereby impacting joint maintenance and reparative processes. They further indicate that the effect of a human variant such as the rs143383 SNP in the 5′UTR 2,4-6 is likely to be dependent on cis-acting variants present in downstream cis-regulatory elements that are critical to drive adequate expression of Gdf5. Whether the downstream regulatory elements involved in repair are different from those involved in OA development remains to be determined. The identification of molecules that regulate Gdf5 expression will provide critical insights into joint formation, maintenance and disease. We have unveiled a regulatory mechanism, to our knowledge hitherto unreported, that links Yap activity to Gdf5 expression. Undetectable in quiescent synovium, Gdf5 was switched on in activated chondroprogenitors in synovium following injury, concomitant with Yap downregulation. In chondrogenic MSCs, Yap suppressed expression of Gdf5 but not Wnt9a, known to induce Gdf5 expression 35,36. Our data indicate that Yap negatively regulates Gdf5 expression, possibly downstream of Wnt9a, and we propose that Yap needs to be downregulated to enable Gdf5 expression to prime progenitors towards chondrogenesis. Indeed, Yap prevents MSC chondrogenic differentiation in vitro 32. Candidate transcription factors that could partner with Yap to regulate Gdf5 include Sox11, reported to directly regulate Gdf5 expression 34 and found here to correlate with GDF5 expression in human OA cartilage, and ZEB1, since ZEB1 binding sites are present in the enhancer upstream of the Gdf5 promoter region 22 and a direct interaction between ZEB1 and Yap has been reported 45. In conclusion, Gdf5 is upregulated in stressed cartilage, switched on in chondroprogenitors and expressed in newly forming cartilage during tissue remodelling following knee injury. This is dependent on the activity of downstream regulatory sequence and occurs irrespective of whether the injury is acute or the result of chronic joint instability, indicating that Gdf5 modulation is not linked to a specific injurious event. An understanding of the regulation of Gdf5 in the context of remodelling, repair and OA pathogenesis will have important implications for joint surface regenerative therapies and OA treatment.

Data availability. The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
2020-01-13T14:57:08.057Z
2020-01-13T00:00:00.000
{ "year": 2020, "sha1": "41d3959f0e6fa06b33f27286625f1cca85652a6e", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-57011-8.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "41d3959f0e6fa06b33f27286625f1cca85652a6e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
211075949
pes2o/s2orc
v3-fos-license
Sequential Monitoring of Changes in Housing Prices

We propose a sequential monitoring scheme to find structural breaks in real estate markets. The changes in the real estate prices are modeled by a combination of linear and autoregressive terms. The monitoring scheme is based on a detector and a suitably chosen boundary function. If the detector crosses the boundary function, a structural break is detected. We provide the asymptotics for the procedure under the stability null hypothesis and the stopping time under the change point alternative. Monte Carlo simulation is used to show the size and the power of our method under several conditions. We study the real estate markets in Boston, Los Angeles and at the national U.S. level. We find structural breaks in the markets, and we segment the data into stationary segments. It is observed that the autoregressive parameter is increasing but stays below 1.

Introduction. Housing has been the most substantial investment or cost for a large portion of households, so modeling changes in housing prices has received a considerable amount of attention in the literature. Following Shiller (1989, 2003), Piazzesi and Schneider (2009) and Zheng et al. (2016), we write the change in the log of housing prices as a linear combination of macroeconomic fundamentals, and we also include a first-order autoregressive term of the change in the log housing prices. One of the fundamental questions is whether the model stayed stable during the observation period or is segmented into several periods, including stationary and nonstationary epochs. On the other hand, while the 1980s boom can be explained by a general economic expansion, the source of the housing price increase in the 2000s is different (cf. Himmelberg et al., 2005). It has been explained by the "amplification mechanism" of positive expectations of future housing price appreciation. Home buyers started to see real estate as an investment instrument. We refer to Case and Shiller (2003) and Shiller (2008) for more detailed reviews of the U.S. real estate market peaks. Our data example provides a sequential monitoring framework to see how this "amplification mechanism" evolved in the 2000s. In this paper we develop and study a sequential monitoring scheme to detect changes in the parameters of a model which contains linear as well as autoregressive terms. The assumptions on the regressors and the errors are mild, and they are satisfied by nearly all linear as well as nonlinear time series processes. Roughly speaking, they are well approximated by finitely dependent sequences. Under the null hypothesis the model describing the price changes is stable, i.e. it is a stationary process. Following Chu et al. (1996), the proposed monitoring is based on a detector and a boundary function. When the detector reaches the boundary function, a change is detected. The detector is based on the sum of residuals, but only the training sample is used to estimate some unknown parameters. The boundary function is chosen such that the probability of a false detection under the null hypothesis of parameter stability is fixed. We also provide results for the consistency of the monitoring under various types of changes in the original model. In the sequential setup, consistency means that we stop in finite time with probability one if a change occurred. We also provide several results on the distribution of the stopping time under the alternative. The limits can be normal or non-normal, depending on the type and the size of the change.
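To fix notation before the formal development, the model and the two hypotheses sketched above can be written out explicitly. The display below is a reconstruction consistent with the (partially garbled) formulas (2.3) and (2.4) quoted in the next section.

```latex
% Reconstructed setup; cf. (2.3)-(2.4) in the next section.
\[
  y_t = \mathbf{x}_t^{\top}\boldsymbol{\beta}_0 + \epsilon_t, \qquad 1 \le t \le M,
  \qquad \mathbf{x}_t = (1,\, x_{t,2}, \dots, x_{t,d-1},\, y_{t-1})^{\top}.
\]
Under $H_0$ the same model holds for all $s \ge 1$,
\[
  y_{M+s} = \mathbf{x}_{M+s}^{\top}\boldsymbol{\beta}_0 + \epsilon_{M+s},
\]
while under the alternative the parameter changes at an unknown time $M + s^*$,
\[
  y_{M+s} = \mathbf{x}_{M+s}^{\top}\boldsymbol{\delta}_M + \epsilon_{M+s},
  \qquad s > s^*, \qquad \boldsymbol{\delta}_M \ne \boldsymbol{\beta}_0 .
\]
```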
We focus on the autoregressive parameter; after the change we can have a new stationary regime, a random walk, or an explosive autoregressive process. The paper is organized as follows: in Section 2 we formulate our model and the detection scheme. We also detail the conditions which are needed in the paper and obtain the limit distribution of the monitoring under the null hypothesis. Section 3 contains the distributions of the stopping time introduced in Section 2 under three types of alternatives. Detailed proofs are given in Appendices A and B. We study the empirical size and power of the sequential scheme in Section 4. Section 5 provides an illustration of our method using data on three U.S. real estate markets. The conclusion of our research is in Section 6.

Mathematical model to sequentially detect changes in real estate prices. In our model we assume that a training (historical) sample of size $M$ is available,
$$y_t = \mathbf{x}_t^{\top}\boldsymbol{\beta}_0 + \epsilon_t, \qquad 1 \le t \le M,$$
where $\boldsymbol{\beta}_0 \in \mathbb{R}^d$ and $\mathbf{x}_t = (x_{t,1}, x_{t,2}, \dots, x_{t,d-1}, x_{t,d})^{\top} \in \mathbb{R}^d$ with $x_{t,1} = 1$ and $x_{t,d} = y_{t-1}$. Under the null hypothesis,
$$y_{M+s} = \mathbf{x}_{M+s}^{\top}\boldsymbol{\beta}_0 + \epsilon_{M+s}, \quad 1 \le s < \infty, \ \text{and } \{\epsilon_t,\, 1 \le t < \infty\} \text{ is a stationary sequence.} \tag{2.3}$$
This means that the structure of the observations $y_t$ is the same during the training sample and the observations collected after the training sample obey the same model. Under the alternative the structure of the observations changes at an unknown time $M + s^*$:
$$y_{M+s} = \mathbf{x}_{M+s}^{\top}\boldsymbol{\beta}_0 + \epsilon_{M+s}, \ 1 \le s \le s^*, \qquad y_{M+s} = \mathbf{x}_{M+s}^{\top}\boldsymbol{\delta}_M + \epsilon_{M+s}, \ s^* + 1 \le s < \infty, \quad \text{with } \boldsymbol{\beta}_0 \ne \boldsymbol{\delta}_M. \tag{2.4}$$

The first monitoring scheme to find changes in the regression parameter was introduced by Chu et al. (1996), and it has become the starting point of substantial research. Zeileis et al. (2005) and Aue et al. (2014) studied monitoring schemes in linear models with dependent errors. Kirch (2007, 2008) and Hušková and Kirch (2012) provided resampling methods to find critical values for sequential monitoring. Hlávka et al. (2012) investigated the sequential detection of changes of the parameter in autoregressive models, i.e. no regression terms are included in their theory. Homm and Breitung (2012) compared several methods to find bubbles in stock markets, detecting a change in an autoregressive process to an explosive one. Horváth et al. (2019+) showed that sequential methods will detect changes when the observations change from stationarity to mild non-stationarity.

The least squares estimator for $\boldsymbol{\beta}_0$ is given by
$$\hat{\boldsymbol{\beta}}_M = \Big(\sum_{t=1}^{M}\mathbf{x}_t\mathbf{x}_t^{\top}\Big)^{-1}\sum_{t=1}^{M}\mathbf{x}_t y_t,$$
and the stopping time is
$$\tau_M = \inf\{s \ge 1 : \Gamma(M, s) \ge g(M, s)\}, \qquad \tau_M = \infty \ \text{if no such } s \text{ exists}.$$
If $\tau_M < \infty$, we stop at time $\tau_M$ and we say that the null hypothesis is rejected. We choose the detector $\Gamma(M, s)$ and the boundary $g(M, s)$ such that
$$\lim_{M\to\infty} P\{\tau_M < \infty\} = \alpha \ \text{under the null hypothesis}, \tag{2.5}$$
where $0 < \alpha < 1$ is a prescribed number, and $\lim_{M\to\infty} P\{\tau_M < \infty\} = 1$ under the alternative. According to (2.5), the probability of stopping the procedure and rejecting $H_0$, when $H_0$ holds, is $\alpha$. We stop in finite time under the alternative. The definition of the detector follows Chu et al. (1996) and Horváth et al. (2004). The residuals of the model are defined as
$$\hat{\epsilon}_t = y_t - \mathbf{x}_t^{\top}\hat{\boldsymbol{\beta}}_M,$$
i.e. in the definition of the residuals we also use $\hat{\boldsymbol{\beta}}_M$ even after the training period. The $\epsilon_t$'s are stationary in the training sample under the null as well as under the alternative. Our detector is
$$\Gamma(M, s) = \Big|\sum_{t=M+1}^{M+s}\hat{\epsilon}_t\Big|.$$
We use the boundary function
$$g(M, s) = c\, M^{1/2}\Big(1 + \frac{s}{M}\Big)\Big(\frac{s}{M+s}\Big)^{\gamma}, \tag{2.8}$$
where $c = c(\gamma, \alpha)$ is chosen such that (2.5) holds under the null hypothesis and $0 \le \gamma < 1/2$. We discuss the choice of $\gamma$ in Section 4. Following Brown et al. (1975), Horváth et al. (2004) also used recursive residuals to define the detector in the case of linear regression ($\beta_{0,d} = 0$ under the null and the alternative).
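To make the scheme concrete, the sketch below implements it as reconstructed above: OLS on the training sample, the absolute cumulative sum of post-training residuals as the detector, and the boundary (2.8) with a critical value c(γ, α) obtained by simulating sup over 0 < t ≤ 1 of |W(t)|/t^γ for a Wiener process W, the functional suggested by the references to Chu et al. (1996) and Horváth et al. (2004). Because the displays in the source are garbled, the exact forms here, including the standardisation of the detector by a training-sample scale estimate, are assumptions.

```python
import numpy as np

def critical_value(gamma: float, alpha: float, n_paths: int = 20000,
                   grid: int = 2000, seed: int = 0) -> float:
    """(1 - alpha)-quantile of sup_{0<t<=1} |W(t)| / t^gamma, by simulation.
    Assumed functional for c(gamma, alpha); the discretisation near t = 0 matters
    for gamma close to 1/2, so grid should be reasonably fine."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, grid + 1) / grid
    sups = np.empty(n_paths)
    for i in range(n_paths):
        w = np.cumsum(rng.standard_normal(grid)) / np.sqrt(grid)  # Wiener path on (0,1]
        sups[i] = np.max(np.abs(w) / t ** gamma)
    return float(np.quantile(sups, 1.0 - alpha))

def monitor(y, X, M, gamma=0.45, alpha=0.05):
    """Sequential monitoring: returns the detection time s (stopping time tau_M),
    or None if the detector never crosses the boundary.
    y: (n,) responses; X: (n, d) regressors incl. intercept and lagged y; M: training size."""
    beta_hat, *_ = np.linalg.lstsq(X[:M], y[:M], rcond=None)  # OLS on the training sample
    resid_train = y[:M] - X[:M] @ beta_hat
    sigma_hat = resid_train.std(ddof=X.shape[1])              # scale estimate (assumed)
    c = critical_value(gamma, alpha)
    cusum = 0.0
    for s in range(1, len(y) - M + 1):
        cusum += y[M + s - 1] - X[M + s - 1] @ beta_hat       # residual with training beta
        bound = c * np.sqrt(M) * (1 + s / M) * (s / (M + s)) ** gamma
        if abs(cusum) / sigma_hat >= bound:                   # detector crosses boundary
            return s
    return None
```

For instance, monitor(y, X, M=100) returns the first s at which the detector crosses the boundary, i.e. the stopping time τ_M, or None if no crossing occurs during the monitoring period.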
Homm and Breitung (2012) applied fluctuation detectors when they wanted to test whether a random walk changes to an explosive autoregression. They did not allow regression terms. Next we discuss some conditions which will be needed to find $c = c(\gamma, \alpha)$ for our boundary function such that (2.5) holds. The Euclidean norm of vectors and matrices is denoted by $\|\cdot\|$.

Assumption 2.1 requires that the regressors admit a Bernoulli shift representation $\mathbf{z}_t = h(\eta_t, \eta_{t-1}, \dots)$, where $h$ is a nonrandom functional defined on $S^{\infty}$ with values in $\mathbb{R}^{d-1}$ and $S$ is a measurable space. Also, $\eta_t = \eta_t(s, \omega)$ is jointly measurable in $(s, \omega)$, $-\infty < t < \infty$, and the $\eta_t$, $-\infty < t < \infty$, are independent and identically distributed random variables in $S$. The sequences $\mathbf{z}_t$, $-\infty < t < \infty$, can be approximated with $m$-dependent sequences $\mathbf{z}_{t,m}$ in the sense that, with some $\kappa_1 > 4$, $\kappa_2 > 2$ and $c > 0$,
$$\big(E\|\mathbf{z}_t - \mathbf{z}_{t,m}\|^{\kappa_1}\big)^{1/\kappa_1} \le c\, m^{-\kappa_2},$$
where $\mathbf{z}_{t,m} = h(\eta_t, \dots, \eta_{t-m+1}, \eta^*_{t,m,t-m}, \dots)$ and the $\eta^*_{t,m,n}$'s are independent copies of $\eta_0$, independent of $\{\eta_t, -\infty < t < \infty\}$. Assumption 2.1 appeared first in Ibragimov (1959, 1962) in the proof of the central limit theorem for dependent variables. Billingsley (1968) also utilized m-decomposability. Nearly all time series, including linear and several nonlinear processes, satisfy Assumption 2.1 (cf. Hörmann and Kokoszka, 2010, and Aue et al., 2014). We note that $\gamma = 1/2$ is not allowed in Theorem 2.1, since in this case the limit distribution would be infinite. Horváth et al. (2007) studied the "square-root boundary" case, i.e. when $\gamma = 1/2$, and obtained a Darling-Erdős type extreme value result for the limit distribution of the stopping time under the no-change null hypothesis in linear regression. Chu et al. (1996) obtained an upper bound for the probability of false stopping under the null hypothesis (cf. Homm and Breitung, 2012).

Asymptotic distribution of the stopping time under the alternative. In this section we investigate the properties of the sequential detection rule when the regression is not stable. Our procedure is tailored for early changes, i.e. small $s^*$, so we assume in this section that the changes occur early. We concentrate on the autoregressive parameter $\beta_{0,d}$. We consider the cases where (i) the observations stay stationary after the change, (ii) they change to a "unit root" sequence, and (iii) they follow an explosive autoregression after the change.

First we assume that the regression parameter at time $M + s^*$ changes from $\boldsymbol{\beta}_0$ to $\boldsymbol{\delta} = \boldsymbol{\delta}_M = (\delta_{M,1}, \delta_{M,2}, \dots, \delta_{M,d})^{\top}$ satisfying Assumption 3.1, so that, for any fixed $M$, the sequence changes from one stationary segment to another stationary one. We allow that $\delta_i = \beta_{0,i}$, i.e. the difference between the regression parameters can be small. We measure the size of the change with $\Delta_M$. Under the alternative, $y_t$ converges in distribution to $y_A$. Assumption 3.2 says that the size of the change cannot be too small. An analogue of Assumption 3.2 first appeared in retrospective change point detection in Picard (1985) and Dümbgen (1991), where the time of change in the mean was estimated. Next we show that the upper bound for $\tau_M$ in Theorem 3.1 is the best possible, in that we obtain the asymptotic normality of $\tau_M$, where $N$ denotes a standard normal random variable.

Next we consider the case when $y_t$ changes to a random walk at time $M + s^*$, i.e. $\delta_{M,d} = 1$, and the other parameters in the regression might also change. To describe the size of the change we introduce $\bar{a}_1$, where $\bar{\boldsymbol{\beta}}_0 = (\beta_{0,1}, \beta_{0,2}, \dots, \beta_{0,d-1})^{\top}$. Remark 3.1. If $\bar{\boldsymbol{\delta}} = \bar{\boldsymbol{\beta}}_0$, i.e. only the autoregressive parameter changes, then $\bar{a}_1 = 0$, and in this case the corresponding limit holds for all $x > 0$. In Theorem 3.3 and Remark 3.1 the change to a random walk in the autoregressive part dominates the limit distribution.
Hence $y_t$ is a partial sum after $M + s^*$ and the limit is determined by sums of partial sum processes. In the next result the changes in the regression parameters are larger than in Theorem 3.3 and, while $y_t$ is still a random walk after the change, we have the same limit as in Theorem 3.2: under its conditions we have, for all $x$, convergence to a standard normal random variable $N$.

Next we consider the case when the sequence $y_t$ turns explosive after the change at time $M + s^*$. Now we replace Assumption 3.3 with Assumption 3.5: $\delta_{M,d} = \bar{\delta}_d$ and $|\bar{\delta}_d| > 1$. We define $F(x) = P\{Z_{M+s^*} \le x\}$. It follows from Assumption 2.1 that the infinite series defining $Z_{M+s^*}$ is finite with probability 1. If the assumptions of the theorem hold, then the corresponding limit holds for all $x$. Assumption 3.5 is often used to find "bubbles" in financial data. Phillips and Yu (2011) and Phillips et al. (2014, 2015a,b) estimated the autoregressive parameter in an AR(1) sequence; if the estimate is significantly larger than 1, a "bubble" is detected. For a survey on "bubble" detection we refer to Homm and Breitung (2012).

Monte Carlo simulations. In this section we investigate the performance of our limit theorems in the case of a finite training sample of size $M$. Preliminary results showed that the boundary $g(M, s)$ of (2.8) over-rejects when $H_0$ holds. The false positive rates were improved when the boundary function $\hat{g}(M, s)$ of (4.1) was used, where $c = c(\gamma, \alpha)$. The values of $c(\gamma, \alpha)$ are defined from the equation
$$P\Big\{\sup_{0 < t \le 1} |W(t)|/t^{\gamma} \le c(\gamma, \alpha)\Big\} = 1 - \alpha, \tag{4.2}$$
where $W$ is a Wiener process. The critical values of (4.2) were reported in Horváth et al. (2004) and for convenience we provide them in Table 4.1.

Under the null hypothesis we considered the following data generating processes. DGP(i): the $\eta_{t,k}$'s are independent, identically distributed standard normal random variables, and $\epsilon_t$ forms a GARCH(1,1) process in which the $h_{t,\epsilon}$'s are independent standard normal random variables, independent of the $\eta_{t,k}$'s. DGP(ii): the explanatory variables $(x_{t,2}, \dots, x_{t,5})$ form dependent AR(1) sequences. DGP(iii): now, in addition to (4.6), the explanatory sequences are also given by GARCH(1,1) processes whose innovations $\{h_{t,k}, -\infty < t < \infty, 2 \le k \le 5\}$ are standard normal random variables, independent of $\{h_{t,\epsilon}, -\infty < t < \infty\}$ of DGP(i). We used $(\omega_2, \dots, \omega_5) = (.3, .5, .4, .6)$, $(\phi_2, \dots, \phi_5) = (.5, .3, .2, .6)$ and $(\psi_2, \dots, \psi_5) = (.2, .3, .6, .2)$. DGP(iv): the explanatory variables satisfy (4.7), but now $h_{t,2} = h_{t,3} = h_{t,4} = h_{t,5}$, which are independent and identically distributed standard normal random variables. In our Monte Carlo simulations the variables $\{(x_{t,2}, \dots, x_{t,5}), -\infty < t < \infty\}$ and $\{\epsilon_t, -\infty < t < \infty\}$ are independent. In the case of DGP(i) and (iii), the coordinates of $(x_{t,2}, \dots, x_{t,5})$ are independent, while they are strongly dependent under DGP(ii) and (iv).

Next we consider the behaviour of the monitoring scheme under the alternatives discussed in Theorems 3.1-3.5. We recall that under $H_A$ the observations follow (2.4). The explanatory variables $(x_{t,2}, x_{t,3}, x_{t,4}, x_{t,5})$ are generated as in DGP(ii), i.e. dependent AR(1) sequences. The variables $\epsilon_t$ are independent standard normals or GARCH(1,1) sequences. As before, we used the boundary function $\hat{g}(M, s)$ of (4.1). The significance levels were $\alpha = .10, .05, .01$ and $s^* = 1, 10$. We considered the following data generating processes. DGP(v): the $\epsilon_t$'s are independent standard normal random variables. DGP(vi): the data generating process is as in DGP(v), but now $\epsilon_t$ is given by a GARCH(1,1) sequence in which $\{h_{t,\epsilon}, -\infty < t < \infty\}$ are independent standard normal random variables, independent of $\{(x_{t,2}, \dots, x_{t,5}), -\infty < t < \infty\}$. DGP(vii):
In this case $\bar{\boldsymbol{\beta}}_0 = (.02, .20, .25, .15, -.20)^{\top} = \bar{\boldsymbol{\delta}}_M$, but $\beta_{0,6} = .25$ changes to $\delta_{M,6} = .9, .95, .99$ and 1. As in DGP(v), the $\epsilon_t$'s are independent standard normals, independent of $\{(x_{t,2}, x_{t,3}, x_{t,4}, x_{t,5}), -\infty < t < \infty\}$. DGP(viii): we have the same parameters as in DGP(vii), but now $\epsilon_t$ is a GARCH(1,1) sequence satisfying (4.9).

The empirical power is somewhat lower for $\gamma = .49$, which is very close to the boundary case. The rate of convergence to the limit slows with the increase of $\gamma$, which is a possible explanation for the unexpected slight drop in power. The results also show that our method is tailored to detect early changes, i.e. when $s^*$ is small. As expected, the power in Tables 4.3-4.6 increases as $\delta_{M,6}$ gets closer to 1. Allowing $\bar{\boldsymbol{\delta}}_M$ to differ from $\bar{\boldsymbol{\beta}}_0$ increased the power substantially for $\delta_{M,6} = .9$ and .95, but only mildly for $\delta_{M,6} = .99$ and 1; in this case the change to a partial sum dominates the power. Based on our simulation study, we recommend $\gamma = .45$ to achieve fast and reliable detection. This recommendation is also supported by the empirical densities of the stopping times, which can be approximated with normal densities as $M \to \infty$. The empirical densities have longer right tails than a normal density, but they are clearly approaching a normal density. By Theorem 3.3, the limits of the empirical densities in Figure 4.6 are not normal densities (cf. Remark 3.1). The limit distribution in Theorem 3.5 is not necessarily normal; however, if the $\{w_t, \epsilon_t, -\infty < t < \infty\}$ are jointly normal, then the variable $Z_{M+s^*}$ of (3.7) is normally distributed. In Figure 4.8 the exhibited density is not derived from a normal distribution: due to (4.9), the errors $\epsilon_t$ are only conditionally normal. Comparing Figures 4.5-4.8, one sees that the limit distributions become less spread out as $\delta_{M,6}$ increases, i.e. we need fewer and fewer observations to detect the change.

[Table excerpt: for the largest changes considered, including $\delta_{M,6} = 1.25$, the empirical power was 100.00 for every $\gamma \in \{0, .25, .45, .49\}$ in all settings.]

Data example. We used the S&P CoreLogic Case-Shiller Home Price Index series, which is the leading measure of U.S. residential real estate prices and tracks changes in the value of residential real estate, as the proxy for housing prices. We studied housing prices in the U.S. at the national level and in two metropolitan areas: Los Angeles and Boston. The S&P CoreLogic Case-Shiller Home Price Index series for these three markets are exhibited in Figure 5.1, which depicts an upward housing price trend in the U.S. at the national level, as well as in Los Angeles and Boston, between January 1994 and December 2000. The set of macroeconomic fundamental variables included in our model is as follows. $x_{t,2}$: the lagged change in disposable personal income per capita; we used national-level data as a proxy for Los Angeles and Boston, since only yearly data on personal income per capita for states and Metropolitan Statistical Areas are available from the U.S. Bureau of Economic Analysis.
$x_{t,3}$: the lagged change in the 30-year fixed-rate mortgage average in the U.S., transformed from weekly to monthly frequency. $x_{t,4}$: the lagged change in total non-farm employment at the national level and at the level of the corresponding Metropolitan Statistical Areas, as originally released by the U.S. Bureau of Labor Statistics. $x_{t,5}$: the lagged change in housing starts at the national level and for the U.S. Census Bureau Regions (the West Region series was used for Los Angeles and the Northeast Region series for Boston). We use the lagged terms of these variables to mitigate the endogeneity problem arising from the interactive effect between housing prices and these macroeconomic fundamentals (Case and Shiller, 2003).

Figure 5.4 shows the boundary function and the detectors. We note that in Figure 5.4 the monitoring starts at the same point, but this corresponds to a different physical time for each of the three markets. It is clear from Table 5.2 that the autoregressive parameter changes whenever a structural break occurs, and that it increases with time. However, with the exception of the national market, the autoregressive parameter stays far away from 1; the estimates are .93 and .84 for the national market and for Los Angeles, respectively. During the second monitoring phase, structural breaks were detected almost two years before prices peaked in 2006 during the 2000s real estate boom. Our monitoring process finds increasing autoregressive parameters in the three markets, and hence it confirms the "amplification mechanism" advocated by Case and Shiller (2003). The "amplification mechanism" is strongest in Los Angeles, which was undergoing faster price changes than Boston. Since the autoregressive parameters are below 1 in both the first and the second phase of our monitoring, it is unlikely that "bubbles" formed in the sense of Linton (2019). It is also useful to note that the estimated R-square increases with the autoregressive parameter, so the autoregressive part explains more and more of the changes in the housing prices. The momentum effect, caused by the herding behavior of transactions, tends to disengage the changes in the log housing price index from the macro fundamentals.

Conclusion. In this paper we consider a model which includes linear and autoregressive terms to model changes in real estate prices. The observations and errors are weakly dependent, including the most often used linear and nonlinear time series sequences. We propose a sequential method to detect possible changes in the parameters of the model. The monitoring scheme is based on a detector and a suitably chosen boundary function. The limit distribution of the sequential monitoring scheme is established under the null hypothesis of stability of the model. We determine the asymptotic distribution of the stopping time when a structural break is present, focusing on possible changes in the autoregressive parameter. Using Monte Carlo simulations, we illustrate that our results can be applied in the case of finite sample sizes. We suggest a boundary function which provides the right size of the monitoring even in the case of small and moderate historical (training) samples. We also study the power of the procedure and the time to detect the structural break. A data example is also given: we sequentially search for possible structural breaks in the real estate markets of Boston, Los Angeles and at the U.S. national level. We find structural breaks in the data, and find stationary segments.
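To sketch how such a data example might be assembled, the snippet below builds the lagged regressors from monthly series and feeds them to the monitor function from the earlier sketch. The input file and column names are hypothetical, and the transformations (log-differences and one-month lags) simply follow the verbal description above.

```python
import numpy as np
import pandas as pd

# Hypothetical monthly panel; column names are illustrative, not the authors' data files.
df = pd.read_csv("la_market.csv", parse_dates=["date"], index_col="date")
# columns: hpi (Case-Shiller index), income, mortgage_rate, employment, housing_starts

y = np.log(df["hpi"]).diff()                                   # change in log housing prices
X = pd.DataFrame({
    "const": 1.0,
    "d_income": df["income"].pct_change().shift(1),            # lagged fundamentals
    "d_mortgage": df["mortgage_rate"].diff().shift(1),
    "d_employment": df["employment"].pct_change().shift(1),
    "d_starts": df["housing_starts"].pct_change().shift(1),
    "lag_dlog_hpi": y.shift(1),                                # autoregressive term
}, index=df.index)

data = pd.concat([y.rename("y"), X], axis=1).dropna()
s = monitor(data["y"].to_numpy(), data.drop(columns="y").to_numpy(),
            M=84, gamma=0.45, alpha=0.05)   # e.g. a 7-year training window
print("detection time (months after training):", s)
```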
The autoregressive parameter of the segments is increasing, but it stays below 1. Hence the "amplification mechanism" of Case and Shiller (2003) is confirmed by the data analysis, but no bubbles in the sense of Linton (2019) were found.

Proof. Elementary arguments give that ..., where $w_t$ is defined in (3.1) and $\beta_0 = (\beta_{0,1}, \beta_{0,2}, \ldots, \beta_{0,d-1})^\top$. By the stationarity of $w_t$ and $\epsilon_t$ we have ... Using now Assumption 2.1, the Bernoulli representation for $y_t$ is established. According to the definition of $y_{t,m}$ we have that ..., and therefore ... Assumption 2.1 and (A.2) imply that ..., where $N_1$ is a $d$-dimensional normal random vector with $EN_1 = 0$ and $EN_1N_1^\top = D$. Hence the proof of the first part of Lemma A.2 is now complete. It follows immediately from the independence of $\{x_{t,\ell}, -\infty < t < \infty, 2 \le \ell \le d-1\}$ and $\{\epsilon_t, -\infty < t < \infty\}$ and Assumption 2.2 that $Ex_{0,1}x_{t,1}\epsilon_0\epsilon_t = E\epsilon_0\epsilon_t = \sigma^2$ if $t = 0$, and $0$ if $t \neq 0$. Similarly, ... By definition, $x_{t,d} = y_{t-1}$. Using the representation in (A.5), we get that ..., where $C_1$ is a constant. We write that ... Hence for any $v > 0$ we have that ..., since $[u/(M+u)]^{\gamma}$ is bounded on $(0, \infty)$. Using now (A.6) and (A.8), we conclude that ... We note that $a$ is the first row (column) of $A$, so $a^\top A^{-1} = (1, 0, \ldots, 0)$, and $x_{t,1} = 1$ for all $t$ by definition. Hence ..., since we can assume without loss of generality that $0 < \delta < 1/2 - \gamma$. By the scale transformation of the Wiener process we have that ..., where $W_1$ and $W_2$ are independent Wiener processes. It is shown in Chu et al. (1996) (cf. also Horváth et al., 2004) that ..., where $W$ stands for a Wiener process. We showed that ... We can assume without loss of generality that $\Delta = \Delta_M > 0$. ... $|\sigma W(s) + \Delta s|$ ..., where $\Phi(x)$ denotes the standard normal distribution function. We can assume without loss of generality that $a_1 > 0$, for all $x$.

Proof. Since, by condition (3. ...), we note that it follows from Lemma B.8 that ..., where $N$ stands for a standard normal random variable. Using now (B.21), the lemma is proven.

Proof. Since
$y_{M+s^*+u} = \delta_d^u\, y_{M+s^*} + \sum_{z=1}^{u} \delta_d^{u-z}\big(w_{M+s^*+z}^\top(\delta - \beta_0) + \epsilon_{M+s^*+z}\big),$
the result follows immediately from Assumption 3.5 and the mean stationarity of $w_z$ and $\epsilon_z$.
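To make the monitoring scheme discussed in this paper concrete, here is an illustrative sketch of an open-ended detector crossing a boundary function. The boundary family $c\sqrt{M}\,\hat\sigma\,(1 + u/M)\,(u/(M+u))^{\gamma}$ is the standard choice in this literature (cf. Chu et al., 1996; Horváth et al., 2004), but the simple CUSUM detector and the constant $c$ below are simplified placeholders, not the exact statistic of the paper.

```python
import numpy as np

def monitor(train_resid, new_resid, gamma=0.45, c=2.0):
    """Stop monitoring when the cumulative sum of post-training residuals
    crosses c * sqrt(M) * sigma_hat * (1 + u/M) * (u/(M+u))**gamma.
    Returns the first monitoring time u at which the boundary is crossed,
    or None if no break is detected."""
    M = len(train_resid)
    sigma_hat = np.std(train_resid, ddof=1)  # scale from the historical sample
    csum = 0.0
    for u, r in enumerate(new_resid, start=1):
        csum += r
        boundary = c * np.sqrt(M) * sigma_hat * (1 + u / M) * (u / (M + u)) ** gamma
        if abs(csum) > boundary:
            return u  # change signaled at observation M + u
    return None

# toy run: stable training residuals, then a mean shift during monitoring
rng = np.random.default_rng(1)
hist = rng.standard_normal(200)
new = np.concatenate([rng.standard_normal(100), 0.5 + rng.standard_normal(200)])
print(monitor(hist, new))  # typically detects some time after u = 100
```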
Role of Magnetic Resonance Imaging in Assigning Sex in an Ambiguous Genitalia Child: A case report

A three-year-old child with ambiguous genitalia since birth was referred to Muhimbili National Hospital (MNH), a tertiary referral hospital, in order to be evaluated and assigned sex correctly. Due to the peripheral location of the referring center and social and economic constraints, the child was not presented earlier. Physical examinations were done, followed by imaging studies. Magnetic resonance imaging (MRI) showed female sex, which was confirmed by karyotyping. In conclusion, the use of MRI plays a potentially key role in sex assignment of children with ambiguous genitalia.

Introduction
Ambiguous genitalia is among a group of disorders of sex development (DSD) characterized by an abnormal appearance of the external genitalia. This condition poses both a medical and a social emergency and requires timely management. Sex assignment among children with ambiguous genitalia remains a great challenge and requires a multidisciplinary approach. Previous studies and reports indicate that ambiguous genitalia is a cause of great anxiety and confusion to health care providers, family, and society at large. However, the use of MRI in assigning the sex of a child with ambiguous genitalia is limited. We report a case of a child with ambiguous genitalia whose sex was determined by the use of MRI and confirmed by karyotyping.

Case Report
We present the case of a 3-year-old child referred to MNH, a tertiary referral hospital, with a history of ambiguous genitalia since birth. The child presented with a history of absent testicles, and the urinary opening was seen on the inferior aspect of the phallus. No other abnormality was reported. Maternal drug history was unremarkable during this child's pregnancy. There was no family history of ambiguous genitalia. Male sex was assigned at birth. The pelvic exam revealed a phallic structure with the urethra at its inferior aspect. Labia majora were seen; however, there was no vaginal opening. No masses suggestive of testes were felt in the inguinal region. There was no pubic, armpit, or facial hair seen. The rest of the examination did not reveal ancillary findings. A clinical diagnosis of female pseudo-hermaphrodite with vaginal synechia was made, with a differential diagnosis of male gender with hypospadias and undescended testes. This was because of the close resemblance of the phallic structure to a penis, with the urethra being visualized on its underside. Pelvic ultrasound revealed two fluid-containing structures, of which one was the urinary bladder; the other was suggestive of a distended fluid-filled uterus. Ovaries were not visualized. Testes were not visualized. Both kidneys had a normal sonographic appearance. No suprarenal mass lesion suggestive of an adrenal mass was seen on either side of the abdomen. The remaining abdominal organs had normal sonographic findings. The ultrasound results were therefore inconclusive. Pelvic MRI revealed that the phallic structure was an enlarged clitoris (Figure 1). Clitoromegaly was demonstrated, supported by the absence of the corpora cavernosa and corpus spongiosum, which are the normal supporting penile tissues 5. A radiological diagnosis of female gender with a right unicornuate fluid-filled uterus, clitoromegaly, and vaginal synechiae was made. Karyotyping confirmed female sex. The final diagnosis was female pseudo-hermaphrodite (46XX, with two ovaries) with vaginal synechiae. The child was planned for vaginoplasty with clitoroplasty after appropriate counseling and consent from the parents.
The parents, however, defaulted.

Discussion
Disorders of sex development constitute a social and medical emergency requiring a multidisciplinary approach to management. It is of extreme importance to accurately establish the genital anatomy prior to definitive management of these disorders 1,2,4. DSDs are classified into female pseudo-hermaphroditism (46 XX, with two ovaries), male pseudo-hermaphroditism (46 XY, with two testes), true hermaphroditism (both ovaries and testes present), and gonadal dysgenesis 1. Imaging plays a crucial part in the illustration of the internal organs and urogenital anatomy in children with DSDs 6. Being cheap, non-invasive, free of radiation or sedation, and readily accessible, ultrasound is the usual first investigation of choice for the assessment of the internal sex organs. The uterus and ovaries can be identified relatively easily when they are under the influence of maternal hormones during the neonatal period 1,3,6,7. During pregnancy, ultrasound assessment of fetal sex is recommended only when medically indicated and in twin pregnancies 8. Our patient's age was outside the neonatal period range. Pelvic sonography was the least beneficial in describing the internal genitalia, as the timing was past the period of maternal hormone influence. Ultrasound showed two cystic structures; one was identified as the urinary bladder, and the other was suggestive of a distended fluid-filled uterus. Ultrasound failed to locate both ovaries in our case. There was no suprarenal mass seen suggestive of an adrenal mass bilaterally. Further and more informative imaging evaluation was necessary to determine the internal genital anatomy. While it may not be the first modality of choice in pelvic imaging, MRI is an instrumental tool due to its superior soft-tissue contrast in providing detailed internal pelvic anatomy. It lacks the sonographic limitations of body habitus, limited depth of ultrasound wave penetration, and limited ability to distinguish specific tissue types. The information provided by MRI is vital and may change the course of management 5,9. Magnetic resonance imaging identified the uterus, vagina, penis, and ovaries in 93%, 95%, 100%, and 74% of cases, respectively. MRI can differentiate between clitoral hypertrophy and a penis in a female pseudo-hermaphrodite, as the former lacks or has poorly developed penile structures 3. In our case, MRI accurately described the fluid-filled right unicornuate uterus, the ovaries, the enlarged clitoris, and the absence of a vagina.
HydroMix v1.0: a new Bayesian mixing framework for attributing uncertain hydrological sources

Abstract. Tracers have been used for over half a century in hydrology to quantify water sources with the help of mixing models. In this paper, we build on classic Bayesian methods to quantify uncertainty in mixing ratios. Such methods infer the probability density function (PDF) of the mixing ratios by formulating PDFs for the source and target concentrations and inferring the underlying mixing ratios via Monte Carlo sampling. However, collected hydrological samples are rarely abundant enough to robustly fit a PDF to the source concentrations. Our approach, called HydroMix, solves the linear mixing problem in a Bayesian inference framework wherein the likelihood is formulated for the error between observed and modeled target variables, which corresponds to the parameter inference setup commonly used in hydrological models. To address small sample sizes, every combination of source samples is mixed with every target tracer concentration. Using a series of synthetic case studies, we evaluate the performance of HydroMix using a Markov chain Monte Carlo sampler. We then use HydroMix to show that snowmelt accounts for around 61 % of groundwater recharge in a Swiss Alpine catchment (Vallon de Nant), despite snowfall only accounting for 40 %-45 % of the annual precipitation. Using this example, we then demonstrate the flexibility of this approach to account for uncertainties in source characterization due to different hydrological processes. We also address an important bias in mixing models that arises when there is a large divergence between the number of collected source samples and their flux magnitudes. HydroMix can account for this bias by using composite likelihood functions that effectively weight the relative magnitude of source fluxes. The primary application target of this framework is hydrology, but it is by no means limited to this field.

Introduction
Most water resources are a mixture of different water sources that have traveled via distinct flow paths in the landscape (e.g., streams, lakes, groundwater). A key challenge in hydrology is to infer source contributions to understand the flow paths to a given water body using a source attribution technique.
A classic example is the two-component hydrograph separation model to quantify the proportion of groundwater and rainfall in streamflow, often referred to as "pre-event" water vs. "event" water (Burns et al., 2001; Klaus and McDonnell, 2013; Schmieder et al., 2016). Other examples include estimating the proportional contribution of rainfall and snowmelt to groundwater recharge (Beria et al., 2018; Jasechko et al., 2017; Jeelani et al., 2010), fog to the amount of throughfall (Scholl et al., 2002, 2011; Uehara and Kume, 2012), and soil moisture (at varying depths) and groundwater to vegetation water use (Ehleringer and Dawson, 1992; Evaristo et al., 2017; Rothfuss and Javaux, 2017). The primary goal of such attribution in hydrology is to infer the contribution of different sources to a target water body; the tracer can be an observable compound like a dye, a conservative solute, or even a proxy for chemical composition such as electrical conductivity. The key requirement is that the concentration of the tracer is distinguishable between the different sources. The stable isotope compositions of hydrogen and oxygen in water (subsequently referred to as "stable isotopes of water") are widely used as tracers in hydrology. Other commonly used tracers include electrical conductivity (Hoeg et al., 2000; Laudon and Slaymaker, 1997; Lopes et al., 2018; Pellerin et al., 2007; Weijs et al., 2013) and conservative geochemical solutes such as chloride and silica (Rice and Hornberger, 1998; Wels et al., 1991). Classically, attribution analysis is done by assigning an average tracer concentration to each source, typically estimated from time or space averages of observed field data (Maule et al., 1994; Winograd et al., 1998), and then solving a series of linear equations. In order to express uncertainty in the attribution analysis, an uncertainty propagation approach for tracer-based hydrograph separation was first proposed in the work of Genereux (1998) and has subsequently been used in many studies (Genereux et al., 2002; Koutsouris and Lyon, 2018; Zhu et al., 2019). Bayesian mixing approaches offer a useful alternative to classic hydrograph separation, as Bayesian approaches explicitly acknowledge the temporal variability of source tracer concentrations estimated from observed samples (Barbeta and Peñuelas, 2017; Blake et al., 2018). Rather than a single estimate of source contributions, Bayesian approaches yield full probability density functions (PDFs) of the fraction of different sources in the target mixture (Parnell et al., 2010; Stock et al., 2018), hereafter referred to as "mixing ratios". Bayesian mixing was first developed in ecology to estimate the proportion of different food sources in animal diets (Parnell et al., 2010; Stock et al., 2018). Hydrological applications of such models are still rare (Blake et al., 2018; Evaristo et al., 2016, 2017; Oerter et al., 2019). In a Bayesian mixing model, a statistical distribution is fitted both to the measured source tracer concentrations and to the measured tracer concentrations from the target (e.g., river, groundwater, vegetation). The distribution of the mixing ratios is then inferred via Bayesian inference. With recent advances in probabilistic programming languages like Stan (Carpenter et al., 2017), Bayesian inference has become a relatively simple task. However, the key limitation of the above approach is that the source compositions are assumed to come from standard statistical distributions.
Typically, the sources are assumed to be drawn from Gaussian distributions, which can be fully characterized by the mean and variance of the data available for each source. This limits both the potential applicability and the insights that can be gained from tracer information in hydrology, because the sample mean and variance may not accurately reflect the statistical properties of the actual source composition, and the Gaussian approach represents an unnecessary simplification in cases in which a large amount of information on source composition is available. An additional complication in hydrology comes from the fact that observed point-scale samples do not necessarily capture the tracer concentrations in the actual sources, which are distributed heterogeneously in space and whose contribution can be temporally variable depending on the state of the catchment (Harman, 2015). For instance, if we were to characterize the contribution of snowmelt to groundwater, we would need to capture (1) the temporal evolution of the isotopic ratio of snowmelt, which strongly varies in space (Beria et al., 2018; Earman et al., 2006), and (2) the temporal evolution of the area actually covered by snow. This spatially and temporally distributed nature of the sources can be hard to account for in both analytical and Bayesian mixing approaches. To overcome the limitations of source heterogeneity and the previously discussed restriction to Gaussian distributions, we present a new mixing approach for hydrological applications called HydroMix. This approach does not require a parametric description of observed source or target tracer concentrations. Instead, HydroMix formulates the linear mixing problem in a Bayesian inference framework similar to hydrological rainfall-runoff models (Kavetski et al., 2006a), wherein the mixing ratios of the different sources are treated as model parameters. Multiple model parameters can be inferred in such a setup, allowing for the parameterization of additional hydrologic processes that can modify source tracer concentrations (shown in Sect. 3.5). A more detailed account of the advantages and limitations of this new approach is given in Sect. 5. In this paper, we first describe the theoretical details of HydroMix for a simple case study with two sources, one mixture, and one tracer (Sect. 2). Section 3 presents synthetic and real-world case studies that demonstrate the accuracy, robustness, and flexibility of HydroMix. In the synthetic case study, we use a conceptual hydrologic model to simulate tracer concentrations. We also introduce a composite likelihood function that accounts for the magnitude of the different source fluxes. The real-world case study applies HydroMix in a high-elevation headwater catchment in Switzerland. The results of these applications are presented in Sect. 4 before summarizing the main outcomes, applicability, and limitations of HydroMix in Sect. 5.

Model description and implementation
A system with $n$ sources mixing linearly in a target water body can be written as
$Y^k = \sum_{i=1}^{n} \rho_i S_i^k,$
where $Y^k$ is the concentration of the $k$th tracer in the target mixture, $S_i^k$ is the concentration of the $k$th tracer in source $i$, and $\rho_i$ ($i = 1, \ldots, n$) represents the fractions of the sources in the mixture, with $\sum_{i=1}^{n} \rho_i = 1$, corresponding to the aggregation of the different sources in the mixture. In order to solve this system of linear equations, $n - 1$ different tracers are required. Section 2.1 details the general modeling approach for a simplified system with two sources and one tracer.
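To fix ideas, with two sources and a single tracer the system above reduces to $Y = \rho S_1 + (1-\rho)S_2$, so a point estimate of $\rho$ follows in closed form. The numbers below are illustrative only, not values from the paper:

```python
# Two sources, one tracer: Y = rho*S1 + (1 - rho)*S2  =>  rho = (Y - S2) / (S1 - S2)
S1, S2 = -100.0, -60.0            # illustrative tracer concentrations of the two sources
rho_true = 0.6
Y = rho_true * S1 + (1.0 - rho_true) * S2
rho_hat = (Y - S2) / (S1 - S2)    # requires S1 != S2, i.e., distinguishable sources
print(rho_hat)                    # 0.6
```

This closed form is exactly why the key requirement mentioned earlier matters: as S1 approaches S2, the denominator vanishes and the estimate becomes unstable.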
Section 2.1 is then followed by a detailed discussion of the choice of the parameter inference approach.

Linear mixing model with non-concomitant observed data
For a system with two sources that combine linearly to form a mixture, the mixing model can be formulated as
$Y(t) = \rho\, S_1(t - \tau_1) + (1 - \rho)\, S_2(t - \tau_2),$
where $S_1(t - \tau_1)$ is the tracer concentration in source 1 at time step $t - \tau_1$, $S_2(t - \tau_2)$ is the tracer concentration in source 2 at time step $t - \tau_2$, $Y(t)$ is the concentration of the mixture (i.e., the tracer concentration in the target) at time step $t$, $\rho$ is the mixing ratio, and $\tau_i$ is the time delay between the time when source $i$ enters the system and the time when it is observed in the mixture. As an example, for a case in which the two sources are snowmelt and rainfall and the mixture is groundwater, $\rho$ represents the proportion of groundwater recharged from snowmelt and $\tau$ represents the average time lag for rain or snowmelt to reach the groundwater once they enter the soil. In other words, the time lag ($\tau$) stands for any delay caused by tracer transport from the source to the output; we assume that the source components are conservative in nature. The two parameters in this system, the mixing ratio ($\rho$) and the time delay ($\tau$), can be inferred via classical Bayesian parameter inference, which is widely used in hydrology (Kavetski et al., 2006a, b; Schaefli and Kavetski, 2017). This implies taking an observed time series of the target (e.g., the tracer concentration in groundwater) and building a vector of model residuals
$e_t = \tilde{Y}_t - Y_t,$
where $\tilde{Y}_t$ represents the observed mixture concentration and $Y_t$ represents the simulated mixture concentration. However, in real environmental systems like that of groundwater recharge from rainfall and snowmelt, there are four major difficulties that can prevent the inference of $\rho$ and $\tau$ from the observed data:
i. $\rho$ and $\tau$ strongly vary in time depending on catchment conditions such as soil moisture (as previously discussed in the context of the "inverse storage effect"; Benettin et al., 2017; Harman, 2015).
ii. Long time series of the tracer concentration in both the sources and the mixture are rare.
iii. The effect of seasonality in precipitation can make the inference of $\tau$ very difficult if the goal is to understand intra-annual recharge dynamics.
iv. The tracer concentrations in the different sources are generally measured at point scales, whereas the tracer concentration in the target integrates inputs over the entire source area.
Our practical solution to limitation (iv) is to assume that the tracer concentrations in the two sources are functions of observable point processes,
$S_i = f_i(P_i),$
where the function $f_i$ represents the transformation from the point to the catchment scale for source $i$. Limitation (iii) can be relaxed by assuming a long enough time step (e.g., long-term groundwater recharge dynamics), for which the observed samples are samples from the long-term ($\gg 1$ year) source and target compositions. This allows us to replace the time steps $t$ and $t + \tau$ with $t$ and write Eq. (2) as
$\bar{Y} = \rho\, \bar{S}_1 + (1 - \rho)\, \bar{S}_2,$
where the bar signifies the new time-integrated variables. Now, any observed point-scale tracer concentration $p_i$ in a given source $i$ or in the output (e.g., the isotopic ratio of snowmelt) can be assumed to represent a sample from a stationary process (from $\bar{S}_1$, $\bar{S}_2$, or $\bar{Y}$). This assumption is in fact implicitly underlying most of the existing hydrological mixing models, in which point samples are used to characterize a spatial process and the time reference of the samples is discarded.
By utilizing all the available measurements $\{p_1\}_{i=1,\ldots,n}$ and $\{p_2\}_{j=1,\ldots,m}$ of the two sources in the above model, with $n$ samples of source 1 and $m$ samples of source 2, we can build $n \times m$ predictions and compare them with the $q$ observed samples of the target as
$e_{i,j,k} = \tilde{Y}^k_{\mathrm{obs}} - \big[\rho\, p_{1,i} + (1 - \rho)\, p_{2,j}\big],$
where $\tilde{Y}^k_{\mathrm{obs}}$ is the $k$th observed target concentration out of a total number of $q$ target concentrations. Assuming that the residuals can be described with a Gaussian error model with a mean of zero and constant variance $\sigma^2$, we can compute the likelihood function of the residuals as the joint probability of all the residuals, where $\theta$ represents all the model parameters and $P_i$ ($i = 1, 2$) is the observed point process (see Eq. 4). The above Gaussian error model could in principle be replaced with any other stochastic process. However, the Gaussian error model has been shown to be relatively robust in this kind of application (Lyon, 2013; Schaefli and Kavetski, 2017). In the case of linear mixing between two sources, the two model parameters considered at this stage are the mixing ratio $\rho$ and the error variance $\sigma^2$. The error variance can either be computed from the observed residuals or treated as a model parameter (Kuczera and Parent, 1998; Schaefli et al., 2007). For the examples shown in this paper, the error variance is computed from the residuals. In order to avoid numerical problems, we use the log-likelihood form of Eq. (8) for parameter inference in a Bayesian framework. Following the general Bayes' equation, the posterior distribution of the model parameters can be written as
$p(\theta \mid P_1, P_2, \tilde{Y}) = \frac{p(\tilde{Y} \mid \theta, P_1, P_2)\, p(\theta)}{p(\tilde{Y} \mid P_1, P_2)},$
where $p(\theta)$ is the prior distribution of the model parameters and $p(\tilde{Y} \mid \theta, P_1, P_2)$ is the likelihood function. The denominator of Eq. (10) can generally not be computed, as that would require integration over the whole parameter space, which is computationally expensive; this is why Eq. (10) is reduced to
$p(\theta \mid P_1, P_2, \tilde{Y}) \propto p(\tilde{Y} \mid \theta, P_1, P_2)\, p(\theta).$
Two methods are traditionally used in hydrology to sample from the posterior distribution in Eq. (11): Markov chain Monte Carlo (MCMC) sampling (Hastings, 1970; Metropolis and Ulam, 1949) and importance sampling (Glynn and Iglehart, 1989; Neal, 2001). In the case of MCMC sampling, a common approach is the Metropolis algorithm (Kuczera and Parent, 1998; Schaefli et al., 2007; Vrugt et al., 2003). In importance sampling, the posterior distribution is obtained from weighted samples drawn from the so-called importance distribution. For typical multivariate hydrological problems, the only possible choices for the importance distribution are either uniform sampling over a hypercube or sampling from an over-dispersed multi-normal distribution (Kuczera and Parent, 1998). A stochastic process is defined as over-dispersed when the variance of the underlying distribution is greater than its mean (Inouye et al., 2017). The sampling distributions in such cases have large variance, allowing for sufficient sampling over the entire parameter range. We implement an MCMC sampling algorithm using a Metropolis-Hastings (Hastings, 1970) criterion to infer the posterior distribution of the mixing ratio. For the synthetic case study (Sect. 3.1), we set up 10 parallel MCMC chains to monitor convergence according to the classical Gelman-Rubin convergence criterion (Gelman and Rubin, 1992). Each chain is initiated by assigning a uniform prior distribution for the mixing ratio, which varies between 0 and 1. For the subsequent case studies, we use importance sampling for the sake of simplicity.
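The following is a compact sketch of the inference loop just described, for two sources and one tracer, with the error variance computed from the residuals themselves; variable names and the toy data are ours, not from the HydroMix code base:

```python
import numpy as np

def log_likelihood(rho, s1, s2, y_obs):
    """HydroMix-style likelihood: mix every pair of source samples and compare
    with every observed target sample (n*m*q residuals), assuming a zero-mean
    Gaussian error model whose variance is computed from the residuals."""
    mix = rho * s1[:, None] + (1 - rho) * s2[None, :]         # n x m predictions
    resid = (mix[:, :, None] - y_obs[None, None, :]).ravel()  # n*m*q residuals
    sigma2 = resid.var()
    return -0.5 * resid.size * np.log(2 * np.pi * sigma2) - 0.5 * np.sum(resid**2) / sigma2

def metropolis(s1, s2, y_obs, n_iter=5000, step=0.05, seed=0):
    """Random-walk Metropolis over the mixing ratio with a uniform prior on [0, 1];
    proposals outside the prior support are rejected."""
    rng = np.random.default_rng(seed)
    rho, ll = 0.5, log_likelihood(0.5, s1, s2, y_obs)
    chain = []
    for _ in range(n_iter):
        prop = rho + step * rng.standard_normal()
        if 0.0 < prop < 1.0:
            ll_prop = log_likelihood(prop, s1, s2, y_obs)
            if np.log(rng.random()) < ll_prop - ll:
                rho, ll = prop, ll_prop
        chain.append(rho)
    return np.array(chain)

# toy data emulating the low-variance synthetic case (true rho = 0.3)
rng = np.random.default_rng(42)
s1 = rng.normal(10, 0.5, 50)
s2 = rng.normal(20, 0.5, 50)
y = 0.3 * rng.normal(10, 0.5, 30) + 0.7 * rng.normal(20, 0.5, 30)
chain = metropolis(s1, s2, y)
print(chain[1000:].mean())   # posterior mean, close to 0.3
```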
The prior distributions of additional model parameters (if applicable) are discussed in the corresponding case study sections. Apart from the prior distributions of the model parameters, HydroMix requires the tracer concentrations of the different sources and of the mixture. The error model variance is not jointly inferred with the other model parameters but is calculated for each sampled parameter set from the residuals according to Eq. (6).

Case studies
We provide a comprehensive overview of the performance of HydroMix based on a set of synthetic case studies (Sect. 3.1 and 3.2) and a real-world application that demonstrates the practical relevance for hydrologic applications (Sect. 3.4 and 3.5). The first case study demonstrates the ability of HydroMix to converge on the correct posterior distribution for synthetically generated data. The second case study uses a synthetic dataset of rain, snow, and groundwater isotopic ratios generated with a conceptual hydrologic model and compares the results of HydroMix to the actual mixing ratios assumed to generate the dataset. It then weights the source samples and evaluates the effect of weighting on the mixing ratio (Sect. 3.3). In the last two case studies, HydroMix is applied to observed tracer data from an Alpine catchment in the Swiss Alps to infer source mixing ratios and an additional parameter (the isotopic lapse rate).

3.1 Mixing using Gaussian distributions
In this example, source concentrations $S_1$ and $S_2$ are drawn from two Gaussian distributions with different means ($\mu_1$, $\mu_2$) and standard deviations ($\sigma_1$, $\sigma_2$) and combined to form the mixture $Y$ with a constant mixing ratio $\rho$. Assuming the two distributions are independent, the resultant mixture is normally distributed with mean ($\mu_y$) and variance ($\sigma_y^2$) defined as
$\mu_y = \rho\,\mu_1 + (1 - \rho)\,\mu_2, \qquad \sigma_y^2 = \rho^2\sigma_1^2 + (1 - \rho)^2\sigma_2^2.$
A given number of samples are drawn from the distributions of $S_1$ and $S_2$ and of the mixture $Y$. The posterior distribution of the mixing ratio, $p(\rho \mid S_1, S_2, \tilde{Y})$, is then inferred using HydroMix for (i) a case in which the two source distributions are clearly identifiable and (ii) a case in which the distributions have a large overlap. Different values of the mixing ratio are tested, varying from 0.05 to 0.95 in steps of 0.05. The sensitivity of HydroMix to the number of samples drawn from $S_1$, $S_2$, and $Y$, along with the time to convergence, is assessed based on the sum of the absolute error between the estimated mixing ratio $\hat{\rho}$ and its true value $\rho$.

3.2 Mixing with a time series generated using a hydrologic model
In this case study, we build a conceptual hydrologic model wherein groundwater is assumed to be recharged directly from rainfall and snowmelt. Stable isotopes of deuterium ($\delta^2$H) are used to see how the isotopic ratio in groundwater evolves under different assumptions about rain and snow recharge efficiencies. Synthetic time series are generated for precipitation, the isotopic ratio in precipitation, and air temperature at a daily time step. For generating the precipitation time series, the time between two successive precipitation events is assumed to follow a Poisson process, with the precipitation intensity following an exponential distribution (Botter et al., 2007; Rodriguez-Iturbe et al., 1999). Time series of air temperature and of isotopic ratios in precipitation are obtained by generating an uncorrelated Gaussian process with the mean following a sine function (to emulate a seasonal signal) and with constant variance (Allen et al., 2018; Parton and Logan, 1981).
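A minimal sketch of such a synthetic forcing generator follows. Only the number of events per year (30) and the mean intensity (33.45 mm d⁻¹) are taken from the case study description; the temperature and isotope sine parameters and noise levels are placeholders:

```python
import numpy as np

def synthetic_forcing(n_days=365 * 2, events_per_year=30, mean_intensity=33.45, seed=0):
    """Daily precipitation with geometric waiting times between events (the
    discrete-time analogue of Poisson arrivals) and exponential intensities,
    plus sinusoidal air temperature and precipitation-isotope signals with
    uncorrelated Gaussian noise."""
    rng = np.random.default_rng(seed)
    precip = np.zeros(n_days)
    t = 0
    while True:
        t += rng.geometric(events_per_year / 365.0)  # waiting time to next event
        if t >= n_days:
            break
        precip[t] = rng.exponential(mean_intensity)  # event depth in mm
    day = np.arange(n_days)
    season = np.sin(2 * np.pi * day / 365.0)
    temp = 2.0 + 10.0 * season + rng.normal(0, 2.0, n_days)    # deg C (illustrative)
    iso = -90.0 + 25.0 * season + rng.normal(0, 8.0, n_days)   # delta-2H permil (illustrative)
    return precip, temp, iso

precip, temp, iso = synthetic_forcing()
```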
The separation of precipitation into rainfall ($P_r$) and snowfall ($P_s$) is based on a temperature threshold approach (Harpold et al., 2017a), whereby the fraction of rainfall $f_r(t)$ at time step $t$ is computed as a function of air temperature $T(t)$:
$f_r(t) = \min\{1, \max[0, (T(t) - T_L)/(T_H - T_L)]\},$
where $T_L$ and $T_H$ are the lower and upper threshold bounds. A double air temperature threshold approach has been shown to be more accurate than a single temperature threshold (Harder and Pomeroy, 2014; Harpold et al., 2017a, b). In this case study, $T_L$ and $T_H$ are set to −1 and +1 °C. The evolution of the snow water equivalent (SWE) of the snowpack ($h_s$) is computed as
$h_s(t) = h_s(t-1) + P_s(t) - M_s(t),$
where $M_s$ is the magnitude of snowmelt, computed using a degree-day approach as proposed by Schaefli et al. (2014):
$M_s(t) = a_s \max\{T(t) - T_m, 0\},$
where $a_s$ is the degree-day factor (set here to 2.5 mm °C⁻¹ d⁻¹) and $T_m$ is the threshold temperature at which snow starts to melt (set to 0 °C). Enhanced heat exchange processes happening during rain-on-snow events are not explicitly considered, as this lies beyond the scope of this paper. The snowpack is assumed to be fully mixed, and the isotopic ratio of the snowpack is computed as
$C_s(t) = \frac{C_s(t-1)\, h_s(t-1) + C_p(t)\, P_s(t)}{h_s(t-1) + P_s(t)},$
where $C_s$ is the isotopic ratio of the snowpack and $C_p$ is the isotopic ratio of precipitation. The amount of groundwater recharge ($R$) is the sum of the groundwater recharged from rainfall and snowmelt,
$R(t) = R_r P_r(t) + R_s M_s(t),$
where $R_r$ and $R_s$ are the rainfall and snowmelt recharge efficiencies. Recharge efficiency is defined as the fraction of rainfall or snowmelt that reaches the groundwater and is assumed to be constant. The groundwater storage is assumed to be fully mixed, and the isotopic ratio of groundwater $C_g$ evolves by mass balance of the recharge and the storage, where $G$ is the volume of groundwater and $Q$ is the amount of groundwater outflow to the stream, defined as
$Q(t) = k\,[G(t) - G_C],$
where $k$ is the recession coefficient and $G_C$ is a constant groundwater storage that does not interact with the stream (added here to avoid zero storage and thus very small outflow). This formulation follows the linear groundwater reservoir assumption used in numerous hydrological modeling frameworks (Beven, 2011). The volume of groundwater storage is computed as
$G(t) = G(t-1) + R(t) - Q(t).$
The model is run for a period of 100 years, allowing the system to reach a long-term steady state. The parameters used to generate daily precipitation, air temperature, and precipitation isotopic ratios are shown in Table 4. The number of yearly precipitation events is set to 30. The snow accumulation and degree-day snowmelt models are then used to compute the number of snowfall days and snowmelt events. The static volume of groundwater that does not interact directly with the stream, $G_C$, is set to 1000 mm. Only the last 2 years of the model runs are used to obtain the time series of isotopic ratios in rainfall, snowmelt, and groundwater. These years are then used to estimate the mixing ratio of snowmelt in groundwater, i.e., the fraction of groundwater recharged from snowmelt. Rainfall and snowmelt samples are the two sources, and groundwater samples represent the mixture. For the HydroMix application, all the modeled rainfall and snowmelt samples generated with the hydrologic model are used, whereas for groundwater, only one isotopic ratio per month is used (randomly sampled). The mixing ratios inferred using HydroMix are compared to the actual recharge ratio obtained from the hydrologic model,
$R^a_s = \frac{\sum_t R_s M_s(t)}{\sum_t R(t)},$
where $R^a_s$ represents the proportion of groundwater recharge derived from snowmelt, summed over all the time steps.
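A simplified sketch of one daily time step of this toy model is given below; the exact numerical implementation is the one in the paper's appendix, noted next, while the update order, default parameter values, and function names here are our assumptions:

```python
import numpy as np

def step_snow_and_gw(P, T, C_p, state, a_s=2.5, T_m=0.0, T_L=-1.0, T_H=1.0,
                     R_r=0.3, R_s=0.6, k=0.05, G_C=1000.0):
    """One daily step: double-threshold rain/snow split, degree-day melt,
    and fully mixed snowpack and groundwater isotope mass balances."""
    h_s, C_s, G, C_g = state
    f_r = float(np.clip((T - T_L) / (T_H - T_L), 0.0, 1.0))  # rain fraction
    P_r, P_s = f_r * P, (1.0 - f_r) * P
    M_s = min(h_s, a_s * max(T - T_m, 0.0))   # melt, capped by available SWE
    # fully mixed snowpack: fresh snow changes its isotopic ratio, melt does not
    if h_s + P_s > 0:
        C_s = (C_s * h_s + C_p * P_s) / (h_s + P_s)
    h_s = h_s + P_s - M_s
    R = R_r * P_r + R_s * M_s                 # recharge from rain and melt
    C_in = (R_r * P_r * C_p + R_s * M_s * C_s) / R if R > 0 else C_g
    Q = k * max(G - G_C, 0.0)                 # linear reservoir above static storage
    C_g = (C_g * G + C_in * R) / (G + R)      # fully mixed groundwater
    G = G + R - Q
    return (h_s, C_s, G, C_g)
```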
The numerical implementation of the evolution of the isotopic ratios in the snowpack and in groundwater is given in the Appendix.

3.3 Weighting mixing ratios in the hydrologic model
In Sect. 3.2, rainfall and snowmelt samples are not weighted by the magnitude of their fluxes when computing the mixing ratios with HydroMix. As all rainfall and snowmelt samples are used, the weights are implicitly determined by the number of rainfall and snowmelt events instead of their magnitudes. This is a general problem in all mixing approaches and has not been adequately acknowledged in the literature. Ignoring the weights may lead to biased mixing estimates if the proportional contribution of one of the components (e.g., rainfall or snowmelt) is low but the number of samples obtained to represent that component is proportionally much higher (Varin et al., 2011). For example, in a given catchment, the amount of total snowfall may be a small proportion of the annual precipitation, but the number of days when snowmelt occurs may be comparable to the total number of rainfall days in a year. If this is not specified a priori, HydroMix may overestimate the proportion of groundwater being recharged from snowmelt. To account for this, we introduce a weighting factor in the likelihood function originally formulated in Eq. (8) to obtain a new composite likelihood (Varin et al., 2011), where $i$ and $j$ correspond to snowmelt and rainfall samples, and the weights $w_i$ and $w_j$ reflect the proportions of snowmelt and rainfall contributing to groundwater recharge (Vasdekis et al., 2014); $w_i$ is expressed in terms of $R_i$, the snowmelt magnitude, and $S_i$, the isotopic ratio, of the $i$th snowmelt event. Rain weights ($w_j$) are expressed similarly to Eq. (24). The obtained mixing ratio estimates are then compared with the unweighted estimates (from Sect. 3.2) to see if weighting by magnitude makes a significant difference.
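The sketch below shows one plausible reading of this flux weighting, with weights normalized by total flux; the exact composite-likelihood form of Eqs. (23)-(24) may differ, and the function name is ours:

```python
import numpy as np

def weighted_log_likelihood(rho, s1, s2, y_obs, flux1, flux2):
    """Composite likelihood in which each (source-1, source-2) sample pair's
    residual log density is weighted by the product of normalized flux
    magnitudes, so heavily sampled but low-flux sources no longer dominate."""
    w1 = np.asarray(flux1, dtype=float)
    w2 = np.asarray(flux2, dtype=float)
    w1, w2 = w1 / w1.sum(), w2 / w2.sum()               # flux-normalized weights
    mix = rho * s1[:, None] + (1 - rho) * s2[None, :]   # n x m predictions
    resid = mix[:, :, None] - y_obs[None, None, :]      # n x m x q residuals
    sigma2 = resid.var()
    logdens = -0.5 * np.log(2 * np.pi * sigma2) - 0.5 * resid**2 / sigma2
    weights = (w1[:, None] * w2[None, :])[:, :, None]   # pair weights, broadcast over targets
    return float(np.sum(weights * logdens))
```

Plugging this function into the Metropolis sampler shown earlier (in place of the unweighted likelihood) yields the weighted mixing-ratio estimates discussed in the results.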
3.4 Real case study: snow ratio in groundwater in Vallon de Nant
The objective of this case study is to infer the proportional contributions of snow versus rainfall to the groundwater of an Alpine headwater catchment, Vallon de Nant (Switzerland), using stable water isotopes.

Catchment description
Vallon de Nant is a 13.4 km² catchment located in the Vaud Alps in the southwest of Switzerland (Fig. 1). Vallon de Nant has a typical Alpine climate, with around 1900 mm of annual precipitation and a mean air temperature of 1.8 °C (Michelon, 2017). For this paper, long-term climate statistics are computed using the MeteoSwiss gridded precipitation and air temperature dataset for 1961-2015 (Isotta et al., 2013; MeteoSwiss, 2016, 2017). Applying a simple temperature threshold (0 and 1 °C) to observed precipitation indicates that, on average, 40 %-45 % of the total precipitation falls as snow in the catchment. There is a small degree of seasonality in precipitation, with higher precipitation between June and August and lower precipitation in the months of September and October.

Data collection
Vallon de Nant has been extensively monitored since February 2016. Water samples are collected from streamflow, rain, snowpack, and groundwater at different elevations and are then analyzed for the isotopic ratios of deuterium ($\delta^2$H) and oxygen-18 ($\delta^{18}$O). Vallon de Nant is remotely located with very limited winter access, frequently experiencing winter avalanches. Due to these logistical constraints, snowmelt lysimeters or passive capillary samplers could not be set up to sample snowmelt water; accordingly, grab snowpack samples are used here as a proxy for snowmelt. A summary of the isotopic data is shown in Table 1.

Model implementation
HydroMix is used to estimate the proportion of snow recharging groundwater (subsequently referred to as the "snow recharge coefficient"). In order to obtain a PDF of the snow recharge coefficient, the isotopic ratios of all the water samples from rain, snowpack, and groundwater are used. A uniform prior distribution is assigned to the snow recharge coefficient, which varies between 0 and 1, representing the entire range of possible values.

3.5 Introduction of an additional model parameter
In any mixing analysis, it may be useful or desirable for users to specify an additional model parameter that is able to modify the tracer concentrations based on their process understanding of the system. In the case of Alpine catchments with large elevation gradients, stable isotopes in precipitation often exhibit a systematic trend with elevation, becoming more depleted in heavier isotopes with increasing elevation. This is also known as the "isotopic lapse rate" (Dansgaard, 1964; Friedman et al., 1964). In typical field campaigns, because of logistical challenges, precipitation samples are collected only at a few points in a catchment, often with fewer precipitation samples at high elevations. This leads to oversampling at lower elevations and undersampling at higher elevations, which can bias mixing estimates. This has been found to be especially relevant for hydrograph separation in forested catchments (Cayuela et al., 2019). To allow a process compensation for this, an additional lapse rate factor is introduced with which each observed point-scale sample (observed at a given elevation) is corrected to a reference elevation, where $r$ is the isotopic ratio in precipitation collected at elevation $e$, $\bar{r}$ is the catchment-averaged isotopic ratio in precipitation, $\alpha$ is the isotopic lapse rate factor, $e_j$ is the elevation of the $j$th elevation band, and $a_j$ is the catchment area within the $j$th elevation band; the catchment is divided into $k$ elevation bands. These bands are obtained by constructing a hypsometric curve of the catchment (Strahler, 1952). The lapse rate factor is allowed to modify both rainfall and snowpack isotopic ratios to obtain a catchment-averaged isotopic ratio, which is then used in the mixing model. Using this formulation of an isotopic lapse rate makes the following implicit assumptions: (1) precipitation storms on aggregate move from the lower part of the catchment to the upper part, thus creating a lapse rate effect, and (2) precipitation falls uniformly over the catchment. It is important to note that the isotopic lapse rate is different from the precipitation lapse rate; i.e., the rate of change of precipitation with elevation is different from the rate of change of the precipitation isotopic ratio with elevation.

(Figure 2 caption, recovered in part: (a) the uncertainty band represents the inferred mixing ratio plus or minus the error standard deviation obtained from Eq. (13); the number of source and target samples is 100. (b) Performance of HydroMix in terms of the absolute error between the posterior mixing ratio mean and the true mean for the low variance dataset over all tested ratios, plotted as a function of the number of samples drawn for the two sources.)
It is important to note that the precipitation isotopic ratio is not only a function of elevation but also depends on other factors such as the source of moisture origin, the cloud condensation temperature, and secondary evaporation. Similarly, strong spatial variability exists in the isotopic ratio of snowmelt water, depending on catchment aspect, snow metamorphism, and wind distribution. This case study is merely a demonstration that HydroMix allows for the inference of additional parameters that can account for various physical processes that may modify isotopic ratios. The prior distribution of the isotopic lapse rate is specified based on isotopic data collected across Switzerland under the Global Network of Isotopes in Precipitation (GNIP) program (IAEA/WMO, 2018). Using the monthly isotopic values collected between 1966 and 2014, average lapse rate values are obtained for both $\delta^2$H and $\delta^{18}$O: −1.94 ‰ per 100 m for $\delta^2$H and −0.27 ‰ per 100 m for $\delta^{18}$O (Beria et al., 2018). A uniform prior distribution is assigned to the isotopic lapse rate parameter, with the lower bound specified as 3 times the Swiss lapse rate for both $\delta^2$H and $\delta^{18}$O. The observed isotopic lapse rate data from Switzerland suggest that average lapse rates are weakly negative; however, positive lapse rates cannot a priori be excluded for the case study catchment. Accordingly, we do not specify an upper lapse rate bound of zero but set it at 3 times the Swiss lapse rate in magnitude (Table 2). In the case of Vallon de Nant, the elevation ranges from 1253 m to 3051 m a.s.l. For computing the Swiss lapse rate, the elevation range over which the monthly precipitation samples were collected was 300 m to 2000 m a.s.l. This difference in elevation ranges between Vallon de Nant and the GNIP network should be kept in mind during the interpretation of the results.
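Before turning to the results, the following sketch illustrates the elevation correction just described: a point-scale sample is shifted with the lapse rate to each hypsometric band and area-averaged. The exact form of Eq. (25) is not reproduced here, and the band elevations and areas below are illustrative:

```python
import numpy as np

def catchment_average_ratio(r_point, e_sample, alpha, band_elevs, band_areas):
    """Correct a point-scale isotopic ratio sampled at elevation e_sample to a
    catchment-averaged value, using lapse rate alpha (permil per m) and the
    area-weighted elevations of the catchment's hypsometric bands."""
    areas = np.asarray(band_areas, dtype=float)
    weights = areas / areas.sum()
    shifted = r_point + alpha * (np.asarray(band_elevs, dtype=float) - e_sample)
    return float(np.sum(weights * shifted))  # equals r + alpha*(mean band elev - e)

# illustrative: delta-2H = -80 permil sampled at 1400 m, alpha = -1.94 permil / 100 m
print(catchment_average_ratio(-80.0, 1400.0, -1.94 / 100.0,
                              [1400, 1800, 2200, 2600, 3000],   # band elevations (m)
                              [4.0, 3.5, 3.0, 2.0, 0.9]))       # band areas (km^2)
```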
Results
The results for the different case studies are discussed in the sections below.

4.1 Mixing with normal distributions
The means and standard deviations used to generate the low and high variance source distributions for the synthetic case studies are summarized in Table 3. We randomly generated 100 samples from each of the two source distributions and from the target distribution and varied the mixing ratios between 0.05 and 0.95 in 0.05 increments. It should be noted that HydroMix permits using a different number of samples for the sources and for the mixture. For the low variance case, the mixing ratio inferred with HydroMix with 1000 MCMC simulations closely reproduces the theoretical mean of the mixing ratios used to generate the synthetic data (Fig. 2a). However, for the high variance case, the inferred mixing ratios do not match the true underlying mixing ratios, especially for low and high mixing ratios. This is partly due to the poor identifiability of the sources (given that their distributions are highly overlapping) and partly due to the relatively small sample size of 100. The inferred mean should reproduce the theoretical mean with increasing sample size, and we clearly see this for the low variance case in Fig. 2b, where the model performance markedly improves with an increasing number of samples. The performance is measured here in terms of the absolute error between the posterior mixing ratio mean and the true mean, summed and averaged over all tested ratios from 0.05 to 0.95. We did not perform inferences for sample sizes larger than 100, as the computational requirement increases exponentially with increasing sample sizes. The model converges fairly quickly for the low variance case, after ∼ 100 runs, as shown in Fig. 3a. The obtained model residuals have zero mean and are approximately normally distributed, as revealed by quantile-quantile plots (not shown), in line with the assumption of an unbiased, normally distributed error model as stated in Eq. (7).

(Figure 3 caption, recovered in part: the source distributions are those of Table 3; panels (a) and (b) show variations in the inferred mixing ratio and the error mean with increasing MCMC runs.)

Table 3. Mean (standard deviation) of the two sources, $S_1$ and $S_2$, drawn from normal distributions.
                 S1          S2
Low variance     10 (0.5)    20 (0.5)
High variance    10 (5.0)    20 (5.0)

4.2 Contribution of rain and snow to groundwater recharge using a hydrologic model
Figure 4 shows the variation of the isotopic ratio of groundwater over the entire 100-year period, showing that the system achieves a steady-state condition after ∼ 15 years of simulation. The mixing ratio is estimated with HydroMix using (1) samples of the isotopic ratio in snowfall and (2) samples of the isotopic ratio in snowmelt. The two sample distributions differ, as shown in Fig. 5: the variability of the isotopic ratio is lower in snowmelt than in snowfall. In the model at hand, this reduction is obtained because of the mixing occurring within the snowpack, which leads to homogenization and thus reduces the variability of the isotopic ratio of snowmelt. In field data, such a reduction in variability is also generally observed (Beria et al., 2018), as a result of the homogenization modeled here and of more complex snow physical processes, which lie beyond the scope of this study.

(Table 4 caption, recovered in part: parameters used to generate the time series of precipitation, air temperature, and isotopic ratios in precipitation; $\mu$ represents the mean, $A$ the amplitude, and $\varphi$ the time lag of the underlying sine function. For the precipitation process, $\mu$ is the mean intensity on days with precipitation. The resulting mean winter length (air temperature below 0 °C) is 119.5 d. Recovered entry: precipitation, number of events per year 30, $\mu = 33.45$ mm d⁻¹.)

The mixing ratios inferred with HydroMix are very similar regardless of whether snowfall or snowmelt is used, across the entire range of recharge efficiencies (Fig. 6). This provides confidence in the use of snowfall samples as a proxy for snowmelt when estimating mixing ratios. However, it is clear from Fig. 6 that an important bias emerges between the estimated mixing ratio from HydroMix and the actual mixing ratio known from the hydrologic model, especially for low mixing ratios. This bias can be expected to emerge when the source contributions are not weighted according to their fluxes, which to our knowledge has not been explicitly addressed in the hydrological literature. As already discussed in Sect. 3.3, the absence of sample weighting typically induces a bias when there is a large divergence between the number of samples taken over a certain period (e.g., 1 year) to characterize a source and the magnitude of the source flux over that period (e.g., 40 snow and 10 rain samples taken to characterize the two sources, where snow only accounts for a very small portion, e.g., 10 %, of the annual precipitation).

(Figure 4 caption: evolution of the modeled isotopic ratio in groundwater over a 100-year period with $R_r = 0.3$ and $R_s = 0.6$.)

4.3 Effect of weights on estimates of mixing ratios using a hydrologic model
After taking into account the magnitudes of rainfall and snowmelt events in the composite likelihood function of Eq. (23), it is clear that many of the unweighted biases can be removed (Fig. 7). The most significant improvement is seen at very low mixing ratios, for which the divergence between the conceptual model and the mixing model estimates is reduced by almost 50 %. In this study, we have used a relatively simple normalization-based weighting function (Eq. 25). Testing other weighting functions that have been proposed in the past (Vasdekis et al., 2014) is left for future research.

4.4 Inferring the fraction of snow recharging groundwater in a small Alpine catchment, along with an additional model parameter
Using the dataset from an Alpine catchment (Vallon de Nant, Switzerland), HydroMix estimates that 60 %-62 % of the groundwater is recharged from snowmelt (using the unweighted approach), with the full posterior distributions shown in Fig. 8a. This estimate is consistent for both of the isotopic tracers ($\delta^2$H and $\delta^{18}$O), which are often used interchangeably in the hydrologic literature (Gat, 1996). Comparing this recharge estimate to the proportion of total precipitation that falls as snow (around 40 %-45 %; see Sect. 3.4.1) suggests that snowmelt is more effective at reaching the aquifer than an equivalent amount of rainfall falling at a different time of the year. Similar results have been obtained in a number of previous studies across the temperate and mountainous regions of the world (see Table 1 in the work of Beria et al., 2018, for a summary). As can be seen from Fig. 8a, the estimated distribution of the snow ratio in groundwater is very narrow. This can be explained by the fact that we assume that the collected precipitation samples represent the variability actually occurring in the catchment. To overcome this limitation, we infer an additional parameter, the isotopic lapse rate, which accounts for the spatial heterogeneity related to catchment elevation. As shown in Fig. 9, the posterior distributions of the isotopic lapse rate (for both $\delta^2$H and $\delta^{18}$O) largely overlap with the spatially averaged isotopic lapse rate estimated from precipitation isotopes across Switzerland. This overlap with the average Swiss isotopic lapse rate suggests that our inferred lapse rates are reasonable, with the spread in the estimates likely reflecting the temporal variation of the catchment-specific isotopic lapse rate, which can develop from a wide range of moderating factors (e.g., air masses contributing precipitation without traversing the full elevation range of the catchment due to varying trajectories). The Swiss lapse rate is constructed as a long-term spatial average, whereas the inferred isotopic lapse rate in Vallon de Nant is constructed from the temporal variations of the isotopic ratios. These results demonstrate that it is relatively straightforward to jointly infer multiple parameters within the HydroMix modeling framework. However, an important consequence of additional parameter inference without providing additional data or constraints is an increase in the degrees of freedom, which can then increase the uncertainty in the source contributions. This effect is seen in Fig. 8b, especially in contrast with the previous result in Fig. 8a: the median mixing ratios of the posterior distributions remain similar (∼ 0.6), but the spread increases drastically, from 0.005 to 0.2.

(Figure 7 caption: ratios of snow in groundwater estimated using HydroMix plotted against the ratios obtained from the hydrologic model, for both weighted and unweighted mixing scenarios.
The full range of ratios is obtained by varying the rainfall and snowmelt recharge efficiencies from 0.05 to 0.95. The numbers of rainfall, snowfall, and snowmelt days are 39, 24, and 107 in the last 2 years of simulation.)

Limitations and opportunities
As with all linear mixing models, the quality of the underlying data determines the accuracy and utility of the results. If the tracer compositions of the different sources are not sufficiently distinct, the uncertainty in the estimated mixing ratios will become very large. This means that if either the underlying data quality is poor or the source contribution dynamics are not well conceptualized, then the uncertainty in the mixing ratios will be too high to be useful. In cases in which a large number of source samples are available, the computational requirements of HydroMix outweigh the benefit of using it. These are likely cases in which the statistical distribution of the source tracer composition is well understood, and therefore fitting a probability density curve to the source and target samples and then inferring the distribution of the mixing ratio using a probabilistic programming approach is more appropriate (Carpenter et al., 2017; Parnell et al., 2010; Stock et al., 2018). Also, HydroMix might not be an appropriate method in instances in which fitting statistical distributions to source and target compositions reflects a priori knowledge of the system. A key difference between HydroMix and other Bayesian mixing approaches is that HydroMix parameterizes the error function, whereas other Bayesian approaches parameterize the statistical distributions of the source and mixture compositions. Parameterizing source compositions requires large sample sizes, which is seldom the case in tracer hydrology. Error parameterization offers a useful alternative and can also be verified against the posterior error distribution. In the case studies demonstrated in this paper, a normally distributed error model was found to be appropriate. However, error models other than Gaussian can be used by formulating the respective likelihood function. HydroMix builds the model residuals by comparing all the observed source samples with all the observed samples of the target mixture, assuming that all available source and target samples are independent. Interestingly, the assumption of independence holds even if the source and target samples are taken at the same time, since the target samples result from water that has traveled for a certain amount of time in the catchment and hence is not related to the water entering the catchment. However, if a system has instantaneous mixing, then the source and target samples taken at the same moment in time will necessarily be strongly correlated. In such cases, the assumption of independent samples would not make sense and the method might give spurious results.

(Figure 9 caption: histogram showing the posterior distribution of the isotopic lapse rate parameter for $\delta^2$H and $\delta^{18}$O. The green region shows the confidence bounds (significant at $\alpha = 0.01$) of the lapse rate computed over Switzerland using inverse variance-weighted regression. The limits of the prior distribution of the isotopic lapse rates correspond to the limits of the x axis. The slope of the isotopic ratio plotted against elevation for the Swiss-wide data is shown in Fig. 3 of Beria et al. (2018).)
Finally, it is noteworthy that adding additional parameters to characterize the source tracer composition increases the degrees of freedom of the model, which implies that adding such parameters leads to an increase in the uncertainty of the source contribution estimates unless new information, i.e., new observed data, is added to the model. This means that users who are interested in incorporating additional modification processes by adding parameters should ideally provide additional tracer data able to constrain these processes, subject to tracer data being available. For consistency and simplicity, the case studies and synthetic hydrological examples provided here focused on the contribution of rain and snow to groundwater recharge. However, it is important to emphasize that the opportunities to implement HydroMix extend to all cases in which mixing contributions are of interest and for which it is difficult to build extensive databases of source tracer compositions. Such examples include quantifying the amount of "pre-event" vs. "event" water in streamflow, where pre-event water refers to groundwater and event water refers to rainfall or snowmelt. Another interesting use case might be to quantify the proportion of streamflow coming from the different source areas in a catchment, to capture the spatial dynamics of streamflow. Other uses include quantifying the amount of fog contributing to throughfall, the proportion of glacial melt vs. snowmelt flowing into a stream, the amount of vegetation water use from soil moisture at different depths vs. groundwater, the interaction between surface water and groundwater in the hyporheic zone (Leslie et al., 2017), and sediment fingerprinting to quantify the spatial origin of river sediments. In all of these cases, understanding source water contributions, both spatially and temporally, will improve the physical understanding of the system.

Conclusions
We develop a new Bayesian modeling framework for the application of tracers in mixing models. The primary application target of this framework is hydrology, but it is by no means limited to this field. HydroMix formulates the linear mixing problem in a Bayesian inference framework that infers the model parameters with a Metropolis-Hastings-based MCMC sampling algorithm, based on the differences between observed and modeled tracer concentrations in the target mixture, using all possible combinations of source and target concentration samples. This is especially useful in data-scarce environments where fitting probability distribution functions is not feasible. HydroMix also makes the inclusion of additional model parameters to account for source modification processes straightforward. Examples include known spatial or temporal tracer variations (e.g., isotopic lapse rates or evaporative enrichment). An evaluation of HydroMix with data from different synthetic and field case studies leads to the following conclusions:
1. HydroMix gives reliable results for mixing applications with small sample sizes (< 20-30 samples). As expected, the variance in the source tracer compositions and the ensuing composition overlap determine the bias in the mixing ratio estimates: the bias increases with increasing variance in the source tracer compositions. Mixing ratio estimates improve (in terms of lower error) with an increasing number of source samples.
2. As revealed by our synthetic case study with a conceptual hydrological model, at low source contributions (i.e., < 20 %), a strong divergence between the actual and estimated mixing ratios emerges. This arises because HydroMix assigns equal weights to all source samples, thereby proportionally oversampling the less abundant source, which leads to significant biases in the mixing estimates. This problem is inherent to all mixing approaches and, to our knowledge, has not been adequately addressed in the literature.

3. The use of composite likelihoods to weight samples by their amounts can significantly reduce the bias in the mixing estimates. At low source proportions, the estimated mixing ratio improves by more than 50 % after accounting for the amounts of all the sources. We show this using a simple normalization-based weighting function. Future studies should explore the use of other weighting functions that have been proposed in the past (Vasdekis et al., 2014).

4. A synthetic application of HydroMix to estimate the amount of snowmelt-induced groundwater recharge revealed that using the snowfall isotopic ratio instead of the snowmelt isotopic ratio leads to similar mixing ratio estimates. This is particularly useful in high mountain catchments, where sampling snowmelt is logistically difficult.

5. A real case application of HydroMix in a Swiss Alpine catchment (Vallon de Nant) showed a clear winter bias in groundwater recharge. About 60 %-62 % of the groundwater is recharged from snowmelt (unweighted mixing approach), while snowfall only accounts for 40 %-45 % of the total annual precipitation. This has also been suggested previously elsewhere in the European Alps (Cervi et al., 2015; Penna et al., 2014, 2017; Zappa et al., 2015).

To conclude, HydroMix provides a Bayesian approach to mixing model problems in hydrology that takes full advantage of small sample sizes. Future work will show the full potential of this approach in hydrology as well as in other environmental modeling applications.
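As a companion to the likelihood sketched earlier, the following self-contained example shows how the mixing ratio can be sampled with a Metropolis-Hastings random walk, and how sample amounts can enter through a simple normalization-based weight. The data are synthetic, the variable names are ours, and the particular weighting (normalized products of the amounts attached to each source-sample pair) is only an illustration of the composite-likelihood idea, not necessarily the exact scheme used in HydroMix:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic delta-2H tracer data; the true snowmelt fraction is 0.3.
snow = rng.normal(-110.0, 6.0, size=12)
rain = rng.normal(-60.0, 6.0, size=12)
gw = rng.normal(0.3 * -110.0 + 0.7 * -60.0, 4.0, size=30)

# Water amounts attached to each source sample, normalized into one
# weight per (snow, rain) sample pair.
amt_snow = rng.uniform(1.0, 5.0, size=snow.size)
amt_rain = rng.uniform(1.0, 5.0, size=rain.size)
w = amt_snow[:, None] * amt_rain[None, :]
w /= w.sum()

def log_lik(rho, sigma=4.0):
    pred = rho * snow[:, None, None] + (1.0 - rho) * rain[None, :, None]
    resid = gw[None, None, :] - pred
    # Composite likelihood: residuals of each source pair weighted by amounts.
    return float(np.sum(w[:, :, None] * (-0.5 * (resid / sigma) ** 2)))

rho, ll, trace = 0.5, log_lik(0.5), []
for _ in range(20000):                      # Metropolis-Hastings random walk
    prop = rho + 0.05 * rng.normal()
    if 0.0 <= prop <= 1.0:                  # flat prior on the mixing ratio
        llp = log_lik(prop)
        if np.log(rng.uniform()) < llp - ll:
            rho, ll = prop, llp
    trace.append(rho)

print(f"posterior mean snowmelt fraction: {np.mean(trace[5000:]):.2f}")
```

With equal weights this sampler reduces to the unweighted approach; down-weighting pairs drawn from a low-amount source is what mitigates the oversampling bias discussed in conclusion 2.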
The SuiteSparse Matrix Collection Website Interface

The SuiteSparse Matrix Collection (formerly known as the University of Florida Sparse Matrix Collection) (Davis & Hu, 2011) has grown significantly since its introduction, with newly added matrices representing almost seven times as much data as the entirety of the original Collection. With this growth, searching the Collection for matrices with specific names, structures, and other properties has become increasingly difficult. To make the Collection more accessible to the scientific computing community, we have developed a web application that allows real-time search and filtering of the Collection matrices.

Built on Ruby on Rails, the web application was developed with software engineering best practices such as test-driven development, continuous integration via Semaphore CI, and static analysis and code coverage via Code Climate. Collectively, the web application and data storage now serve as the canonical source of the Collection, from which other services, including the Clarivate Data Citation Index (2019) and re3data (2018), reference or mirror the Collection. A variety of interfaces for accessing the Collection, including ssget in MATLAB, also obtain their data from this application.

The SuiteSparse Matrix Collection has become the lingua franca of sparse matrix data and benchmarking, but its original website was written in static HTML, prohibiting any real-time searching or filtering of the Collection. With the current size, breadth, and variety of users of the Collection, this new web application provides to the scientific computing community a level of accessibility to the Collection not available before.

Examples of recent work that have utilized the Collection and its website to accomplish their scientific research goals include the following:

• Computing optimal solutions to the bipartitioning problem for 839 sparse matrices (Knigge & Bisseling, 2018).
• The development of a novel hybrid graph partitioning library, especially effective at partitioning social networks (Davis, Hager, Kolodziej, & Yeralan, 2019).
• A metric-constrained optimization method for computing lower bounds to the sparsest cut problem on undirected graphs (Veldt, Gleich, Wirth, & Saunderson, 2018).

Note that these projects required identifying matrices with specific properties, which is enabled by the Collection web application.

Features and Functionality

The SuiteSparse Matrix Collection web application provides a variety of features to help the scientific computing community access the Collection more easily.

Matrix Property Search, Sorting, and Filtering

The SuiteSparse Matrix Collection web application allows real-time filtering by the following matrix properties:

Matrix Size and Shape
• Rows - The number of rows in the matrix.
• Columns - The number of columns in the matrix.
• Nonzeros - The number of nonzero entries in the matrix.
Matrix Structure and Entry Type
• Pattern symmetry - The percent of entries that are mirrored across the matrix diagonal. The numeric value of the entries is irrelevant.
• Numerical symmetry - The percent of entries that are mirrored across the matrix diagonal with the identical numeric value.
• Number of strongly connected components - The number of strongly connected components present in the resulting graph of this sparse matrix.
• Rutherford-Boeing type - The type of entry in the sparse matrix. One of either Real, Complex, Integer, or Binary.
• Structure - Special matrix structure, including square, rectangular, symmetric, skew-symmetric, Hermitian, and unsymmetric.

Matrix Metadata
• Matrix name - The specific name of the matrix.
• Matrix group - The group name the matrix belongs to.
• Matrix ID - The numeric identification number of the matrix (between 1 and 2833 as of this writing).
• Matrix year - The year the matrix was added to the Collection.

Additionally, matrix details are displayed on each matrix's individual page, including the matrix's rank, condition number, and information regarding its singular value decomposition. A variety of visualizations are also presented, including sparsity patterns, force-directed graph (or bipartite graph) visualizations (Hu, 2005), Dulmage-Mendelsohn permuted sparsity patterns, and singular values plotted in decreasing size.

Matrices can also be quickly accessed by URL route matching. For example, the information page for the matrix HB/west0479 can be accessed directly by visiting sparse.tamu.edu/HB/west0479. Information on the Mycielski group of matrices can be accessed by visiting sparse.tamu.edu/Mycielski.

New Matrix Submission

Additionally, new matrices can be submitted to the Collection via the web application. Information regarding the author, the field or domain from which the matrix comes, and data about the matrix itself is collected via a form and converted to an email sent to the Collection curator.

Deployment

The web application is deployed at https://sparse.tamu.edu, but can be viewed, downloaded, and contributed to on GitHub.
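As a small illustration of the URL routes described above, the following Python sketch downloads a matrix programmatically. The page route sparse.tamu.edu/HB/west0479 is documented above; the MatrixMarket archive path under /MM/ and the layout of the tarball are our assumptions, so adjust them if the site's conventions differ:

```python
import io
import tarfile
import urllib.request

from scipy.io import mmread

group, name = "HB", "west0479"
# Assumed download route, mirroring the documented page route
# sparse.tamu.edu/<group>/<name>.
url = f"https://sparse.tamu.edu/MM/{group}/{name}.tar.gz"

with urllib.request.urlopen(url) as resp:
    archive = tarfile.open(fileobj=io.BytesIO(resp.read()), mode="r:gz")

# Assumed archive layout: <name>/<name>.mtx in MatrixMarket format.
with archive.extractfile(f"{name}/{name}.mtx") as fh:
    A = mmread(fh).tocsr()

print(A.shape, A.nnz)  # west0479 is a 479-by-479 sparse matrix
```

For bulk or scripted access, the ssget interface mentioned above serves the same data from the same application.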
A Phosphate-Regulated Promoter for Fine-Tuned and Reversible Overexpression in Ostreococcus: Application to Circadian Clock Functional Analysis

Background

The green picoalga Ostreococcus tauri (Prasinophyceae), which has been described as the smallest free-living eukaryotic organism, has minimal cellular ultra-structure and a very small genome. In recent years, O. tauri has emerged as a novel model organism for systems biology approaches that combine functional genomics and mathematical modeling, with a strong emphasis on light-regulated processes and the circadian clock. These approaches were made possible through the implementation of a minimal molecular toolbox for gene functional analysis, including overexpression and knockdown strategies. We have previously shown that the promoter of the High Affinity Phosphate Transporter (HAPT) gene drives the expression of a luciferase reporter at high and constitutive levels under constant light.

Methodology/Principal Findings

Here we report, using a luciferase reporter construct, that the HAPT promoter can be finely and reversibly tuned by modulating the level and nature of phosphate in the culture medium. This HAPT regulation was additionally used to analyze the expression of the circadian clock gene Time of Cab expression 1 (TOC1). The phenotype of a TOC1ox/CCA1:Luc line was reverted from arrhythmic to rhythmic simply by adding phosphate to the culture medium. Furthermore, since the time of phosphate injection had no effect on the phase of CCA1:Luc expression, this study suggests further that TOC1 is a central clock gene in Ostreococcus.

Conclusions/Perspectives

We have developed a phosphate-regulated expression system that allows fine gene function analysis in Ostreococcus. Recently, there has been a growing interest in microalgae as cell factories. This non-toxic phosphate-regulated system may prove useful in tuning protein expression levels quantitatively and temporally for biotechnological applications.

Introduction

Ostreococcus tauri (Prasinophyceae) is one of the simplest free-living photosynthetic organisms described to date. This tiny eukaryotic green alga (around 1 µm in diameter) has minimal cellular organization, with only one chloroplast, a mitochondrion, a single Golgi body and no cell wall. Its small genome (12.6 Mbp in size) is extremely compact, with very high gene density and an average intergenic region size below 200 bp [1]. In addition, gene families and redundancy are extremely reduced. For example, several transcription factors (TFs) such as Basic Helix Loop Helix (BHLH), which are present at more than 150 copies per genome in higher plants, exist as single members in Ostreococcus, whilst O. tauri contains fewer than 200 TFs in toto [2]. In recent years, using functional genomics and systems biology approaches, O. tauri has emerged as a promising model organism to study complex biological processes such as the circadian clock [3-7], the cell division cycle [8,9] and starch biosynthesis pathways [10]. We have implemented tools for in vivo monitoring of gene expression using a luciferase reporter strategy, as well as tools for gene functional analysis including overexpression and knockdown strategies in O. tauri [3,9]. The pOtox overexpression vector relies on the promoter of the PHO89/PHO4-like HAPT gene [11,12]. This promoter drives the expression of firefly luciferase at high and steady-state levels under constant light in O. tauri [3].
Using this approach, we have functionally identified several genes involved in the circadian clock of Ostreococcus, including Circadian Clock Associated 1 (CCA1), TOC1 and putative photoreceptors with circadian clock function [3,13,14]. Circadian clocks rely in part on transcriptional-translational feedback loops in which clock genes like TOC1 activate the synthesis of their repressor (CCA1). Timely inducible overexpression systems provide an efficient means of circadian function analysis [15,16]. For example, in Neurospora, step decreases in the concentration of the clock protein FRQ completely reset the phase of the clock, consistent with FRQ being a central clock component [15]. A battery of inducible promoters is currently available for most model organisms, including plants and algae. They can be induced by specific exogenous molecules such as chemicals (alcohols, herbicides) and metabolites, or by environmental stimuli such as temperature or light [17]. For example, the heat shock-induced transcription promoter (HSP) in Nicotiana tabacum is rapidly and highly induced following temperature changes. In addition, its induction can be tuned by varying the temperature and the duration of the heat stimulus [18]. In Pichia pastoris, the promoter of PHO89 has been used as an alternative inducible system to drive the expression of recombinant proteins for academic and industrial applications [19]. In the Prasinophyceae Tetraselmis chui, the HAPT gene was shown to be tightly regulated at the transcriptional level by inorganic phosphate [20]. In the present study we have investigated the regulation of the HAPT promoter by phosphate levels in O. tauri using a luciferase pHAPT:Luc construct. HAPT transcriptional activity was finely regulated by adjusting the phosphate concentration in the culture medium. Furthermore, this reversible overexpression system was tuned for fine functional analysis of the TOC1 circadian clock gene.

Functional analysis of the Ostreococcus tauri HAPT promoter

The HAPT gene is located on the first half of the atypical chromosome II (ChrII) of O. tauri (Fig. 1A), which has a different GC content compared to other chromosomes and has low synteny with Ostreococcus lucimarinus ChrII [21]. This part of ChrII contains introns with unusual splicing sequences and encompasses most of the genome's repeated sequences and transposons [1]. The putative HAPT promoter sequence is only 119 bp long and contains, at position −76, a GNATATNC PhR1-Binding Site (P1BS) found in the upstream regions of phosphate starvation responsive genes [22] (Fig. 1B). The HAPT gene was recently identified in the O. tauri Virus 2 (OtV2) and O. lucimarinus Virus 1 (OlV1) genomes, suggesting that a lateral transfer has occurred [23]. The HAPT promoter sequences of both viruses are highly conserved (Fig. 1C). It is interesting that the P1BS motif was not detected in the 5′ regulatory sequences of OtV2 and OlV1, but instead a conserved putative TATA box was found (Fig. 1C). This suggests that in the viruses the transcription of the HAPT gene may be constitutive rather than phosphate-controlled. To study HAPT promoter regulation in O. tauri, we used a representative pHAPT:Luc reporter line, which carries a transcriptional fusion between the HAPT promoter and the firefly luciferase [3]. Ostreococcus cells are usually cultivated in Keller medium that contains an organic source of phosphate (Po), provided as β-glycerophosphate (10 µM).
Keller medium is based on natural sea water (NSW), supplemented with vitamins and nutrients (nitrate, β-glycerophosphate, trace metals), and therefore contains low levels of inorganic phosphate (between 0.1 and 100 nM), naturally present in the seawater. Cell growth was monitored in Keller medium containing different sources of phosphate: 10 µM β-glycerophosphate (K_Po, standard Keller medium); 10 µM inorganic phosphate NaH2PO4 (K_Pi); or 10 µM NaH2PO4 + 10 µM β-glycerophosphate (K_Po+Pi). Similar growth curves were observed for the three different media conditions, suggesting that O. tauri cells assimilate organic and inorganic phosphate equally well at these concentrations (Fig. 2A). For the subsequent experiments, we developed an Artificial Sea Water (ASW) based on Keller medium, lacking Po and Pi (see Methods section). Cells were grown in ASW + 10 µM Po (or Pi) to stationary phase before being diluted to a concentration of 10^6 cells/ml in ASW containing various concentrations of Po (or Pi). Under low Po and Pi conditions, cell growth was strongly inhibited, as measured after 60 hours (Fig. 2B). The luminescence measurement reflected the activity of the total luciferase protein accumulated between 0 and 60 hours. The luminescence per cell was highest at low phosphate concentrations (up to 40-fold between 0 and 50 µM [Pi]). This suggests that the HAPT promoter is strongly induced by phosphate starvation (Fig. 2C). Furthermore, a stronger inhibition was observed for Pi than for Po. Therefore, for subsequent experiments, inorganic phosphate was used to modulate the HAPT promoter activity.

Kinetics of the HAPT promoter activity

The kinetics of HAPT promoter activity were determined by adding various concentrations of Pi to pHAPT:Luc cells grown in the presence of luciferin (Fig. 3A). In these experimental conditions of continuous in vivo monitoring of luminescence, luciferase is inactivated after the enzymatic reaction with luciferin. Consequently, the luminescence pattern reflects the dynamics of HAPT promoter activity more accurately than when measuring luciferase activity in extracts of cells grown without luciferin. For concentrations above or equal to 10 µM Pi, a transient increase in the pHAPT:Luc luminescence was observed during the first hours following Pi addition. Within 24 hours, the promoter activity dramatically decreased with increasing Pi concentrations. Twenty-eight hours after Pi injection at concentrations above or equal to 50 µM, the residual pHAPT:Luc luminescence per cell was about six-fold lower than in the control cells (grown in 10 µM Po without Pi) (Fig. 3B). At this time, a 50 percent inhibition was observed for [Pi] < 5 µM. Quantitative analysis by real-time RT-PCR showed that at [Pi] = 10 µM the luciferase transcript quickly dropped to a stable level (corresponding to about 20% of the luciferase mRNA in control cells). This occurred as early as 5 hours after Pi addition (Fig. 3C). Differences were observed between the slow kinetics of pHAPT:Luc luciferase activity (Fig. 3A) and the fast decay of luciferase mRNA. These are likely to be due to the slow kinetics of luciferase inactivation upon oxidation of luciferin and/or to luciferase stability. Data shown in Figure 3 indicate that HAPT transcriptional activity is strongly and quickly repressed upon phosphate addition. Concentrations above or equal to 10 µM [Pi] appear to be a good compromise between the inhibition of pHAPT activity and cell growth.
Activation of the HAPT promoter by phosphate starvation

Ideally, a phosphate-tunable expression system should be capable not only of repressing, but also of up-regulating gene expression. In the next set of experiments, cells were grown to stationary phase in ASW medium supplemented with various concentrations of Pi between 5 and 100 µM. They were subsequently diluted 6-fold in ASW to reduce the initial Pi concentrations at time 0. Control cells were diluted in ASW containing 10 µM Po. For an initial concentration of 100 µM Pi, the HAPT promoter activity remained low for at least 80 hours after dilution, most likely because the Pi concentration remained high (Fig. 4A and 4B). Cells grown in 50 µM [Pi] displayed a small increase in luminescence 60 hours after dilution, suggesting that upon Pi assimilation, Pi concentrations decreased to levels that are sufficient to activate the HAPT promoter. The induction of HAPT transcriptional activity (luminescence of cells diluted in ASW lacking Pi / luminescence of control cells diluted in the initial culture medium) was calculated for each initial [Pi]. For a Pi concentration of 10 µM, a 25-fold induction of pHAPT activity was observed 50 hours after dilution (Fig. 4C). In these conditions, the luminescence increased at about 60 hours in control cells diluted with ASW containing Pi. This rise was likely due to Pi assimilation lowering the phosphate concentration to levels sufficient to activate the HAPT promoter.

Reversion of a circadian overexpression phenotype by modulating HAPT promoter activity

In gene functional analysis it is important to tune the expression level of genes of interest without compromising cell growth. We have found that Ostreococcus cells grow equally well in both organic and inorganic phosphate. The addition of Pi at concentrations above 10 µM strongly inhibited the HAPT promoter, as demonstrated by luciferase activity and luciferase mRNA levels. Furthermore, the HAPT promoter could be modulated in a dose-dependent manner by adjusting the phosphate concentration in the culture medium. In the next experiment, we aimed to lower the level of overexpression in order to reverse strong arrhythmic circadian phenotypes. Circadian clocks rely largely on transcriptional oscillators which are based on negative feedback loops. Such an oscillator in Ostreococcus is formed by the CCA1/TOC1 couple, CCA1 repressing the transcription of TOC1. The TOC1 protein in turn activates the transcription of CCA1 through an unknown mechanism. Clamping TOC1 at high levels leads to arrhythmic CCA1 expression, consistent with TOC1 being involved in the clock. We applied a range of [Pi] to a TOC1ox/CCA1:Luc arrhythmic line grown in ASW + 10 µM [Po], to modulate the level of TOC1 overexpression (Fig. 5A). Below 5 µM [Pi], the TOC1ox/CCA1:Luc cells were considered arrhythmic in constant light, even though one additional damped oscillation was observed at 2 µM [Pi] (Fig. 5A and 5B). For higher [Pi], the TOC1ox/CCA1:Luc cells recovered a rhythmic expression pattern of CCA1:Luc. Notably, when rhythmicity of the luciferase reporter was recovered for [Pi] ≥ 5 µM, no significant period changes were observed between the different phosphate concentrations (Pv = 0.096 in an ANOVA test; Fig. 5A and 5B). This all-or-none response may be due to the extreme sensitivity of Ostreococcus rhythms to altered TOC1 levels [3,4]. The arrhythmic TOC1ox/CCA1:Luc line also quickly recovered rhythmicity upon addition of Pi to cells in the course of the experiment (Fig. 5C).
Fine circadian clock function analysis

An apparent arrhythmic phenotype must be carefully interpreted, since a residual circadian clock function can persist in constant light, a phenomenon known as "masking" [15]. This could arise from saturation of photo-transduction input pathways under constant light [24]. We reasoned that if an underlying masked clock remained active in an apparently arrhythmic line, its sensitivity to changes in TOC1 levels should depend on the time of day. In this case, lowering TOC1 overexpression (upon Pi addition) would be expected to elicit phase shifts that depend on the time of Pi addition.

Figure 2. Effect of phosphate on cell growth and pHAPT:Luc promoter activity. (A) The effect of β-glycerophosphate (Po, for organic phosphate), NaH2PO4 (Pi, for inorganic phosphate) or combined Pi+Po was monitored on cell growth as recorded by flow cytometry. Ostreococcus cells were grown in Keller medium containing 10 µM β-glycerophosphate (K_Po), Keller medium containing 10 µM NaH2PO4 instead of β-glycerophosphate (K_Pi), and Keller medium supplemented with 10 µM NaH2PO4 (K_Po+Pi). (B) Cells were grown in ASW containing various concentrations of Pi or Po. Cell concentration was determined after 60 hours in culture. Similar dose-response curves were obtained, even though cell concentrations were slightly lower in ASW (Pi) (N = 3, ±SD). (C) In vitro luminescence measurement of accumulated luciferase per cell over 60 hours in a pHAPT:Luc reporter line (N = 3, ±SD). A dose-dependent inhibition was observed for both Po and Pi, but the inhibition by Pi occurred at lower concentrations. doi:10.1371/journal.pone.0028471.g002

To test this hypothesis, Pi (30 µM final concentration) was added at different times to a TOC1ox/CCA1:Luc line synchronized by an initial 6-hour dark pulse. This relatively high Pi concentration was chosen so that [Pi] would not drop to levels sufficient to activate the HAPT promoter upon phosphate assimilation in the course of the experiment. Times of Pi addition corresponded to different phases of the CCA1:Luc cycle, including peak and trough (Fig. 6A). The arrhythmic TOC1ox/CCA1:Luc cells recovered rhythmicity quickly upon Pi addition (Fig. 6B). The peak of expression of CCA1:Luc was determined after one transitory oscillation and plotted as a function of the time of Pi addition (Fig. 6C). Data analysis showed that the peak phase was constant (29.81 h ± 0.69, N = 3), with no significant difference between the 3 times of Pi addition (Pv = 0.505 in an ANOVA test). These results indicate that there is no masked clock gating the CCA1:Luc response upon alteration of the TOC1 overexpression level. A similar approach developed in Arabidopsis using an ethanol-induced expression system demonstrated that pulses of TOC1 expression, due to a strong post-translational control, did not elicit phase shifts [16]. Our results in Ostreococcus suggest that lowering TOC1 overexpression by repressing the HAPT promoter is sufficient to restore circadian rhythms of CCA1, indicating the importance of transcriptional regulation in the Ostreococcus circadian clock.

Conclusions

We have demonstrated that the short promoter of the O. tauri HAPT gene can be regulated by controlling the level of exogenous phosphate in the culture medium. At the transcript level, changes occur within 5 hours of phosphate addition. Circadian phenotypes can be reversed by simply adding phosphate to cells, and the central role of TOC1 in clock function has been highlighted.
Compared to existing eukaryotic microalgal overexpression systems that are based on metal-inducible promoters, such as copper or nickel [25], the HAPT promoter regulation relies on phosphate, an essential non-toxic nutrient, which usually becomes limiting when cells reach stationary phase. In the future, this phosphate-tunable system could be used to uncouple cell growth from recombinant protein expression. This would be particularly important in the development of Ostreococcus as a cell factory for biotechnological applications.

Methods

Algal culture and culture medium

The O. tauri OTTH95 WT strain and transgenic reporter lines were grown in flasks (Sarstedt) or white 96-well microplates (Nunc, Perkin Elmer) under constant light at an intensity of 20 µmol quanta m⁻² s⁻¹. In the first experiment, cells were grown in standard Keller medium, which contains natural seawater supplemented with trace metals and vitamins [8]. The effect of the phosphate source on cell growth was determined by adding organic β-glycerophosphate or inorganic NaH2PO4 to a final concentration of 10 µM. For subsequent experiments, a Keller-based Artificial Sea Water (ASW) medium was developed. This modified Keller medium contains 24.55 g/l NaCl, 0.75 g/l KCl, 4.07 g/l MgCl2·6H2O, 1.47 g/l CaCl2·2H2O, 6.04 g/l MgSO4·7H2O, and 0.21 g/l NaHCO3. The concentration of phosphate was adjusted using organic β-glycerophosphate or inorganic NaH2PO4, depending on the experiment. For circadian experiments, 5 µl of NaH2PO4 (1.2 mM stock solution) was gently added to 200 µl of cell cultures grown in microplates. Cell counting was performed by flow cytometry using a Cell Lab Quanta SC MPL (Beckman Coulter). Cells were fixed in 0.25% glutaraldehyde for 20 min before flow cytometry analysis.

Monitoring and analysis of in vivo bioluminescence

The O. tauri pHAPT:Luc, TOC1ox/CCA1:Luc and CCA1:Luc lines have been described elsewhere [3]. Cells were grown to saturation in microplates under constant light. Cells were plated at equal cell density (10×10⁶ cells/ml) in culture medium containing D-luciferin (10 mM). Cell synchronization was achieved by a 6-hour dark period before placing the cells under constant light. Luminescence was acquired for 5 s every hour using an automated microplate luminometer (Berthold LB Centro). Statistical analyses of circadian rhythms were performed using the BRASS software (Biological Rhythms Analysis Software System, P. E. Brown, Warwick University). FFT-NLLS analysis (Fast Fourier Transform Non-Linear Least Squares) was used to estimate the Relative Amplitude Error (RAE; a measure of goodness of fit to a theoretical sine wave) and the Free Running Period (FRP), which were taken as objective measures of the rhythmicity of the bioluminescence traces [26]. Lines with RAE values above 0.4 displayed no detectable rhythms of luminescence and were considered arrhythmic (AR). The number of tested lines is represented by N. Error bars represent standard deviation and are denoted ±SD. Statistical analyses were performed using an ANOVA test, α = 5% (the P-value is indicated).

In vitro measurements of bioluminescence in cell extracts

Cell cultures grown to saturation (200 µl) were extracted in lysis buffer (100 mM potassium phosphate, 1 mM EDTA, 1 mM DTT, 1% Triton X-100, and 10% glycerol, pH 7.8). Luciferase assays were performed in a luminometer (Berthold LB Centro) after injection of luciferase reagent (20 mM Tricine, 5 mM MgCl2, 0.1 mM EDTA, 3.3 mM DTT, 270 µM Coenzyme A, 500 µM luciferin, and 500 µM ATP, pH 7.8).
The luminescence value was normalized to the total number of cells scored by flow cytometry.
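The rhythm statistics described in the Methods (FFT-NLLS period estimation and the RAE cutoff of 0.4) were computed with BRASS; the short Python sketch below reproduces the spirit of that analysis with an ordinary damped-cosine least-squares fit. The RAE proxy used here (relative standard error of the fitted amplitude) is a simplification of the FFT-NLLS definition, and all names and the synthetic trace are ours:

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_cosine(t, amp, period, phase, damping, offset):
    return offset + amp * np.exp(-damping * t) * np.cos(2.0 * np.pi * t / period + phase)

def fit_rhythm(t, lum):
    """Return (free-running period, relative amplitude error proxy)."""
    p0 = [lum.std(), 24.0, 0.0, 0.01, lum.mean()]
    popt, pcov = curve_fit(damped_cosine, t, lum, p0=p0, maxfev=20000)
    amp_se = np.sqrt(np.diag(pcov))[0]
    return popt[1], abs(amp_se / popt[0])

# Synthetic hourly luminescence trace over 5 days in constant light.
t = np.arange(0.0, 120.0, 1.0)
rng = np.random.default_rng(1)
lum = damped_cosine(t, 1.0, 25.0, 0.3, 0.005, 10.0) + 0.1 * rng.normal(size=t.size)

frp, rae = fit_rhythm(t, lum)
print(f"FRP = {frp:.1f} h, RAE proxy = {rae:.2f}")  # RAE > 0.4 would be scored arrhythmic
```

Applied to per-well traces normalized by cell counts as above, such a fit distinguishes rhythmic from arrhythmic reporter lines in the same all-or-none fashion described for the TOC1ox/CCA1:Luc experiments.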
Rees algebras on smooth schemes: integral closure and higher differential operators

Let $V$ be a smooth scheme over a field $k$, and let $\{I_n, n\geq 0\}$ be a filtration of sheaves of ideals in $\calo_V$, such that $I_0=\calo_V$, and $I_s\cdot I_t\subset I_{s+t}$. In such case $\bigoplus I_n$ is called a Rees algebra. A Rees algebra is said to be a Diff-algebra if, for any two integers $N>n$ and any differential operator $D$ of order $n$, $D(I_N)\subset I_{N-n}$. Any Rees algebra extends to a smallest Diff-algebra. There are two ways to define extensions of Rees algebras, and both are of interest in singularity theory. One is that defined by taking integral closures (in which a Rees algebra is included in its integral closure), and another extension is that defined, as above, in which the algebra is extended to a Diff-algebra. Surprisingly enough, both forms of extension are compatible in a natural way. Namely, there is a compatibility of higher differential operators with integral closure which we explore here under the assumption that $V$ is smooth over a perfect field.

Graded rings and Diff-algebras

2.1. Fix a noetherian ring $B$ and a sequence of ideals $\{I_k\}$, $k \geq 0$, which fulfill the conditions: 1) $I_0 = B$, and 2) $I_k \cdot I_s \subset I_{k+s}$. This defines a $B$-algebra which is a graded subring $G = \bigoplus_{k\geq 0} I_k W^k$ of the polynomial ring $B[W]$. We say that $G$ is a Rees algebra if this subring is a (noetherian) finitely generated $B$-algebra. In what follows we define a Rees algebra, say $\bigoplus_{n\geq 0} I_n W^n$ in $B[W]$, by fixing a set of generators, say $\mathcal{F} = \{g_{n_i}W^{n_i} : g_{n_i} \in B,\ n_i > 0,\ 1 \leq i \leq m\}$. So if $f \in I_n$, then $f = F_n(g_{n_1}, \dots, g_{n_m})$, where $F_n(Y_1, \dots, Y_m)$ is a weighted homogeneous polynomial in $m$ variables, each $Y_j$ considered with weight $n_j$.

Remark 2.2. 1) Examples of Rees algebras are Rees rings of ideals, say $I \subset B$. In this case $I_k = I^k$ for each $k \geq 1$. These are algebras generated by (homogeneous) elements of degree one (i.e., generated by $\mathcal{F}$ with all $n_i = 1$).

2) When $\bigoplus I_k W^k$ is a Rees algebra, a new Rees algebra $\bigoplus I'_k W^k$ is defined by setting $I'_k = \sum_{r\geq k} I_r$. Note that $I'_k \supset I'_{k+1}$. Suppose that $\bigoplus I_k W^k$ is generated by $\mathcal{F} = \{g_{n_i}W^{n_i},\ n_i > 0,\ 1 \leq i \leq m\}$. Then we claim that:

A) $\bigoplus I'_k W^k$ is generated by the finite set $\mathcal{F}' = \{g_{n_i}W^{n'_i},\ 1 \leq i \leq m,\ 1 \leq n'_i \leq n_i\}$, and

B) $\bigoplus I_k W^k \subset \bigoplus I'_k W^k$ is a finite extension.

To prove the first claim we can use the fact that an element, say $h_N$, is homogeneous of degree $N$ in the $B$-subalgebra generated by $\mathcal{F}'$ if it is a $B$-linear combination of monomials of the form $h_1^{a_1} \cdot h_2^{a_2} \cdots h_s^{a_s}$, where $h_iW^{n_i} \in \mathcal{F}'$ and $\sum a_i \cdot n_i = N$. Suppose that $a_1 \neq 0$, and express $h_1^{a_1} \cdot h_2^{a_2} \cdots h_s^{a_s} = h_1 \cdot h_1^{a_1-1} \cdot h_2^{a_2} \cdots h_s^{a_s}$, where now the first factor $h_1$ is endowed with degree $n_1 - 1$. This ensures that $h_1^{a_1} \cdot h_2^{a_2} \cdots h_s^{a_s}$ also appears, in the Rees algebra, as a homogeneous element in degree $N - 1$ (as an element in $I'_{N-1}$). This already proves the first claim.

To prove B), it suffices to check that given $g \in I_k$, the element $gW^{k-1}$ is integral over $\bigoplus I_k W^k$. Note that $g \in I_k \Rightarrow g^{k-1} \in I_{k(k-1)} \Rightarrow g^k \in I_{k(k-1)}$, so $gW^{k-1}$ fulfills the monic polynomial equation $Z^k - (g^k W^{k(k-1)}) = 0$. Here we always assume that $B$ is an excellent ring, so that the integral closure is also finitely generated over $B$.
So B) shows that, up to integral closure, we may assume that a Rees algebra has the additional condition $I_k \supset I_{k+1}$ for each $k$.

If a Rees algebra $\bigoplus_{n\geq 0} I_n W^n$ in $B[W]$ is the Rees ring of $I_1$, then the integral closure in $B[W]$ is $\bigoplus_{n\geq 0} \overline{I_n} W^n$, where each $\overline{I_n}$ is the integral closure of the ideal $I_n$. This is a Rees algebra, and not necessarily the Rees ring of the ideal $\overline{I_1}$.

Let $B$ be a normal excellent ring, and let $\mathrm{Spec}(B) \xleftarrow{\pi} X$ be a proper birational morphism; then $I \subset \pi_*(I\mathcal{O}_X) \subset \overline{I}$, where $\overline{I}$ denotes the integral closure of $I$ in $B$. Moreover, if $\pi$ is the normalization of the blow-up at $I$, then $I\mathcal{O}_X$ is an invertible sheaf of ideals, and $\overline{I} = \pi_*(I\mathcal{O}_X)$. Assume that the normal ring $B$ is of finite type over a field $k$. If $B$ is a one-dimensional normal domain, any ideal is invertible and integrally closed. We add the following well-known result for self-containment (see [6], p. 54, or [11], p. 100).

Lemma 2.5. Let $I, J$ be two ideals in a normal domain $B$, which is finitely generated over a field $k$. Then $\overline{I} = \overline{J}$ if and only if $I\mathcal{O}_W = J\mathcal{O}_W$ for any morphism of $k$-schemes $W \to \mathrm{Spec}(B)$, with $W$ of dimension one, regular and of finite type over $k$.

Proof. Let $x \in W$ be a closed point that maps to, say, $y \in \mathrm{Spec}(B)$. Then $\mathcal{O}_{W,x}$ is a valuation ring that dominates $\mathcal{O}_{\mathrm{Spec}(B),y}$. So if $\overline{I} = \overline{J}$, then $I\mathcal{O}_W = J\mathcal{O}_W$. In fact, for any morphism $B \to A$, where $A$ is a valuation ring, $IA = \overline{I}A$.

Assume now that this condition holds for any morphism from a regular one-dimensional scheme $W$. We claim that both ideals have the same integral closure in $B$. Let $\mathrm{Spec}(B) \xleftarrow{\pi} X$ be the normalized blow-up at $I$, and let $\{H_1, \dots, H_s\}$ be the irreducible components of the closed set defined by the invertible sheaf of ideals $I\mathcal{O}_X$. Here each $H_i$ is an irreducible hypersurface in $X$. Let $h_i \in X$ denote the generic point of $H_i$. There are positive integers $a_i$, so that $I\mathcal{O}_X$ can be characterized as the sheaf of functions vanishing along $H_i$ with order at least $a_i$ (i.e., with order at least $a_i$ at the valuation rings $\mathcal{O}_{X,h_i}$).

Claim: The sheaf of ideals $J\mathcal{O}_X$ also has order $a_i$ at $\mathcal{O}_{X,h_i}$. If the claim holds, $J\mathcal{O}_X \subset I\mathcal{O}_X$, and hence $J \subset \pi_*(J\mathcal{O}_X) \subset \pi_*(I\mathcal{O}_X) = \overline{I}$. In particular $J \subset \overline{I}$, and hence $\overline{J} \subset \overline{I}$. A similar argument leads to the other inclusion. In order to prove the claim we choose a closed point $x \in H_i$ so that … . Since any sheaf of ideals has only finitely many $p$-primary components, such a choice of $x$ is possible. Let … , and let $W$ be the closure of the irreducible curve defined locally by $\langle x_1, \dots, x_{d-1}\rangle$. So $W$ is one-dimensional, and regular locally at $x$. We may assume that $W$ is regular after applying quadratic transformations which do not affect the local ring $\mathcal{O}_{W,x}$. By construction $I\mathcal{O}_{W,x}$ has order $a_i$; by hypothesis the same holds for $J\mathcal{O}_{W,x}$. This proves the claim.

2.6. Let $B = S[X]$ be a polynomial ring over a ring $S$, and let $\operatorname{Tay}: S[X] \to S[X][U]$ be the $S$-algebra homomorphism defined by setting $\operatorname{Tay}(X) = X + U$; write $\operatorname{Tay}(f(X)) = \sum_{\alpha\geq 0} \Delta^{\alpha}(f(X))\,U^{\alpha}$. This defines, for each $\alpha$, $\Delta^{\alpha}: S[X] \to S[X]$, which is an $S$-differential operator ($S$-linear) on $B = S[X]$. Furthermore, for any positive integer $N$, the set $\{\Delta^{\alpha},\ 0 \leq \alpha \leq N\}$ is a basis of the $B$-module of $S$-differential operators on $B$ of order $\leq N$.

Definition 2.7. Let $B = S[X]$ be a polynomial ring over a noetherian ring $S$. A Rees algebra $\bigoplus I_nW^n$ is said to be a Diff-algebra, relative to $S$, when:

i) $I_n \supset I_{n+1}$ for all $n \geq 0$;

ii) for all $n > 0$ and $f \in I_n$, and for any index $0 \leq j \leq n$ and any $S$-differential operator $D_j$ of order $\leq j$: $D_j(f) \in I_{n-j}$.

Remark 2.8. Let $\mathrm{Diff}^N_S(B)$ denote the module of $S$-differential operators of order at most $N$. Then $\mathrm{Diff}^N_S(B) \subset \mathrm{Diff}^{N+1}_S(B)$; in particular the identity map is an operator of order $\leq 1$, and (ii) applied to it already yields $I_n \subset I_{n-1}$. For this reason it is natural to require condition (i) in our previous definition.
Note also that 2.6 asserts that (ii) can be reformulated as:

ii') For any $n > 0$ and $f \in I_n$, and for any index $0 \leq \alpha \leq n$: $\Delta^{\alpha}(f) \in I_{n-\alpha}$.

In fact, (i) and (ii) are equivalent to (i) and (ii').

Theorem 2.9. Let $B = S[X]$ be as before, and let $\mathcal{F} = \{g_{n_i}W^{n_i},\ n_i > 0,\ 1 \leq i \leq m\}$ be a finite set with the following properties:

a) For any $1 \leq i \leq m$, and any $n'_i$, $0 < n'_i \leq n_i$: $g_{n_i}W^{n'_i} \in \mathcal{F}$.

b) For any $1 \leq i \leq m$, and for any index $0 \leq \alpha < n_i$: $\Delta^{\alpha}(g_{n_i})W^{n_i-\alpha} \in \mathcal{F}$.

Then the $B$-subalgebra of $B[W]$ generated by $\mathcal{F}$ over the ring $B$ is a Diff-algebra relative to $S$.

Proof. Condition (i) in Def. 2.7 holds by 2.2, 2). Fix a positive integer $N$, and let $I_NW^N$ be the homogeneous component of degree $N$ of the $B$-subalgebra generated by $\mathcal{F}$. We prove that for any $h \in I_N$, and any $0 \leq \alpha \leq N$, $\Delta^{\alpha}(h) \in I_{N-\alpha}$. The ideal $I_N \subset B$ is generated by all elements of the form

(2.9.1) $H_N = g_{n_{i_1}} \cdot g_{n_{i_2}} \cdots g_{n_{i_p}}$, with $n_{i_1} + n_{i_2} + \cdots + n_{i_p} = N$,

with the $g_{n_{i_l}}W^{n_{i_l}} \in \mathcal{F}$ not necessarily different. Since the operators $\Delta^{\alpha}$ are linear, it suffices to prove that $\Delta^{\alpha}(a \cdot H_N) \in I_{N-\alpha}$, for $a \in B$, $H_N$ as in (2.9.1), and $0 \leq \alpha \leq N$. We proceed in two steps, by proving: 1) $\Delta^{\alpha}(H_N) \in I_{N-\alpha}$; and 2) $\Delta^{\alpha}(a \cdot H_N) \in I_{N-\alpha}$ for any $a \in B$.

We first prove 1). Set $\operatorname{Tay}: B \to B[U]$ as in 2.6. Consider, for any element $g_{n_{i_l}}W^{n_{i_l}} \in \mathcal{F}$, the expansion $\operatorname{Tay}(g_{n_{i_l}}) = \sum_{\beta} \Delta^{\beta}(g_{n_{i_l}})U^{\beta}$. Hypothesis (b) states that for each index $0 \leq \beta < n_{i_l}$, $\Delta^{\beta}(g_{n_{i_l}})W^{n_{i_l}-\beta} \in \mathcal{F}$. On the one hand, $\operatorname{Tay}(H_N) = \sum_{\alpha} \Delta^{\alpha}(H_N)U^{\alpha}$; and, on the other hand, $\operatorname{Tay}(H_N) = \operatorname{Tay}(g_{n_{i_1}}) \cdot \operatorname{Tay}(g_{n_{i_2}}) \cdots \operatorname{Tay}(g_{n_{i_p}})$ in $B[U]$. This shows that for a fixed $\alpha$ ($0 \leq \alpha \leq N$), $\Delta^{\alpha}(H_N)$ is a sum of elements of the form $\Delta^{\beta_1}(g_{n_{i_1}}) \cdot \Delta^{\beta_2}(g_{n_{i_2}}) \cdots \Delta^{\beta_p}(g_{n_{i_p}})$, with $\sum_{1\leq s\leq p}\beta_s = \alpha$. So it suffices to show that each of these summands is in $I_{N-\alpha}$. Note that $\sum_s (n_{i_s} - \beta_s) = N - \alpha$, and that some of the integers $n_{i_s} - \beta_s$ might be zero or negative. Set $G = \{s : n_{i_s} - \beta_s > 0\}$ and $M = \sum_{s\in G}(n_{i_s} - \beta_s)$. Hypothesis (b) ensures that $\Delta^{\beta_r}(g_{n_{i_r}}) \in I_{n_{i_r}-\beta_r}$ for every index $r \in G$; in particular each summand lies in $I_M$. Finally, since $M \geq N - \alpha$, $I_M \subset I_{N-\alpha}$, and this proves Case 1). Case 2) follows from 1), since $\operatorname{Tay}(a \cdot H_N) = \operatorname{Tay}(a) \cdot \operatorname{Tay}(H_N)$ and all coefficients of $\operatorname{Tay}(a)$ lie in $B$.

Corollary 2.10. Any Rees algebra $\bigoplus I_kW^k \subset B[W]$, generated by $\{g_{n_i}W^{n_i},\ 1 \leq i \leq m\}$, extends to a smallest Diff-algebra, which is generated by the finite set $\{\Delta^{\alpha}(g_{n_i})W^{n'} : 1 \leq i \leq m,\ 0 \leq \alpha < n'\leq n_i\}$.

Remark 2.11. (Not used in what follows.) Theorem 2.9 shows how to extend any Rees algebra to a Diff-algebra, say $\bigoplus I_kW^k \subset B[W]$, so that the conditions of Definition 2.7 hold; namely, that for any $S$-differential operator $D_j$ of order $j(\leq n)$: $D_j(I_n) \subset I_{n-j}$. A similar argument can be used to extend Rees algebras to algebras, say $\bigoplus I_kW^k \subset B[W]$ again, with the condition:

(2.11.1) $D_j(I_n) \subset I_n$ for any positive $n$, and any differential operator of order $j$, with no condition on $j$.

It is easy to check that ideals $I_n$ with this property are those generated by elements in $S$. Consider, as in Theorem 2.9, a finite set $\mathcal{F} = \{g_{n_i}W^{n_i},\ n_i > 0,\ 1 \leq i \leq m\}$ with the following properties: a) for any $1 \leq i \leq m$, and any $n'_i$, $0 < n'_i \leq n_i$: $g_{n_i}W^{n'_i} \in \mathcal{F}$; b) for any $1 \leq i \leq m$, and for any index $0 \leq \alpha$: $\Delta^{\alpha}(g_{n_i})W^{n_i} \in \mathcal{F}$. We claim that the $B$-subalgebra of $B[W]$ generated by $\mathcal{F}$ over the ring $B$ fulfills (2.11.1). Note here that each $g_{n_i}$ is a polynomial in $X$, so $\Delta^{\alpha}(g_{n_i}) = 0$ for $\alpha$ big enough, and $\mathcal{F}$ is in fact finite. In order to prove the claim it suffices to show that $\Delta^{\alpha}(a \cdot H_N) \in I_N$, for $a \in B$ and $H_N$ as in (2.9.1). As in the previous theorem we proceed in two steps, proving now that: 1) $\Delta^{\alpha}(H_N)$ is a sum of elements of the form $\Delta^{\beta_1}(g_{n_{i_1}})\cdot\Delta^{\beta_2}(g_{n_{i_2}}) \cdots \Delta^{\beta_p}(g_{n_{i_p}})$, $\sum_{1\leq s\leq p}\beta_s = \alpha$. So, to prove 1), it suffices to show that each of these products is in $I_N$. This follows from (2.9.1) and the assumption on $\mathcal{F}$. The proof for 2) is similar.
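In summary, the operators $\Delta^{\alpha}$ and the extension of a Rees algebra to its smallest Diff-algebra can be displayed as follows; this is only a restatement of 2.6, Theorem 2.9 and Corollary 2.10 in the notation above:

```latex
% Taylor expansion on B = S[X] and the induced S-differential operators:
\[
\operatorname{Tay}\colon S[X]\longrightarrow S[X,U],\qquad
\operatorname{Tay}(f(X)) \;=\; f(X+U) \;=\; \sum_{\alpha\geq 0}\Delta^{\alpha}\!\bigl(f(X)\bigr)\,U^{\alpha}.
\]
% Diff-algebra condition (ii') for a Rees algebra \bigoplus I_n W^n:
\[
\Delta^{\alpha}(I_n)\;\subset\; I_{n-\alpha}, \qquad 0\leq \alpha \leq n .
\]
% If the Rees algebra is generated by \{ g_{n_i} W^{n_i} : 1 \le i \le m \},
% its smallest extension to a Diff-algebra is generated by the finite set
\[
\bigl\{\, \Delta^{\alpha}(g_{n_i})\, W^{\,n'-\alpha} \;:\;
1\leq i\leq m,\ \ 0\leq \alpha < n' \leq n_i \,\bigr\}.
\]
```

The last display is the one-variable form of the set that reappears, for several variables, as (3.4.2) below.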
Remark 2.12. The map $\operatorname{Tay}: S[X] \to S[X][U]$ considered above, defined by setting $\operatorname{Tay}(X) = X + U$, is an $S$-algebra homomorphism. In fact, the proof of the Theorem reduces to showing that $\Delta^{\alpha}(H_N) \in I_{N-\alpha}$ (that $\Delta^{\alpha}(H_N) \in I_N$ in the case of Remark 2.11), where $H_N = g_{n_{i_1}} \cdot g_{n_{i_2}} \cdots g_{n_{i_p}}$ is a product of elements in a finite set of generators $\mathcal{F}$. An interesting alternative $S$-algebra homomorphism is defined by setting $\operatorname{Tay}_X(X) = X + XU$. In this case $\operatorname{Tay}_X(f(X)) = f(X+XU) = \sum_{\alpha\geq 0} X^{\alpha}\Delta^{\alpha}(f(X))U^{\alpha}$. Suppose a finite set $\mathcal{F} = \{g_{n_i}W^{n_i},\ n_i > 0,\ 1 \leq i \leq m\}$ is such that: a) for any $1 \leq i \leq m$, and any $n'_i$, $0 < n'_i \leq n_i$: $g_{n_i}W^{n'_i} \in \mathcal{F}$; b) for any $1 \leq i \leq m$, and for any index $0 \leq \alpha$: $X^{\alpha}\Delta^{\alpha}(g_{n_i})W^{n_i} \in \mathcal{F}$. As each $g_{n_i}$ is a polynomial in $X$, $X^{\alpha}\Delta^{\alpha}(g_{n_i}) = 0$ for $\alpha$ big enough, so $\mathcal{F}$ is in fact finite. The same argument used above shows that for these algebras $X^{\alpha}\Delta^{\alpha}(I_n) \subset I_n$ for every $\alpha$. Rees algebras with this property are considered in toric geometry. They are also characterized by the fact that if $f(X) = \sum s_rX^r$ $(\in S[X])$ is in $I_n$, then each $s_rX^r \in I_n$.

3.1. A sequence of coherent ideals on a scheme $Z$, say $\{I_n\}_{n\in\mathbb{N}}$, such that $I_0 = \mathcal{O}_Z$ and $I_k \cdot I_s \subset I_{k+s}$, defines a graded sheaf of algebras $\bigoplus_{n\geq 0} I_nW^n \subset \mathcal{O}_Z[W]$. We say that this algebra is a Rees algebra if there is an open covering of $Z$ by affine sets $\{U_i\}$, so that each restriction to $U_i$ is a finitely generated algebra. In what follows $Z$ will denote a smooth scheme over a perfect field $k$, and $\mathrm{Diff}^r_k(Z)$, or simply $\mathrm{Diff}^r_k$, the locally free sheaf of $k$-linear differential operators of order at most $r$.

Definition 3.2. We say that a Rees algebra defined by $\{I_n\}_{n\in\mathbb{N}}$ is a Diff-algebra relative to the field $k$ if:

i) $I_n \supset I_{n+1}$;

ii) there is an open covering of $Z$ by affine open sets $\{U_i\}$, and for any $D \in \mathrm{Diff}^{(r)}(U_i)$ and any $h \in I_n(U_i)$: $D(h) \in I_{n-r}(U_i)$, provided $n \geq r$.

Due to the local nature of the definition, we reformulate it in terms of smooth $k$-algebras.

Definition 3.3. In what follows $R$ will denote a smooth algebra over a perfect field, or a localization of such an algebra at a closed point (a regular local ring). A Rees algebra is defined by a sequence of ideals $\{I_k\}_{k\in\mathbb{N}}$ such that: 1) $I_0 = R$, and $I_k \cdot I_s \subset I_{k+s}$; 2) $\bigoplus I_kW^k$ is a finitely generated $R$-algebra. We shall say that the Rees algebra is a Diff-algebra relative to $k$ if: 3) $I_n \supset I_{n+1}$, and 4) given $D \in \mathrm{Diff}^r_k(R)$, then $D(I_n) \subset I_{n-r}$ for $n \geq r$.

We now show that any Rees algebra extends to a smallest Diff-algebra (i.e., one included in any other Diff-algebra containing it).

Theorem 3.4. Fix a smooth scheme $Z$ over a perfect field $k$. Assume that $G = \bigoplus I_kW^k$ is a Rees algebra over $Z$. Then there is a natural and smallest extension of it, say $G \subset G(G)$, where $G(G)$ is a Diff-algebra relative to the field $k$.

Proof. The problem is local, so we will assume that $R$ is the local ring at a closed point, and show that a finitely generated subalgebra of $R[W]$ extends, by successive applications of differential operators, to a finitely generated algebra. We will argue in steps. Assume that the local ring $R$ is of dimension 1, and let $x$ denote a parameter. Set $\operatorname{Tay}: \hat R \to \hat R[[U]]$, the $k$-algebra morphism at the completion defined by setting $\operatorname{Tay}(x) = x + U$. Here $\hat R = k'[[x]]$ is a ring of formal power series over a finite extension $k'$ of $k$. For $f \in \hat R$ write $\operatorname{Tay}(f) = \sum_{r\geq 0}\Delta^r(f)U^r$. The operators $\Delta^r$, $r \geq 0$, are a basis of the $k$-linear differential operators on $R$. The same argument used in Theorem 2.9 shows that if $\bigoplus I_kW^k$ is generated by $\{g_{n_i}W^{n_i}\}$, the elements $\Delta^r(g_{n_i})W^{n_i-r}$, $0 \leq r < n_i$, generate its smallest extension to a Diff-algebra.

Let now $R$ be a localization of an arbitrary smooth algebra at a closed point, and fix a regular system of parameters $\{x_1, \dots, x_n\}$. Define $\operatorname{Tay}: \hat R \to \hat R[[U_1, \dots, U_n]]$ as the continuous morphism of algebras defined by setting $\operatorname{Tay}(x_i) = x_i + U_i$. This morphism defines, by restriction, $\operatorname{Tay}: R \to R[[U_1, \dots, U_n]]$, and we set
$\operatorname{Tay}(f) = \sum_{\alpha\in\mathbb{N}^n} \Delta^{\alpha}(f)\,U^{\alpha}$. The assumption that $k$ is perfect ensures that $\{\Delta^{\alpha} : \alpha \in \mathbb{N}^n,\ 0 \leq |\alpha| \leq n\}$ is a basis of the free $R$-module $\mathrm{Diff}^n(R)$, and in order to show that a Rees algebra $\bigoplus I_k \cdot W^k$ is a Diff-algebra, it suffices to check that given $g \in I_m$:

(3.4.1) $\Delta^{\alpha}(g) \in I_{m-|\alpha|}$ for $0 \leq |\alpha| < m$,

where each $\Delta^{\alpha}$ is defined in terms of the differential operators $\Delta^{\alpha_{i_0}}_{i_0}$. For any $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_n) \in \mathbb{N}^n$, $\Delta^{\alpha} = \Delta^{\alpha_1}_1 \cdots \Delta^{\alpha_n}_n$ is a composition of the partial operators defined above. And $\bigoplus I_kW^k$ is a Diff-algebra if the requirement in (3.4.1) holds for each of these partial differential operators. So again, the arguments in Theorem 2.9 ensure that if $\bigoplus I_kW^k$ is generated by $\{g_{n_i}W^{n_i}\}$, then

(3.4.2) $\{\Delta^{\alpha}(g_{n_i})\,W^{n_i-|\alpha|} : 1 \leq i \leq m,\ 0 \leq |\alpha| < n_i\}$

generates the smallest extension of $\bigoplus I_kW^k$ to a Diff-algebra relative to the field $k$.

Remark 3.5. (Not used in what follows.) In the previous discussion we reduced the proof of the Theorem to the case of one variable, making use of Theorem 2.9. There are interesting variations in the one-variable case, discussed in Remark 2.12, of particular interest in the case of differentials with logarithmic poles. Such is the case when we fix an integer $s$, $1 \leq s \leq n$, and consider, for each index $1 \leq i_0 \leq s$, the modified function $\operatorname{Tay}_{x_{i_0}}$ … . There is a natural analog of Diff-algebras for Rees algebras which are closed under differential operators with logarithmic poles. This follows from Remark 2.12, and it is simple to extend the outcome of (3.4.2) to this context.

Corollary 3.6. Given inclusions of Rees algebras, say $G \subset G' \subset G(G)$, where $G(G)$ is the Diff-algebra spanned by $G$, then $G(G)$ is also the Diff-algebra spanned by $G'$.

3.7. Fix now a smooth morphism of smooth schemes, say $Z \to Z'$. Let $\mathrm{Diff}^r_{Z'}(Z)$, or simply $\mathrm{Diff}^r_{Z'}$, denote the locally free sheaf of relative differential operators of order $r$. Since $\mathrm{Diff}^r_{Z'} \subset \mathrm{Diff}^r_k$, it follows that any Diff-algebra relative to $k$ is also relative to $Z'$. Theorem 3.4 has a natural formulation for the case of Diff-algebras relative to $Z'$. Given an ideal $I \subset \mathcal{O}_Z$ and a smooth morphism $Z \to Z'$, we define an extension of ideals … . Note finally that a Rees algebra $\bigoplus I_kW^k$ over $Z$ (3.1) is a Diff-algebra relative to $Z'$ if and only if, for any positive integers $r \leq n$, $\mathrm{Diff}^r_{Z'}(I_n) \subset I_{n-r}$. In particular, for $Z' = \mathrm{Spec}(k)$, condition ii) in Def. 3.2 can be reformulated as:

ii') $\mathrm{Diff}^r_k(I_n) \subset I_{n-r}$.

4.1. The notion of Diff-algebra relative to a perfect field $k$, on a smooth $k$-scheme $Z$, is closely related to the notion of order at the local regular rings of $Z$. Recall that the order of a non-zero ideal $I$ at a local regular ring $(R, M)$ is the biggest integer $b$ such that $I \subset M^b$; and $V(\mathrm{Diff}^{b-1}(I))$ is the closed set of points of $Z$ where the ideal has order at least $b$. We analyze this fact locally at a closed point $x$. Let $\{x_1, \dots, x_n\}$ be a regular system of parameters for $\mathcal{O}_{Z,x}$, and consider the differential operators $\Delta^{\alpha}$, defined on $\mathcal{O}_{Z,x}$ in terms of these parameters, as in Theorem 3.4. The operators $\Delta^{\alpha}$ are defined globally on a suitable neighborhood $U$ of $x$. So if $G = \bigoplus I_kW^k$ is a Diff-algebra relative to the field $k$ and $x \in Z$ is a closed point, then $x$ belongs to the singular locus of the Diff-algebra defined by localization, say $\bigoplus (I_k)_xW^k$, if and only if, for each index $k \in \mathbb{N}$, the ideal $(I_k)_x$ has order at least $k$ at the local regular ring $\mathcal{O}_{Z,x}$.

4.2. Let $G = \bigoplus I_nW^n$ be a Rees algebra over $Z$, and define $\mathrm{Sing}(G)$ as the set of points $x \in Z$ for which all $(I_r)_x$ have order at least $r$ (at $\mathcal{O}_{Z,x}$). The following hold:

1) If $G \subset G'$ is a finite extension of Rees algebras, then $\mathrm{Sing}(G) = \mathrm{Sing}(G')$.

2) If $G$ is generated by $\{g_{n_i}W^{n_i}\}$, then $y \in \mathrm{Sing}(G)$ if and only if each $g_{n_i}$ has order at least $n_i$ at $\mathcal{O}_{Z,y}$.

3) Let $G'' = \bigoplus I''_nW^n$ be the extension of $G$ to a Diff-algebra relative to $k$, as defined in Theorem 3.4; then $\mathrm{Sing}(G) = \mathrm{Sing}(G'')$.

4) Let $G'' = \bigoplus I''_n \cdot W^n$ be a Diff-algebra. For any positive integer $r$, $\mathrm{Sing}(G'') = V(I''_r)$.

Proof.
1) The argument in 2.3 shows that there is an index $N$ so that $G$ is finite over the subring $\bigoplus_k I_N^kW^{Nk}$, and $G'$ is finite over $\bigoplus_k (I'_N)^kW^{Nk}$. Furthermore, $I_N$ and $I'_N$ have the same integral closure. In these conditions $\mathrm{Sing}(G)$ is the set of points $x \in Z$ where $I_N$ has order at least $N$ at $\mathcal{O}_{Z,x}$, and similarly, $\mathrm{Sing}(G')$ is the set of points $x \in Z$ where $I'_N$ has order at least $N$. Finally, the claim follows from the fact that the order of an ideal at a local regular ring is the same as the order of its integral closure ([22], Appendix 3).

2) We have formulated 2) with a global condition on $Z$; however, this is always fulfilled locally. In fact, there is a covering of $Z$ by affine open sets so that the restriction of $G$ is generated by finitely many elements. Let $U$ be such an open set, so that $G$ restricted to $U$ is generated by $\{g_{n_i}W^{n_i},\ 1 \leq i \leq m\}$. The claim is that $y \in \mathrm{Sing}(G) \cap U$ if and only if the order of $g_{n_i}$ at $\mathcal{O}_{Z,y}$ is at least $n_i$, for $1 \leq i \leq m$. The condition is clearly necessary. Conversely, if each $g_{n_i}$ has order at least $n_i$ at $\mathcal{O}_{Z,y}$, then $I_n$ (generated by weighted homogeneous expressions in the $g_i$'s) has order at least $n$ at $\mathcal{O}_{Z,y}$.

3) We argue as in 2). Fix a regular system of parameters $\{x_1, \dots, x_n\}$ at $x \in U$ and differential operators $\Delta^{\alpha}$ as in Theorem 3.4. After suitable restriction we may assume that these operators are defined globally on $U$. Formula (3.4.2) shows that the Diff-algebra $G''$ in Theorem 3.4 is a finite extension of the Rees algebra defined by the elements $\Delta^{\alpha}(g_{n_i})W^{n_i-|\alpha|}$. Note finally that if the order of $g_{n_i}$ at a local ring is $\geq n_i$, then the order of $\Delta^{\alpha}(g_{n_i})$ is $\geq n_i - |\alpha|$.

On restrictions of Diff-algebras

The concept of Diff-algebra is defined here for Rees algebras over a scheme, say $V$, which is smooth over a field $k$. Let $V'$ be another smooth scheme over $k$ and let $V' \to V$ be a morphism of $k$-schemes; then there is a natural lifting of a Rees algebra over $V$ to a Rees algebra over $V'$. The goal in this section is Theorem 5.4, which states that the lifting of a Diff-algebra is again a Diff-algebra.

Proposition 5.1. Let $V$ be smooth over a perfect field, and let $G = \bigoplus I_k \cdot W^k$ be a Diff-algebra defined by ideals $I_k \subset \mathcal{O}_V$.

A) If $V' \subset V$ is a closed and smooth subscheme, the restriction of $G$ to $V'$, say $\bigoplus I_k\mathcal{O}_{V'} \cdot W^k$, is a Diff-algebra over $V'$.

B) If $V'' \to V$ is a smooth morphism, then the natural extension, say $\bigoplus I_k\mathcal{O}_{V''} \cdot W^k$, is a Diff-algebra over $V''$.

Proof. It is clear that both $G'$ and $G''$ are Rees algebras (3.1). We will show that conditions (i) and (ii) in Definition 3.2 hold. It suffices to prove both results locally at closed points, say $x \in \mathrm{Sing}(G)$. Set $G_x = \bigoplus I_k \cdot W^k$, where now each $I_n$ is an ideal in $\mathcal{O}_{V,x}$. We may also replace the local ring by its completion.

A) Fix a closed point $x \in V' \subset V$ and a local regular system of parameters, say $\{x_1, \dots, x_h, x_{h+1}, \dots, x_d\}$, so that $\hat{\mathcal{O}}_{V,x} = k'[[x_1, \dots, x_d]]$, where $k'$ is a finite extension of $k$. For each multi-index $\alpha = (\alpha_1, \dots, \alpha_d) \in \mathbb{N}^d$, operators $\Delta^{\alpha}$ are defined as before. Express an element $f_n \in I_n$ as before; its class is then an element in the restricted algebra. Similarly, if $|\alpha^{(1)}| + |\alpha^{(2)}| \leq n$, … , for each $f_mW^m \in I_mW^m$. Conditions (i) and (ii) in 3.2 are now easy to check. For our further discussion we point out that $I_m\mathcal{O}_{V'}W^m$ also contains all coefficients $a_{\alpha^{(1)}}W^{n-|\alpha^{(1)}|}$ of $fW^n \in I_nW^n$, with $n - |\alpha^{(1)}| > 0$.

B) Here the claim is that the extended algebra is a Diff-algebra. The statement follows easily in this case, for example by formula (3.4.2), which expresses generators of the Diff-algebra in terms of generators of the Rees algebra.

Definition 5.2. Fix $G = \bigoplus I_k \cdot W^k$, a Rees algebra over $V$, and a morphism of $k$-schemes, say $V \xleftarrow{\pi} V'$.
Assume that $V$ and $V'$ are smooth. Define the total transform of $G$ to be $\pi^{-1}(G) = \bigoplus I_n\mathcal{O}_{V'} \cdot W^n$, namely the Rees algebra defined by the total transforms of the ideals $I_n$, $n \geq 0$. Note that the restriction in A) and the natural extension in B) are particular examples of total transforms. Assume that $V$ is affine and that $\mathcal{F} = \{g_{N_1}W^{N_1}, \dots, g_{N_s}W^{N_s}\}$ generates $G$. Then each $g_{N_i}$ defines a global section, say $\pi^*(g_{N_i})$, on $V'$, and we set $\pi^*(\mathcal{F}) = \{\pi^*(g_{N_1})W^{N_1}, \dots, \pi^*(g_{N_s})W^{N_s}\}$.

Lemma 5.3. Let $G (\subset \mathcal{O}_V[W])$ be a Rees algebra generated by a finite set $\mathcal{F} = \{g_{N_1}W^{N_1}, \dots, g_{N_s}W^{N_s}\}$, and let $V \xleftarrow{\pi} V'$ be a morphism of smooth schemes. Then $\pi^{-1}(G)$ is generated by $\pi^*(\mathcal{F})$.

Proof. Since any element of $I_M$ is a weighted homogeneous polynomial expression of degree $M$ in elements of $\mathcal{F}$, the total transform of the ideal is also generated by elements that are weighted homogeneous in $\pi^*(\mathcal{F})$.

Theorem 5.4. Let $G$ be a Diff-algebra over $V$ and $V \xleftarrow{\pi} V'$ a morphism of smooth $k$-schemes. Then (i) $\pi^{-1}(G)$ is a Diff-algebra over $V'$, and (ii) $\mathrm{Sing}(\pi^{-1}(G)) = \pi^{-1}(\mathrm{Sing}(G))$.

Proof. Since $V' \xrightarrow{\pi} V$ is of finite type, it can be expressed locally in the form $V' \subset V'' \xrightarrow{\beta} V$, where $\beta$ is smooth. So Prop. 5.1 proves (i). Fix a closed point $x \in \mathrm{Sing}(\pi^{-1}(G))$. Since $\mathrm{Sing}(G) = V(I_n)$ for all $n \geq 1$ (4.4), it follows that $\pi(x) \in \mathrm{Sing}(G)$. On the other hand, if $\pi(x) \in \mathrm{Sing}(G)$, the order of $I_n$ is at least $n$ at $\mathcal{O}_{V,\pi(x)}$ for each $n \geq 1$; so the same holds at $\mathcal{O}_{V',x}$. This proves (ii).

On Diff-algebras and integral closures

6.1. The aim in this section is, essentially, the proof of Main Theorem 6.12. The proof will require a better understanding of the notions of restriction studied in the last section. In that previous discussion restrictions were studied for a closed immersion of smooth schemes, say $Z \subset V$. Here we will consider, at least for the first results, a closed immersion together with a retraction, say $V \to Z$. Given $Z \subset V$ as above, a local retraction at a point $x \in Z$ can always be defined in an étale neighborhood. Here, given a Rees algebra $G = \bigoplus I_k \cdot W^k (\subset \mathcal{O}_V[W])$ (over $V$), the retraction $V \to Z$ will allow us to define a new Rees algebra over $Z$, called the coefficient algebra. Fix $x \in \mathrm{Sing}(G) \cap Z$. The retraction defines an inclusion $\mathcal{O}_{Z,x} \subset \mathcal{O}_{V,x}$. Extend a regular system of parameters of $\mathcal{O}_{Z,x}$, say $\{x_1, \dots, x_h\}$, to a regular system of parameters, say $\{x_1, \dots, x_h, x_{h+1}, \dots, x_d\}$, of $\mathcal{O}_{V,x}$. We may assume here that $I(Z)$ is $\langle x_{h+1}, \dots, x_d\rangle$ at $\mathcal{O}_{V,x}$. The construction of the coefficient algebra will be addressed first at the completion of the local rings. So $\hat{\mathcal{O}}_{V,x}$ is a ring of formal power series, say $k'[[x_1, \dots, x_d]]$. Set, as usual, $G_x = \bigoplus I_k \cdot W^k$, which also extends to a Rees algebra over $\hat{\mathcal{O}}_{V,x}$. Express an element $f_n \in I_n$ as $f_n = \sum_{\alpha^{(1)}} a_{\alpha^{(1)}}\,x^{\alpha^{(1)}}$, where the $x^{\alpha^{(1)}}$ are monomials in $x_{h+1}, \dots, x_d$ and each $a_{\alpha^{(1)}} \in \hat{\mathcal{O}}_{Z,x}$. For any such $f_nW^n$, consider the set $\{a_{\alpha^{(1)}} \cdot W^{n-|\alpha^{(1)}|},\ 0 \leq |\alpha^{(1)}| < n\}$, which we call the coefficients of $f_nW^n$. So the coefficients of $f_nW^n$ form a finite set, defined in terms of a regular system of parameters, and the weight of each coefficient depends on the index $n$.

Claim: As $f_nW^n$ varies over the Rees algebra $G_x$, the coefficients of the $f_nW^n$ generate a Rees algebra, say $\mathrm{Coeff}(G)_x$. The claim here is that the graded algebra $\mathrm{Coeff}(G)_x$ is a finitely generated subalgebra of $\hat{\mathcal{O}}_{Z,x}[W]$.

Assume that $\mathcal{F} = \{g_{N_1}W^{N_1}, \dots, g_{N_s}W^{N_s}\}$ generates $G_x$. Express, for $1 \leq i \leq s$:

(6.1.1) $g_{N_i} = \sum_{\alpha} a^{(i)}_{\alpha}\,x^{\alpha}$, with $a^{(i)}_{\alpha} \in \hat{\mathcal{O}}_{Z,x}$ and $x^{\alpha}$ monomials in $x_{h+1}, \dots, x_d$.

We search for a finite set of coefficients that spans $\mathrm{Coeff}(G)_x$. A first candidate would be $\mathcal{F}'_1 = \{a^{(i)}_{\alpha}W^{N_i-|\alpha|} : 1 \leq i \leq s,\ 0 \leq |\alpha| < N_i\}$. The difficulty appears already if we consider the product of two elements of $\mathcal{F}$, say $g_{N_i}W^{N_i} \cdot g_{N_j}W^{N_j} = f_nW^n$ ($n = N_i + N_j$), and a coefficient, say $a_{\alpha^{(1)}}W^{n-|\alpha^{(1)}|}$, of $f_nW^n$. It follows from (6.1.1) that

(6.1.2) $a_{\alpha^{(1)}} = \sum_{\beta+\delta=\alpha^{(1)}} a^{(i)}_{\beta}\,a^{(j)}_{\delta}$,

for multi-indices $\beta$, $\delta$, and $\alpha^{(1)}$ in $\mathbb{N}^{d-h}$.
Note that the previous expression cannot be formulated as a weighted homogeneous expression in the elements of $\mathcal{F}'_1$. In fact, it can happen that $|\delta| \geq N_j$, and we only consider $W$ with positive exponents. In particular, the previous expression of $a_{\alpha^{(1)}}W^{n-|\alpha^{(1)}|}$ is not weighted homogeneous in $\mathcal{F}'_1$, and hence not in the graded subalgebra of $k'[[x_1, \dots, x_h]][W]$ generated by $\mathcal{F}'_1$. One way to remedy this situation is to allow $a^{(i)}_{\beta}$ to have weight $n - |\alpha^{(1)}|$ if $|\delta| \geq N_j$; note that in such a case $n - |\alpha^{(1)}| \leq N_i - |\beta|$. Therefore $\mathcal{F}'_1$ can be enlarged to, say,

(6.1.3) $\mathcal{F}_1 = \{a^{(i)}_{\alpha}W^{n'} : 1 \leq i \leq s,\ 0 < n' \leq N_i - |\alpha|\}$,

with $N_i$ and $\alpha$ as in $\mathcal{F}'_1$; and the coefficients of $f_nW^n$ are now weighted homogeneous in $\mathcal{F}_1$ (i.e., they lie in the subalgebra of $k'[[x_1, \dots, x_h]][W]$ generated by $\mathcal{F}_1$). The argument applied here to $g_{N_i}W^{N_i} \cdot g_{N_j}W^{N_j}$ also holds for the coefficients of any product of elements of $\mathcal{F}$, and hence for the coefficients of any homogeneous element in the algebra generated by $\mathcal{F} = \{g_{N_1}W^{N_1}, \dots, g_{N_s}W^{N_s}\}$ (i.e., for the coefficients of any homogeneous element of $G_x$). This shows that there is an inclusion of subalgebras, each finite over the other (see 2.2, 2)). In particular $\mathrm{Coeff}(G)_x$ is finitely generated.

So far $\mathrm{Coeff}(G)_x$ has been defined in $\hat{\mathcal{O}}_{Z,x}[W]$. We now show that it can also be defined in $\mathcal{O}_{Z,x}[W]$, and that the definition relies on the inclusion $Z \subset V$ and on the retraction. Express an element $f_n \in I_n\hat{\mathcal{O}}_{V,x}$ as before. For each multi-index $\alpha^{(1)}$, $0 \leq |\alpha^{(1)}| \leq n$, the coefficient $a_{\alpha^{(1)}}$ can be identified with the class of $\Delta^{\alpha^{(1)}}(f_n)$ in $\hat{\mathcal{O}}_{Z,x}$. However, $\Delta^{\alpha^{(1)}}$ is a differential operator relative to the local retraction $V \to Z$. Suppose now that $\{x_1, \dots, x_h, x_{h+1}, \dots, x_d\}$ and $\{x'_1, \dots, x'_h, x_{h+1}, \dots, x_d\}$ are two extensions to regular systems of parameters for $\mathcal{O}_{V,x}$, and that the ideal $\langle x_{h+1}, \dots, x_d\rangle$ is the same. The discussion in 6.3 also shows that, of course, the definition of $\mathrm{Coeff}(G) \subset \hat{\mathcal{O}}_{Z,x}[W]$ is independent of the coordinates we choose in the subring $\hat{\mathcal{O}}_{Z,x}$.

There is a natural inclusion of the restriction of $G$ in $\mathrm{Coeff}(G)_x$, where $\overline{G} \subset (\hat{\mathcal{O}}_{V,x}/I(Z))[W]$ denotes the restriction. Furthermore, this inclusion is an equality if $G$ is a Diff-algebra:

Lemma 6.6. With the setting as above, the restriction of $G(G)$ to the smooth subscheme $Z$ is the Diff-algebra spanned by $\mathrm{Coeff}(G)$ (i.e., the Diff-algebra generated by $\mathrm{Coeff}(G)$ in $\mathcal{O}_{Z,x}[W]$).

Proof. The previous discussion shows that $\mathrm{Coeff}(G)$ is included in the restriction of $G(G)$, which is a Diff-algebra over $\mathcal{O}_{Z,x}$ (Prop. 5.1, A)). In particular, the Diff-algebra spanned by $\mathrm{Coeff}(G)_x$ is included in the restriction. The claim is that this last inclusion is an equality. Here $G(G) = \bigoplus I'_k \cdot W^k$ is the Diff-algebra generated by $G$, so to prove this equality it suffices to show that given $f_n \in I_n$ and $\alpha = (\alpha_1, \dots, \alpha_d) \in \mathbb{N}^d$, $0 \leq |\alpha| < n$, the class of $\Delta^{\alpha}(f_n)$ is in the Diff-algebra generated by $\mathrm{Coeff}(G)$.

Remark 6.8. On the one-dimensional case. We discuss here some particular features of the $G$ operator on Rees algebras which hold when the dimension of the underlying smooth scheme is one. Let $G = \bigoplus I_kW^k (\subset \mathcal{O}_{V'}[W])$ be a Rees algebra over a one-dimensional smooth scheme $V'$. The aim is to prove that in the one-dimensional case $G \subset G(G)$ is a finite extension. If we assume that some $I_k \neq 0$, then $\mathrm{Sing}(G)$ is a finite set of points. Fix $x \in \mathrm{Sing}(G)$ and a local parameter $t$, so that $(I_r)_x = \langle t^{a_r}\rangle$ with $a_r \geq r$ for each index $r$. Define $\lambda_G = \min_r \{a_r/r\}$, and note that $\lambda_G \geq 1$. Let $\{g_{N_1}W^{N_1}, \dots, g_{N_s}W^{N_s}\}$ be a set of generators locally at a closed point $x \in \mathrm{Sing}(G)$. Fix any integer $M$ divisible by all $N_i$, $1 \leq i \leq s$; then $\lambda_G = \nu(I_M)/M$, where $\nu(I_M)$ denotes the order of the ideal at $\mathcal{O}_{V',x}$. Let $\overline{G}$ denote the integral closure of $G$.

Claim 1: The integral closure of $G$ is determined by the rational number $\lambda_G$, and $\lambda_G = \lambda_{\overline{G}}$. In fact, by usual arguments of toric geometry, we conclude that $t^n \cdot W^m \in \overline{G}$ if and only if $n/m \geq \lambda_G$.
This proves the claim. Recall that Sing(G) = Sing(G(G)).

Claim 2: Locally at any x ∈ Sing(G), both G and G(G) have the same integral closure. Let Δ^r, r ≥ 0, be defined in terms of the Taylor development in k′[[t]], as in the proof of Theorem 3.4. We prove our claim by showing that λ_G = λ_{G(G)}. To this end note that given t^a · W^b ∈ G and an operator Δ^r, 0 ≤ r < b,

Δ^r(t^a) = d · t^{a−r},

where d is the class of an integer in the field k′. Since a ≥ b > r ≥ 0 it follows that (a−r)/(b−r) ≥ a/b, so Claim 2 follows from Claim 1.

6.9. Let G be a Rees algebra over V, and assume, after restriction to an affine open set, that it is generated by {g_{N_1} W^{N_1}, ..., g_{N_s} W^{N_s}}. Let M be a positive integer divisible by all N_j; then ⊕_{k≥0} (I_M)^k W^{kM} ⊂ G is a finite extension of graded algebras, and any Rees algebra is in this way a finite extension of a Rees ring of an ideal (2.3). Given two Rees algebras G_1 = ⊕_{r≥0} I(1)_r W^r and G_2 = ⊕_{r≥0} I(2)_r W^r, there is always a positive integer M such that both are integral extensions of the Rees rings generated by the M-th term, say ⊕_{k≥0} (I(1)_M)^k W^{kM} and ⊕_{k≥0} (I(2)_M)^k W^{kM}.

Proposition 6.10. Fix two Rees algebras G_1 and G_2 over a smooth scheme V over a field k. Assume that for any morphism of regular k-schemes, say π : V′ → V, where V′ is one dimensional, both pull-backs have the same integral closure (i.e., that π^{-1}(G_1) and π^{-1}(G_2) have the same integral closure). Then G_1 and G_2 have the same integral closure in V.

Proof. Fix a positive integer M and ideals I(1)_M and I(2)_M as in 6.9. We may assume here that π is of finite type. Lemma 5.3 and the previous properties show that under the condition of the hypothesis both I(1)_M and I(2)_M have the same integral closure in O_V (2.5). In particular, G_1 and G_2 have the same integral closure.

Proposition 6.11. Let G_1 ⊂ G_2 (⊂ O_V[W]) be a finite extension of Rees algebras over a smooth scheme V, and let V′ be a smooth one dimensional subscheme in V. Fix x ∈ V′ and a regular system of coordinates {x_1, ..., x_d} at x such that I(V′) = <x_1, ..., x_{d−1}> locally at x. Then Coeff(G_1) and Coeff(G_2) have the same integral closure.

Proof. By fixing coordinates we also fix a local retraction in an étale neighborhood of x ∈ V. Strictly speaking the coefficient algebras are defined in such a neighborhood. We sometimes work at the completions to ease the notation. Express any f_N W^N ∈ G_1 as before. The coefficients of f_N W^N are {a_α W^{N−|α|}, 0 ≤ |α| < N}, and we define

sl_{V′}(f_N) = min{ ν(a_α) / (N − |α|) ; 0 ≤ |α| < N },

where ν denotes the order at the local ring of V′ at x. Assume that {f_{N_1} W^{N_1}, ..., f_{N_s} W^{N_s}} generates G_1 locally at x, and set J(i)_r = (I(i)_r + P)/P, the restriction to V′, where P is the ideal defining the smooth subscheme V′. Suppose first that J(1)_r = 0 for all r ≥ 1; since G_1 ⊂ G_2 is finite, it follows that also J(2)_r = 0 for all r ≥ 1, and there is nothing to prove. Assume now that J(1)_r is not zero for some r > 0. The inclusion Coeff(G_1) ⊂ Coeff(G_2) ensures that

(6.11.1) λ_{Coeff(G_1)} ≥ λ_{Coeff(G_2)},

and we shall prove the claim, in what follows, by showing that they are equal (see Remark 6.8); this property is preserved by any change of rings. Express, for any g_{M_j} W^{M_j} ∈ F_2, a finite set of generators of G_2:

g_{M_j} = Σ_α a^{(j)}_α x^α,

and set sl_{V′}(g_{M_j}) as above (see 6.1.4); in particular

λ_{Coeff(G_2)} = min_j { sl_{V′}(g_{M_j}) },

or, equivalently, λ_{Coeff(G_2)} is the smallest of the fractions ν(a^{(j)}_α)/(M_j − |α|). So equality in (6.11.1) would follow if we show that λ_{Coeff(G_1)} ≤ ν(a^{(j)}_α)/(M_j − |α|) for each fraction as above. We will assume that for some index 1 ≤ j_0 ≤ t,

(6.11.3) ν(a^{(j_0)}_α)/(M_{j_0} − |α|) < λ_{Coeff(G_1)} for some α,

or equivalently, that sl_{V′}(g_{M_{j_0}}) < λ_{Coeff(G_1)} for some index j_0, and show that in such a case g_{M_{j_0}} W^{M_{j_0}} is not integral over G_1, which is a contradiction. We will show that if sl_{V′}(g_{M_{j_0}}) < λ_{Coeff(G_1)} for some index j_0, there are a ring S and a morphism β : O_{V,x} → S exhibiting this failure of integrality. Let a > 0 and b > 0 be positive integers such that

sl_{V′}(g_{M_{j_0}}) < a/b ≤ λ_{Coeff(G_1)}.

Define l : R^d → R, l(y_1, ..., y_d) = a y_1 + a y_2 + ··· + a y_{d−1} + b y_d, which maps N^d into N.
It follows that, for k′′ an infinite field and for sufficiently general λ_i ∈ k′′, β(g_{M_{j_0}}) has order strictly smaller than aM_{j_0}. Finally, Claim 1 in Remark 6.8, where now λ_{β(G_1)} = a, asserts that β(g_{M_{j_0}}) W^{M_{j_0}} is not integral over β(G_1); so (6.11.3) cannot hold.

The following Theorem can also be proved by other means, which involve Hironaka's theory of infinitely near points in [7], a theory based on the behavior under monoidal transforms. Our proof relies on the previous development in this section, which will also be used for the proof of Theorem 6.13.

Theorem 6.12. (Main Theorem) Let G_1 ⊂ G_2 be an inclusion of Rees algebras over a smooth scheme V. Let G(G_i) be the Diff-algebra spanned by G_i (i = 1, 2). If G_1 ⊂ G_2 is a finite extension, then G(G_1) ⊂ G(G_2) is a finite extension.

Proof. The inclusion G(G_1) ⊂ G(G_2) is clear. We will argue locally at a point x ∈ Sing(G_1), and we make use of the criterion in Proposition 6.10 to show that the extension is finite. Fix a morphism from a one dimensional regular scheme, say V′ → V; locally it factors as V′ ⊂ V′′ →φ V, with φ smooth. Let φ^{-1}(G_1), φ^{-1}(G_2) denote the total transforms of G_1, G_2, and φ^{-1}(G(G_1)), φ^{-1}(G(G_2)) the total transforms of G(G_1), G(G_2). If {x_1, ..., x_d} is a regular system of parameters for O_{V,x}, then {x_1, ..., x_d} extends to a regular system of parameters, say {x_1, ..., x_d, ..., x_e}, for O_{V′′,x′}. It is easy to check that:

1) φ^{-1}(G_1) ⊂ φ^{-1}(G_2) is a finite extension.
2) Sing(φ^{-1}(G_i)) = φ^{-1}(Sing(G_i)) (i = 1, 2).
3) φ^{-1}(G(G_1)) is the Diff-algebra generated by φ^{-1}(G_1).
4) φ^{-1}(G(G_2)) is the Diff-algebra generated by φ^{-1}(G_2).

Therefore the setting at V and at V′′ is the same, and hence, in order to apply Proposition 6.10 we need only show that given a finite extension G_1 ⊂ G_2, the restrictions of the Diff-algebras G(G_i), i = 1, 2, to a smooth one dimensional scheme V′ have the same integral closure. Lemma 6.6 says that the restriction of G(G_i) to V′ is the Diff-algebra generated by Coeff(G_i) (i = 1, 2). Remark 6.8 shows that, for each index i = 1, 2, the Rees algebra Coeff(G_i) and the Diff-algebra generated by Coeff(G_i) have the same integral closure. So it suffices to show that Coeff(G_1) and Coeff(G_2) have the same integral closure, which was proved in Prop 6.11. In fact, 1), 2), 3), and 4) ensure that the setting of Prop 6.11 holds.

Theorem 6.13. Let G_1 ⊂ G_2 be an inclusion of Rees algebras over a smooth scheme V. Fix a smooth subscheme Z ⊂ V, and a local (or formal) retraction V → Z. If G_1 ⊂ G_2 is a finite extension, then Coeff(G_1) ⊂ Coeff(G_2) is also finite.

Proof. Set π : C → Z, where C is smooth and one dimensional, and let x′ ∈ C map to x. Locally at x′, one can factor π as C ⊂ Z_1 → Z, so that φ : Z_1 → Z is smooth. The retraction of V on Z, together with the morphism Z_1 → Z, defines by fiber products a retraction, say V_1 → Z_1, and a smooth morphism, say V_1 → V. The total transform of G_i to V_1 has, as coefficient algebra relative to V_1 → Z_1, the total transform of Coeff(G_i) (i = 1, 2). By further restriction of Z_1 to C, we may assume that Z_1 is one dimensional. Theorem 6.13 follows now from Prop 6.11.

Theorem 6.14. Let G (= G(G)) be a Diff-algebra over a smooth scheme V. Fix a point x ∈ Sing(G), a smooth subscheme Z ⊂ V containing x, and two local (or formal) retractions, say π : V → Z and π′ : V → Z, at x. If Coeff(G) and Coeff(G)′ are defined in terms of π and π′ respectively, then both define the same Diff-algebra in O_{Z,x}[W].

Proof. Let G(G) denote the Diff-algebra spanned by G in the smooth scheme V. The claim is a corollary of 6.6.

Further applications.
There is a particular but natural morphism among smooth schemes, namely that defined by blowing up closed and smooth centers (i.e., monoidal transformations). Given an ideal in a smooth scheme, there are several notions of transformations of sheaves of ideals defined in terms of monoidal transformations (e.g., total transforms, weak transforms, and strict transforms of ideals). Questions such as resolution of singularities, or Log principalization of ideals, are formulated in terms of these notions of transformations. In the case of schemes over fields of characteristic zero, both resolution and Log principalization of ideals are two well known theorems due to Hironaka. If two ideals have the same integral closure, then a Log-principalization of one of them is also a Log-principalization of the other; the key point being that the transforms of both ideals also have the same integral closure. These notions of transformations of ideals extend naturally to Rees algebras. And again, if two Rees algebras have the same integral closure, then their transforms are Rees algebras with the same integral closure. Both theorems, of Log-principalization of ideals and of resolution of singularities, are proved by induction on the dimension of the ambient space. In the setting of Diff-algebras this form of induction relates to the notion of restriction to a smooth subscheme, say Z ⊂ V, as in Theorem 6.13. The outcome of Theorem 6.14 is that such form of restriction to Z is, up to integral closure, independent of the particular retraction. This result plays a role in the extension of resolution theorems to Rees algebras. Our development will be applied in [18], in relation to the study of hypersurface singularities over fields of positive characteristic.
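As an illustration of Remark 6.8 and its two claims, the following is a small worked example of our own (not taken from the text); it assumes the definition λ_G = min{ν(I_r)/r} reconstructed above and characteristic zero.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Illustrative example only; assumes the reconstructed definition
% \lambda_{\mathcal G} = \min_r \nu(I_r)/r and char k = 0.
Let $V' = \operatorname{Spec}(k[t])$ and let $\mathcal{G} \subset k[t][W]$ be the
Rees algebra generated by the single element $t^{3}W^{2}$. Then $\nu(I_{2}) = 3$,
so $\lambda_{\mathcal{G}} = \tfrac{3}{2}$. By Claim 1,
$t^{n}W^{m} \in \overline{\mathcal{G}}$ if and only if
$\tfrac{n}{m} \geq \tfrac{3}{2}$. For instance $t^{2}W \notin \mathcal{G}$
(the degree-one part of $\mathcal{G}$ is zero), yet
\[
(t^{2}W)^{2} \;=\; t \cdot \big(t^{3}W^{2}\big) \;\in\; \mathcal{G},
\]
so $t^{2}W$ satisfies the integral equation $Z^{2} - t\,(t^{3}W^{2}) = 0$ over
$\mathcal{G}$. Consistently with Claim 2, the Diff-algebra $G(\mathcal{G})$
contains $\Delta^{1}(t^{3})\,W = 3t^{2}W$, of slope
$\tfrac{2}{1} \geq \lambda_{\mathcal{G}}$, and indeed
$\lambda_{G(\mathcal{G})} = \min\{\tfrac{2}{1}, \tfrac{3}{2}\} = \lambda_{\mathcal{G}}$.
\end{document}
```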
Evaluation of Vocal Fold Motion During Blocks in Adults Who Stutter

Abstract. Background: Stuttering is a speech disorder; the primary symptom in adults who stutter (AWS) is blocks, which halt both speech and breathing. This study aimed to evaluate vocal fold motion during blocks in AWS, in order to better understand this condition. Methods: We used data obtained through flexible fiberoptic endoscopy and measurements of airflow and voice obtained from speech phonogram waveforms for 58 blocks in 12 AWS who were asked to read a set text for measurements. We compared the number of blocks with glottal closure and glottal opening during stuttering. Results: In most AWS, blocks were accompanied by both glottal closure and glottal opening. The proportions of blocks with glottal closure and glottal opening were 46.6% and 53.4%, respectively. Thus, vocal fold positions during stuttering blocks varied among individuals. Conclusion: Our study shows that stuttering with cessation of voice can occur both when the vocal folds are open and when they are closed.

Introduction

Shortly after the onset of stuttering, preschool-aged children often show prolongation and repetition of initial syllables; these symptoms decrease with age. However, blocks that halt speech become more common with age [1,2]. Blocks occur unexpectedly, even for AWS themselves, and can also halt normal inspiration and expiration. This symptom resembles the chief complaints ("voice not coming out" and "clogged voice") made by patients with muscle tension dysphonia (MTD) and adductor spasmodic dysphonia (AdSD). A systematic review of voice therapy in MTD showed that there were positive changes to outcome measures immediately following a period of therapy [3], and that therapy for MTD continued to be effective for 6 months after the completion of therapy [4]. However, speech therapy for adults who stutter (AWS) is limited [5]. Stuttering is readily modified during treatment in the clinic, but this gain is difficult to transfer outside the clinic. It has been reported that glottal closure occurred during 100% of blocks in a study of only six blocks [12]. However, repetition and prolongation during stuttering present both glottal closure and opening [12,13]. Since speech therapies for AWS are different from voice therapy in MTD, we suspected that vocal fold function in stuttering differs from that in MTD and AdSD. Therefore, we hypothesized that stuttering blocks could also involve both glottal closure and opening. The present study aimed to evaluate vocal fold position during stuttering blocks using a larger dataset from AWS.

Methods

This study was approved by the Kyushu University Ethics Committee and conformed to the tenets of the Declaration of Helsinki. All patients provided written informed consent to participate in the study. The present study included 12 AWS (age range, 20-49 years; mean age, 31.2 years) and 10 adults with MTD (age range, 20-64 years; mean age, 35.3 years). In free conversation, it is sometimes difficult to discern whether stuttering has occurred, and people who stutter rephrase words to avoid stuttering. Therefore, in the present study, evaluations were conducted while the participants read a set text. To assess the frequency of stuttering, all participants were instructed to read 84 morae (seven sentences) from the book "Jack and the Beanstalk". Symptoms specific to stuttering (repetition of syllables, prolongations, and blocks) were recorded [14-16]. A flexible fiberoptic endoscope was inserted transnasally for the observation of vocal fold positions during the occurrence of blocks. Pitch, sound pressure, and expiratory flow rate measurements were obtained, and speech waveforms were recorded with an inserted mouthpiece, for all participants. Measurements were obtained using LabChart 8 software (AD Instruments, Colorado Springs, CO, USA), followed by superimposition of laryngeal endoscopic images and phonogram signals (PS-77, Nagashima Medical Instruments Co., Ltd., Tokyo, Japan). The analyses were conducted for measurements obtained during 58 blocks recorded from the 12 participants. The stuttering blocks were classified on the basis of vocal fold position as follows: dystonic type, intermediate type, and lateral type, named after the corresponding vocal fold paralysis positions [17]. The dystonic type was represented by glottal closure, whereas the intermediate and lateral types were represented by glottal opening. The numbers of blocks with glottal closure and glottal opening were recorded for all participants.

Results

Table 1 shows the frequency of the dystonic (glottal closure), intermediate (glottal opening), and lateral (glottal opening) blocks in the 12 AWS. In most AWS, blocks were accompanied by both glottal closure and glottal opening. In three AWS (Cases 10, 11, and 12), blocks were accompanied only by glottal opening. Thus, there was no consistent trend. The proportions of blocks with glottal closure and opening were very similar, at 46.6% and 53.4%, respectively. Figure 1 shows the aerodynamic test results and speech waveforms during stuttering blocks. In the dystonic type, gradual cessation of the expiratory airflow was observed after the disappearance of speech waveforms, pitch, and sound pressure. In the intermediate type, there was no speech waveform, although slight respiration was observed. In the lateral type, there was no expired or inspired air and no phonation. Figure 2 shows representative cases of MTD and expiratory flow rates. The expiratory flow of MTD decreased considerably; however, the waveforms for sound pressure and pitch were retained. Figure 3 is a schematic diagram showing the differences among normal speech, MTD, and stuttering blocks. In normal speech, the respiratory centers in the pons and medulla oblongata are synchronized with the laryngeal motor neurons, and timing cues for the initiation are emitted from the speech centers in the cerebral hemispheres, resulting in vocal fold adduction, followed by the generation of speech. However, in stuttering, there is a block in the timing cues for the initiation.

Discussion

In the present study, we conducted measurements during 58 blocks in 12 AWS, and found 46.6% of blocks with glottal closure and 53.4% with glottal opening. Both glottal closure and glottal opening were observed in the same AWS, demonstrating that the vocal folds assume varied positions during stuttering blocks. This finding was not consistent with the findings of Conture et al. [12], who examined only six blocks and found glottal closure in all of them. Their study was limited by a small sample size, and the results should thus be interpreted with caution. We analyzed data related to 58 blocks in the present study and found an adequate number with glottal opening. In addition, vocal fold position was not constant during blocks in the same AWS.

In our study, pitch, sound pressure, and expiratory flow rate were measured simultaneously with speech waveform recordings. Our findings confirmed the presence of sound disruption during each stuttering block. During dystonic stuttering blocks, the disappearance of pitch, sound pressure, and speech waveforms was followed by a gradual decrease in the expiratory flow rate and its eventual disappearance. This phenomenon is characterized by a block accompanied by voice and was in contrast to the findings for the representative case of MTD described in Figure 2, which showed that expiratory flow was considerably lower in patients with MTD. The findings from the pitch and sound pressure waveforms suggest that voice disruptions occurring during dystonic stuttering blocks are not due to a blocked throat. In the lateral-type blocks, both voice and respiration were absent. In the intermediate-type blocks, voice disruptions occurred, although slight inspiration and expiration were present. The occurrence of blocks accompanied by voice can be attributed to limited respiration. In contrast to lateral blocks, which are considered pure silent blocks, the intermediate blocks exhibited slight inspiration and expiration; these may be a combination of blocks and other types of stuttering. The above findings indicate that the position of the vocal folds is not constant during stuttering blocks and that both glottal opening and closure can accompany blocks in the same AWS. Therefore, treatment regimens that are only useful to weaken glottal closure, such as direct therapy (specific laryngeal relaxation, the yawn-sigh method, reduction of vocal loudness) and indirect therapy (managing the contributing and maintaining aspects of vocal abuse patterns or poor vocal hygiene) for MTD [3,4], and surgical treatment (type 2 thyroplasty) for AdSD [18], may not be effective in the treatment of stuttering.

Figure 3 shows the differences among normal speech, MTD, and stuttering blocks in AWS. In normal speech, the respiratory centers in the pons and medulla oblongata are synchronized with the laryngeal motor neurons, and timing cues for the initiation are emitted from the speech centers in the cerebral hemispheres, resulting in vocal fold adduction followed by the generation of speech [19-21]. MTD may be due to excessive neural commands responsible for vocal fold adduction, which are emitted by laryngeal motor neurons in the pons and medulla oblongata. However, blocks in AWS occur in various vocal fold positions, which suggests that the origin of this disorder is at a level higher than the respiratory centers in the pons and medulla oblongata, and that it does not involve the laryngeal motor neurons. The findings of this study are related to a deficit in brain timing networks of speech planning and timing cues for the initiation and execution of motor sequences in stuttering [22-24].

In conclusion, we found that stuttering blocks present both glottal closure and opening, similar to prolongations and repetitions.

Figure 1: Representative cases of the three types of blocks classified in this study. The dotted lines indicate the timing of the occurrence of blocks.

Figure 2: Representative case of MTD. The expiratory flow has considerably decreased, but the waveforms for sound pressure and pitch are retained (dotted line).

Figure 3: Differences among normal speech, MTD, and stuttering blocks.

Table 1: Frequency of each type of block, classified according to the vocal fold position, and the proportion of blocks with glottal closure and opening, in the 12 adults who stutter included in this study.
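As a quick arithmetic cross-check of the reported proportions (our own calculation; the per-case counts of Table 1 are not reproduced in this extraction):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Consistency check only; the split 27/31 is inferred from the rounded percentages.
Out of the $58$ blocks analyzed, the reported proportions correspond to
\[
\frac{27}{58} \approx 46.6\%\ \text{(glottal closure)},
\qquad
\frac{31}{58} \approx 53.4\%\ \text{(glottal opening)},
\]
which is the only integer split of $58$ consistent with the rounded percentages.
\end{document}
```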
Metal-Based Nanoparticles in Food Packaging and Coating Technologies: A Review

Food security has continued to be a topic of interest in our world due to the increasing demand for food. Many technologies have been adopted to enhance food supply and narrow the demand gap. Thus, the attempt to use nanotechnology to improve food security and increase supply has emerged due to the severe shortcomings of conventional technologies, which have made them insufficient to cater to the continuous demand for food products. Hence, nanoparticles have been identified to play a major role in areas involving food production, protection, and shelf-life extension. Specifically, metal-based nanoparticles have been singled out to play an important role in manufacturing materials with outstanding properties, which can help increase the shelf life of different food materials. The physicochemical and biological properties of metal-based nanoparticles, such as their large surface area and antimicrobial properties, have made them suitable and useful not just as regular packaging materials but as functional materials upon incorporation into biopolymer matrices. These, amongst many other reasons, have led to their wide synthesis and application, even though their methods of preparation and risk evaluation remain a topic of concern. This review, therefore, briefly explores the available synthetic methods, physicochemical properties, roles, and biological properties of metal-based nanoparticles for food packaging. Furthermore, the associated limitations, alongside quality and safety considerations, of these materials are summarily explored. Although this area of research continues to garner attention, this review shows that metal-based nanoparticles possess great potential to be a leading material for food packaging if the problems of migration and toxicity can be effectively modulated.

Introduction

Food security in developing countries faces challenges such as low agricultural productivity, inadequate farming practices, natural resource degradation, high post-farming losses, limited value addition, and rapid population growth. Many methods are thus being adopted, with newer technologies such as genetic modification, methods for improving soil fertility, biofortification, synthetic biology, artificial intelligence, and irrigation technologies, to enhance food supply and narrow the demand gap, according to reports made by the United Nations Conference on Trade and Development (UNCTAD) in 2017 [1]. The attempt to use nanotechnology for agricultural purposes has been thought to emerge from the inference that conventional farming technologies are insufficient to meet the ever-growing need for productivity while maintaining an eco-friendly approach [2]. For instance, the long-term use of "miracle seeds" with other farming techniques and agents such as pesticides, fertilizers, and irrigation has been questioned at the scientific and policy levels and proposed to be phased out due to the many health and environmental concerns [2]. Furthermore, foodborne illnesses are also a global public health concern. The Centers for Disease Control (CDC) estimated 47.8 million foodborne diseases, 127,839 hospitalizations, and 3037 deaths in 2011 in the United States alone [3]. However, the concept of smart packaging combines the benefits of both active and intelligent packaging technology [39].
The different nanomaterials that have been applied thus far in food packaging can be generally classified as organic or inorganic. The organic materials include whey proteins, polysaccharides, quaternary ammonium salts, chitins, halogenated compounds, and phenols [40,41]. On the other hand, inorganic nanomaterials are often metal-based, and these are further categorized into pure metals, metal oxides, and metal and/or metal oxide composites [42,43]. These are incorporated into compositing materials, usually polymers, to make nanocomposite films and nanofibers [42,43]. Hence, many organic, inorganic, and composite nanomaterials have been effectively developed to curb the qualitative and quantitative losses of food materials. Some prominent examples that have been successfully applied in food packaging are nanocellulose, nanostarch, protein nanoparticles, chitosan nanoparticles (CNPs), carbon nanotubes, silver nanoparticles (Ag-NPs), nanoclay, zinc oxide nanoparticles (ZnO-NPs), and titanium oxide nanoparticles (TiO2-NPs) [44].

General Synthetic Approaches for the Preparation of Nanomaterials

Detailed synthetic approaches have already been well established in the literature over the years. These methods are generally categorized as "top-down" or "bottom-up" approaches [45-47]. The top-down method involves size reduction from a starting material via different types of physical or chemical treatment [48]. However, this method produces materials with limited surface chemistry and physical properties due to the introduction of imperfections on the surface of the material [16]. In bottom-up methods, the nanomaterials are built up from small particles, like atoms, to form a new entity in the nano regime [49]. In this approach, the nanostructured building blocks of the nanoparticles are formed as the first step and then assembled to produce the final particle [16]. Methods under the bottom-up approach rely primarily on chemical and biological processes. Figure 1 summarizes the various synthetic methods under the "top-down" and "bottom-up" approaches. Based on the projected application of choice, these different synthetic approaches have been employed in synthesizing materials with unique and exciting characteristics [51]. Nevertheless, the associated toxicity accompanying most of these methods has been a major environmental concern in recent years because of toxic organic solvents, reducing substances, and stabilizers. The waste and ecological concerns associated with these materials have led to the desire for more biologically compatible, clean, reliable, efficient, and environmentally friendly synthetic routes, such as using plant extracts or microorganisms [45].
These biological methods are generally referred to as the biogenic/green methods. A detailed review of the biogenic synthesis and mechanisms involved in the preparation of metal-based nanoparticles has already been published [52,53].

Metal-Based Nanoparticles in Food Packaging Technology

Food packaging materials of nanomaterial origin for shelf-life extension and quality retention are generally synthesized by incorporating nanoparticles, which may be derived from either metals or metal oxides, into conventional food packaging materials such as films or containers, composite multilayer materials, and organic, inorganic, or combined coating materials [54]. Nanotechnology thus helps produce materials with improved characteristics, like enhanced physical and mechanical properties, while also proffering solutions to food deterioration through biological properties such as antibacterial, antioxidative, and UV-absorbing action [44]. Furthermore, these materials can also perform a smart packaging function by actively monitoring and controlling food conditions within the enclosed package [55]. Some currently used nano-based food packaging materials that have already gained acceptance and are commercially used are summarized in Table 1. As projected in the statistical report of Vantage Market Research, the global smart packaging market size is expected to reach an estimated $33 billion by 2028, with a compound annual growth rate (CAGR) of 12% during the forecast period from 2021 to 2028 [56]; a rough arithmetic check of this projection is given at the end of this section. Further projections place smart packaging among the fastest-expanding packaging materials in the coming years and attribute its growth rate to its unique, interactive, customer-friendly features at lower expense [44]. The food packaging industry has become an important sector in food production due to the emergence of new technologies for retaining the nutritional and organoleptic properties of stored food [44]. Hence, in recent times the scope of food packaging has gone beyond conventional food preservation to include the protection of sensitive bioactive compounds from unfriendly environmental conditions and physical damage [44]. This new scope has, in turn, led to an extensive search for materials with functional properties such as thermal strength, stability, durability, and improved barrier properties, which possess the capacity to extend the shelf life of food products [54,57]. It is noteworthy that, generally, food packaging materials should be cheap, hard, flexible, lightweight, inert, and strong, amongst other useful properties [58]. Two notable materials that fall into this class are polypropylene and polyethylene. Nevertheless, these plastic-based materials are nonbiodegradable, may take more than a hundred years to break down, and are non-recyclable [58], which thus constitutes an environmental hazard. Consequently, other solutions that are sustainable, biodegradable, safe, and able to prevent or reduce environmental concerns are currently being sought [58]. Hence, materials that do not pose health or environmental concerns and are easily disposed of are the most desired. Natural products from renewable materials of animal, plant, and other biological origins are used [58]. Nevertheless, despite the benefits of natural biopolymers in food packaging applications, they do not have the optimal required barrier, physical, and mechanical properties [58].
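As flagged above, a back-of-the-envelope check of the cited market projection (our own arithmetic, not taken from the cited report [56]) shows what the 12% CAGR implies for the 2021 baseline:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative arithmetic only; the 2021 baseline is inferred, not quoted from [56].
With growth rate $g = 0.12$ compounded over the $7$ years from 2021 to 2028,
a 2028 value of \$33\,\text{B} implies a 2021 baseline of
\[
P_{2021} \;=\; \frac{P_{2028}}{(1+g)^{7}}
        \;=\; \frac{33}{1.12^{7}}
        \;\approx\; \frac{33}{2.21}
        \;\approx\; \$14.9\,\text{B},
\]
i.e., the projection amounts to the market roughly doubling over the forecast period.
\end{document}
```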
This, therefore, has brought about the introduction of nanomaterials with relevant properties capable of improving the currently available conventional food packaging materials. Nano-based packaging materials for shelf-life extension and quality retention have thus been synthesized mainly either by incorporating nanoparticles into already available traditional food packaging materials, including films and containers, or through the design of new multi-layered nanocomposites, inorganic, organic, or a combination of both, applied by nanocoating through spraying, rubbing, and immersion [44]. Over the years, metal-based nanoparticles have been extensively studied because of their benign nature and outstanding physical, biological, and physicochemical properties. Their ease of preparation also accounts for their extensive documentation in the literature [26,52]. The optical properties, which are mainly governed by the localized surface plasmon resonance (LSPR) of noble metal nanoparticles, are significant properties that make this class of nanomaterials valuable in sensing and, ultimately, as smart packaging materials [59,60]. This property has also allowed their use as materials in drug and gene delivery, photothermal therapy, molecular labeling, and bioimaging [61]. Furthermore, their ease of bioconjugation and low toxicity have made them highly suitable and sought after for various biological studies (as antimicrobial, anti-inflammatory, antiviral, anti-platelet, antidiabetic, anti-angiogenesis, and anticancer agents) and for bio-nanotechnology [62]. Metal-based nanoparticles are, therefore, of particular interest in nanotechnological research for food packaging materials [58]. This is because they can be easily incorporated into natural biopolymers to form hybrid materials called nano-biocomposites, which possess significantly better properties than their respective constituents. Metal-based nanoparticles have been reported to participate actively in the design of novel, efficient active packaging [63,64]. For instance, Yu et al. [65] reported loading ZnO NPs at 0 to 5 wt% on a glycerol-plasticized pea starch film, with carboxymethyl cellulose as a stabilizer; the report showed an increase in mechanical (tensile) strength of 9.81 MPa and 42% elongation at break. Furthermore, higher UV-visible absorption was observed for this material. Therefore, the loading of nanoparticles on natural biopolymers often results in the formation of active packaging, which performs roles beyond those of conventional packaging materials. Table 2 gives a summary of metal-based nanoparticles and their respective roles in the design of some active food packaging materials using natural biopolymers [58].

Table 2 (excerpt): Poly(3-hydroxybutyrate-co-3-hydroxyvalerate)/ZnO — decreased a*, b*, and L* values and EB; enhanced TS, TDT, transparency, and toughness [75]. (UVA and UVB: types of UV radiation.)

Prominent Examples of Metal-Based Nanoparticles and Their Food Packaging Applications

Silver nanoparticles (AgNPs) have remained at the forefront of the most studied metal-based nanoparticles due to their unique physical and chemical properties, which have led to their application in several fields of endeavor [52]. Over the years, silver has proven to be a valuable material for food protection against microorganisms in the production of liquid food substances such as wine, water, and milk [33].
Its use in medicine and biotechnology remains notable due to its ability to inhibit the growth of microorganisms in burns, catheters, cuts, and wounds, protecting them from infection [76]. Due to its large surface area, silver in its nano form has been reported to possess a broad spectrum of biological activities, such as antimicrobial, antifungal, anti-yeast, antioxidant, and antiviral activities, compared to its bulk counterpart [77-79]. The two forms of silver, Ag0 and Ag+ species, have been suggested to account for the antagonistic action of silver nanoparticles (AgNPs) against microorganisms [37]. Furthermore, they have been reported to have the capacity to break down lipopolysaccharide by binding to the surface of the cell [33]. Hence, there has been extensive interest in studying their different synthetic routes. In the past, silver NPs were prepared using conventional methods such as the solvothermal synthetic route, which requires many hazardous, pricey, and environmentally unfriendly chemicals [80]. These concerns have led to the discovery of more accessible, easy-to-prepare, cheap, and ecologically friendly approaches, such as using plant extracts as mediating agents. Although the use of biologically significant extracts in synthesizing these nanomaterials has been found to confer enhanced bioactivity on the resulting materials, the conventional methods have been reported to control the shape of the nanoparticles more readily. However, toxic chemicals, cost, and waste, which influence their biocompatibility, remain a significant concern [10]. Many plants, such as Musa balbisiana (banana), Azadirachta indica (neem) and Ocimum tenuiflorum (black tulsi), Phyllanthus emblica, Dovyalis caffra, Clitoria ternatea, Solanum nigrum, and Jasminum officinale, have been used in the preparation of silver nanoparticles with different biological properties [52,81-83]. Silver has been one of the most explored of the metal-based nanoparticles due to its well-established action as an antimicrobial agent against several commensal and pathogenic strains, alongside fungi and viruses [84,85]. They act by targeting metabolic activities through their binding to DNA, proteins, and enzymes, which results in bacteriostatic effects [86]. This then disrupts and destabilizes the outer and cytoplasmic membranes [87]. They have also been found to stimulate the production of reactive oxygen species (ROS) and inhibit some enzymes of the respiratory chain, as seen in Figure 2 [88]. The influence of physicochemical properties such as shape, size, and crystal structure, as seen in Figure 2b, on the antimicrobial activities of metal-based NPs is well established in the literature [89,90]. Nevertheless, other factors such as aggregation, dissolution, and surface charge have been implicated in the biological activities of these materials. The dissolution process, for instance, is a crucial process in which the nanoparticles release metal ions, which can interact with bacterial cells, disrupting their vital functions and leading to cell death [91]. This enhances the antibacterial activity of metal nanoparticles and contributes to their effectiveness against Gram-positive and Gram-negative bacteria [92].
Although agglomeration has been suggested to exert both positive and negative effects on the performance of metal-based nanoparticles against microorganisms, its impact on their activities has been highlighted in the literature. Agglomerated nanoparticles can present an increased surface area for interactions with bacterial cells due to the larger structure provided by the aggregated material, allowing for increased contact and interaction with bacterial membranes [92]. Nevertheless, large agglomerated structures may also limit the penetration of nanoparticles into bacterial cells, reducing their effectiveness, while also altering the physicochemical properties of the nanoparticles, such as size, shape, and surface charge, which may affect their interaction with bacteria and their mode of action [92]. The surface charges of nanoparticles have been thought to significantly influence antimicrobial activities. The surface charges of nanoparticles such as silver nanoparticles affect their interactions with bacteria and contribute to their antibacterial properties [93-95]. However, the exact mechanisms underlying these interactions, and the specific effects of nanoparticle surface charges on antimicrobial activities, require further research and exploration [93-95].

Figure 2. (a) Mechanism of antimicrobial action [44] and (b) the impact of some physicochemical properties on the interaction of metal-based nanoparticles with microbial cells (images copied from [96] with permission from Elsevier, Copyright 2023).

Silver nanoparticles have been embedded in porous zeolite, which is used in producing plastic (low-density PE) with the capacity to extend the shelf life of stored beverages such as orange juice [88,97]. The active nanocomposite has been reported to be highly effective in conferring antimicrobial properties and possessing heat treatment capacity [88,97]. Most biopolymers prepared for food packaging use polysaccharides and proteins. Many studies have been carried out in which nanoparticles like silver have been impregnated into polymeric matrices. For instance, about 15.3 mg mL−1 of silver NPs impregnated into cellulosic food packages was found to significantly enhance the shelf life of tomatoes and cabbage, according to the report by Singh and Sahareen [98]. Also, Vieira et al. used about 0.25% (w/w) Ag-NPs to inhibit the proliferation of Colletotrichum gloeosporioides on stored fruits (Carica papaya L.) for 14 days at 20 °C, thus extending the shelf life [97]. Likewise, using PVP as a coating material, 72 and 98 mg mL−1 of silver nanoparticles were used to inhibit the growth of E. coli and Bacillus cereus, respectively, which in turn prevented the development of grey molds for 15 days at 15 °C on stored chilli pepper [99].
Similarly, the shelf life of fresh stored tomatoes has been extended by 30 days using 100 mg mL−1 of Ag NPs compared to the control used, according to [100]. Other studies using silver nanoparticles as food packaging materials, and their notable properties, are summarized in Table 3.

Table 3 (excerpt):
- Polylactic acid (PLA)/AgNPs, 1-10%: preserved ascorbic acid in strawberries; decreased the reduction rate of polyphenols in the same fruit; the PLA/Ag 5% film showed better preservative properties than the other counterparts [102].
- Polyvinyl alcohol/clay/AgNPs nanocomposite film, concentration not indicated: enhanced mechanical, light-barrier, and water-resistance properties were observed; antimicrobial action against S. Typhimurium and S. aureus enabled it as an active food packaging material; fabricated pouches of PVA/clay/Ag nanocomposite prevented microbial spoilage in chicken sausages [103].
- Polyethylene/Ag/TiO2, Ag/TiO2 nanopowder (9 g): showed strong antibacterial activity because of the interaction between Ag and TiO2; this film retarded changes in the pasting qualities and texture of rice [104].
- AgNPs encapsulated in gelatin-montmorillonite (M), cellulose acetate (CA), and/or thymol; CA/Ag/M film, 3-5%: tensile, UV-blocking, and oxygen-barrier properties of the films were enhanced; good antioxidant activity was recorded, including for those containing thymol; synergistic effects of AgNPs and thymol on the films' antimicrobial and antifungal activities [105].

Zinc oxide nanoparticles, just like silver NPs, have remained at the forefront of the oxide nanoparticles receiving considerable attention due to their attractive physicochemical properties, low cost, and role in many biological systems owing to the presence of Zn, an essential element for both plants and animals [100]. Zinc oxide and silver NPs have been extensively studied and used for antibacterial, anti-inflammatory, antifungal, antioxidant, cancer therapy, wound healing, bioimaging, antidiabetic, drug delivery, and drug targeting purposes [106]. In its oxide form, Zn has been reported to be more readily available for assimilation than in its ionic form. In a foliar study of coffee plants, the Zn content of the plants treated with ZnO was three times that of plants treated with the ionic form, which in turn led to high assimilation of carbon dioxide and a high photosynthetic rate [107]. This highlights the advantage of using the oxide rather than the ionic form. It has been reported that ZnO NPs applied to the surface of plant material generate, upon exposure to light, reactive oxygen species such as H2O2, •OH, and O2•−. These active oxygen species attack the cell walls of microorganisms, inhibiting their proliferation and growth on such plant material, hence reducing or completely stopping the spoilage of perishable food materials [7,108]. Nevertheless, ROS also tend to impede the homeostasis of many plant systems [109]. There is, therefore, a need to find the right balance in its postharvest application so that it serves the desired purpose without generating an entirely new problem. Moreover, the U.S. FDA has acknowledged ZnO as a "generally recognized as safe" (GRAS) material (21CFR182.8991) (FDA, 2011) [7]. Due to its broad-spectrum potential against several microorganisms, ZnO has been used in food packaging in composites with other polymeric materials. The most common matrix material used with ZnO is chitosan [110-113].
In a study carried out to compare the antimicrobial performance of an Ag-chitosan film against its ZnO-chitosan counterpart using S. aureus, E. coli, S. typhimurium, B. cereus, and L. monocytogenes, the inhibition diameters for silver ranged between 10-15 mm while those for ZnO were between 15-19 mm. In another study, screening ZnO NPs incorporated in a matrix mixture of chitosan/calcium silicate/polyethylene glycol against S. aureus, P. aeruginosa, C. albicans, and A. niger gave higher inhibition diameters, superior to those of the control used in the study [114]. The incorporation of chitosan into other biopolymers has also been extensively studied. For instance, incorporating ZnO into polypyrrole-modified bacterial cellulose as a polymer matrix, a material currently used in food packaging applications, revealed a remarkable improvement as an antioxidant material [115]. Furthermore, ZnO nanorods have been composited with grapeseed extract to form a film that showed UV-blocking and enhanced vapor-barrier properties [116]. Other studies in which ZnO has been incorporated into polymeric matrices are summarized in Table 4.

Table 4 (excerpt):
- (Matrix not recovered in this extraction): enhanced water-barrier, UV-barrier, mechanical, and antimicrobial properties; in the antimicrobial study, an inhibition zone of 28 mm was recorded against Salmonella typhimurium [119].
- Grape seed extract (GSE, 5 wt% of CMC)/ZnO composite films, 3%: the inclusion of GSE conferred antioxidant activity on the CMC-based films, exhibiting about 95% and 25% scavenging activity against ABTS and DPPH oxidative free radicals; the film exhibited 100% UV protection; furthermore, upon the addition of ZnO NPs, the composite film showed enhanced mechanical and water vapor barrier properties; the composite film also displayed potent antibacterial properties against the foodborne pathogens E. coli and L. monocytogenes [120].
- Pectin/ZnO composite films, 0.5-1.5%: the UV-light-barrier property of the pectin/ZnO films was significantly enhanced as the concentration of ZnO increased [121].

Another notable metal-based nanomaterial that has been considered, and is currently being investigated, is titanium oxide nanoparticles (TiO2 NPs) [122]. This is because, alongside ZnO and silver, they have been approved as safe materials by the Food and Drug Administration (FDA) for biomedical, food, and cosmetics applications [123,124]. Specifically, TiO2 NPs within the size range of 20-400 nm have been widely used as packaging materials in the food industry because of their biocompatibility, non-toxicity, high surface area, UV absorptivity, high refractive index, and photocatalytic and biological properties [125-127]. Titanium oxide has already been approved as a food additive in many countries, including the USA; however, the amount stipulated by the FDA is limited to 1% of the total food mass. Contrary to the USA, the EU has approved titanium oxide (listed as food additive E 171) for use quantum satis, meaning that no maximum level is specified [122,128]. In China, up to 10 g/kg of TiO2 can be used in food substances as a coloring agent [128]. In medicine, food, cosmetics, and electronics, TiO2 NPs have been widely used due to their valuable properties [129,130]. They have been specifically used in the food sector to manufacture active packaging composite films with improved functional properties [127,131].
Titanium oxide interacts with the film matrix, which leads to enhanced physical strength and an improved gas barrier and, in some cases, confers a secondary function of decomposing ethylene, which in turn enhances the shelf life of fruits after harvest [125,127,129,132]. Hence, its outstanding properties, such as ethylene-scavenging ability [133,134], antimicrobial properties [135-137], compatibility with biopolymers [131,138], and UV shielding [136,139], have made it exceptionally useful in the design of active food packaging materials [127]. Upon its addition to biopolymers in the preparation of composite films, these properties are generally enhanced [132,134]. This also leads to the concurrent enhancement of the physical, barrier, mechanical, thermal, functional, and chemical properties of the polymeric matrix [131,137,138,140]. These properties are measured in terms of solubility, thickness, and moisture content for physical properties; water vapor and oxygen permeability for barrier properties; color coordinates and transparency for optical properties; glass transition temperature (Tg), melting point, and degradation temperature for thermal properties; tensile strength, elongation at break, and Young's modulus for mechanical properties; and gas scavenging, antioxidant, antimicrobial, and UV-shielding capacity for functional properties [127]. The functional properties of biopolymers containing titanium oxide used in active food packaging materials have been widely studied and are summarized in Table 5 [122].

Table 5. A few examples of TiO2 NP-based materials (matrix; TiO2 content; effects/functions of TiO2 in the prepared food packaging material; reference) [122]:
- Hydroxypropyl methylcellulose; 0.5-2: enhanced opacity and elongation-at-break (EB) [143].
- k-Carrageenan/xanthan gum/gellan gum; 1-7: enhanced tensile strength and antimicrobial properties; reduced water vapor permeability and water content [139].
- Gelatin; 3-5: enhanced tensile strength, opacity, elongation-at-break, and antibacterial properties; reduced water vapor permeability [148].
- Sweet potato starch/lemon-waste pectin; 0.5-4: enhanced tensile strength; reduced water vapor permeability, water content, and water solubility [149].
- Gellan gum; 1-20: enhanced thickness, tensile strength, opacity, and antimicrobial properties; reduced water vapor permeability [150].
- Hydroxypropyl methylcellulose; 0.04: enhanced elongation-at-break [151].
- Chitosan; 0.25-2: enhanced tensile strength and antimicrobial properties; reduced water vapor permeability [152].
- Kefiran/whey protein isolate; 1-5: enhanced elongation-at-break; reduced water vapor permeability, water content, and water solubility [131].
- CMC/guanidinylated chitosan; 1-5: enhanced tensile strength, opacity, and antibacterial properties; reduced water vapor permeability, water content, and water solubility [153].
- Wheat starch; 1-4: enhanced opacity; reduced water vapor permeability and water solubility [154].

Also, copper oxide is among the notable FDA-approved metal-based nanoparticles that have garnered attention. Their potential as antimicrobial agents against microorganisms such as bacteria, fungi, viruses, and algae has made them highly desirable for several applications [155]. This is because the high surface area of the nanoparticles allows for interaction with cell membranes, which confers excellent antimicrobial action [156-158].
Furthermore, the increased interest stems from observed properties such as shape, size, and composition [159] and outstanding physical properties like high-temperature superconductivity, electron correlation effects, and spin dynamics [160]. This has led to its application in several scientific and technological fields, including electronics [161,162], agriculture [163,164], medicine [165,166], and solar energy [167,168]. Its function as an antimicrobial agent stems from the fact that copper ions destroy and disrupt microbial cell components through redox reactions. This antimicrobial potential has thus been widely studied and applied in improving some polymers used in food packaging [157,158,169,170]. In a study carried out by Saravanakumar et al. [171] to produce an antimicrobial film (APF), CuO NPs were incorporated into cellulose at different compositions using sodium alginate (SA) as a plasticizer to provide flexibility. Both constituent materials, the cellulose nano-whiskers (CNW) and the CuO NPs, acted synergistically by limiting moisture penetration and preventing microbial activity on freshly cut pepper. In this study, the standard characterization techniques of XRD, UV, FTIR, EDX, and SEM were used to ascertain the resulting physicochemical properties of the newly prepared material. This material was found to exert active food packaging (antimicrobial and barrier) action at the optimum composition of CNW (0.5%)-SA (3%)-CuO NPs (5 mM), showing the potential to be a functional food packaging material capable of overcoming the limitations of conventional ones [171]. The prospects of combining two or more FDA-approved metal-based nanoparticles have been studied to examine the possibility of synergism in their application as food packaging materials. According to the report by Dehghani et al. [172], FDA-approved metal nanoparticles of Ag, ZnO, and CuO, at different combination ratios and reduced concentrations, were incorporated into LDPE to prepare an active food packaging material. Physicochemical characterization confirmed uniformly distributed nanoparticles on the surfaces of the prepared nanocomposites. It was found that combinations of up to 1% (w/w) of any two NPs improved the tensile strength and elongation-at-break properties of the films. Furthermore, in some specific combinations containing ZnO NPs, UV transmission was reduced, which means they possess the potential to prevent the adverse effects of UV deterioration. Against Staphylococcus aureus and Escherichia coli, these materials showed increased antimicrobial action in the various combinations without increased concentrations. It was concluded that the LDPE combination without Ag (i.e., the ZnO-CuO combination) showed the best food packaging potential regarding strength and antimicrobial action. This demonstrated the advantage of combinations of metal-based nanoparticles over the individual ones, seeing that enhanced activities were recorded [172].

Practical Application of Metal-Based Nanoparticle Composites to Food Materials

As established in the many studies in the literature showing the potential of metal-based nanoparticles to improve the properties of food packaging materials, these materials have already been applied to real-life food substances such as fruits, oils, and meats.
For instance, upon embedding ZnO nanoparticles at varying compositions into CMC-based functional films with grape seed extract, applying them to high-fat beef, and investigating for 15 days, the number of psychrotrophic bacteria under the composite coating containing 3% ZnO was within the acceptable range of 5.9 log CFU/g. Additionally, it was observed that the same composite film with 3% ZnO prevented lipid oxidation in the meat upon refrigeration (reducing it by 88%), which thus suggests that this material could be useful as an active packaging material for high-fat meat such as beef [120]. In the fruit industry, a notable concern plaguing the preservation of fruits for longer storage is the generation of the hormone ethylene, which enhances the natural ageing and decay of perishable foods like fruits and vegetables during postharvest storage and transportation. Thus, removing it from the surrounding environment can significantly improve their shelf life and reduce the damage to food materials [173]. In a study by Zhang et al., TiO2 nanoparticles composited with polyacrylonitrile (PAN) were examined for their potential to degrade fruit-emitted ethylene via photocatalysis [173]. The prepared PAN@TiO2 composite showed enhanced photocatalytic activity in ethylene degradation under low-intensity UV light irradiation (2.9 µW cm−2), which in turn slowed the color change and softening of tomatoes during storage for 14 days (see Figure 3) [173]. Approximately 65% of the ethylene was degraded within 25 h. The report thus showed the potential of TiO2-coated PAN nanofibers as a valuable material for the shelf-life extension of food materials such as tomatoes [173]. It was noted, alongside other literature, that photocatalytic processes can also remove acetaldehyde, ethanol, and off-flavours generated by red tomatoes during storage [174]. Furthermore, the shelf life of fresh-cut food has been reported to be significantly reduced from several weeks to days due to the various metabolic activities in the tissues of the fruits, which include damage during grating, peeling, and shredding, and the exposure of the cut surfaces to the external surroundings [175]. Thus, modifying the environment around the food could offer a solution for shelf-life extension by adjusting the barrier properties of the packaging film [176]. Edible coatings, like those in which metal-based nanoparticles are embedded, have been documented to offer a promising approach to this problem. Li et al. [175] used PVC film with ZnO nanoparticles to examine the shelf-life extension effects on freshly cut 'Fuji' apples at 4 °C for 12 days. It was observed that, compared with ordinary PVC film, fruit decay was significantly reduced, along with the accumulation of malondialdehyde (MDA), from 74.9 nmol/g in the control to 53.9 nmol/g in the nano-packaging [175]. Although the cutting process was reported to bring about increased generation of ethylene, suggesting wound-induced ethylene production, this was significantly suppressed in the fruit packaged with the ZnO composites. Additionally, it was found that both pyrogallol peroxidase and polyphenol oxidase activities were decreased in the prepared nanocomposite. The initial appearance of the apple slices was retained, and browning was prevented in the nano-packaging samples.
Other studies using ZnO nanoparticles alongside polysaccharides as safe coating materials, and their respective activities, are summarized in Table 6 [177]. Recoverable entries include:

- A ZnO-polysaccharide coating that reduced CO2 production and weight loss and better maintained total acidity, colour, and textural appearance [181].
- 0.5% w/v CMC with 0.1% and 0.2% w/v ZnO (30-100 nm), applied by dipping; 12 days at 4 °C and 90% RH; pomegranate arils: reduced weight loss and loss of vitamin C, reduced loss of anthocyanin and phenolic content, and showed higher antioxidant activities [182].
- 10 g pectin per 1 L of solution with 0.1 g/L ZnO, applied by dipping; 8 days at 25 °C; star fruit: reduced browning index, redness value, weight loss, and physical damage [183].

According to many reports in the literature, silver nanoparticles (AgNPs) have proven to be among the most effective antimicrobial nano-based materials, with broad-spectrum activity against different microbial pathogens, including bacteria, fungi, yeasts, and viruses [184][185][186]. This has made them one of the most sought-after nanomaterials in materials science. Hence, they have been composited with many polymeric materials, including biopolymers and plastics, for various food packaging applications. In a study by Kumar et al. [186], packaging films of Ag nanoparticle-based nanocomposites with both chitosan and gelatin bases were formulated. The report showed that, at compositions of 0.05% and 0.1%, the addition of Ag nanoparticles (obtained by a green synthetic route using extract of fresh Mimusops elengi fruit) to the polymer matrices enhanced the mechanical properties and decreased light transmittance in the visible region. Applied to red grapes, the films extended shelf life by fourteen days. In another report by Kowsalya et al. [185], an electrospun silver nanoparticle/poly(vinyl alcohol) composite was prepared by incorporating synthesized Ag nanoparticles (10% (w/v) PVA and 0.5% (w/v) Ag), also produced using plant extracts of Vitis vinifera (black grapes), into a poly(vinyl alcohol) matrix for fruit preservation. This material showed good antimicrobial action against different food pathogens when coated on lemon and strawberry, extending shelf life and preventing decay for up to 10 days. Like silver, Cu-based nanomaterials are highly sought after due to their biological potential. Although few, most of the applications in which Cu-based nanoparticles have been employed in the literature as food packaging materials involve imposing antimicrobial action on the material, mainly in non-biodegradable plastic matrices and a few biopolymers [187]. For instance, the shelf-life extension of freshly cut yellow bell pepper has been examined using composites of CuO nanoparticles and a cellulose/SA-based biodegradable polymer, according to Saravanakumar et al. [171].
This coating was reported to significantly reduce the propagation of bacterial growth (Salmonella spp. and Listeria spp.) while lowering the total fungal count in the bell peppers over seven days. Furthermore, in another report, CuO nanoparticles embedded in a bilayer pouch for the preservation of coconut oil were found to reduce the oxidation of the oil for over three months [188]. Nevertheless, prolonged usage may pose a health risk due to potential bioaccumulation, even though the films are made of edible coating materials. In other reports, CuO nanoparticles incorporated into methylcellulose films have been applied to food materials, including use for prolonged shelf-life extension of hard cheese [160], resulting in the inhibition of microbial growth during storage at 35 °C for one week.

Limitations of Nanotechnology in Food Packaging

As the application of nanoparticles in food, drugs, and cosmetics continues to grow, several agencies, such as the FDA, IFAS, and USEPA, have started considering the potential risks of using nanoparticles in different products [189]. For instance, in 2006 the FDA initiated a task force to determine the human, animal, and plant risks of using this class of materials, which continues to gain serious attention in research. Furthermore, the environmental impact and the sources of the nanomaterials have also been of concern [190]. Consequently, the FDA and other international bodies, like the EU, have provided information and guidance to evaluate the safe use of nanoparticles in food packaging, alongside standardized procedures to analyze the risks to humans and the environment [191]. Many concerns have been raised over the years regarding the continuous use of nanomaterials in food and drugs because of toxicity and bioaccumulation [192]. One concern in the area of toxicity, which has raised many unanswered questions over the years concerning the use of nanomaterials in food generally, and especially in the design of novel food packaging materials, is the migration of harmful components into food [44]. Over the past few years, extensive research has focused on the migration of nanoparticles into food substances. Silver nanoparticles have received significant attention due to government concerns regarding their safety and health implications. These studies have revealed that nanomaterials can enter the body through various pathways, leading to their distribution across different organs. Moreover, they can adversely affect human cells by altering mitochondrial function, generating reactive oxygen species, enhancing membrane permeability, and inducing toxic effects. As a result, nanoparticles such as silver have been implicated in the development of chronic diseases, including allergies, asthma, inflammation, cardiovascular disorders, and cancer [193].
Some studies have attributed the toxicity brought about by migration into food substances to the large surface-area-to-volume ratio of these nanomaterials [194]. Nevertheless, toxicity is thought to vary with factors such as exposure time, material concentration, and individual reactivity [195]. Generally, the migration of nanomaterials from food packaging can be divided into two stages. The initial stage occurs when nanomaterials encapsulated within the surface layers of the packaging material are released. The subsequent stage involves the release of nanomaterials from the interior of the packaging, which must pass through voids and gaps between the polymer molecules [196]. The extent and speed of this migration depend on various factors. The migration of nanomaterials into food depends on the chemical and physical properties of both the food and the polymer used in the packaging. Factors such as the initial concentration of nanomaterials, particle size, molecular weight, solubility and diffusivity of the specific substance in the polymer, pH value, temperature, polymer structure and viscosity, mechanical stress, contact time, and food composition are the main parameters controlling the migration process [197]. Studies have shown that nanomaterials encapsulated inside the film may sometimes need to oxidize before migrating out through the polymer matrix; these encapsulated nanomaterials are primarily responsible for release at later times. The solubility of metallic nanoparticles in aqueous solutions increases at higher temperatures and lower pH values, which can increase the migration of metals in the system [198]. Therefore, identifying and characterizing nanomaterials in food is necessary given the potential risks they pose to consumers. The ability of nanomaterials to migrate from packaging into the food itself makes it crucial to employ specific techniques to evaluate and analyze these materials [193,197]. To accurately measure nanomaterials in complex matrices, it is essential to use analytical techniques that can distinguish nanoparticles from the other components present. Furthermore, these techniques should be sensitive enough to detect low concentrations of nanomaterials while providing sufficient information about their concentration, composition, and physicochemical properties within samples. However, determining their exact quantity in food materials is currently not possible with any single method; in such a complex situation, combined methods are necessary to detect and quantify migrated nanoparticles, as independent methods cannot provide all the required information [193,199]. Conventional chromatography methods are limited and unsuitable for analyzing polymer additives, as they cannot measure the physicochemical properties of nanoparticles. Consequently, only a few methods effectively detect nanoparticles and determine their properties: microscopic methods, quantitative analysis methods, and spectroscopy methods [193,200].
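To make the two-stage, diffusion-controlled migration picture above concrete, the minimal sketch below applies the classical short-time Fickian (Crank) estimate for a migrant leaving a film into food through one face. The diffusion coefficient and film thickness are illustrative assumptions, not values from the cited studies.

```python
import math

def fickian_migration_fraction(D_cm2_s: float, t_s: float, L_cm: float) -> float:
    """Short-time Crank approximation for one-sided release from a film:
    M_t / M_inf ~ (2 / L) * sqrt(D * t / pi), capped at 1.
    D: diffusion coefficient of the migrant in the polymer (cm^2/s)
    t: contact time (s); L: film thickness (cm), food contact on one face."""
    frac = (2.0 / L) * math.sqrt(D_cm2_s * t_s / math.pi)
    return min(frac, 1.0)

# Illustrative assumptions (hypothetical values, not from the reviewed papers):
D = 1e-14          # cm^2/s, a slow migrant in a glassy polymer
L = 50e-4          # 50 micrometre film
for days in (1, 10, 100):
    t = days * 86400
    print(f"{days:3d} d contact: ~{fickian_migration_fraction(D, t, L):.2%} of migrant released")
```

Note how the square-root dependence on contact time and the inverse dependence on film thickness echo the controlling parameters listed above; temperature and pH would enter through the diffusion coefficient and through the dissolution step rather than through this geometric factor.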
Nanostructured materials exhibit characteristics that can help kill bacteria, such as generating reactive oxygen species (ROS), releasing heavy metal ions, or increasing the specific hydrophobic surface area; however, these same characteristics can also lead to cytotoxicity or debilitation of mammalian cells [95]. Hence, one notable concern that must be considered when using metal-based nanoparticles is dissolution, especially upon exposure to biological molecules such as thiols [89]. Dissolution is an important characteristic that affects the bio-durability and persistence of nanoparticles and may be used to predict possible environmental or health effects [201]. Biomolecules can influence the dissolution of metal-based nanoparticles, and this interaction has implications for human health. The process proceeds when metal ions, in the presence of an aqueous medium, are slowly discharged from their oxides and then absorbed through cell membranes, where they interact with nucleic acids and proteins. This causes aberrant enzymatic activity, ultimately disturbing the expected physiological properties of the cell [89]. Thiol-containing molecules can interact with the surface of nanoparticles, forming bonds between the metal atoms and thiol groups. This interaction can either enhance or inhibit the dissolution of nanoparticles, depending on factors such as the specific metal, the nanoparticle properties, and the environmental conditions [201]. The dissolution of ZnO, for instance, typically involves the release of Zn2+ ions into the surrounding medium and is influenced by various factors, including pH, temperature, and the presence of biomolecules. Specifically, Wang et al. investigated the interaction of ZnO nanoparticles with thiol-containing molecules such as glutathione (GSH) and cysteine [202]. The biomolecules were observed to interact with the nanoparticle surface to form stable complexes via Zn-S bonds, slowing the dissolution process and effectively inhibiting the release of Zn2+ ions by forming a surface passivation layer [202]. Similarly, thiol groups from molecules like cysteine or mercaptoundecanoic acid can bind to the surface of Ag nanoparticles, forming Ag-S complexes. This interaction can either passivate the surface and reduce dissolution or, under certain conditions, enhance the dissolution of Ag nanoparticles [203,204]. Likewise, L-cysteine and glutathione have been reported to retard the dissolution rate of TiO2 by forming surface complexes, which in turn affects its dissolution kinetics [205]. The consequences of metal-based nanoparticle dissolution for human health depend on various factors, including the type of metal, the concentration of metal ions released, and the route and duration of exposure. Metal ions released from nanoparticles can interact with biological systems and potentially induce adverse effects. Some metal ions, such as cadmium, lead, and mercury, are toxic to humans even at low concentrations; their presence in the body can disrupt cellular processes, cause oxidative stress, and lead to various health problems [201]. It is important to note that the behavior of metal-based nanoparticles and their interaction with biomolecules is still a complex area of research, and the specific outcomes for human health can vary depending on the nanoparticle characteristics, exposure conditions, and the specific metal involved. Further studies are needed to understand the mechanisms and potential risks associated with the dissolution of metal-based nanoparticles in the presence of biomolecules.
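The surface-passivation effect described for thiols can be caricatured with a first-order dissolution model whose rate constant is scaled by the fraction of surface left unblocked. This is a toy model under stated assumptions, with hypothetical numbers; it is not the kinetics measured by Wang et al. or the other cited studies.

```python
import math

def dissolved_fraction(k_per_h: float, passivation: float, t_h: float) -> float:
    """First-order dissolution with a passivation factor in [0, 1]:
    effective rate k_eff = k * (1 - passivation);
    dissolved fraction after time t is 1 - exp(-k_eff * t)."""
    k_eff = k_per_h * (1.0 - passivation)
    return 1.0 - math.exp(-k_eff * t_h)

k = 0.05  # per hour; hypothetical intrinsic dissolution rate of a ZnO particle
for theta in (0.0, 0.5, 0.9):   # fraction of surface blocked by Zn-S complexes
    frac = dissolved_fraction(k, theta, t_h=24.0)
    print(f"thiol coverage {theta:.0%}: ~{frac:.0%} dissolved after 24 h")
```

Under some conditions thiols instead accelerate dissolution (for example, by complexing released ions and pulling the equilibrium forward), which this simple coverage model deliberately does not capture.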
Apart from solubility, the agglomeration of nanoparticles has also been identified as affecting their toxicity, and it plays an intrinsic role in their solubility. Agglomeration refers to a phenomenon in which a group of NPs aggregates via weak forces, such as van der Waals or electrostatic forces [206]. It has been reported that the degree of agglomeration plays a crucial part in the distribution of nanoparticles in living tissues, their exposure, and their uptake, thereby influencing the observed toxicity of NPs [207,208]. Several factors are thought to affect the agglomeration of nanoparticles in solution, including size, surface structure, chemical composition, and shape [209][210][211]. Furthermore, its occurrence depends strongly on parameters such as pH, temperature, and solution chemistry [209,212,213]. According to one study, large agglomerates of TiO2 caused a stronger toxicity response than small agglomerates for glutathione depletion, IL-8 and IL-1β increase, and DNA damage in THP-1 cells. That study concluded that agglomeration influences toxicity and biological responses and that large agglomerates do not appear less active than small agglomerates [214]. Although sufficient data are not yet available to paint a complete picture of the long-term toxicity profile of most nanoparticles, organic nanomaterials derived from lipids, starch, protein, and chitosan have been suggested to be non-toxic, since they are wholly digested and are not bio-persistent in the human gastrointestinal tract [215]. This, however, does not exclude them from potentially causing harm. The large surface-area-to-volume ratio has been implicated as a possible concern due to the increased bioavailability brought about by small size [44]. This therefore suggests the importance of both in vivo and in vitro studies in ascertaining their safety for humans. Similarly, testing to determine the level of migration of nanoparticles into food substances is essential to the safe use of metal-based materials in food packaging. For instance, some reports have examined the migration of silver NPs into food products and found that, although silver plays various significant roles in the design of packaging materials, it can cause genotoxicity and neurotoxicity [195]. Silver NPs have been reported to deposit in the kidney, liver, testicles, and brain, even though their migration into food is very low owing to the low concentrations often applied [216]. Other reports have suggested that positively charged hydrophilic nanoparticles can circulate intensely in the blood, which can lead to organ compromise; however, these reports are thought to need further verification [217]. Despite the various concerns surrounding the use of these materials, it has also been suggested that if the nanoparticles are adequately embedded in the matrix of the polymeric material, migration may be significantly reduced, although external factors may still bring about their migration into food. Hence, in applying nanomaterials, especially metal-based ones, to food packaging, it is crucial to conduct studies that ascertain their toxicity, migration, permissible limits, and interaction with polymer matrices before they are applied as food packaging materials [44].
In other words, every engineered nano-based material should be scrutinized from manufacturing to storage, distribution, and disposal. Furthermore, there is a need for continued exploration of the causes and mechanisms of nanotoxicity in order to gain adequate knowledge and understanding [192]. Adequate and deliberate laws and policies relating to the manufacturing, application, and recycling of nanomaterials should be set to avoid concerns that may arise in their applications [192].

Conclusions

Nanotechnology has provided a plethora of user-friendly alternative platforms to researchers in different fields of endeavor, including agriculture. Its promising solutions for improving agricultural productivity and reducing losses have made this technology highly sought after. It has likewise benefited industrial food processing sectors with enhanced food production, excellent market value, high nutritional and sensory properties, improved safety, and better antimicrobial protection. This has led to its wide application in post-harvest technology in recent years. Specifically, its application in food packaging has been on the rise owing to the ease of preparation and the accompanying useful physicochemical and biological properties. Metal-based nanoparticles, especially Ag, ZnO, TiO2, and CuO, have generally been at the forefront of these applications due to their ability to integrate well into different polymer matrices, including biopolymers, which confers superior properties. Hence, in the design of functional food-packaging materials, metal-based nanoparticles have been found to confer various biological properties, such as antioxidant, antimicrobial, and anti-inflammatory activities, while also enhancing the mechanical, physical, barrier, and optical properties of the base material. Although many concerns have been raised regarding their continued use in food and drugs, owing to toxicity and bioaccumulation via migration, adequate guidance and standardized procedures for analyzing the risk-benefit index for humans and the environment are currently being explored and discussed. Metal-based nanoparticles thus offer a promising platform for food packaging technology if the issues regarding toxicity are carefully and deliberately allayed through other technological approaches.
A Study on the Teacher-Student Relationship and its Impact on the Behaviour of High School Students

High school students are in the stage of adolescence, and it is the time for developing independence. Typically, adolescents exercise their independence by questioning and sometimes by breaking rules. Parents and teachers must play a major role in supporting and influencing children positively through ethical and appropriate approaches. Teachers in school, as well as parents at home, often wonder how to discipline a child and mould the child's behaviour so as to bring up the child with virtues. Although some children truly have challenging behaviours regardless of what strategies are tried, many children simply need the adults in their lives to make changes in the way they react, respond, or interact with them. It is also a great responsibility of the teacher in school to have a positive approach towards students. If not, there is a possibility of behavioural change among students, leading to several problems. For example, frequent episodes of fighting, scholastic backwardness, substance abuse, antisocial or institutional activities, destructive behaviour, and changes in attitude in students are much more significant than isolated episodes of the same activities. Other warning signs include deterioration of performance at school and running away from home. This research paper's aim is to study the teacher-student relationship and its impact on the behaviour of high school students. The objectives are to know teachers' attitudes, both positive and negative, towards students and their impact in bringing about positive as well as negative behavioural change in students. The study reports that students often face emotional problems due to the negative approach of teachers. It is recommended to create awareness among teachers in schools for the smooth handling of children through positive approaches. Fifty high school students (25 girls and 25 boys) were taken as respondents, and an interview schedule was used. Both primary and secondary methods were used, and the study is descriptive in nature.

INTRODUCTION :

To find out the behavioural changes in students caused by the approach of teachers in the classroom, it is necessary to understand the several behavioural problems experienced by students and the different approaches adopted by teachers. Normally, in Indian schools, teachers follow some traditional and unethical approaches to push students towards academics or activities [1]. These approaches negatively influence the child, who becomes discouraged and misbehaves, and they also affect the child's performance in academics or activities [2]. This is a main cause behind an individual's behaviour in society. By the time students complete their education and enter social life, the impact of the negative approach taken by teachers in school will be exhibited through their personality [3][4]. This negative and inappropriate behaviour will spoil the respect, identity, and social acceptance of the individual. So it is very essential to change the approach in the school to one that brings appropriate behaviour and social acceptance to the child in society [5].

2. OBJECTIVES OF THE STUDY :

The relationship between teachers and students is one of the important tools used in shaping a student's personality. To know the status of the relationship and its effect on students' behaviour in the classroom learning process, the following objectives are discussed.
(1) To identify the behavioural problems of high school students
(2) To understand the status of the relationship between teachers and students
(3) To find the causes behind students' behavioural problems
(4) To explore the impact of the relationship between teachers and students on students' academic performance
(5) To find out the different approaches of teachers towards students' misbehaviour

APPROACHES AT THE SCHOOL :

Physical abuse: Two out of three school-going children in India are physically abused, says the national report on child abuse by the Ministry of Women and Child Development in 2007 [6]. The crime is increasingly noticed in every single district of the country [7]. In India, boys are marginally more likely to face physical abuse (73 per cent) than girls (65 per cent). Corporal punishment in both government and private educational institutions is deeply ingrained as a tool to discipline children and is treated as a normal action [8][9][10]. But most children do not report or confide the problem to anyone and suffer silently. To help and encourage children with emotional and behavioural problems, a set of approaches has been developed. The positive and healthy parenting and classroom management model for teachers is based on the work of Alfred Adler and Rudolf Dreikurs, which started in the 1920s. It consists of a specific set of techniques for inculcating positive behaviour and curtailing negative behaviour. It is a well-known approach designed to teach children to become responsible, respectful, and resourceful, and it inculcates a spirit of self-discipline [11]. Social influence: Due to changes in lifestyle, work stress, and family-related issues, individuals lack healthy communication and frequently fall into imbalances in their emotions and thoughts [12]. These are the major factors deciding an individual's approach towards others. Researchers say that in India the majority of schools reported that they consider corporal punishment a better way to correct a child's behaviour. Even though corporal punishment has been classified as an act of violence and abuse against children, to this day children in India are abused physically and mentally in schools [13][14]. Corporal punishment is one of the major approaches used by teachers, such as beating, pinching, or hitting the child with tools like belts, hands, and sticks. Such violence may be a deliberate act of punishment or simply the impulsive reaction of an irritated teacher; no matter what form the violence takes, it will have a negative impact on the child's behaviour. This should be taken seriously, so that corporal punishment is banned in every school [15][16]. Mental abuse: Not only physical abuse but also mental abuse of the child in school is one of the causes behind the inappropriate behaviour of the child [17]. One of the challenging problems in developing India is suicidal thoughts in children. High academic expectations from teachers, unhealthy competition between low and high academic performers, study pressure, and the non-availability of a conducive environment in schools for the child to share feelings can also affect the child's mental state [18], [10]. Any type of abuse of a child (physical, mental, emotional) is against child rights and punishable. Yet repeated irritation by the child in the classroom, or the child's inability to cope with the teacher's expectations, leads teachers to take action against the child by
abusing them physically or mentally: scolding, threatening to complain to the parents and the principal, threatening to issue the transfer certificate, insulting the child in class, scolding the parents, not giving importance to the child, indirectly taunting them, and being partial to particular children. These types of mental abuse also damage the behaviour of the child, who may develop several unhealthy attitudes and suicidal thoughts [19][20].

BEHAVIOUR CHANGE AND PROBLEMS IN CHILDREN :

Beyond violating a fundamental right of the child, such treatment causes pain, injury, humiliation, anxiety, and anger in the child's behaviour, which can have long-term psychological effects and destroy the personality of the child [13]. A child repeatedly subjected to a negative approach by the teacher in school may exhibit dysfunctional behaviour, such as poor communication, and may show aggressive behaviour towards itself and others [21]. These unethical approaches of the teacher may be a significant reason for children dropping out of school. Children emotionally or physically affected by the negative approach of teachers may also refuse to return and turn against the teachers and the subject [22][23]. Different problems among children: Child abuse in the school can produce feelings of guilt, violation, loss of self-control, and degraded self-esteem [14]. It may push the child into feelings of hopelessness, worthlessness, and uselessness, which can lead to suicidal thoughts [11]. Physical and mental abuse of children in schools may push them towards revenge and an identity of failure. The children interviewed also reported that when they experience a negative approach from the teacher, they feel hurt and exhibit their pain through anti-social thinking or activity [24]. A negative approach by teachers towards children in school is totally against the law. Violation of child rights leads to permanent damage to the child's behaviour. In India, many schools treat some unethical and traditional approaches as though they were recognized practice; though these lead to negative behaviour among children, teachers in schools tend to repeat these negative approaches [25].

RELATIONSHIP BETWEEN APPROACH AND BEHAVIOUR :

Different approaches: The approaches of the family, peers, society, and teachers in the school play the major role in shaping the behaviour of children [26]. There is a direct relationship between the approach of the external world and the behaviour of the individual, as most behavioural corrections are made through different approaches. Mothers use love and care as an approach to train the child, and the child adopts the same behaviour towards others [27]. On the other hand, teachers use hitting or scolding as an approach to correct the child's mistakes; but during the interviews, the majority of the children reported that they also use towards others the same approach used by their teachers. If there is a change in the behaviour of the child, negative or positive, in the school, it is certainly due to the approach taken by teachers, peers, or the family [26].
Behaviour change by approach: There are several reasons behind a teacher's approach towards children. Work pressure from the management or heads of the organization, emotional disturbance in the family, lack of patience to understand the child and its behaviour, and sometimes a loss of humanity due to the modern lifestyle may all influence teachers to adopt an unhealthy approach towards children [28], [12]. On the other hand, human behaviour is a varied area of study, but this study takes up only a minor part of the child's behaviour in school, such as lying, negative attitudes towards teachers and subjects, destructive behaviour, bullying, and isolation from social involvement [29]. For example, if a mathematics teacher scolds or hits a child in front of classmates for scoring low marks, the child starts developing a negative attitude towards that particular subject and its teachers. A common behavioural problem in children is lying, which is also one of the problems caused by the teacher's approach. Problems of behaviour change: When a child shows interest in sharing a mistake committed, and the teacher encourages the child and gives an opportunity to tell the problem, the child feels able to tell the truth; if not, the child lies. Sometimes fear of the teacher also influences the child to develop lying behaviour [29]. During the interviews, children reported that when teachers approach them negatively, they get angry and emotionally disturbed but are not able to show it in front of teachers. At such times they develop destructive behaviour, directing it towards friends in the school and neighbourhood, family members, and school property and things [9], [10]. If the teacher's approach is negative and damages the emotional wellbeing of the child, it may push the child into social isolation [30].
RELATIONSHIP BETWEEN TEACHER AND STUDENT :

Students who have close, positive, and supportive relationships with their teachers reach higher levels of success in academics and extra-curricular activities as well as in their social life [6]. Students with more conflict in their relationship with their teachers experience diminished wellbeing in day-to-day life. A positive relationship builds healthy communication. It may motivate and encourage students towards activeness and greater involvement in learning and improve positive behaviour both in class and at home [22]. A student spends more valuable time in school with teachers than at home, and it is a time to learn social values, ethics, culture, and much more. Teachers play the ultimate role in positively influencing students to adopt all good and appropriate behaviour [6]. Behavioural impact of a negative relationship: If there is an unhealthy relationship between teacher and student, the student may show increased anger, demotivation, discouragement, misbehaviour, negative changes in attitude, and disrespect towards others, which may block the active and positive growth of the child's overall personality [9]. It is essential for teachers to understand and create healthy, supportive, and positive relationships with their students [22]. An important responsibility of the teacher is to create a flexible environment in which students feel free to share their feelings. Once they start talking, the teacher should develop confidence in the students [18]. The teacher should appreciate the student for selecting them as the right person to share their feelings with, by maintaining confidentiality and respecting the student's feelings. The teacher should play the major role in guiding students, by making them understand the mistake committed or the misbehaviour, or by telling them about their great responsibilities in the school, home, and society [6]. An approach based on empathy will make students go beyond the intentions of the guidance, expectations, and responsibilities given by both parents and teachers. "In every child's life parents are important; for students, teachers." To have a healthy academic experience, it is essential for both teachers and students to understand the importance and necessity of mutual understanding and to have a positive relationship between them [18], [22].

ANALYSIS :

Personal data: The number of respondents taken for the study is 50. Male respondents were 50% and female 50%. 40% of respondents were aged between 15 and 16 years, 30% between 14 and 15 years, and 30% between 13 and 14 years. 94% of the sample belongs to the Hindu religion and 6% are Muslims. The education-wise distribution of the sample is 30% from class 8th, 30% from class 9th, and 40% from class 10th.
Behavioural problems in adolescents: The data collected from the sample indicate that 54% of male and 38% of female respondents often fall into depression due to the academic pressure applied by teachers. 68% of male respondents and 64% of female respondents developed negative attitudes towards subjects and teachers. 18% of respondents expressed that they often fall into isolation, and 48% of respondents showed aggressive behaviour towards peers. 32% of respondents are scholastically backward, and 22% of respondents have hyperactive disorders. Teachers' approach towards students: The approach-related problems experienced by the respondents were scolding by the teacher (72%), physical abuse (34%), and mental abuse (38%). 64% of male respondents reported that they often get beatings from the teacher for mistakes committed; on the other hand, 20% of respondents receive punishments without any mistake, a figure that is much lower among females, at 14%. 70% of males and 68% of females reported that most of the time they experience scolding by their teachers. 40% of male and 14% of female respondents feel uncomfortable expressing their feelings to teachers. 66% of male and 78% of female respondents are ready to accept the guidance given by teachers. Scholastic backwardness: In the total sample, 38% of male and 22% of female respondents are scholastically backward, with repeated poor academic performance. 72% of male respondents are medium and 28% are very good in their academics; 44% of female respondents are medium and 56% are very good in their academic achievement. 30% of male respondents and 12% of females reported being demotivated by continuous academic pressure from their teachers. 32% of male respondents are very serious about their academic future and 68% are not so serious; 28% of female respondents reported being serious about their academic future and 72% are not.

RECOMMENDATIONS AND SUGGESTIONS :

Children with behavioural problems in school are not given the attention needed to understand the real cause of their problems. There is a notion in schools that negative approaches are the best methods to control children, so the same methods are frequently used, which repeatedly damages the psycho-social wellbeing of the children in the school [14]. About 70% of the study population reported that they often face emotional problems due to the negative approach of teachers. It is recommended to create awareness among teachers in schools for the smooth handling of children through positive approaches. Different programs must be planned to strengthen children's capacity to cope with challenges and to make them emotionally strong [5], [22]. A flexible environment must be created in the school to help the child develop academically and socially.
Knowledge and Attitudes Toward Obstructive Sleep Apnea Among Korean Pulmonologists: A Nationwide Survey

Background: Obstructive sleep apnea (OSA) significantly impacts cardiovascular, metabolic, and respiratory health. In Korea, OSA patients are treated by specialists in internal medicine, otolaryngology, neurology, and psychiatry, but the participation rate of pulmonologists in OSA management is relatively low compared to other specialties. This study investigated the knowledge and attitudes about OSA among Korean pulmonologists. Materials and methods: An online survey was conducted, targeting respiratory specialists listed in the Korean Academy of Tuberculosis and Respiratory Diseases directory. The survey used the validated "Obstructive Sleep Apnea Knowledge and Attitudes" (OSAKA) questionnaire, which consists of questions about knowledge and attitudes on OSA. To maximize participation, email invitations were sent three times to the target audience. Results: Out of 634 queried pulmonologists, 127 (20%) responded to the survey. The mean age of respondents was 45.4 ± 8.6 years. The respondents' years of specialty acquisition ranged from the 1980s to the 2010s. Additionally, 74 (58.3%) held a doctor's degree, and 96 (75.6%) worked in hospitals with a sleep center. Furthermore, 71 (55.9%) of the pulmonologists reported having experience with OSA patients. Pulmonologists with experience managing OSA patients had significantly higher knowledge and attitude scores compared to those without such experience. Interestingly, older respondents and those who completed their pulmonology training earlier had higher attitude scores. In addition, the knowledge score significantly correlated with responses to the five items of the attitude questionnaire. Conclusion: This study provides valuable insights into the knowledge and attitudes of Korean pulmonologists regarding OSA. The findings indicate that their knowledge levels are comparable to or better than those in previous studies. These results underscore the need for targeted educational programs and practical training, especially for younger pulmonologists, to enhance their proficiency in managing OSA and to encourage a more active role in its treatment.

Introduction

Obstructive sleep apnea (OSA) is a common disease that develops in 4%-10% of adults and is associated with significant morbidity and mortality [1]. Globally, the prevalence of OSA is 20%-50% among those aged ≥65 years and >40% in the obese population [2,3]. OSA is associated with hypertension, diabetes mellitus, atrial fibrillation, heart failure, coronary heart disease, stroke, and death [4]. The relationship between OSA and cardiovascular diseases has frequently been reported by epidemiological and clinical studies [4,5]. Intermittent hypoxemia, sympathetic activation, oxidative stress, and inflammation have been proposed as the underlying mechanisms of OSA onset [6][7][8]. The effects of OSA on pulmonary diseases, such as chronic obstructive pulmonary disease (COPD) and idiopathic pulmonary fibrosis, have also been studied [9,10].

Despite its high prevalence and clinical significance, OSA is often underdiagnosed. This is probably because many primary care physicians are not familiar with OSA. In 2003, Schotland et al.
developed a questionnaire, known as the Obstructive Sleep Apnea Knowledge and Attitudes (OSAKA) questionnaire, to assess physicians' knowledge and attitudes about OSA [11]. Several studies investigating knowledge and attitudes regarding OSA among physicians have been published to date. Among 92 cardiologists in the United States, 80% agreed that identifying patients at risk for OSA was very important, but only 18% felt confident in managing OSA patients [12]. Similarly, in a recent study, primary care physicians reported awareness of the importance of OSA, but only a few felt confident in managing OSA patients [13]. Also, 321 anesthesiologists in China felt they lacked adequate knowledge about OSA and had low confidence in managing OSA patients [14].

In Korea, the number of sleep studies has increased sharply since national health insurance coverage began in 2018. Korean patients with OSA are approached and treated differently by specialists in internal medicine, otolaryngology, neurology, and psychiatry. However, the participation rate of Korean pulmonologists in treating OSA patients is low compared to participation by other clinical departments. In this study, we investigated the knowledge and attitudes about OSA among pulmonologists in Korea.

This article was previously presented as a meeting abstract at the SLEEP 2024 meeting on June 3, 2024.

Study design

An online survey was performed in February 2023. A total of 634 respiratory specialists registered in the online directory of the Korean Academy of Tuberculosis and Respiratory Diseases were invited to participate. The survey used the OSAKA questionnaire, which was previously validated, employing its original English version without modification. We obtained permission to use the OSAKA questionnaire by contacting the Washington University School of Medicine and paying the required licensing fee. The survey was distributed along with a concise study overview, explicitly stating its anonymous nature and inviting participation. Emails were sent to all participants requesting participation in the survey on three separate occasions. The study was approved by the Institutional Review Board (IRB) at Catholic Medical Center (UC24QISI0002).

Questionnaire

The OSAKA questionnaire consists of questions about knowledge and attitudes on OSA [11]. Its knowledge section is composed of 18 true-false statements spanning five domains: the epidemiology, pathophysiology, symptoms, diagnosis, and treatment of OSA. "Don't know" was included as a third response option to minimize the effect of guessing. Separately, the attitude section of the OSAKA questionnaire contains five questions scored on a five-point Likert scale; the first two questions evaluate the importance of OSA, while the remaining three assess confidence in its diagnosis and treatment. In this study, we additionally explored other variables, such as sex, age, degree, year of medical school graduation, year of specialization attainment, hospital distribution and classification, inpatient bed count, presence of a sleep center in the affiliated hospital, and OSA treatment experience.
Statistical analysis

The mean and standard deviation were computed for normally distributed continuous variables, and the median and interquartile range (25th-75th percentiles) were determined for non-normally distributed continuous data. Categorical data are presented as numbers and percentages. To compare clinical data between two subgroups, Student's t-test was performed for normally distributed data, while the Mann-Whitney U test was used for non-normally distributed data. To compare clinical data among groups, normally distributed data were subjected to a one-way analysis of variance with the Tukey post-hoc test. The Kruskal-Wallis test and Dunn post-hoc test were employed to compare non-normally distributed data. Categorical variables were compared using the chi-square or Fisher's exact test, as appropriate. Pearson's correlation analysis was used to assess the relationship between knowledge and attitude scores. Statistical analyses were performed using R software (ver. 4.0.4; R Foundation for Statistical Computing, Vienna, Austria). P < 0.05 was considered significant in all analyses.

Knowledge

Among the 18 questions, the average correct-answer ratio was 80% (Figure 1). The highest percentage of correct answers (100%) was observed for question 18, which stated that there is an association between cardiac arrhythmias and untreated OSA. The lowest percentage of correct answers (35%) was noted for question 8, which asked whether laser-assisted uvuloplasty is an appropriate treatment for severe OSA. The mean (standard deviation (SD)) total OSA knowledge score was 15.0 (2.0). Table 2 shows the associations between socio-demographic factors and mean OSA knowledge score. Sex, year of specialty acquisition, and experience with OSA patients were significantly associated with the mean OSA knowledge score. Male respondents had significantly higher mean OSA knowledge scores than female respondents (15.0 vs. 14.0 points, P = 0.045). Those who acquired their medical specialty in the 1980s and 1990s showed higher mean OSA knowledge scores than those who acquired their specialty in the 2000s and 2010s; however, after post-hoc analysis, there was no significant difference among the four groups by decade of specialty acquisition. Respondents with experience treating OSA patients had significantly higher mean OSA knowledge scores than those without such experience (15.0 vs. 14.0 points, P = 0.034).
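As an illustration of the two-group comparisons and the knowledge-attitude correlation reported here: the authors ran their analyses in R, so the Python/scipy sketch below is a translation on simulated scores. Every value in it is a made-up assumption for demonstration, with only the group sizes (71 respondents with and 56 without OSA-management experience) taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical knowledge scores (0-18) for pulmonologists with and
# without experience managing OSA patients; simulated, not survey data.
with_exp = rng.normal(loc=15.0, scale=2.0, size=71).clip(0, 18)
without_exp = rng.normal(loc=14.0, scale=2.0, size=56).clip(0, 18)

# Normally distributed data: Student's t-test (as in the paper's methods)
t, p = stats.ttest_ind(with_exp, without_exp)
print(f"t-test: t = {t:.2f}, p = {p:.3f}")

# Non-normal data would instead use the Mann-Whitney U test
u, p_u = stats.mannwhitneyu(with_exp, without_exp)
print(f"Mann-Whitney: U = {u:.0f}, p = {p_u:.3f}")

# Knowledge-attitude relationship: Pearson's correlation on toy scores
attitude = 0.5 * with_exp + rng.normal(0, 1.5, size=71)
r, p_r = stats.pearsonr(with_exp, attitude)
print(f"Pearson r = {r:.2f}, p = {p_r:.3g}")
```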
Attitude

Table 3 shows the associations between socio-demographic factors and mean attitude score. Age, year of graduation from medical school, year of specialty acquisition, and experience with OSA patients were significantly associated with the mean OSA attitude score. Older respondents tended to have higher mean OSA attitude scores (P = 0.001); after post-hoc analysis, the mean OSA attitude score of respondents aged ≥60 years (19.8 ± 2.9 points) was significantly higher than that of respondents in their 30s (15.2 ± 3.6 points, P = 0.006) or 40s (17.0 ± 2.9 points, P = 0.040). Following this trend, groups with earlier graduation years tended to have higher mean OSA attitude scores (P = 0.011), although the difference was not significant in the post-hoc analysis. For the year of medical specialty acquisition, the mean OSA attitude scores were significantly lower among those who acquired their specialty in the 2000s (15.8 ± 3.5 points) than among those who acquired it in the 1980s (17.9 ± 3.0 points, P = 0.047) or 1990s (18.0 ± 2.8 points, P = 0.023). Overall, OSA attitude scores tended to be higher among older respondents and those who graduated from medical school and completed pulmonology specialist training earlier in their careers in Korea. Table 4 shows the association between experience with OSA treatment and attitude. The total attitude score of respondents with OSA treatment experience was 17.8 ± 2.8 points, significantly higher than that of respondents without OSA treatment experience (15.8 ± 3.5 points, P < 0.001). Attitudes toward the importance of OSA as a clinical disorder and toward identifying patients with OSA showed no significant differences according to OSA treatment experience, although respondents with OSA treatment experience showed higher values. In particular, respondents with OSA treatment experience were significantly more confident in identifying at-risk patients, managing patients with OSA, and managing patients on continuous positive airway pressure than those without OSA treatment experience (P = 0.006, P < 0.001, and P = 0.003, respectively).

Association between knowledge and attitude

When attitudes toward OSA were analyzed by knowledge scores, a significant correlation was seen (r = 0.38, P < 0.001) (Table 5). In addition, the knowledge score significantly correlated with responses to the five items of the attitude questionnaire.

Discussion

In this study, we assessed knowledge and attitudes about OSA among pulmonologists in Korea. While previous studies have deployed the OSAKA questionnaire among various types of healthcare professionals, this was the first such study conducted among pulmonologists in Korea [12][13][14][15][16]. Globally, we were unable to find any research papers that have used the OSAKA questionnaire to specifically target pulmonologists. The main finding of our investigation is that the knowledge levels of Korean pulmonologists regarding OSA are comparable to or better than those reported in previous studies. Notably, pulmonologists who had experience managing OSA patients exhibited significantly higher knowledge and attitude scores than their counterparts without such experience. Interestingly, attitude scores tended to be higher among older respondents and those who had completed pulmonology specialist training earlier in their careers.
OSA affects 3.2%-4.5% of the population in Korea and is linked to significant health complications and increased mortality rates [17]. Despite sufficient access to healthcare services, up to 80% of patients with moderate or severe OSA remain undiagnosed [18,19]. Since the inclusion of polysomnography in national health insurance coverage in 2018, there has been a sharp rise in the number of polysomnography examinations conducted in Korea. However, the field of OSA has traditionally been led by neurologists, otolaryngologists, and psychiatrists in Korea, with pulmonologists often playing a less prominent role. Based on a survey of the knowledge and attitudes of pulmonologists regarding OSA, our goal was to understand the current status of Korean pulmonologists in the management of OSA, aiming to increase interest and participation in sleep medicine among pulmonologists.

The relationship between OSA and cardiovascular diseases has been investigated in several studies [4,5,20]. However, OSA also has a significant impact on lung health. The effects of OSA on pulmonary diseases, such as COPD, lung cancer, and idiopathic pulmonary fibrosis, have been reported in several studies [9,[21][22][23][24]. The co-existence of OSA and COPD, known as "overlap syndrome," leads to greater morbidity and mortality than either COPD or OSA alone [25]. An association between OSA and lung cancer has also been suggested by human and animal studies [26][27][28]. Intermittent hypoxia, swings in intrathoracic pressure, and recurrent collapse of the upper airway alter the anatomy and physiology of the respiratory system, leading to localized inflammation, structural changes, and increased reactivity [29][30][31]. Therefore, pulmonologists, who possess a thorough understanding of respiratory structure and physiology, actively treat respiratory illnesses, and are familiar with equipment such as oxygen delivery systems and ventilators, have numerous advantages and strengths in the treatment of OSA.

In our study, the total knowledge score calculated from the original 18 items of the OSAKA questionnaire was 15.0 (13.0-16.0) points, with an 80% correct-answer ratio. Previous studies have reported score variations depending on country, profession, and clinical department. Among cardiologists in the United States, the correct-answer ratio was 76% [12]. Separately, a 60% correct-answer ratio was reported among anesthesiologists in China and primary care physicians in Latin America [14,15]. Hence, although direct comparisons of knowledge scores between this study and others are challenging, the knowledge scores among the pulmonologists participating in this research were not substantially low. This may be due to the high proportion of pulmonologists working at university hospitals and increased concern about sleep-breathing disorders. Among Canadian otolaryngology-head and neck surgery residents, an exceptionally high knowledge score of 88.9% was recorded [32].
The correct response rate for question 8 in the knowledge section (laser-assisted uvuloplasty is an appropriate treatment for severe OSA: false) was notably low at 35%, in contrast to the rates for other items. This pattern has also been observed in other studies, with one study demonstrating a correct-answer rate of 33% [13]. However, Washington University, from which we obtained permission to use the OSAKA questionnaire, has indicated that this question should no longer be included in the most up-to-date version of the questionnaire. This suggests that the low correct-answer rate observed in our study may not be significantly meaningful.

In the present study, respondents with experience treating OSA patients had higher OSA knowledge scores than those without such experience. This result is in agreement with those of several previous studies [13,33]. Physicians who have access to sleep centers had higher knowledge scores [33]. Physicians without experience in a department that manages OSA patients had lower OSA knowledge scores compared to those with such experience [13].

Our study indicated that higher OSA attitude scores were associated with older age, more years of practice, and prior experience with OSA patients. These results are similar to those of previous studies [12,13,15,34]. These findings imply that effectively managing OSA patients requires experience in clinical practice. Respondents without experience in OSA treatment had lower attitude scores regarding confidence in identifying at-risk patients and managing patients with OSA or those receiving continuous positive airway pressure therapy. This result reinforces the need for more education and training about OSA.

In the present study, we demonstrated a significant correlation between knowledge and attitude scores for OSA. The knowledge score was also associated with the five-item attitude questionnaire score. In a study of the knowledge and attitudes of primary care physicians toward OSA in the Middle East and North Africa region, a positive but weak correlation between knowledge and attitude scores was noted [13]. Other studies have similarly reported correlations between attitude and knowledge scores [12,14].

As for the limitations of our study: first, despite sending three rounds of participation invitation emails to all respiratory specialists registered in the online directory of the Korean Academy of Tuberculosis and Respiratory Diseases, the participation rate was not high, at just 20%. Second, the responses may have been biased toward pulmonologists who are familiar with or have a keen interest in OSA or sleep medicine. Third, the majority of pulmonologists in Korea work in university hospitals and general hospitals, so more than 80% of respondents in this study were employed in hospitals with >500 beds, making it difficult to capture the opinions of pulmonologists working in smaller hospitals or private clinics. Finally, in Korea, some pulmonologists have access to sleep centers in their hospitals and some do not; this aspect was not covered in our current study. However, considering that our results showed significantly higher knowledge and attitude scores among those with OSA management experience, it can be inferred that pulmonologists with access to sleep centers may exhibit higher knowledge and attitude scores.
Conclusions

This study represents the first investigation into the knowledge and attitudes about OSA among pulmonologists in Korea. The knowledge levels of Korean pulmonologists regarding OSA were comparable to or better than those reported in previous studies. Nevertheless, there remains a need for targeted education and practical exposure to OSA management, especially for younger respiratory physicians, to enhance their proficiency in treating OSA patients.

TABLE 3: Association between socio-demographic factors and attitude score of OSA. Values are mean ± standard deviation; p-values in italics are statistically significant (p < 0.05). OSA, obstructive sleep apnea.

TABLE 4: Attitude score of OSA according to experience of OSA treatment. Values are mean ± standard deviation, number of patients, or median (first quartile, third quartile); p-values in italics are statistically significant (p < 0.05).

TABLE 5: Correlations among attitude items and between attitudes and knowledge. Pearson's correlation analysis was used to assess the relationship between knowledge scores and attitude scores.
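As an illustration of the correlation analysis referenced in the Table 5 note, the sketch below runs a Pearson correlation between knowledge and attitude scores. The two arrays are hypothetical placeholder values, not the study data.

```python
# Illustrative Pearson correlation between knowledge and attitude scores,
# as described for Table 5. Values below are hypothetical placeholders.
import numpy as np
from scipy import stats

knowledge = np.array([15, 13, 16, 14, 17, 12, 15, 16, 13, 18], dtype=float)
attitude = np.array([20, 18, 22, 19, 23, 16, 21, 22, 17, 24], dtype=float)

r, p = stats.pearsonr(knowledge, attitude)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # correlation treated as significant if p < 0.05
```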
DOES CONTENT DISSEMINATION THROUGH FACEBOOK MATTER FOR GOVERNMENT DEPARTMENTS: A STUDY OF KERALA POLICE DEPARTMENT'S FACEBOOK PAGE

Government organisations are generally known to be lagging behind their citizens in the use of social networking sites, with half of the departments having little or no presence on such platforms. This study aims at understanding what type of content affects citizens' attitude towards a Government Department's social media pages and how they perceive the messages disseminated through such platforms. The researchers have selected the case of the Kerala Police Department's (KPD) Facebook page for the present study. Respondents, who are postgraduate students from various departments of Mahatma Gandhi University, Kerala, were subjected to different types and formats of content stimuli to assess their attitude towards the content type/format itself and their attitude towards the department through a questionnaire. Not only will such a study contribute to the knowledge base on social media research, it will also help police departments across states in India to develop appropriate content for social media to reach their citizens. In order to test the hypotheses and identify interactions between the dependent and independent variables, one-way ANOVA and independent sample t-tests were used. The results revealed significant differences in attitude towards KPD across stimuli and also in attitude towards the stimuli themselves. Among the various content types provided as stimuli, "hilarious memes" generated the most favourable attitude towards the department. The attitude of the respondents towards "posts re-posting user content" was the most favourable. Image/photo content generates a more positive attitude towards the department compared to video content. The study has also revealed that there is a significant difference between the attitudes of respondents who have had prior interaction with the Facebook page of KPD versus those who have had no prior interaction.
Government agencies and departments are also disseminating digital content through SNSs like Facebook to provide information and engage in dialogue with citizens, with varying degrees of success (Hofmann et al., 2013; Singh, 2016). However, Government organisations are generally known to be lagging behind their citizens in the use of social networking sites, with half of the departments having little to no presence on such platforms (Bonson et al., 2012). Though the benefit of communication through SNSs has been well recognised by the corporate world and is therefore widely used, the public sector is yet to fully leverage its potential (Hofmann et al., 2013). Government organisations and departments often lack the ability to communicate with their biggest stakeholders, the citizens, which increases the 'distance' between the two. Though Facebook is regarded as an instrument to engage stakeholders in public utility firms, issues which affect citizens are not given due emphasis on most such accounts (Bonson et al., 2012). The adoption and use of SNSs varies across government departments, as this depends on both the interest of social media practitioners and the guidance of top management, which are not uniform across countries, states, or local governmental levels (Mergel, 2013).

Although much research has tried to understand how social media, especially presence in Social Networking Sites, helps government and political organisations to improve communication, interaction and engagement with users, little is known about their use by governments and government departments in an Indian context. Studies have concentrated on the content formats and types most frequently used by Government Facebook accounts and how these formats and types influence participants' engagement in a social media environment (Cvijikj et al., 2011; Kim et al., 2015). What researchers have not yet examined is what type of message format leads to the creation of the most favourable attitude towards the government department in relation to its Facebook page. This paper examines how a local government department uses its social media presence to reach citizens.

The researchers have selected the case of the Kerala Police Department's (KPD) Facebook page for the present study. This page, managed by a six-member social media cell of the department (Balakrishnan, 2018), has over 1.1 million followers, the largest following amongst all state police departments in the country (Singh, 2016). The page, created on 12 August 2011, posts the following types of content: (1) self-promoting photos and videos, (2) photos and videos providing general information/awareness/tips, (3) photos and videos of hilarious memes, and (4) re-postings of user posts mentioning the Police department (Facebook, 2019). Respondents, who are postgraduate students from various departments of Mahatma Gandhi University, Kerala, were subjected to these content stimuli to assess their attitude towards the content itself and their attitude towards the department. The study tries to answer questions like: What is the attitude of the respondents towards the Facebook page of the KPD? Which content type generates a more positive attitude towards the KPD? Which content format generates a more positive attitude towards the KPD? Does having prior interaction with the Facebook page of KPD influence the user's attitude towards the department?
According to a study conducted by the Centre for the Study of Developing Societies (CSDS) in 2018 (Devulapalli & Padmanabhan, 2019), less than 25% of the Indian population "trust the police highly". Research has shown that e-governance in general, and social media interaction in particular, can help such government organisations to create an environment of trust and transparency (Bonsón et al., 2012; Morgeson et al., 2011). Not only will such a study contribute to the knowledge base on social media research, it will also help police departments across states in India to develop appropriate content for social media to reach their citizens.

Social Media

With the advent of "new media" during the 1990s, the way entities interact and communicate with each other has undergone major changes, be it for individuals, businesses, non-profit organisations, or government. New media, which includes web sites, internet applications, CDs, DVDs, PC games and similar media, has three distinct characteristics: (1) integration of telecommunications, data communications and mass communications into a single platform, (2) interactive content, and (3) digital format (van Dijk, 2014). Social media, a subgroup of new-age media, can be defined as "a group of internet-based applications that build on the ideological and technological foundations of the Web 2.0, and that allow the creation and exchange of user generated content" (Kaplan & Haenlein, 2010). Types of social media include "collaborative projects, blogs, content communities, social networking sites, virtual game worlds, and virtual social worlds" (Kaplan & Haenlein, 2010).

The best known and most widely used social media are Social Networking Sites (SNSs) such as Facebook, Instagram, Twitter, MySpace, etc. Social networks are "web-based services that allow individuals to construct a public or semipublic profile within a bounded system, articulate a list of other users with whom they share a common interest, and view and traverse their list of connections and those made by others within the system" (Boyd and Ellison, 2007). Together these platforms provide users a tool to create and share information and collaborate interactively with other users (Banday and Matoo, 2013). Often, the term social media is understood by many as referring to Social Networking Sites; hence in this paper we use the terms social media and SNSs interchangeably in many places.

Facebook as a marketing tool

To study the use of social media/SNSs by government, this research uses the medium of Facebook and its use by the Kerala Police Department. The selection of Facebook as the underlying platform was based on the reasoning that Facebook has the largest number of active users (Facebook, 2019). A HubSpot market research study found that Facebook is the SNS most used by companies for marketing communications, in particular for B2C businesses. Many a time, search engines themselves are known to promote social media sites while generating search results (Xiang & Gretzel, 2010). Individuals use Facebook as a platform to share content for interacting with their friends (connect), completing group tasks such as games (group joy), sharing useful information (altruism), portraying one's achievements to gain attention (achievement), self-expression, and seeking companionship (loneliness) (Fu, 2017). Businesses, small and big, across industries are using SNSs, especially to manage their brands by means of an interactive communication process.
The use of a Facebook page as a platform for company-initiated promotional communications can have a positive causal effect on the perception of the company (Haigh et al., 2013) and on offline customer behaviour (Mochan, 2016). Since most corporate Facebook pages also allow users to post content, such a page can be viewed as a customer service platform (Grančay, 2013). This customer-initiated social interaction can generate word of mouth as users interact with each other and the firm on its Facebook page, as well as by commenting on, liking and sharing content (Berger and Schwartz, 2011; Mochan, 2016).

Government's use of Facebook

The use of social media in the public sector can be considered a form of technology innovation in itself, as it is markedly different from the highly formalised interaction process between government and citizens using traditional media such as press releases (Mergel, 2013). Social media in Government can be defined as "a group of technologies that allow public agencies to foster engagement with citizens and other organizations using the philosophy of Web 2.0" (Cvijikj, 2013). With the penetration of Web 2.0 technologies into the day-to-day life of ordinary citizens, governments and governmental agencies and departments have also embraced Internet- and social media-based interaction with their stakeholders. Thus evolved the term e-government, which can be divided into two distinct phases: Web 1.0 based, or Government 1.0, and Web 2.0 based, or Government 2.0 (Chuan et al., 2010). The Government 1.0 stage is characterised by a unidirectional flow of information, i.e. from the government to the public, with limited feedback from citizens. During this phase e-government evolved from mere digital presence to simple web-based interactions to online transaction services (Bower and Christensen, 1995). In the Government 2.0 stage, citizens actively participate by creating, liking, sharing, commenting on and rating Web content. Government agencies and departments around the world have adopted Web 2.0 tools such as social networking sites, blogs, microblogging, wikis, multimedia sharing, mashup applications, tagging, virtual worlds, and crowdsourcing, among others (Criado, 2013), though SNSs remain the most used among them.

Like any other ICT adoption process, the use of e-government also goes through three distinct phases of adoption (Mergel, 2013). First, agencies experiment informally with social media outside of accepted technology use policies, driven by individuals who have some experience with the technology prior to becoming members of the organization or from non-work-related activities. This phase is called intrapreneurship and experimentation. Next, order evolves from the first chaotic stage as government organizations recognize the need to draft norms and regulations. This phase is called order from chaos. Finally, in the institutionalization stage, institutions evolve by clearly outlining formalized social media strategies and policies. Every government department in India that uses some form of SNS is in one or another stage of this evolution process. The KPD and the Delhi traffic police department, for example, have constituted separate cells to manage their social media pages and interact with citizens, placing them in the second stage. However, these departments are still far from the institutionalization stage due to the lack of clear policy and conduct guidelines. The lack of policy guidelines in the maintenance of a social media page by public sector entities and government can lead to unaccountability.
Drafting such a policy guideline for the use of social media in e-governance can itself be challenging, as a uniform policy may not fit all departments and agencies alike (Banday & Mattoo, 2013).

Type and format of content on Facebook pages

Content, according to Halvorson and Rach (2012), is "what the user comes to read, learn, see, or experience" from the social networking site, which helps to propel the brand [the government department's image] into the hearts and minds of prospects, customers, and others. Content can be information, words, images, graphics, etc. that helps to tell the story about the organisation in order to capture or maintain the target audience's attention (Holliman and Rowley, 2014). Different researchers have identified different classifications for message types and formats. Leung (2017) classified Facebook posts into four message formats (word/text, picture, web link and video) and three types of message content (brand related, product related, and interactive). Studies have shown that the most widely used format on Facebook is a post with a photo (Cvijikj et al., 2011; Kim et al., 2015), which also drew more consumer responses than text-only content as well as video content (Kim et al., 2015). Photos are preferred over text-only and video posts as they are more eye-catching than text and convey a story or message more quickly than video (Lev-on, 2015). Reddick's (2016) study shows that the most commonly posted content on local governments' Facebook pages includes public information, announcements, advocacy and tips. There has been no conclusive result as to what type and format of content is preferred by users on Facebook pages. One study found that entertainment had a stronger positive effect on value than informativeness (Xu, 2009). This is in line with Yuki's (2015) research finding that content that made people feel happy was most likely to be shared, and humorous messages are more viral (Taecharungroj and Nueangjamnong, 2015). The timing of posts is also known to affect users' engagement with them: posting after work, when people are in transit, and during free hours elicits a better response from users (Peruta and Shields, 2018).

Attitude towards content and attitude towards the Facebook page

In a social media environment, it is important to learn how content, its format and its type affect the user's attitude towards a Facebook page. The purpose of Facebook as a communication tool is to influence how users perceive the brand, in the case of business, and to improve the trust of citizens, in the case of public entities. Therefore, a construct called 'attitude toward the page' (website) is evaluated. Belch & Belch (2003) define attitudes as a "summary construct that represents an individual's overall feelings toward, or evaluation of, an object". 'Attitude toward the page' can be considered a user's predisposition to respond favourably or unfavourably towards the page content. The right content format has been found to favourably influence users' attitude towards a Facebook page (Leung, 2017). However, the effect of content type on the user's attitude towards the Facebook page has not been conclusively proved, nor has it been analysed whether the attitude of users varies with the message content itself. Attitude towards the content can be understood as a Facebook page user's favourable or unfavourable feelings toward a Facebook post based on the type of information provided in the message.
Studies have shown that content, and its value to respondents, affects attitude toward the object (Daugherty et al., 2008). Disseminating trustworthy content thus becomes important for [government] organisations, as perceived content credibility has a strong positive impact on attitude towards the message (Friedman & Friedman, 1979). However, the effect of the type of content on brand attitude is yet to be explored by researchers. KPD's Facebook page posts the following types of content: (1) self-promoting photos glorifying actions taken and achievements of the department and its personnel, (2) self-promoting videos, (3) photos providing general information/awareness/tips, (4) videos providing general information/awareness/tips, (5) photos/images of hilarious memes (trolls), (6) hilarious video memes, and (7) re-postings of user posts mentioning the Police department (Facebook, 2019).

With the following hypotheses, we try to understand how the user's attitude towards the Facebook page of KPD, and the attitude towards the content itself, is influenced by the type of content disseminated through the page:

H1: Respondents form more positive attitudes towards the Kerala Police Department when self-promoting content is displayed on the Facebook page.

H2: Respondents form more positive attitudes towards the content itself when self-promoting content is displayed on the Facebook page.

H3: Photo/image content displayed on the Facebook page generates a more positive attitude towards the Kerala Police Department than video content.

H4: The attitude towards the Kerala Police Department of respondents who have had prior interaction with its Facebook page differs significantly from that of respondents with no prior interaction.

Method

The moderators of the KPD's Facebook page post the seven types of content listed above. Since hilarious video content was posted very rarely, it was not considered for the study. To study the influence of these content types and formats on the attitude of the respondents towards the Kerala Police Department and towards the content itself, four stimuli were shown to the respondents: (1) a self-promoting video on a recent achievement of the police department (S1), (2) a video for public awareness/education (S2), (3) a hilarious meme image (S3), and (4) an image of a post re-posting a user post mentioning the Police department (S4). One group was taken as a control group and therefore no stimulus was shown to them (C1). All the stimuli were adopted from actual posts that appeared on the Facebook page of the Kerala Police Department. For content that appeared in photo/image format (S3 and S4), a screenshot of the actual post was shown; this presents the image within the Facebook page of the department itself, giving a more realistic experience of the page. The respondents who were subjected to the video content (S1 and S2) were shown the video along with a screenshot of the Facebook page. A questionnaire was administered to the respondents right after their interaction with the stimuli to measure their attitude towards the stimulus, towards the department, and their perception of the Facebook page of the department.

Sample and Procedure

The research was conducted among 200 postgraduate students of Mahatma Gandhi University, Kottayam, Kerala. The use of college students for research on social media is quite common (Criado, 2013).
They are known to engage in higher levels of self-disclosure and maintain more favourable attitudes toward social media (Chu, 2011). A questionnaire was administered to the respondents to measure their attitude towards the stimulus, the department and the Facebook page of the department. The students were selected from among the postgraduate students of the University after an initial screening to identify Facebook users. Randomly selected respondents were subjected to the four stimuli, S1, S2, S3 and S4, using an online questionnaire. The online form allowed the researchers to attach the images and/or the video stimuli to the questionnaire itself. One group was taken as a control group and therefore no stimulus was administered to them (C1). After rejecting incomplete responses, 40 responses were obtained for each of the four stimuli and the control group.

In order to test the hypotheses, one-way ANOVA and independent sample t-tests were used. To analyse whether there were any significant differences in the attitude of the respondents towards the department (H1) and towards the content type (H2), ANOVA was conducted with S1, S2, S3, S4 and C1 as the independent variable and attitude towards the department and attitude towards the content type as the dependent variables, respectively. To test whether the attitude of the respondents towards the content varies significantly with the content format, an independent sample t-test was conducted, with one sample of photo content and another of video content. To test whether the attitude of respondents who have had prior interaction with the Facebook page of the Kerala Police Department varies significantly from that of those with no prior interaction, an independent sample t-test was conducted.

Instrument

The data was collected using an online questionnaire. The first part of the questionnaire comprised demographic details such as name, age, sex, course enrolled for and semester. The respondents were also asked certain questions to understand their usage of Facebook, such as with what frequency they checked their Facebook account and on what platform they checked it. The second part of the questionnaire employed previously developed scales to measure the subjects' attitude towards the stimulus, attitude towards the department and attitude towards the Facebook page of the department. Table 1 lists the scale items used to measure these variables. The scales for both attitude toward the Kerala Police Department and attitude toward its Facebook page were adaptations of the attitude-toward-advertising scale developed by Muehling (1987), as adapted to study attitudes towards social media (Leung et al., 2017; Chu, 2011). This scale uses three five-point semantic differential items: "bad/good," "negative/positive," and "unfavorable/favorable". The attitude towards the content type/format measure contained six items derived from Chen and Wells (1999), five of which were measured using a five-point semantic differential scale (boring/interesting, unimpressive/impressive, not attractive/attractive, unappealing/appealing, unlikable/likable).

Descriptive Statistics

This study tested the effects of different stimuli on the same set of dependent variables: attitude towards the content on the Facebook page of KPD and attitude towards the KPD.
The analysis of these results involves comparing mean values and analysing variances. In order to test the hypotheses and identify interactions between the dependent and independent variables, one-way ANOVA and independent t-tests were used. To assess the reliability and validity of the various scale items used in this study, Cronbach's alpha coefficients were computed. All the alpha values, ranging from .918 to .963, indicate good reliability (Table 1). To ensure that the statistical methods to be used were appropriate for this dataset, the normality of the data was verified by applying the Kolmogorov-Smirnov test. The data was also checked for outliers, but no extreme values were found.

The sample consisted of 46 per cent (92) males and 54 per cent (108) females, ranging in age from 20 to 29 years, with an average age of 22.7 years. 35% of the respondents access their Facebook account several times a day and 22% access it once or twice daily. Only 18% of the respondents said that they rarely access their Facebook account. 97% of the respondents access their Facebook account through a mobile platform, either using the Facebook app (79.5%) or logging in directly to the site using their mobile internet (17.5%). A total of 85 respondents had some level of previous interaction with the Facebook page of the KPD. 78% of the respondents said that they would "follow" the Facebook page of the KPD if they came across the page through their Facebook account.

Hypothesis Testing

Influence of content type on attitude towards KPD

According to H1, self-promoting content results in a more positive attitude towards the KPD. The results of the ANOVA test, taking the content type (S1, S2, S3, S4 and C1) as the independent variable, showed a significant difference in the attitude of the respondents towards the Kerala Police across these stimuli (F(200) = 2.625, p < 0.05). The hilarious meme image (S3) generated the most positive attitude towards the Kerala Police Department (mean = 11.325), followed by the post re-posting a user post (S4) (mean = 9.90), the self-promoting video, S1 (mean = 9.775), and the public awareness/education video, S2 (mean = 9.075). Thus H1 is not supported: though the content type affects the attitude of the respondents towards the department, self-promoting posts are not the type that generates the most favourable attitude towards KPD.

Attitude towards content type

To test whether self-promoting content displayed on the Facebook page of the Kerala Police Department attracts the most favourable attitudes compared to other types of content, the data was analysed using an ANOVA test. The content types S1, S2, S3 and S4 were taken as the independent variable and attitude towards the content as the dependent variable. The result showed a significant difference in the attitude of the respondents towards these stimuli (F(200) = 26.325, p < 0.001). The most positive attitude was recorded towards the post re-posting user content, S4 (mean = 17.10). The mean attitude score for the hilarious meme image (S3) was 11.475, for the self-promoting video (S1) 10.975, and for the public awareness/education video (S2) 9.825. Thus, H2 is also not supported.
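For readers who want to reproduce this style of analysis, the sketch below strings together the three procedures used in this section: Cronbach's alpha for scale reliability, a one-way ANOVA comparing attitude scores across the five groups, and an independent-samples t-test comparing photo (S3, S4) against video (S1, S2) stimuli, anticipating the format comparison reported next. All data are randomly generated placeholders with group sizes of 40, mirroring the design; none of the numbers reproduce the study's results.

```python
# Hedged sketch of the analysis pipeline: scale reliability, one-way ANOVA
# across stimulus groups, and a photo-vs-video t-test. All data are
# hypothetical placeholders, not the survey responses.
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x scale-items matrix of ratings."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
scale = rng.integers(1, 6, size=(200, 3)).astype(float)  # three 5-point items
print(f"Cronbach's alpha = {cronbach_alpha(scale):.3f}")  # random items give a low alpha

# Attitude-towards-KPD totals for S1, S2, S3, S4 and control (40 each).
s1, s2, s3, s4, c1 = (rng.normal(mu, 2.5, size=40)
                      for mu in (9.8, 9.1, 11.3, 9.9, 9.0))
f_stat, p_anova = stats.f_oneway(s1, s2, s3, s4, c1)
print(f"one-way ANOVA: F = {f_stat:.3f}, p = {p_anova:.3f}")

# Independent-samples t-test: photo formats (S3, S4) vs video formats (S1, S2).
t_stat, p_t = stats.ttest_ind(np.concatenate([s3, s4]), np.concatenate([s1, s2]))
print(f"t-test (photo vs video): t = {t_stat:.3f}, p = {p_t:.3f}")
```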
Attitude towards content format

The independent sample t-test for H3, that photo/image content displayed on the Facebook page by the Kerala Police Department generates a more positive attitude than video content, showed a significant difference between the two formats (t = 2.351, p < 0.05). Comparing the means, it can be concluded that images/photos (mean = 10.612) generate a more positive attitude towards the department than video content (mean = 9.450). This supports our hypothesis, H3.

Influence of previous interaction with KPD's Facebook page on attitude towards KPD

To test H4, an independent samples t-test compared attitudes toward KPD between users who have had previous interaction with the Facebook page of KPD and those who have not. The result showed a significant difference between the attitudes of these two groups. Thus, H4 is supported by the analysis.

Discussion and Managerial Implications:-

This study tests whether content format and content type positively influence the Facebook user's attitude towards the Government department and towards the content itself, using the case of the Facebook page of the Kerala Police Department. The results of the study revealed findings, some of which concur with the results of previous studies and others which contradict them. The study found that the type of content influences the attitude of citizens towards the Government Department's page. It brings out the importance of "hilarious memes/trolls" in attitude creation. The influence of humour in attitude formation is already recognised (Zhang, 1996); humour/entertainment is known to influence the behaviour of SNS users (Lin and Lu, 2011) and generate higher engagement (Luarn et al., 2015). The study also showed that Facebook posts of the KPD that were re-posts of user posts glorifying KPD generated the most positive attitude towards the content. Previous studies have also found that user-submitted content generates a significantly higher level of proportional engagement (Peruta and Shields, 2018; Bonsón et al., 2015). Images/photos generate a more positive attitude towards the Government department compared to video content. This may be due to the fact that users may simply opt not to watch videos that require them to spend too much time (Kim, 2013).

The study has many managerial implications. In particular, it has unearthed the importance of humorous meme images, which are not used by many Government Facebook pages. This is of great practical relevance, as the social media cell of every government organisation/department can post such content on its Social Networking Sites. However, informative content cannot be ignored, as some studies have proved its importance in attitude formation (Cervellon, 2015) and in generating user engagement (de Vries et al., 2012). Informative content is also important as it is likely to improve the user's knowledge (Kim and Lee, 2012). Social media cell moderators of Government departments should also realise the importance of citizens' posts. Presently, government departments do not allow citizens to post on the "wall" of their Facebook page. Reddick et al. (2016) found that citizens were consulted and asked for feedback only in a limited manner by Government departments through SNSs.
The study has also revealed that there is a significant difference between the attitudes of respondents who have had previous interaction and those who have had none. This is a crucial finding, as it suggests the importance of Government departments' interaction with citizens through Social Networking Sites. The differences in attitude towards the different types and formats of content, along with the results of previous studies, reveal that merely having a social media profile is not enough for Government departments. Careful planning and research will greatly benefit non-business entities as they attempt to develop social networking (Waters et al., 2009).

Conclusion:-

With the advent of social media, e-commerce and user generated content, marketers are obliged to introduce new channels of marketing communication that provide authentic and useful information to consumers during all phases of their buying cycle. This is true not only for business firms but also for various state and central governments and their departments. Research findings have also shown that Facebook enhances citizens' perceptions of government transparency and improves citizens' trust in government (Bonsón et al., 2012; Morgeson et al., 2011). This study reveals that social media interaction of citizens with Government Departments, in the present study the Kerala Police Department, leads to favourable attitude creation towards the department. Thus it is important for Government agencies to include social media tools in their communication strategies. However, a Facebook page can carry only limited information and therefore cannot be considered a substitute for other forms of reaching stakeholders, including websites (Grančay, 2013). Though this research throws light on what content is most effective in developing a favourable attitude toward the government department, the researchers have not analysed whether various combinations of stimuli would produce better results. Further research has to be conducted to analyse how consumers perceive complex stimuli. We can say with confidence that Facebook content leads to positive attitudes among current users, but would that be true if a more diverse group were studied? Future studies could also broaden the study to other departments of the state and country. A thorough content analysis of the Facebook pages of Indian Government Departments is also an issue that deserves further attention. Nonetheless, this study contributes to the growing literature on social media content use by government organisations.
Effects of occlusal splint and exercise therapy, respectively, for painful temporomandibular disorder in patients seeking orthodontic treatment: a retrospective study

Objective: To evaluate the effect of hard stabilization splints (HSS), counselling and exercise therapies, respectively, for painful temporomandibular disorder (TMD) in patients seeking orthodontic treatment, through magnetic resonance imaging (MRI) and clinical examination.

Materials and methods: Eighty-seven TMD patients were divided into two groups according to their therapies: the HSS group (n = 43), comprising patients treated with HSS, counselling and masticatory muscle exercises; and the control group (n = 44), comprising patients treated with counselling and masticatory muscle exercises alone. All patients had orthodontic therapy after the first treatment phase. The joint pain and clicking of all patients were recorded via clinical examination. MRIs of the HSS group were taken before (T0), after the first phase (T1), and after the orthodontic treatment (T2). Parameters indicating the positions of the condyles and articular discs were evaluated. Changes in clinical symptoms (pain and clicking) among the T0, T1 and T2 time points were assessed in the two groups, respectively. Significant differences between the HSS and control groups, as well as between males and females, were tested at T1 and T2. Position changes of condyles and discs in the HSS group among T0, T1 and T2 were assessed in males and females, respectively.

Results: After the first treatment phase, there was no difference in the decrease of facial pain between the two groups, or between males and females within the two groups (P > 0.05). The decrease in clicking was not statistically significant. After the whole orthodontic period, TMJ pain relapsed in females of the control group, and the number of painful joints in females was greater than in males (P < 0.05). In the HSS group, posterosuperior movements of the discs and anterosuperior movements of the condyles were recorded in the closing position (P < 0.05). After the whole orthodontic period, females' disc-condyle angles increased, the disc-to-HRP distance decreased and the condyle-to-VRP distance increased when compared with the data at T1 (P < 0.05).

Conclusions: For orthodontic patients with painful TMD, HSS combined with counselling and exercise therapies before orthodontic treatment could provide pain relief. HSS is helpful in improving the position and relation of the discs and condyles. In addition, males' prognosis is better than females' in terms of stability.

Introduction

Temporomandibular disorder (TMD) is a common and frequently encountered condition of the oral system; it is a general term for a group of diseases involving the masticatory muscles, peripheral nervous system, and temporomandibular joints (TMJs) [1]. The main symptoms of these diseases are pain, joint friction and irregular or limited mandibular function, and they are more common in young females [2]. Anterior disc displacement (ADD) is one kind of TMD, comprising anterior disc displacement with reduction (ADDR) and anterior disc displacement without reduction (ADDWR). Some studies have pointed out that ADDR is the most frequent type of disc displacement [3,4], and most TMD patients' condyles are located in a posteroinferior position [5,6]. The pathogenesis of TMD has not been fully clarified; it is related to psychosocial factors, immunity, occupational strain, TMJ overload, anatomical factors and so on [7].
Occlusal factors play a controversial role in the occurrence and development of TMD and have been studied by a number of researchers [8,9]. However, the studies that regarded occlusion as the most important factor are mostly older ones, mainly from 20-40 years ago [10]. Current literature suggests managing TMD as a multifactorial problem, as systemic and local factors such as cervical spine disorders, oral parafunctions [11], nicotine, sleep bruxism [12], and mental state [13] promote the development and progression of TMD. Conservative therapy for TMD mainly includes occlusal splint therapy, counselling, exercises, massage and manual therapy, which are considered first-choice treatments for TMD pain because of their low risk of side effects [14]. Exercise therapy is an effective rehabilitation method that facilitates normal movement patterns through the biofeedback mechanism. Florjanski [15] included 10 papers in a systematic review and showed a significant correlation between biofeedback usage and reduction of masticatory muscle activity, meaning that exercises and biofeedback can be effective tools in painful TMD management. Hard stabilization splints (HSS) are also used to relax the masticatory muscles and guide the mandible to a stable position, and they can effectively reduce clinical symptoms, with the advantages of easy adaptation and simple preparation [16,17].

The reality is that many patients seeking orthodontic treatment do have TMJ clinical symptoms, such as pain, friction and clicking. In orthodontic clinics, in order to alleviate the clinical symptoms of patients with obvious TMJ pain, HSS, counselling or exercise therapies are sometimes used before orthodontic treatment to relieve TMJ symptoms and reduce TMJ discomfort [18,19]. However, HSS usually causes more discomfort than exercise and counselling therapies. For the same purpose of alleviating clinical symptoms, would HSS therapy before orthodontic treatment be more effective than counselling and exercise therapies in alleviating pain and clicking? What is its effect on the discs and condyles? Is there any difference in efficacy between males and females? Studies tracking TMJ symptoms through the whole dynamic process, from HSS, counselling and exercise therapies to the end of occlusal reconstruction, are relatively rare. The aim of this study was to explore whether HSS combined with counselling and exercises has additional benefit in relieving TMJ pain, whether HSS increases long-term stability of efficacy during the subsequent orthodontic treatment, and what HSS does to disc-condyle position. The null hypothesis is that no difference in efficacy would be found between HSS therapy and counselling and exercise therapies, or between males and females. This study could provide some clinical references.

Participants

This retrospective study selected 20- to 30-year-old adults with painful TMD as the study subjects and was conducted at the Xiangya Stomatological Hospital of Central South University, with 41 males and 46 females. The samples were consecutive patients with detailed and complete clinical data who started treatment after April 2020 and ended before June 2020.
Subjects were selected according to the following criteria: (1) meeting the research diagnostic criteria for TMD (RDC/TMD) [20]; (2) unilateral or bilateral chronic TMJ pain in the articular areas or masticatory muscles, with no acute pain; (3) no degenerative disease of the TMJ and no oral parafunctional activity; (4) no trauma history or TMJ surgery history; (5) no orthodontic treatment history or restorations in the oral cavity; (6) no other systemic disease or relevant clinical history.

Treatment design

Centric relation (CR) occlusion was recorded by one-handed induction performed directly by the same clinician and registered with Delar wax. The occlusion records were then transferred to a German SAM articulator and the splints were fabricated in this CR position (Fig. 1). Patients in the HSS group were asked to wear the HSS throughout the night plus 4 h during the day, 2 h in the morning and 2 h in the afternoon, for 4 months. The HSS was adjusted by grinding at each subsequent visit (patients returned to the hospital twice a month in the first month and once a month afterwards). In addition, counselling and masticatory muscle exercises were also implemented during this period. The control group received counselling and masticatory muscle exercises alone. Fixed orthodontic treatment was applied to all patients after the first treatment phase. Class I occlusion was achieved at the end of the orthodontic therapy and modified Hawley mechanical retainers were used for each patient. Treatment of all patients was performed by one clinician.

Radiographic data

MRIs at three different stages of the whole treatment procedure, pre-splint (T0), immediately post-splint (T1) and post-orthodontics (T2), were acquired for patients in the HSS group in the closed position and at maximum opening, respectively. All MRI data were calibrated and obtained by the same observer, with patients not wearing fixed retainers, as retainers may blur and cause distortions in the image [21]. The midsagittal positions of the condyles on MRIs were selected for image depiction and landmark fixing, and Drace-Enzmann's method [22] was adopted for measurement. Descriptions on MRIs were designed to measure the position of the condyles and discs, as well as the disc-condyle relations (Fig. 2). Detailed definitions and methods of each measurement are shown in Table 1 and Fig. 2.

Clinical efficacy evaluation

(1) TMJ clicking: whether there was joint clicking and the number of joints with clicking at T0, T1 and T2 were recorded. No clicking was defined as the disappearance of joint clicking during the whole opening and closing movement. (2) TMJ pain: the degree of TMJ pain, including the masticatory muscles and joint area, was assessed for signs and symptoms according to Mehra and Wolford [7] and Kurita et al. [23]. Visual analogue scales (VAS) were used for subjective evaluation of joint pain (0 = no pain, 10 = severe pain) (Fig. 3).
The TMD symptom assessments after treatment were done by one evaluator who was blinded to the history of each subject.

Statistical analysis

All data were analyzed with IBM SPSS Statistics 21 software. Kolmogorov-Smirnov analysis and Fanchazzi analysis were used to test homogeneity of variance and normal distribution, and statistical significance was established at α = 0.05. One- and two-way analysis of variance (ANOVA) with Tukey tests was used to detect clinical symptom changes among the T0, T1 and T2 time points. An independent sample t-test was used to test for significant differences between the HSS and control groups, as well as between males and females. To assess the reliability of the measurements, 30 subjects were randomly chosen and all measures were duplicated by the investigator R.N. An intraclass correlation coefficient (ICC) was used to determine the intra-observer reliability of the measurements through reliability analysis in SPSS (see the illustrative sketch at the end of this article). Reliability was divided into three categories: poor (ICC < 0.40), fair to good (0.40 ≤ ICC ≤ 0.75), and excellent (ICC > 0.75) [24].

Results

Kolmogorov-Smirnov analysis and Fanchazzi analysis (all P > 0.05) showed that each set of data conformed to the homogeneity of variance and normal distribution assumptions. The intra-observer reliability of the measurements, assessed using SPSS software, was excellent, with ICCs ranging from 0.821 to 0.898. The retrospective power ranged from 0.791 to 0.932.

Distribution of diagnostic subgroups of TMD (Table 2)

Examination of the subjects in both the HSS and control groups according to the RDC/TMD criteria showed that none of the patients were diagnosed with disc displacement without reduction with limited opening.

Comparison of clinical symptoms between the HSS group and control group (Tables 3 and 4)

None of the HSS, counselling and exercise therapies alleviated TMJ clicking. There was significant TMJ pain relief in both groups. However, pain relief in the HSS group was mainly concentrated in the T1 period, after which there was no statistically significant change at T2. In contrast, the changes in TMJ pain in the control group were statistically significant at both T1 and T2, with pain relief at T1 and pain recurrence at T2. There was no difference between the T0 and T2 periods in TMJ pain in the control group. At T1, the VAS value and number of painful joints decreased in both the HSS and control groups, and the difference between the two groups was not statistically significant. At T2, the VAS value and number of painful joints in the control group increased, becoming significantly greater than those of the HSS group (P = 0.017, 0.004).

Comparison of TMJ pain between females and males in the two groups (Table 5)

There were no statistically significant differences between males and females in the HSS group. At T2 in the control group, the differences in the number of painful joints and VAS value were statistically significant (P = 0.001, 0.003). The VAS value of males (2.14 ± 1.05) was smaller than that of females (4.33 ± 1.57), and the number of painful joints in males (9) was also less than in females (30).

MRI descriptions (Table 6)

In both males and females, the disc-to-HRP distance significantly increased and the disc-to-VRP distance significantly decreased after HSS treatment in the closing position (P < 0.05). The disc-condyle angles also decreased (P < 0.05).
These outcomes indicated that the discs showed posterosuperior movements and the condyles showed anterosuperior movements immediately after HSS treatment in all the HSS group's patients. At T2, females' disc-condyle angles increased, the disc-to-HRP distance decreased and the condyle-to-VRP distance increased when compared with the data at T1 (P < 0.05), which meant that there was a tendency for the discs to relapse back to their original position. Notably, the difference over the whole observation period (T2-T0) remained statistically significant in females (P < 0.05). In the opening position, the movements of the discs and condyles showed no statistically significant differences, nor did the disc-condyle angles. MRIs of typical cases are shown in Fig. 4.

Relationship of orthodontic treatments and TMDs

Under normal physiological conditions, there is a harmonious and balanced relationship among the condyle, articular disc and fossa. TMDs may occur when this balance is broken [25]. The normal TMJ disc is located between the articular fossa and the condyle: its posterior band is located on the transverse crest of the condyle, and the intermediate zone lies interposed between the condyle and the articular eminence [4,26]. The ideal position of the condyles is usually defined by the phrase centric relation (CR): the condyles are at the most superior and anterior position in the fossa. In this position, the condyles face the posterior slopes of the articular eminences, and the articular discs are in a moderate and stable position. CR has nothing to do with occlusion or facial vertical distance; it is the most stable, comfortable and repeatable physiological position of the mandible [27]. Centric occlusion (CO), in contrast, is the position of the mandible relative to the maxilla determined by the bite. Rinchuse et al. [28] believed that there is no direct relationship between orthodontic treatment and TMDs; they thought TMJ problems could be temporarily ignored before and after orthodontic treatment, especially for patients without joint pain. Others, such as Hudson et al. [29], insisted on the close effect of the bite on the joints, and believed that TMDs caused by an improper condylar position are a problem that needs to be considered and dealt with before orthodontic treatment. At present, it is recognized that orthodontic treatment can neither cause nor treat TMDs, but the reality is that many patients seeking orthodontic treatment have TMD symptoms such as TMJ pain. To alleviate TMJ pain and discomfort before orthodontic treatment, occlusal splint therapy, counselling, exercises, massage and manual therapy are considered the main treatment choices because of their low risk of side effects. Besides, some clinicians think occlusal splints, in their experience, can provide a guarantee for accurate diagnosis and the security of the TMJs in people seeking orthodontic treatment.

Influence of the hard stabilization splint

In both males and females, clinical examination indicated that joint pain disappeared in the two groups at T1, while joint clicking was not eliminated at all over the whole treatment period. However, there was a TMJ pain relapse in the control group at T2, especially in females. This suggests that HSS combined with exercise and counselling therapies offers no additional benefit in terms of short-term TMJ pain relief, but after the end of the entire orthodontic treatment, HSS combined with exercise and counselling therapies is better for long-term TMJ pain relief, especially in females.
In addition, in HSS group, MRI showed the disc-condyle angle decreased, and posterosuperior movement of articular discs and anterosuperior movement of condyles occurred at closings after splint treatment. Male had a more stable curative effect than female in terms of the complete orthodontic treatment. All of these suggested that discs, condyles had a tendency to restore the physiologically optimal positions to some extent, and the patients with HSS tended to restore normal disc-condyle relation at closings after splint treatment. We don't really know exactly what the additional therapeutic mechanism of HSS are when compared with counselling and exercises therapies, but here are some possibilities. (1) The surface of the posterior tooth area of HSS is flat and smooth, which can ensure no interference in mandibular movement and prevent condyles from being locked by ICP (intercuspal position). The anterior tooth area of HSS forms the guiding surface of mandibular anterior and lateral movement, which will ensure a balance in mandibular anterior and lateral movement, and gradually restore TMJ to a comfortable condition [30]. (2) Ettlin [31] suggested that the distance between condyle and articular fossa was sometimes redistributed after splint treatment. This changed space can redistribute the contact area of joint surface, and reduce joint load. So as to cushion abnormal pressure of condylar functional area, release the pressure on disc double plate area, and reduce or even eliminate TMJ injury. (3) The posterior movement of the TMJ disc may be related to the improvement of the disc-condyle relation, so that the TMJ disc can take the advantages of its Table 6 Position changes of condyles and discs in post-splint (T 1 -T 0 ), post-orthodontics (T 2 -T 1 ) and observation (T 2 -T 0 ) (male = 20, female = 23,) M is mean, SD indicates standard deviation, Sig Significance * P < 0.05; ** P < 0.01 own elasticity and surrounding attachments, especially the posterior condyle attachment to return to the normal position [32]. Compared with the articular disc itself and other attachments, the posterior condylar attachment is quite different in structure, which still has the property of restoration after being stretched by external force. In the premise of the ADD degree was not serious and the structure of posterior condyle attachments was not damaged, as long as a favorable environment is created, such as using splints to increase joint space and reduce joint pressure, restoration is still possible under the action of elastic fibers of posterior condyle attachments. (3) Other factors can account for the result, such as the increased occlusal vertical distance, eliminated muscle tension, reduced stress of masticatory muscle and TMJ, and inhibited contraction of ascending muscle group. On the other hand, male had a more stable efficacy in both clinical examinations and MRI results when compared with female's. We don't really know exactly why things turned out as they did, but we hypothesized some possible reasons for the phenomalea. (1) In clinical practice, normal disc-condyle relations are often not restored, and the temporary stability of disc-condyle relations partly result from the reconstruction of joint discs, which can be helped by HSS. TMJ pain will occur again if such reconstruction-more difficult for female-is not successful. Isberg [33] suggested that the difference results from changes in collagen metabolism associated with the genetic joint laxity. 
Campos [34] also suggested that the female sex hormone oestradiol promotes pro-inflammatory cytokines and aggravates TMJ inflammation. (2) Larger joint spaces were observed in male samples than in female samples, especially the superior and posterior spaces in the sagittal view, due to the greater thickness of the posterior bands of the TMJ discs [35,36]. Kinniburgh [37] also reported that the volume of the superior and posterior spaces was associated with disc reduction and pain. Based on these findings, we hypothesise that the smaller articular spaces of females may be an etiologic factor for the higher relapse of TMJ pain. (3) The stronger maxillofacial muscles of males promote more stable soft tissue in the TMJ area, while the weaker muscle strength of young females tends to allow a more flexible TMJ disc and ligaments, leading to easier displacement of the TMJ disc and articular cartilage damage. For the above reasons, long-term stability of painful TMD treatment is more difficult to achieve in females than in males.

There are certain limitations to this retrospective analysis. First, the research has a relatively small sample size due to the strictly controlled inclusion criteria and the required integrity of the research data. Second, it is difficult to use blinding in a retrospective analysis, so bias is inescapable. Third, the research lacks long-term tracking of clinical efficacy in TMD, which needs to be implemented in the future. However, taking the clinical symptoms and MRI results into consideration, we analyzed the influence of gender and treatment method on clinical efficacy, and the conclusions can still provide some clinical references.

Fig. 4: MRIs of typical TMJs before and after HSS therapy. (a) closing position before HSS therapy; (b) opening position before HSS therapy; (c) closing position after HSS therapy; (d) opening position after HSS therapy.

Clinical applications

In general, this study suggests that HSS can relieve the pain of the masticatory muscles in orthodontic patients with painful TMD, especially in females. Besides, although there was a slight increase in disc deviation after orthodontic treatment in females, all measurements remained statistically significant between T0 and T2. These results indicate that in both males and females, HSS might improve disc-condyle relations in the short term, but long-term observation and a more adequate statistical analysis are needed. In addition to treating painful TMDs, some clinicians think HSS also has the function of clarifying mandibular position and assisting orthodontic diagnosis. In view of the possible relationship between CO-CR incompatibility and TMD, it is particularly important for clinicians to make an accurate orthodontic diagnosis under favourable TMJ conditions for patients with TMJ symptoms. Clinicians' treatment goal is to achieve coordinated static and functional occlusion, and to limit CO-CR deviation within a certain range as much as possible [38]. Unfortunately, such a guiding function of HSS is not scientifically verified; it merely represents ideas circulating in the TMD, orthodontic and restorative communities.

Conclusion

(1) For orthodontic patients with painful TMD, HSS combined with counselling and exercise therapies before orthodontic treatment could provide pain relief. In addition, males' prognosis is better than females' without HSS therapy.
(2) HSS helps to improve the position and relation of the discs and condyles. However, there was a tendency for the discs to relapse toward their original position, especially in females. Careful, long-term monitoring of the condyles and discs may be needed after splint and orthodontic therapies.
The genetic polymorphism of merozoite surface protein-1 in Plasmodium falciparum isolates from Aceh province, Indonesia

An estimated 3.3 million people in Indonesia are infected with malaria. The extensive genetic polymorphism of msp-1 in field isolates of P. falciparum represents a major obstacle for the development of malaria treatment. The aim of this study was to investigate the genetic diversity of the msp-1 genotype in field isolates of P. falciparum collected in Aceh Province. A total of 90 malaria-positive patients were selected from eleven district hospitals in Aceh from 2013 to 2015. Data were collected by anamnesis, complete physical examination, and laboratory testing for msp-1. All protocols for diagnosing malaria followed the WHO 2010 guideline. All samples were stored at the Eijkman Institute for Molecular Biology, Jakarta. Among the 90 samples, 57.7% were from males and 42.3% from females, with most cases found between 21 and 30 years of age. Allele typing of P. falciparum from Aceh identified the K1, MAD20, and RO33 allele types. MAD20 was the most frequent allele in this study (57.9%), found in both single and mixed infections. A moderate level of mixed-allele infection was also observed.

Introduction

An estimated 3.3 million people in Indonesia are infected with malaria, including 1.2 million in risk areas in which Plasmodium (P.) falciparum is dominant, with an Annual Parasite Incidence (API) of 1.0/1,000 population. [1-3] Unfortunately, the manifestation of malaria varies, and the extensive genetic polymorphism of msp-1 in field isolates of P. falciparum represents a major obstacle for understanding clinical manifestations and developing malaria treatment. In this study, the genetic diversity of msp-1 among P. falciparum field isolates from Aceh Province was analyzed. Malaria is the most significant of the parasitic diseases, affecting 198 million people worldwide. [4-7] Parasite virulence contributes directly to the clinical outcome, and parasite diversity influences the speed at which strain-specific immunity develops in the host population. [8,9] The aim of this study was to investigate the genetic diversity of the msp-1 genotype in field isolates of P. falciparum collected from Aceh Province.

Methods

This study was approved by the Gadjah Mada University ethical committee (reference no. KE/FK/173/EC). A cross-sectional study with 90 participants was conducted. Samples were selected from patients above 18 years of age who tested positive for P. falciparum microscopically at eleven district hospitals in Aceh Province, collected from October 2013 to February 2015. A malaria case was defined as an individual positive for P. falciparum by microscopic examination and nPCR. Data were collected by anamnesis, complete physical examination, and laboratory tests (microscopy and nPCR for msp-1 alleles). [11-13] All samples were stored at the Eijkman Institute for Molecular Biology, Jakarta. All protocols for diagnosis and malaria treatment followed the manufacturers' manuals and the WHO 2010 guideline. Malaria was diagnosed using finger-prick blood samples, which were collected on Whatman 3M filter paper (GE Healthcare, Buckinghamshire, UK) and stained with 20% Giemsa for 20 min [14,15] for species identification. [16-19] Plasmodium species were identified using duplicate microscopic examination followed by nPCR with five sets of primers (20 mM): nested-1 using primers r-PLU-5 and r-PLU-6 (25 µL total PCR reaction) and nested-2 using primers with PCR conditions from Snounou et al. (1993).
[20-23] Bands were visualized by ultraviolet illumination alongside a 100 bp DNA ladder (Vivantis, Selangor, Malaysia). DNA was extracted from peripheral blood collected in ethylenediaminetetraacetic acid (EDTA) tubes using the Chelex-100 method according to the manufacturer's instructions. The polymorphic block 2 region of msp-1 was amplified by nested PCR using the protocol described by Ntoumi et al.

Result

Among the 90 samples, 57.7% were from males and 42.3% from females, with most cases found between 21 and 30 years of age (46.7%). Diverse msp-1 alleles were identified in P. falciparum isolates from Aceh Province. Allele analysis of msp-1 revealed three different allele types. Allele typing of the Aceh Province isolates identified the K1 and MAD20 allele types, with a low number of the RO33 allele. The K1 allele type was identified in 43 (47.8%) blood samples, with the majority (79.1%, 34/43) occurring as single infections. MAD20 was the most frequent allele in this study (56.6%), occurring as both single and mixed infections.

Discussion

Analysis of the P. falciparum genetic profile may provide useful information on specific parasite characteristics for designing intervention strategies that target virulence factors. [28,29] To our knowledge, this is the first study in Indonesia to provide information about the genetic diversity of the msp-1 genotype in field isolates of P. falciparum. An early investigation of the genetic diversity of P. falciparum isolates, conducted in Libreville, Gabon, observed extensive genetic polymorphism within the msp allelic families (30 alleles identified). This is consistent with the diversity found in Bakoumba (25 alleles) in 1999, in Senegal (33 alleles) in 1995, and in Mauritania (27 alleles) in 2010. [30,31] In our study, the msp-1 genotype distribution was dominated by the K1 and MAD20 alleles. Our findings are similar to those of Kang et al. in Myanmar, a geographically comparable area, where MAD20 was also the most predominant allele. The proportion of patients infected with mixed alleles was higher in their study, whereas in ours, single MAD20 infections outnumbered mixed infections. [32]

Conclusion

The genetic diversity of the msp-1 genotype of P. falciparum from Aceh Province was characterized in patients infected with P. falciparum. MAD20 was the most frequent allele found in this study, in both single and mixed infections. A moderate level of mixed-allele infection was also observed.
Estimation of rainwater harvesting potential for emergency water demand in the era of COVID-19: the case of Dilla town, Southern Ethiopia

A safe and adequate quantity of water is crucial for the implementation of infection prevention and control measures during the prevention of COVID-19. Rainwater harvesting could be an optional water source to fulfill or support the emergency water demand in areas where rainfall is abundant. The study aimed to assess the rainwater harvesting potential and storage requirements for households and selected institutions and to determine its adequacy to satisfy the emergency water demand for the prevention of COVID-19 in Dilla town, Southern Ethiopia. The rainwater harvesting potential for households and selected institutions was quantified using 17 years of rainfall data from the Ethiopian Meteorology Agency. To address the rainfall variability, we computed the confidence limits of the monthly harvest-able rainwater potential using confidence intervals about the mean as well as confidence intervals using the coefficient of variation (COV) of monthly rainfall. The storage requirements were also estimated by considering the driest and wettest seasons and months. The average annual rainfall in Dilla town was 1464 mm. Households with roof areas of 40 and 100 m2 have the potential to harvest 7.2-39.66 m3 and 19.11-105.35 m3 of rainwater, respectively. Similarly, the rainwater harvesting potential of the selected institutions was in the range of 34,524.5-190,374.5, 4,070.8-14,964.8, 1,140.4-6,288.6, 4,561.7-25,154.3, 5,605.8-14,152.8, and 402.4-2,219.1 m3 of rainwater for colleges, vocational schools, secondary schools, primary schools, Dilla University Referral Hospital, and health centers, respectively. These institutional rainwater harvesting potentials can address 24-132.2%, 222.4-817.8%, 59.4-327.3%, 34.6-190.9%, 94.5-238.5%, and 28.2-155.7% of the emergency water demand of the colleges, vocational schools, secondary schools, primary schools, Dilla University Referral Hospital, and health centers, respectively. Rainwater can be an alternative water source for the town in the prevention and control of COVID-19. Further applied research addressing rainwater quality and treatment for ease of use should be conducted.

Introduction

The COVID-19 outbreak has been declared a global pandemic by the World Health Organization (WHO, 2021). According to the Africa Centers for Disease Control and Prevention, as of 23 February 2021, a total of 3,836,817 COVID-19 cases and 101,629 deaths, with a case fatality rate (CFR) of 2.6%, had been reported in 55 African Union (AU) Member States, accounting for 3.5% of all cases reported globally, and the majority (91%) of Member States continue to report community transmission. With a total of 152,806 cases, Ethiopia is the fifth most affected country in terms of the number of positive cases reported, after South Africa, Morocco, Tunisia, and Egypt, and its total death toll has reached 2,279 (Africa CDC, 2021). The provision of safe and adequate water, sanitation, and hygienic conditions plays an essential role in protecting human health and controlling disease outbreaks, including the current COVID-19 outbreak (UN Habitat, 2020). The provision of quick and just-in-time community water access points (including the provision of soap) in urban and rural areas is critical (UN Habitat, 2020; UNHCR, 2020).
Different disasters or public health emergency events, including the current pandemic, may cause community water system interruptions or may pose additional water requirements beyond the usual, which may lead to services being delivered below the required amount and quality (CDC and AWWA, 2012). For example, the prominent recommended prevention measures for COVID-19 in households, schools, and health care settings are good infection prevention and control (IPC) and water, sanitation, and hygiene (WASH) practices, which include hand hygiene, safe cough and sneeze etiquette, environmental cleanliness, and equipment disinfection. These preventive measures must be applied consistently to serve as barriers to human-to-human transmission of the COVID-19 virus in homes, communities, health care facilities, schools, and other public spaces (WHO, 2020; UNICEF, 2020; World Bank, 2020). These key IPC and WASH practices require not only the availability of a safe and adequate quantity of water (UNICEF, 2020) but also extra water beyond normal times. For example, a single 20 s hand wash plus wetting and rinsing uses at least two liters of water; for a family of five, each member washing 10 times a day would mean 100 l of water needed for handwashing alone (Rohilla, 2020). A global study also indicated that the magnitude of the COVID-19 outbreak was much higher in countries with a lower habit of handwashing (Pogrebna and Kharlamov, 2020), where one of the reasons for the poor habit might be a lack of extra water. To meet these higher water demands, a continuous piped water service, the internationally accepted standard for urban water utilities, must be in place; this is a problem that most urban cities in developing countries are still trying to overcome (Kumpel and Nelson, 2016). The water supply service in Dilla town is characterized by intermittent supply with regular interruptions and unfair water distribution, just like many cities in developing countries, with a water consumption rate of less than 20 l per capita per day (Debela and Muhye, 2015; Kanno et al., 2020; Kumpel and Nelson, 2016). Intermittent water supply is mainly caused by a variety of factors, including inadequate water and energy supplies, pipe breakages and leaks, and municipal rationing in response to water shortages (Kumpel and Nelson, 2016; Van den Berg and Danilenko, 2010). It has many adverse implications for users, including poorer water quality and higher costs (Kaminsky and Kumpel, 2018); it increases the risk of contamination and ultimately the disease burden for water consumers (Kumpel and Nelson, 2016). In the absence of a safely managed and reliable public piped water service, residents in struggling and emerging cities purchase water from private sources or obtain it directly from natural sources (UNESCO, 2019). Rainwater can be a good water source when emergencies last from medium to longer periods, where there is time to investigate yields, and if appropriate catchment and storage facilities are available (House and Reed, 1997).
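As a concrete illustration of the handwashing arithmetic cited above, here is a minimal Python sketch; the function name and default figures are ours, taken from the 2 l per wash, 10 washes per day, five-member household example (Rohilla, 2020):

```python
def daily_handwashing_demand(liters_per_wash=2.0, washes_per_day=10,
                             household_size=5):
    """Daily household water demand for handwashing alone, in liters."""
    return liters_per_wash * washes_per_day * household_size

# 2 L x 10 washes x 5 people = 100 L/day, before any other hygienic use
print(daily_handwashing_demand())  # 100.0
```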
When compared with surface water, the raw water quality of rainwater is much safer and less prone to contamination if properly managed, and it is capable of meeting the WHO drinking water standards with no energy requirement in areas with abundant annual rainfall (Parker et al., 2013; Nijhof et al., 2010; Worm and Hattum, 2006; Thomas and Martinson, 2007; Rodrigo et al., 2009; Ndomba and Wambura, 2010; Nguyen et al., 2013; Temesgen et al., 2015). As a result, it has been suggested as an alternative potable water source to the piped system in different setups, including during public health emergencies (Kim et al., 2012; House and Reed, 1997). According to the Pan American Health Organization, rainwater harvesting at health care facilities can be a resilient approach for the health sector during natural and man-made disasters (PAHO, 2017). Rainwater harvesting systems have helped, and have been suggested, in tackling water shortages among schools and universities in different parts of Ethiopia, such as Adama University of Science and Technology (Temesgen et al., 2015), Debretabor University (Andualem et al., 2019), and Addis Ababa University (Adugna et al., 2018). Similar experiences from teaching institutions in other countries, such as the UK (Lau et al., 2014; Shah et al., 2013), Malaysia (Hamid and Nordin, 2011), and Tanzania (Mwamila et al., 2015), revealed that rainwater harvesting systems help to alleviate water shortages for different hygienic requirements and reduce the use of treated, piped water for non-consumptive purposes. Rainwater harvesting can also benefit households. For instance, in Arba Minch, multistory buildings with a roof area of 60 m2 and a mean annual rainfall of 900 mm have the potential to produce 46 m3 of rainwater (Feki et al., 2014). In another study, conducted in Nigeria, a household with a 100 m2 roof was found to have the potential to generate between 15.23 and 30.40 m3 of rainwater, which could meet 27.51-54.91% of non-potable household water demand as well as 78.34-156.38% of household potable water demand for a six-member household (Balogun et al., 2016). A minimum emergency water quantity standard of 15 l of water per person per day (l/p/d) is recommended by the United Nations High Commissioner for Refugees and humanitarian assistance agencies (UNHCR, 2015; Sphere, 2020), whereas WHO (2002) recommends 15-20 l/c/d; for basic hygiene and infection prevention activities in health centers, the minimum is 10 l/outpatient/day and 40-60 l/inpatient/day, and a minimum of 3 l/person/day is recommended for schools (WHO, 2013; Sphere, 2018; Sphere, 2020). Some scholars argue that these minimum emergency water requirements for preventing COVID-19, particularly for hand hygiene, might need to be revised up to 100-200 l per person per day for a family of five, with the daily handwashing frequency increasing from the normal five times to ten times a day (Rohilla, 2020). However, before using rainwater as a source of emergency water supply, access to adequate information is critical. Such information includes the amount of monthly and annual rainfall, rainfall variability, the storage capacity, the emergency water demand, the catchment size, and the local practice of using rainwater harvesting; the legal context must also be well addressed (House and Reed, 1997).
Therefore, our study intends to assess the rainwater harvesting potential at both the household and institutional levels, its adequacy for the emergency water demand for the prevention of COVID-19, and the size of the required storage tanks in Dilla Town, Southern Ethiopia.

Study area and period

The study was conducted in Dilla Town (Fig. 1), located in Southern Ethiopia at a distance of 359 km from the capital city, Addis Ababa, on the way from Addis Ababa to Moyale. It lies at 6°22′ to 6°42′ N latitude and 38°21′ to 38°41′ E longitude, with an altitude of about 1476 m above sea level (Demelash, 2010; Debela and Muhye, 2015). According to data obtained from the Ethiopian National Meteorology Agency (ENMA), the 17-year (2002-2018) mean annual rainfall in the area was 1464 mm (ENMA, 2018). The wettest months occur between March and October and the driest months from November to February. Precipitation is characterized by a bi-modal pattern, with peaks during April and May (the "small rainy" season) and during September and October (the "main rainy" season) (ENMA, 2018). The city's water supply represents an annual consumption of 494,164 m3 in 2018, abstracted from groundwater (70%) and surface water (30%) sources (CWC, 2016; Kanno et al., 2020). However, in recent years, owing to the high rate of urbanization coupled with industrial development and population growth, as well as a change in precipitation patterns, the water available to satisfy demand has decreased radically, representing a 38% deficit between 2016 and 2018 (Debela and Muhye, 2015; Kanno et al., 2020). The use of rainwater harvesting is common in Dilla Town; a simple household-level rainwater harvesting system used in the town, with no first-flush diversion or treatment mechanism, is shown in Fig. 2.

Data collection methods

Rainfall data were obtained from the Ethiopian Meteorology Agency in digital form and further analyzed in a spreadsheet (ENMA, 2018). According to Shakya and Thanju (2013), rainfall is the most unpredictable variable; therefore, reliable rainfall data, preferably for at least 15 years, are required from the nearest station to account for variation in the calculations. Hence, monthly rainfall data for Dilla town for the most recent 17 years were used. The dominant roofs in the town, suitable for rainwater harvesting (Mourad et al., 2017), were corrugated iron sheets; an average roof size of 60 m2 and a runoff coefficient of 0.8 were employed to account for evaporation loss and possible first flush (Thomas and Martinson, 2007). To include households with different ranges of roof sizes, rainwater harvesting potentials for seven typical roof sizes (40, 50, 60, 70, 80, 90, and 100 m2) were calculated.

Statistical variability (rainfall variability)

Variability in monthly rainfall (intra-annual) and in accumulated annual rainfall (inter-annual) was expressed with the coefficient of variation (CV), computed as

$$CV = \frac{S_d}{M_r} \times 100$$

where CV is the monthly/seasonal/annual coefficient of variation (in %), $S_d$ is the monthly/seasonal/annual standard deviation, and $M_r$ is the mean monthly/seasonal/annual rainfall. Seasons were classified following ENMA (2018) as: summer (Kiremet), the heavy-rainfall season (June, July, August, and September); winter (Bega), the dry season with frost in the morning (October, November, December, and January); and autumn (Belg), the season with occasional rain showers (February, March, April, and May).
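As a minimal sketch of this CV calculation in Python (the rainfall values below are hypothetical, and the use of the sample standard deviation is our assumption, since the study does not state which estimator was used):

```python
import numpy as np

def coefficient_of_variation(rainfall_mm):
    """CV (%) of a rainfall series: 100 * standard deviation / mean.

    Uses the sample standard deviation (ddof=1); this is an assumption,
    as the study does not specify the estimator.
    """
    r = np.asarray(rainfall_mm, dtype=float)
    return 100.0 * r.std(ddof=1) / r.mean()

# Hypothetical December rainfall (mm) over a 17-year record
december = [60.1, 75.3, 58.9, 82.0, 66.4, 71.2, 55.8, 90.5, 63.3,
            77.7, 68.0, 59.4, 84.6, 72.1, 61.9, 79.8, 70.2]
print(f"December CV: {coefficient_of_variation(december):.1f}%")
```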
Estimation of the rainwater harvesting potential

The rainwater harvesting potential in our study was calculated using the monthly balance approach. The monthly harvest-able rainwater ($Q_m$) was calculated as the product of the mean monthly rainfall ($\bar{R}_m$), the roof area ($A$), the percentage of the roof area utilized for rainwater harvesting ($\eta$; 50% (0.5) was utilized), and the roof run-off coefficient ($C$), as given in Eq. (2):

$$Q_m = \bar{R}_m \times A \times \eta \times C \qquad (2)$$

According to Balogun et al. (2016), using only mean monthly rainfall for the estimation of rainwater harvesting potential could be misleading, since it hides the rainfall variability that occurs in real-life scenarios. Therefore, they suggested two approaches for computing the confidence limits, namely the confidence interval about the mean monthly rainfall and the confidence interval using the coefficient of variation (COV) of monthly rainfall, as described by Johnson and Kuby (2012) and Bluman (2013). The maximum error of estimate (MEE) is

$$MEE = z_{\alpha/2} \times \frac{\sigma}{\sqrt{n}}$$

where $\sigma$ is the standard deviation of monthly rainfall for each month and $n$ is the sample size (17). The confidence level adopted in our study was 0.95, which gives a confidence coefficient of 1.96. The harvest-able rainwater for the scenarios of the upper confidence limit (UCL) and lower confidence limit (LCL) of the monthly mean rainfall, following Johnson and Kuby (2012) and Bluman (2013) as stated in Balogun et al. (2016), was obtained as

$$Q_{UCL} = (\bar{R}_m + MEE) \times A \times \eta \times C, \qquad Q_{LCL} = (\bar{R}_m - MEE) \times A \times \eta \times C$$

Finally, for the second approach, the harvest-able rainwater for the UCL and LCL of the monthly mean rainfall was calculated analogously using the COV of monthly rainfall, following Balogun et al. (2016).

Proposed basic water requirement for households

For the household emergency water requirement, we used the standard set by Sphere (2020), which is 15 l/c/day, and the standard set by WHO (2002), which is 15-20 l/c/day; we took the maximum, 20 l/c/day, as the emergency water need at the household level for comparison. From the total daily water requirement, 7.5 l was allocated for drinking and cooking, whereas the remainder (7.5 l per capita per day under the Sphere standard and 12.5 l per capita per day under the WHO standard) was allocated for hygienic purposes in the fight against COVID-19, such as frequent hand washing and other personal hygiene. An average family size of five, taken from the Ethiopian Central Statistical Agency (CSA, 2007), was used for the water demand calculation.

Proposed basic water requirement for health facilities and schools

To assess the rainwater harvesting potential of the selected institutions in Dilla Town, the average roof sizes were adopted from similar institutions in Addis Ababa (Adugna et al., 2018), under the assumption that the roof sizes of institutions in the two cities are proportional. The patient loads of the health centers and the hospital, as well as the numbers of students in the schools and colleges, were taken directly from the institutions and used for the water demand calculation, as indicated in Table 1. A total of 305 school days was used for the calculation because most schools are closed (for two months) during the summer; for Dilla University, however, the calculation considered all 365 days of the year because the university gives summer courses.
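Putting Eq. (2), the MEE-based confidence limits, and the household demand figures together, the following Python sketch illustrates the calculation chain. The June rainfall series is hypothetical and the helper names are ours; the COV-based limits, computed differently by Balogun et al. (2016), are omitted rather than guessed at.

```python
import numpy as np

Z = 1.96           # 95% confidence coefficient
UTILIZATION = 0.5  # fraction of roof area used for harvesting (eta)
RUNOFF_C = 0.8     # roof run-off coefficient (Thomas and Martinson, 2007)

def harvest_m3(rain_mm, roof_m2):
    """Eq. (2): Qm = R * A * eta * C; mm over m^2 gives liters, /1000 -> m^3."""
    return rain_mm * roof_m2 * UTILIZATION * RUNOFF_C / 1000.0

def mee_limits(monthly_series_mm):
    """(LCL, mean, UCL) of monthly rainfall via the maximum error of estimate."""
    r = np.asarray(monthly_series_mm, dtype=float)
    mee = Z * r.std(ddof=1) / np.sqrt(len(r))
    return r.mean() - mee, r.mean(), r.mean() + mee

# Hypothetical June rainfall (mm) over a 17-year record
june = [227.2, 180.0, 310.5, 150.2, 205.8, 260.3, 190.7, 240.1, 175.9,
        300.4, 160.8, 220.6, 280.2, 140.5, 210.3, 250.9, 195.6]
lcl, mean, ucl = mee_limits(june)
print(f"June harvest, 40 m^2 roof: {harvest_m3(lcl, 40):.2f}"
      f"-{harvest_m3(ucl, 40):.2f} m^3 (mean {harvest_m3(mean, 40):.2f} m^3)")

# Household emergency demand: 20 l/c/day (WHO, 2002), family of five
demand_m3 = 20 * 5 * 30 / 1000.0  # ~3.0 m^3 per 30-day month
print(f"Monthly household emergency demand: {demand_m3:.1f} m^3")
```

As a consistency check, with the study's reported mean June rainfall of 227.2 mm, the same formula reproduces the stated 3.63 m3 for a 40 m2 roof (227.2 × 40 × 0.5 × 0.8 / 1000 ≈ 3.63).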
Storage size determination

To determine the required storage volume at the household level, the maximum rainwater harvesting potential limit was used. For the institutions, the required monthly water demand multiplied by the length of the dry period gives the required storage capacity.

Annual, seasonal, and monthly rainfall distribution for Dilla town

The average rainfall for the historical period was 1464 mm (Fig. 2). According to the data obtained from the Ethiopian Meteorology Agency, the highest annual rainfall over the study years was 2781.1 mm, recorded in 2008, while the lowest was 974 mm, recorded in 2003. A comparison of annual rainfall over the study period, as shown in Fig. 2, indicates an increase in yearly rainfall between 2003 and 2008. The seasonal variation of rainfall for Dilla town is described in Table 2: winter recorded the highest seasonal rainfall, followed by autumn and summer, respectively. The maximum seasonal rainfall of 1400.1 mm occurred in winter, while the minimum seasonal rainfall of 250.1 mm occurred in summer. The season with the lowest coefficient of variation (COV), 4.45%, was winter, while summer had the highest COV, 39.3%, as shown in Table 2. Based on the Hare (1983) rainfall variability index (the COV expressed in percentage terms), seasonal rainfall was less variable in winter, with an index of <20%; highly variable in summer, with an index of >30%; and moderately variable in autumn, with an index between 20 and 30%. Between 2002 and 2018, Dilla town showed a bi-modal rainfall distribution, with maximum mean monthly rainfall of 227.2 mm in June and 214.5 mm in January (Fig. 4). The minimum monthly rainfall over the 17-year study period was 0 mm (in March 2012, August 2007, and November 2013), while the maximum monthly rainfall was 983.2 mm, recorded in November 2008, followed by 894.2 mm in October 2011 (Fig. 3). The highest variability of monthly rainfall was recorded during October and November, with COVs of 246% and 154.8%, marking the onset of intense rainfall in the rainy season, while the month with the lowest monthly rainfall variability was December, with a COV of 28%. Based on the Hare (1983) index, all months exhibited high variability (COV > 30%) except December, which exhibited moderate variability (20-30%).

Rainwater harvesting potential for households in Dilla town

Among the proposed household roof sizes, June and August recorded the highest and lowest rainwater harvesting potential, respectively. Households with a roof size of 40 m2 have an average rainwater harvesting potential of 3.63 m3 in June and 0.47 m3 in August. Similarly, a household with a roof size of 100 m2 has an average potential to harvest 9.65 m3 of rainwater in June and 1.25 m3 in August (Table 3). The maximum and minimum values (confidence limits) of household-level harvest-able rainwater for each month, calculated using the maximum error of estimate and the coefficient of variation, are given in the supplementary materials (SM1).

Household-level emergency water demand met by rainwater harvesting (RWH)

The estimation of monthly harvest-able rainwater relies mainly on the confidence limits calculated using the coefficient of variation (COV) approach, because it captured the rainfall variability better than the maximum error of estimate approach.
For example, a household in Dilla town with a roof area of 40 or 100 m2 has the potential to harvest 7.2-39.66 or 19.11-105.35 m3 of rainwater, respectively. The highest and lowest values were recorded during June and September, respectively, as indicated in SM1. As indicated in Fig. 5, the rainwater harvesting potential was not adequate to satisfy the emergency water demand during December and January across all roof sizes, whereas larger roofs (100 m2 and above) can satisfy more than 80% of the monthly water demand during the dry months (December and January), and a surplus can be harvested in most months when the maximum rainfall limit is considered. When the lowest limit of the rainwater harvesting potential is considered (determined using the coefficient of variation), a household with a roof area of 100 m2 could satisfy 6.8-145.2% of the emergency water consumption during the wet months, and RWH could not supply water during the dry months (December, January, February, and March). The wet months could provide extra rainwater for the dry months even at the lowest rainwater harvesting potential, as indicated in Figs. 6 and 7.

Institutional rainwater harvesting potential in Dilla town

The average institutional rainwater harvesting potential was highest among the colleges of Dilla University, which can produce a total of 112,449.5 m3 of rainwater annually, with the highest and lowest monthly yields in October and December, respectively, as indicated in Table 4. Health centers in Dilla Town had the lowest rainwater harvesting potential among the institutions examined. Using the coefficient of variation (COV), when rainfall is at its maximum, all the selected institutions could fulfill more than 100% of their demand from rainwater as the single water source, except the health centers (which could achieve only 78.3% of their emergency water demand). The maximum and minimum institutional rainwater potential limits, calculated using both the maximum error of estimate and the coefficient of variation, can be found in the supplementary material (SM1).

Storage requirements

To estimate the size of the storage tank, the maximum and minimum rainwater harvesting limits were applied. Since the coefficient of variation (COV) approach captures variability better, it was also used for determining the households' storage requirements. As indicated in Table 5, for a household with a 40 m2 roof, the minimum and maximum rainwater storage tank sizes were 1 and 7 m3, respectively, while for a household with a 100 m2 roof, the minimum and maximum sizes were 1 and 17 m3, taking into account the effects of rainfall variability on the confidence limits. For the institutions' storage requirements, the monthly demand during the dry months was used. Hence, the colleges in Dilla town need the largest storage, a total of 48,000 m3, i.e., 6,000 m3 per college, to store adequate water for the dry seasons (Table 6).

Discussion

Access to clean water and sanitation is essential and must be available during normal times, with extra capacity delivered in emergencies such as the current pandemic (Tortajada, 2020; GRM, 2009).
In such critical times, the lack of clean and adequate water for drinking and proper hygienic practices becomes a major concern for most urban utilities in developing countries (Kumpel and Nelson, 2016). Many cities around the world obtain their water from great distances, often over 100 km away. Rainwater harvesting could offer a feasible and resilient alternative to this practice of increasing dependence on the upper streams of the water resource supply area, which is not sustainable (UN-Habitat, 2005). In this study, the rainwater harvesting potential and the storage sizes for households with different roof sizes and for selected key public institutions were estimated using 17 years of rainfall data. The intra-annual variation (CV) for Dilla town ranged between 27.9% and 246%, indicating high variability in the rainfall distribution (Aladenola and Adeboye, 2010). This could potentially reduce the rainwater harvesting potential at the household and institutional levels during the driest months. However, since there is adequate harvest-able rainwater during the wet seasons, water-saving practices can compensate for the water shortage during the driest months. Transferring excess water from the wet season for later use in the dry season could be a challenge because it requires large storage tank installations. Because of the high rainwater harvesting potential among households and public institutions in the study area, the storage tank sizes indicated in this study are large, as shown in Tables 5 and 6. This requires high investment capacity, which could be a challenge for institutions with limited budgets, as also reported by Abdulla and Al-Shareef (2006). The rainwater harvesting potential of the selected public institutions could supplement 30.3% of the water supplied by Dilla Town's water supply agency in 2018, as reported by Kanno et al. (2020). This supplement could be increased if other public institutions were involved and if improvements were made to increase the harvest-able roof share above 50%. The institutional rainwater harvesting potential for the Dilla University dormitories and administration offices was in the range of 34,524.5-190,374.5 m3 (each college having an average RWH potential of 14,056.2 m3). This is higher than previously reported findings at Ethiopian universities such as Debre Tabor University (Andualem et al., 2019), where the water that could be collected from dormitory buildings was 24,671.43 m3, and lower than the single-college rainwater harvesting potential reported by Mishra et al. (2020) in India (Amity University Mumbai campus), which was 25,379.89 m3. This discrepancy might be attributed mainly to differences in roof sizes and rainfall variability between the study areas. Emergencies always leave decision-makers caught between fulfilling minimum water quantity and water quality requirements. The priority should be quantity over quality, and all available options to make the water safer should be applied afterward (World Health Organization, 2003). Since washing hands at critical times, together with other hygienic practices, is the primary strategy to prevent and control the further spread of COVID-19, water quantity is expected to have a greater influence than water quality, just as in the case of diarrheal diseases.
Our study showed that Dilla University Referral Hospital can meet 35.4-262.3% of its emergency water supply from rainwater, which is crucial both for infection prevention and in economic terms. For example, hospitals in the US, which are among the institutions with the highest water usage, were estimated to save 81-122 million liters of water per year if they used rainwater (Fulton, 2018). In health care settings, the water used for infection prevention tasks such as laundry and for cleaning floors and other surfaces need not be of drinking water quality, as long as it is used with a disinfectant or a detergent (WHO, 2002; World Health Organization, 2003). Therefore, rainwater can be used for certain infection prevention tasks even at lower quality levels, whereas care must be taken when using rainwater for medical activities such as hemodialysis, which require higher water quality standards; in such situations, rainwater must be used only after approved and recommended water treatment methods have been applied (WHO, 2002; World Health Organization, 2003). Different studies have estimated the rainwater harvesting potential in different countries using rainfall data. Our findings revealed a higher rainwater harvesting potential than a study conducted in Nigeria (Balogun et al., 2016), where a roof size of 100 m2 had a rainwater harvesting potential between 18.16 and 27.45 m3 using the maximum error of estimate, and between 15.23 and 30.40 m3 using the coefficient of variation. In Dilla town, even smaller roof sizes such as 40 m2 gave a higher amount of rainwater (15.71-31.15 m3 using the MEE and 7.2-39.66 m3 using the COV). However, the rainwater harvesting potential was lower (35.14 m3 of harvest-able rainwater) than the comparable finding from Arba Minch (a city also located in southern Ethiopia) of 46 m3 for a similar roof size of 60 m2 (Feki et al., 2014). Rainwater was also found to cover more than half of the institutional emergency water demand in Dilla town. This is comparable to findings from Addis Ababa, Ethiopia, where rooftop RWH from large public institutions could replace 0.9-649% of the water supply depending on the season of the year, indicating the importance of storage facilities for using excess wet-season rainwater later (Adugna et al., 2018). Different standard-setting and humanitarian agencies (WHO, 2020; INEE, 2012; UNICEF, 2020) stress that having strong personal prevention practices like hand washing, together with environmental cleaning and disinfection plans, in place before reopening schools is an important precautionary measure to lower the risk of COVID-19. Therefore, for institutions like schools and health facilities, rainwater harvesting can be a valuable source of water for the strict hygienic requirements of a pandemic in areas with limited or unreliable water supply (UN Water, 2016; Chubaka, 2018). Our findings also have implications for the water security status of households and of Dilla town in general. In a study conducted in Addis Ababa city, Ethiopia (Assefa et al., 2019), water security was assessed along three dimensions, namely water supply, sanitation, and hygiene.
The water supply dimension takes into account variables such as the proportion of the population with piped water supply; the water supply service duration; per capita water consumption; the percentage of non-revenue water (NRW); conformance with water quality standards; and the affordability of the domestic water supply tariff. Our findings indicate that rainwater harvesting can contribute directly to two critical components of water security within the water supply dimension: first, by increasing per capita water consumption at the household level, and second, by making water available at an affordable price at the household and institutional levels. It can also indirectly ease water security problems by reducing the stress on formal water supply services in such emergencies. Self-help RWH water supply systems can enhance water security through easy access, low cost, and ease of management for households and institutions (Chubaka, 2018). In Dilla Town, the main water supply sources are deep boreholes and the Legga Dara River (Debela and Muhye, 2015; World Health Organization, 2013). However, rainwater is a mostly overlooked water source that could easily be an accessible and sustainable source of safe water, as in most countries located in tropical and subtropical climates (Chubaka, 2018; Nduka and Orisakwe, 2010; World Health Organization, 2003). Since the rainwater harvesting potential calculations (for households and institutions) assume that only 50% of the roof area is used, if a household or institution were to utilize 100% of the roof area, the yield would double, which is very promising. Yet it should be noted that the upper- and lower-limit calculations of the harvest-able water volume did not take into account critical real-life limitations associated with tank size, water losses, water pollution, or social and cultural issues that are likely to reduce the volume attainable in practice. Besides, water quality issues must be a priority if rainwater is to replace other water sources for the prevention of COVID-19.

Strength and limitation of the study

The strength of this study is that it addresses the rainwater harvesting potential both for households and for major public institutions, taking rainfall variability into consideration in the calculations. Since hand washing is the simplest, most cost-effective, and most effective prevention strategy, handwashing frequency is expected to increase at both the household and institutional levels; as a result, water demand is also expected to increase. The same holds for other hygienic and infection prevention tasks implemented in the fight against COVID-19, which depend on water for their operation. One limitation was the absence of data on the additional emergency water needed for the increased hygienic demand during pandemics like COVID-19.

Conclusions

This study confirms that rainwater harvesting is a viable water source option for households with different roof sizes, given an average annual rainfall of 1464 mm. Households with roof areas of 40 m2 and 100 m2 have the potential to harvest 7.2-39.66 m3 and 19.11-105.35 m3 of rainwater, respectively.
This potential translates into 19.72-108.66% and 114.3-170.5% of the household emergency water demand for the prevention of COVID-19 for households with 40 m2 and 100 m2 roofs, respectively. Institutions such as Dilla University Referral Hospital (DURH) can cover 94.5-238.5% of the emergency water demand needed for infection prevention tasks using rainwater as the single water source. Taking rainfall variability into consideration, the minimum storage size required for households of all roof sizes was 1 m3, while the maximum storage tank sizes were 7 m3 for a household with a 40 m2 roof and 17 m3 for a household with a 100 m2 roof. The storage sizes estimated for the institutions range from 642 to 48,000 m3. Excluding the rainwater harvested by households, the rainwater harvesting potential of the selected public institutions in Dilla town could supplement 30.3% of the water supplied by Dilla Town's water supply agency in 2018. We conclude that rainwater can be an alternative source of water for emergency water demand in Dilla town. Furthermore, observational studies should be conducted to quantify the actual emergency water demand for all the hygienic and infection prevention measures needed to combat COVID-19 at both the household and institutional levels. The priority to be given to water quantity versus water quality must also be investigated.

Declaration of Competing Interest

The authors declare no conflict of interest, financial or otherwise.

Data availability statement

All relevant data are included in the paper or its Supplementary Information.

Funding source

Dilla University's research and dissemination office partially supported the writing of this research. All remaining costs were covered by the authors alone.
COVID-19 Outcomes in Patients with Hematologic Malignancies in the Era of COVID-19 Vaccination and the Omicron Variant

Simple Summary

This HEMATO-MADRID COVID-19 study assessed COVID-19 outcomes in 1818 hematologic cancer patients from February 2020 to October 2022 across different phases, including the Omicron period. Severe cases were more common in patients over 70 years with comorbidities or chronic lymphocytic leukemia. However, during the Omicron period, rates of severe illness fell notably, especially among vaccinated individuals. Hospitalization, intensive care admissions, and overall mortality decreased in the Omicron phase compared with the pre-Omicron phase, yet mortality rates in hospitalized patients remained high. Older age consistently correlated with higher mortality risk in both phases. Factors such as prior stem cell transplantation, vaccination, and specific treatments were linked to improved survival rates among hematologic cancer patients facing COVID-19.

Abstract

A greater understanding of clinical trends in COVID-19 outcomes among patients with hematologic malignancies (HM) over the course of the pandemic, particularly the Omicron era, is needed. This ongoing, observational, registry-based study with prospective data collection evaluated COVID-19 clinical severity and mortality in 1818 adult HM patients diagnosed with COVID-19 between 27 February 2020 and 1 October 2022 at 31 centers in the Madrid region of Spain. Of these, 1281 (70.5%) and 537 (29.5%) were reported in the pre-Omicron and Omicron periods, respectively. Overall, patients aged ≥70 years (odds ratio 2.16, 95% CI 1.64-2.87), with >1 comorbidity (2.44, 1.85-3.21), or with an underlying HM of chronic lymphocytic leukemia (1.64, 1.19-2.27) had greater odds of severe/critical COVID-19; odds were lower during the Omicron BA.1/BA.2 (0.28, 0.2-0.37) or BA.4/BA.5 (0.13, 0.08-0.19) periods and among patients vaccinated with one or two (0.51, 0.34-0.75) or three or four (0.22, 0.16-0.29) doses. The hospitalization rate (75.3% [963/1279] vs. 35.7% [191/535]), rate of intensive care admission (30.0% [289/963] vs. 14.7% [28/191]), and mortality rate overall (31.9% [409/1281] vs. 9.9% [53/536]) and in hospitalized patients (41.3% [398/963] vs. 22.0% [42/191]) decreased from the pre-Omicron to the Omicron period. Age ≥70 years was the only factor associated with higher mortality risk in both the pre-Omicron (hazard ratio 2.57, 95% CI 2.03-3.25) and Omicron (3.19, 95% CI 1.59-6.42) periods. Receipt of prior stem cell transplantation, COVID-19 vaccination(s), and treatment with nirmatrelvir/ritonavir or remdesivir were associated with greater survival rates. In conclusion, COVID-19 mortality in HM patients has decreased considerably in the Omicron period; however, mortality in hospitalized HM patients remains high. Specific studies should be undertaken to test new treatments and preventive interventions in HM patients.

Introduction

The COVID-19 pandemic caused by the SARS-CoV-2 virus has resulted in more than 750 million cases of COVID-19 worldwide, including more than 6.9 million deaths [1]. Adult patients with hematologic malignancies (HM) have been more substantially affected by COVID-19 than the general adult population and have experienced a higher mortality rate, as they have greater susceptibility to SARS-CoV-2 infection and severe disease because of their immune-deficient status and their use of immunosuppressive treatments [2].
Multiple reports have been published on COVID-19 outcomes in patients with HM [3-10], but most have focused on cases occurring during the pre-vaccination era, between March and December 2020. Therefore, changes in outcomes over time during the vaccination era and across the multiple waves driven by SARS-CoV-2 variants and subvariants are not fully understood. Epidemiologic data from the pre-vaccination era showed that the mortality rate among HM patients was in the range of 13-37% overall and 19-46% for those hospitalized with COVID-19, with high rates reported particularly for patients aged ≥60 years, those with a diagnosis of acute myeloid leukemia (AML) or myelodysplastic syndrome (MDS), and patients receiving active treatment with conventional chemotherapy or monoclonal antibodies [11].

Data from a number of reports in the general population have demonstrated that COVID-19 mortality rates have declined over time, likely reflecting advances in detection, the development of effective COVID-19-directed therapies such as anti-SARS-CoV-2 monoclonal antibodies and new antiviral agents [12-18], the emergence of less virulent SARS-CoV-2 variants, and the introduction of messenger RNA (mRNA) COVID-19 vaccines (BNT162b2 and mRNA-1273) since December 2020 [19-22]. However, there have been few direct comparisons of COVID-19 severity and outcomes in HM patients between the pre-vaccination and vaccination eras. Furthermore, the impact of the highly transmissible Omicron variant on HM patients, and the protection conferred by vaccination and boosters against the severe clinical outcomes associated with Omicron, remain unknown.

Therefore, in order to better understand clinical trends in COVID-19 outcomes among HM patients, it is imperative to consider data from across the multiple waves driven by different dominant circulating variants, and particularly from those driven by the most recent Omicron subvariants [23,24]. To date, three real-world studies have been published on COVID-19 among HM patients, including two conducted during the Alpha- and Delta-dominant periods [25,26] and a third conducted during the Omicron-dominant period [27]. These studies showed high rates of hospitalization (37-53%) and death (5.7-9.2%) due to COVID-19 among HM patients. The last study, which focused on the Omicron-dominant period, reported an overall mortality of 16.5% in hospitalized patients; older age and active malignancy increased mortality, and three doses of the vaccine were protective against progression to critical disease [27]. Although the comparison of mortality rates across studies is not straightforward due to the heterogeneity in methods, the most recent studies reported rates that are lower than those during the pre-vaccination era [9,10] but nevertheless higher than those previously reported in the fully vaccinated general population [28].

In this context, with a rapidly evolving landscape and the initial evidence regarding the impact of COVID-19 on HM patients having become outdated, we evaluated morbidity and mortality over time, from March 2020 to September 2022, in HM patients diagnosed with COVID-19 in the Madrid region of Spain, comparing outcomes between the pre-Omicron and Omicron eras and between the pre-vaccination and vaccination eras. We also analyzed the differing patient characteristics and risk factors associated with severe outcomes and death, and the impact of COVID-19 therapies on outcomes.
Study Design and Participants

HEMATO-MADRID COVID-19 is an ongoing, observational, multicenter, registry-based study with prospective data collection, sponsored by the Madrid Society of Hematology (Asociación Madrileña de Hematología y Hemoterapia, AMHH) [11]. Full methodological details for this study have been reported previously [5]. Briefly, the study population was accrued from 32 healthcare centers with AMHH-affiliated hematologists in the Madrid region of Spain, covering 6.6 million inhabitants. For inclusion in the analysis, HM patients had to be aged ≥18 years and to have had a SARS-CoV-2 infection confirmed by reverse transcription-polymerase chain reaction of a nasopharyngeal swab [29] in the emergency departments, hospital wards (infection while hospitalized), or outpatient clinics of the participating healthcare centers. Patients also required a medical history of HM at any time; their disease could be either active or in remission at the time of COVID-19 diagnosis, which was established based on World Health Organization (WHO) recommendations [30]. Investigators at each participating institution evaluated patients per local practice, when clinically indicated.

This study was supported by grants from the Fundación Madrileña de Hematología y Hemoterapia and the Fundación Leucemia y Linfoma. The study protocol was approved by the local Ethics Committee, and the requirement for written informed consent was waived (CEIm Hospital 12 de Octubre, Spain: ref. 20/182; date of approval: 20 April 2020).

Study Outcomes and Data Collection

COVID-19 clinical severity and mortality, including overall survival and 30-day and 60-day survival probability estimates, were the key study endpoints. Disease severity was assessed within 24 h of admission per World Health Organization guidelines [30], with hospital/intensive care unit (ICU) admissions determined locally based on criteria updated daily during the healthcare emergency period. Data were collected through to the time of the last follow-up visit or death. The key determinants evaluated for their impact on COVID-19 outcomes included pre-infection patient characteristics, the type of HM and treatment received, and aspects of COVID-19 management.

The HEMATO-MADRID COVID-19 registry incorporates deidentified data on factors of relevance to patients with HM and COVID-19. For this analysis, we extracted data on age, sex, and the number of specific comorbidities associated with COVID-19: cardiac disease, pulmonary disease (not including lung cancer), renal disease, diabetes, hypertension, and body mass index ≥35 kg/m2. We also collected data on the type of HM and therapy received. For this analysis, patients were defined as having 'active antineoplastic treatment' if they had received anticancer therapy within 30 days prior to their COVID-19 diagnosis. These therapies were classified as 'conventional chemotherapy', 'low-intensity chemotherapy', 'molecular-targeted therapy', 'immunotherapy', 'immunomodulatory drugs', 'hypomethylating agents', or 'supportive therapy'. Information on COVID-19 management was also extracted.
The analysis time period was sub-divided into four periods, defined according to the dominant circulating SARS-CoV-2 variant (>50% of nationally circulating SARS-CoV-2 lineages among recorded infections). The first period covered the wave in which the D614G SARS-CoV-2 variant was dominant and included HM patients diagnosed with COVID-19 between 27 February 2020 and 15 February 2021. The second period covered the Alpha- and Delta-dominant waves and included patients diagnosed between 15 February and 15 December 2021. The third and fourth periods covered the Omicron BA.1/BA.2-dominant and Omicron BA.4/BA.5-dominant waves, which included patients diagnosed between 15 December 2021 and 31 May 2022, and between 1 June and 30 September 2022, respectively [31,32]. Analyses were also conducted pooling the pre-Omicron (first and second) and Omicron (third and fourth) time periods.

Eligible patients who were entered into the AMHH registry by local investigators between 27 February 2020 and 1 October 2022 were included in the analysis, with all records updated through to the end of September 2022. Patients could be added to the database at any time during their COVID-19 disease course. The study steering committee, with expertise in the research topic and in the study of HM and infectious diseases, reviewed each registered case for completeness and consistency.

Statistical Analysis

Patient- and disease-related factors were characterized overall, by COVID-19 severity (mild/moderate and severe/critical disease, the latter ranging from severe pneumonia to septic shock), and by time of diagnosis (pre-Omicron and Omicron periods). Absolute and relative frequencies were calculated for all determinant factors, as well as the median and interquartile range (IQR) of patients' ages, for all groups analyzed. The available sample size was reported for each factor. The strength of association of each factor with COVID-19 severity was estimated using logistic regression models, overall (whole analysis period) and for the Omicron period. Multivariable analyses, including age, sex, and comorbidity count as covariates, were used to determine adjusted odds ratios (ORs) and 95% confidence intervals (95% CIs) for having severe/critical COVID-19 relative to a reference category for each group of factors. For each population and subgroup of HM patients, 30-day and 60-day survival probabilities were estimated using the actuarial survival method, and p-values were estimated using the log-rank test within each group of factors. Follow-up time was calculated from the time of SARS-CoV-2 diagnosis to the time of the last hospital visit or death. Kaplan-Meier analyses of overall survival were conducted according to time of diagnosis (pre-Omicron, Omicron BA.1/BA.2, and Omicron BA.4/BA.5 periods) and vaccination status (0, 1-2, and 3-4 COVID-19 vaccine doses) for the overall population and for HM patients hospitalized with COVID-19. Pair-wise comparisons of overall survival were carried out using the log-rank test, p-values were adjusted by the Benjamini-Hochberg method, and overall p-values were estimated using the log-rank test. Cox proportional-hazards regression models were used to estimate hazard ratios (HRs) and 95% CIs for the COVID-19 risk of death associated with each factor. Adjusted models for each factor included the same three pre-specified variables: age, sex, and comorbidity count. All statistical analyses were generated using R software (version 4.2.2).
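The authors performed these analyses in R; purely as an illustration of the same workflow (Kaplan-Meier estimation with a log-rank test, a covariate-adjusted Cox model, and an adjusted logistic regression), here is a minimal Python sketch using the lifelines and statsmodels libraries. All data below are simulated and all column names are ours; this is a sketch of the analysis type, not the study's code.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test
import statsmodels.formula.api as smf

# Simulated registry extract (one row per patient); effect directions loosely
# mimic those reported, but every value here is made up for illustration.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "age_ge70":    rng.integers(0, 2, n),
    "male":        rng.integers(0, 2, n),
    "comorbidity": rng.integers(0, 4, n),
    "omicron":     rng.integers(0, 2, n),  # 1 = diagnosed in the Omicron period
})
risk = (0.25 + 0.25 * df["age_ge70"] + 0.10 * (df["comorbidity"] > 1)
        - 0.20 * df["omicron"]).clip(0.02, 0.95)
df["severe"] = (rng.random(n) < risk).astype(int)
df["time_days"] = rng.exponential(120, n).round().astype(int) + 1
df["died"] = (rng.random(n) < 0.6 * risk).astype(int)

# Kaplan-Meier survival by diagnosis period, compared with a log-rank test
kmf = KaplanMeierFitter()
for label, grp in df.groupby("omicron"):
    kmf.fit(grp["time_days"], grp["died"], label=f"omicron={label}")
    print(label, kmf.median_survival_time_)
pre, omi = df[df["omicron"] == 0], df[df["omicron"] == 1]
lr = logrank_test(pre["time_days"], omi["time_days"],
                  event_observed_A=pre["died"], event_observed_B=omi["died"])
print("log-rank p =", lr.p_value)

# Cox proportional-hazards model adjusted for age, sex, and comorbidity count
cph = CoxPHFitter()
cph.fit(df[["time_days", "died", "age_ge70", "male", "comorbidity"]],
        duration_col="time_days", event_col="died")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs

# Adjusted logistic regression for severe/critical COVID-19 (odds ratios)
fit = smf.logit("severe ~ age_ge70 + male + comorbidity", data=df).fit(disp=0)
print(np.exp(fit.params))
```

In the real analysis, hazard ratios and odds ratios would be read off with their 95% CIs, and the Benjamini-Hochberg adjustment would be applied across the pair-wise log-rank p-values (e.g., via statsmodels.stats.multitest.multipletests with method='fdr_bh').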
Characteristics of HM Patients with COVID-19 across Time Periods

Of the 32 hospitals affiliated with the AMHH, 31 centers covering 98% of the Madrid region population reported 2096 cases of HM patients with COVID-19 to the HEMATO-MADRID COVID-19 registry between 28 February 2020 and 1 October 2022 for possible inclusion in this study (Figure 1). Of these patients, 1818 met the eligibility criteria for this analysis.

The median age of the HM patients with COVID-19 included in the present analysis was 70.0 years (IQR 58-78), 57.5% were male, the median number of comorbidities was one (IQR 0-2), and 74.4% and 25.6% had a lymphoid malignancy or a myeloid neoplasia, respectively (Table 1). The most common HM diagnosis was non-Hodgkin lymphoma (NHL), reported in 554/1817 (30.5%) patients, followed by multiple myeloma (MM; 420/1817, 23.1%), chronic lymphocytic leukemia (CLL; 248/1817, 13.6%), acute myeloid leukemia (AML; 148/1817, 8.1%), myelodysplastic syndrome (MDS; 145/1817, 8.0%), and chronic myeloproliferative neoplasm (MPN; 129/1817, 7.0%) (Table 1). Of the 1754 patients with available information, 1228 (70.0%) had received no doses of the COVID-19 vaccine before presenting with COVID-19 disease, 157 (9.0%) had received one or two vaccine doses, and 369 (21.0%) had received three or four doses. Of the 1818 patients in this analysis, 1281 (70.5%) cases were reported in the pre-Omicron time period, including 1186 (65.2%) and 95 (5.2%) in the D614G- and Alpha/Delta-dominant periods, respectively, and 537 (29.5%) cases were reported in the Omicron time period, including 321 (17.7%) and 216 (11.9%) in the BA.1/BA.2- and BA.4/BA.5-dominant periods, respectively. Table 1 details the patient characteristics and COVID-19 management for the pre-Omicron and Omicron periods. In the Omicron period, the median age, the percentage of patients who were male, and the number of comorbidities were lower, and the percentage of patients with a lymphoid malignancy and active cancer therapy was higher, compared with the pre-Omicron period. Reflecting the COVID-19 vaccination roll-out over time, 73/1273 (5.7%) patients had been vaccinated with ≥1 dose among cases reported in the pre-Omicron time period, compared with 453/481 (94.2%) among cases reported in the Omicron time period. Similarly, the pharmacologic therapies administered for COVID-19 differed between time periods, reflecting the introduction over time of remdesivir, nirmatrelvir/ritonavir, and monoclonal antibodies, and the reduction in the use of tocilizumab.
Factors Associated with COVID-19 Severity

Data on the clinical severity of COVID-19 were available for 1781/1818 patients (98.0%), of whom 1020 (57.3%) had mild/moderate disease and 761 (42.7%) had severe/critical COVID-19. Between the pre-Omicron and Omicron time periods, the proportion of cases with severe/critical COVID-19 decreased from 53.5% to 17.6%, and the proportion of patients hospitalized decreased from 75.3% to 35.7% (Table 1).

Factors Associated with COVID-19 Mortality

After a median follow-up of 54 days (IQR 20-147), 462/1817 (25.4%) HM patients with COVID-19 had died; the mortality rate was 31.9% (409/1281) in the pre-Omicron time period and 9.9% (53/536) in the Omicron time period (Figure 2). In the pre-Omicron time period, the COVID-19 mortality risk was greater in patients aged ≥70 years (hazard ratio [HR] 2.57, 95% CI 2.03-3.25), those with ≥2 comorbidities (1.53, 1.15-2.04), and patients receiving conventional chemotherapy (1.35, 1.05-1.74), and lower in those who had received one or two (0.52, 0.30-0.90) or three or four (0.15, 0.04-0.62) vaccine doses. In the Omicron time period, an age ≥70 years (HR 3.19, 95% CI 1.59-6.42) and receipt of allogeneic stem cell transplantation (3.12, 1.17-8.36) were associated with a higher mortality risk (Figure 2). Kaplan-Meier analyses of survival are shown in Figure 3.
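A sketch of how adjusted HRs such as those quoted above (Figure 2) can be derived with a Cox proportional-hazards model, adjusting for the three pre-specified covariates. It uses the lifelines library rather than the R packages of the original analysis, and all column names (including the binary 'age_ge70' indicator) are assumptions.

```python
import pandas as pd
from lifelines import CoxPHFitter

def adjusted_hr(df: pd.DataFrame, factor: str) -> pd.DataFrame:
    """HR (95% CI) for death for `factor`, adjusted for age, sex, and
    comorbidity count. Expects 'followup_days' (diagnosis to last hospital
    visit or death) and 'died' (1 = death observed)."""
    cols = [factor, "age_ge70", "sex", "comorbidity_count",
            "followup_days", "died"]
    cph = CoxPHFitter()
    cph.fit(df[cols], duration_col="followup_days", event_col="died")
    summary = cph.summary              # indexed by covariate name
    return summary.loc[[factor], ["exp(coef)",
                                  "exp(coef) lower 95%",
                                  "exp(coef) upper 95%"]]
```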
Overall and time period-specific 30-day and 60-day survival estimates are summarized in Supplemental Table S3 for all patients and for subgroups, including by malignancy type, cancer therapy, transplantation, and COVID-19 treatment. Among all patients, survival probabilities were 78% (95% CI 76-80) at 30 days and 70% (67-72) at 60 days (Supplemental Table S3). Kaplan-Meier analysis demonstrated significantly better survival among HM patients diagnosed with COVID-19 in the Omicron BA.4/BA.5 or Omicron BA.1/BA.2 periods compared with the pre-Omicron period (both p < 0.0001; Figure 3A). Vaccination status also had a significant impact; overall, HM patients who had received three or four, or one or two, doses of the COVID-19 vaccine had significantly better survival than unvaccinated patients (both p < 0.0001; Figure 3B), and findings were similar for the pre-Omicron period but with p-values of 0.002 and 0.003 for the comparison of patients receiving three or four, or one or two, doses versus unvaccinated patients, respectively (Figure 3C). No difference in survival was seen between the three groups during the Omicron period (Figure 3D). Analyses of 30-day and 60-day survival rates showed that, among all HM patients, older age (p < 0.001), male sex (p = 0.04), a greater number of comorbidities (p < 0.001), and treatment with tocilizumab (p < 0.001) or corticosteroids (p < 0.001) were significantly associated with lower survival rates (Supplemental Table S3). Conversely, having undergone stem cell transplantation (p < 0.001), having received COVID-19 vaccination(s) (p < 0.001), and treatment with nirmatrelvir/ritonavir (p < 0.001) or remdesivir (p < 0.03) were associated with greater survival rates. Additionally, specific HM types, including CML and MPN, and receipt of active cancer therapy, notably immunotherapy, were also associated with greater survival rates. During the pre-Omicron time period, similar associations with survival rates were seen for age, comorbidities, stem cell transplantation, COVID-19 vaccination, and treatment with corticosteroids, whereas during the Omicron time period, survival rates were associated with age, sex, comorbidities, and tocilizumab or corticosteroid treatment. Notably, remdesivir treatment during the Omicron time period was associated with lower survival rates, in contrast to the overall findings (Supplemental Table S3).
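The 30-day and 60-day estimates above come from the actuarial (life-table) method named in the statistical methods. A minimal hand-rolled sketch, assuming per-patient follow-up times in days and a death indicator; the interval breaks at 30 and 60 days mirror the reported estimates.

```python
import numpy as np

def actuarial_survival(time, died, breaks=(0, 30, 60)):
    """Life-table survival at the end of each interval.
    time: follow-up in days; died: 1 = death, 0 = censored."""
    time = np.asarray(time, dtype=float)
    died = np.asarray(died, dtype=int)
    surv, out = 1.0, []
    for lo, hi in zip(breaks[:-1], breaks[1:]):
        at_risk = int(np.sum(time >= lo))
        in_interval = (time >= lo) & (time < hi)
        deaths = int(np.sum(in_interval & (died == 1)))
        censored = int(np.sum(in_interval & (died == 0)))
        n_eff = at_risk - censored / 2.0     # actuarial adjustment
        if n_eff > 0:
            surv *= 1.0 - deaths / n_eff
        out.append(surv)
    return out                               # e.g. [S(30), S(60)]
```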
Factors Associated with COVID-19 Mortality in Hospitalized HM Patients

Of the 1154 patients who were hospitalized, 440 (38.1%) died, with mortality rates of 41.3% (398/963) and 22.0% (42/191) in the pre-Omicron and Omicron time periods, respectively (Figure 4), after respective median observation periods of 50 days (IQR 23-161) and 85 days. In the pre-Omicron time period, the COVID-19 mortality risk was greater in hospitalized HM patients aged ≥70 years (HR 2.13, 95% CI 1.68-2.70), with ≥2 comorbidities (1.43, 1.07-1.90), or receiving conventional chemotherapy (1.33, 1.03-1.72), and lower in patients who had received three or four vaccine doses (0.20, 0.05-0.79). In the Omicron time period, none of the factors analyzed was clearly associated with mortality risk, with wide 95% CIs all overlapping one (Figure 4). Kaplan-Meier analyses of survival are shown in Figure 5.
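For the Kaplan-Meier period comparisons reported around Figures 3 and 5, the statistical methods specify pairwise log-rank tests with Benjamini-Hochberg adjustment. A sketch with lifelines and statsmodels, again with hypothetical column names:

```python
import pandas as pd
from lifelines.statistics import pairwise_logrank_test
from statsmodels.stats.multitest import multipletests

def pairwise_km_pvalues(df: pd.DataFrame) -> pd.Series:
    """BH-adjusted pairwise log-rank p-values across diagnosis periods.
    Expects 'followup_days', 'died', and a 'period' label per patient,
    e.g. 'pre-Omicron', 'Omicron BA.1/BA.2', 'Omicron BA.4/BA.5'."""
    result = pairwise_logrank_test(
        df["followup_days"], df["period"], df["died"]
    )
    raw = result.summary["p"]                       # one row per pair
    adjusted = multipletests(raw.values, method="fdr_bh")[1]
    return pd.Series(adjusted, index=raw.index, name="p_BH")
```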
Overall and time period-specific 30-day and 60-day survival estimates are summarized in Table 2 for all patients and by subgroup. Among all patients, survival probabilities were 67% (95% CI 65-70) at 30 days and 57% (53-60) at 60 days (Table 2). Kaplan-Meier analysis demonstrated significantly better survival among HM patients diagnosed with COVID-19 in the Omicron BA.4/BA.5 (p < 0.0001) or Omicron BA.1/BA.2 (p = 0.003) periods compared with the pre-Omicron period (Figure 5A). Vaccination status also significantly impacted survival; overall, hospitalized HM patients who had received three or four (p < 0.0001) or one or two (p = 0.0005) doses of COVID-19 vaccine had significantly better survival than unvaccinated patients (Figure 5B), and similar findings were seen for the pre-Omicron period but with respective p-values of 0.03 and 0.05 (Figure 5C). No difference in survival of hospitalized HM patients was seen between the three groups during the Omicron period (Figure 5D). Actuarial 30-day and 60-day survival rates showed that, among all hospitalized HM patients, an older age (p < 0.001), a greater number of comorbidities (p < 0.001), and treatment with corticosteroids (p < 0.001) were significantly associated with lower survival rates (Table 2). Conversely, having undergone stem cell transplantation (p < 0.001), having received COVID-19 vaccination(s) (p < 0.001), and treatment with remdesivir (p < 0.001) or monoclonal antibodies (p = 0.02) were associated with greater survival rates. During the pre-Omicron time period, similar associations with survival rates were seen for age, comorbidities, stem cell transplantation, COVID-19 vaccination, and treatment with remdesivir or corticosteroids, whereas during the Omicron time period none of the factors analyzed were associated with survival rates (Table 2).

Discussion

The findings from this large registry-based study demonstrate that COVID-19 outcomes among HM patients have considerably improved over the course of the pandemic, with the hospitalization rate having fallen from 75.3% in the pre-Omicron time period to 35.7% in the Omicron era and the mortality rate having fallen from 31.9% to 9.9%. However, mortality remains high in HM patients hospitalized with COVID-19, at 22.0%. We found that HM patients diagnosed with COVID-19 during the Omicron period had approximately five-fold lower odds of having severe/critical COVID-19 (OR 0.21), a 52.6% lower risk of hospitalization, and a 63.9% lower risk of 30-day overall mortality than those diagnosed with COVID-19 during the pre-Omicron period. Among the factors related to this improvement are the lower disease severity associated with the new Omicron variants [33], the roll-out of intensive COVID-19 vaccination programs, and the introduction of more effective COVID-19 therapies such as nirmatrelvir/ritonavir, remdesivir, and monoclonal antibodies.
As of the end of the first quarter of 2023, the trajectory of the COVID-19 pandemic remains unclear, particularly for HM patients. Our current analysis shows that many of the risk factors that were strongly associated with COVID-19 mortality in the early phase of the pandemic now have an attenuated or no association with mortality during the Omicron-dominant period. During the pre-Omicron period, we found that an age ≥70 years, the presence of >1 comorbidity, and receiving active conventional chemotherapy were associated with a higher COVID-19 mortality risk in all HM patients and in hospitalized HM patients, consistent with our previous report [5] and other studies [2][3][4]6,8], whereas having received a primary-series vaccination and at least one booster was associated with a lower in-hospital mortality risk. In contrast, during the Omicron time period, our multivariable analysis showed that, of these factors, the only one remaining independently related to the risk of death was age ≥70 years. This finding is in line with two reports from the EPICOVIDEHA survey in HM patients with COVID-19 due to the initial Omicron variant and subvariants, in which only advanced age and active cancer were associated with higher mortality on univariable analysis [26,27]. Taken together, these findings underline the changes seen over time in the risk factors associated with COVID-19 mortality among HM patients during the Omicron-dominant period. However, it is unclear whether these changes are due to primary-series and booster COVID-19 vaccinations, changes in the propensity of the virus to cause severe disease, improvements in disease management, or changes in the clinical profile of HM patients with COVID-19.

With regard to COVID-19 management, our findings show that the use of corticosteroids was associated with an increased mortality risk overall and in both the pre-Omicron and Omicron time periods, confirming our previous results and data from EPICOVIDEHA [34,35]. Based on this robust evidence, we recommend avoiding the use of corticosteroids in HM patients with COVID-19 [36]. Similarly, in our series tocilizumab treatment did not improve COVID-19 outcomes in HM patients, and we therefore do not recommend its use in this setting. In contrast, treatment with remdesivir and monoclonal antibodies in hospitalized HM patients, as well as with nirmatrelvir/ritonavir in the overall population, was associated with a reduced risk of death, in line with what was reported in the EPICOVIDEHA study [34]. Although data from well-designed clinical trials, specifically in HM patients, are not currently available, our findings and those from EPICOVIDEHA suggest that these are reasonable therapeutic strategies in high-risk HM patients.
Among the strengths of this study, which represents our first analysis of COVID-19 severity and mortality among HM patients in the Omicron era, are its prospective, comprehensive collection of clinical and outcome data on HM patients with COVID-19, the use of multivariable analysis to identify independent risk factors for COVID-19 mortality, the long period covered, including the length of follow-up, and the fact that our patient series is highly representative of this population. A limitation of our study is that it is based on registry data. Although to the best of our knowledge the registry includes all HM patients with COVID-19, the true patient population may be larger because of low rates of testing or misdiagnoses in the first period of the pandemic. Treating physicians established close contact with their patients and special access paths to facilitate the care, and the inclusion in the registry, of virtually all patients with hematologic neoplasms, particularly those under active treatment. Another limitation is that our case series incorporates a heterogeneous patient population with multiple different HM; nevertheless, the size of the population, including the large numbers of patients with specific malignancies (more than 100 patients in six out of nine HM), the detailed reporting by HM, and the long follow-up could in part mitigate this perceived limitation.

Conclusions

This study provides a rare and valuable framework showing strong evidence of change in the clinical picture and mid-term outcomes over more than two years of the COVID-19 pandemic across the main subtypes of hematological malignancies. COVID-19 mortality in HM patients has decreased considerably in the Omicron period of the pandemic, and the clinical management of patients with COVID-19 has improved thanks to the addition of new antiviral therapies and monoclonal antibodies. However, mortality in hospitalized HM patients remains high. We suggest that specific studies of novel COVID-19 therapies in immunocompromised patients should be undertaken with the aim of further improving outcomes, and that HM patients should receive active protection against SARS-CoV-2 infection and severe outcomes through vaccination and preventive interventions.

Figure 1. Flow diagram. Patients with hematologic malignancies who were reported as having COVID-19 and who were included in the present analysis.

Figure 2. COVID-19 mortality in the pre-Omicron and Omicron time periods. Figures show the numbers of patients who died (no. of events) by subgroup and the relative hazard ratio (HR) for COVID-19 mortality between associated subgroups (HR = 1 for reference subgroup in each set).
Figure 3. Kaplan-Meier analyses of survival outcomes in HM patients with COVID-19. Figures show the survival estimates among HM patients with COVID-19 (A) according to time period (pre-Omicron, Omicron BA.1/BA.2, and Omicron BA.4/BA.5), and (B) overall and (C,D) in the pre-Omicron and Omicron time periods among patients who were unvaccinated or who had received one or two, or three or four, vaccinations at the time of their COVID-19 diagnosis.

Figure 4. COVID-19 mortality in hospitalized HM patients in the pre-Omicron and Omicron time periods. Figures show the numbers of patients who died (no. of events) by subgroup and the relative hazard ratio (HR) for COVID-19 mortality between associated subgroups (HR = 1 for reference subgroup in each set).

Figure 5. Kaplan-Meier analyses of survival outcomes in HM patients hospitalized due to COVID-19. Figures show the survival estimates among HM patients hospitalized due to COVID-19 (A) according to time period (pre-Omicron, Omicron BA.1/BA.2, and Omicron BA.4/BA.5), and (B) overall and (C,D) in the pre-Omicron and Omicron time periods among patients who were unvaccinated or who had received one or two, or three or four, vaccinations at the time of their COVID-19 diagnosis.

Table 1. Baseline characteristics of, and therapy received by, patients with hematologic malignancies and COVID-19.

Table 2. Actuarial 30-day and 60-day survival in hospitalized patients with hematologic malignancies and COVID-19, overall and in pre-Omicron and Omicron time periods.
2024-01-24T06:17:22.807Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "4687a833c1dd23c34875de2e4ae9886dfd2210cd", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/16/2/379/pdf?version=1705396138", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "25d68093c85244bc577a7c9198c37ec5821a077a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
44190574
pes2o/s2orc
v3-fos-license
Preliminary Study on Wearable Devices based on Artificial Intelligence Algorithms

In recent years, artificial intelligence technology has advanced continuously. With the introduction of artificial intelligence algorithms, networked hardware systems whose devices have matured can be intelligently integrated, so that energy consumption is reduced and data-transmission efficiency is improved through algorithm optimization. In this paper, wireless data transmission in the network is designed based on the boost algorithm and the Q-agent algorithm. In the context of path optimization on a minimum tree model and a short-circuit model, the transmission efficiency is improved and the energy consumption rate is reduced. The interface is prepared and the model is simulated on the MATLAB platform, with a view to developing a robust, high-precision real-time interactive platform for wearable Internet of Things devices.

INTRODUCTION

With the rapid development of networking hardware technology and the implementation of the Internet Plus policy in recent years, wearable devices have gradually come within people's reach. Li et al. summarized the development course and prospects of wearable devices in "Research on the industrial chain and development trend of the wearable device industry in China". Based on an analysis of the economic development of the networking industry, the authors preliminarily investigated the problems caused by the market behavior of wearable devices and the new industry, which has much guiding significance (Yunji, 2014). Li et al. studied small electronic equipment and intelligent devices in "Research and implementation of adaptive wireless transmission for wearable devices". By exploring a variety of adaptive wireless data transmission schemes and optimizing data storage with appropriate compression, they developed interactive graphical interface software; that is, the preparation and algorithms for data communication were realized (Stephan, 2013; Shakhakarmi, 2014). From the perspective of system algorithm design, the problems of human action recognition and its real-time performance were addressed by Lv et al.
in "research on human behavior recognition technology based on wearable sensor network", e.g., in the identification delay for 5 seconds, the recognition accuracy of the system reached 93.6%.Finally, in order to solve the problem of efficient behavior aware, they proposed a new sensing method based on the passive wearable high frequency RFID technology, and put forward an efficient data completion and feature extraction algorithm to deal with the inherent challenges of RFID technology.According to the recognition requirement of hard real-time behavior, a recognition prototype system of human behavior based passive wearable sensor network is realized.From the real data sets of experimental results, it could be found that, the recognition accuracy of the prototype system reached 93.6%, yielding, the platform for human action recognition is feasible and efficient, providing a reliable algorithm and hardware support for the organic combining of hardware and software of wearable devices (ósca, 2012).In summary, wearable devices have a certain foundation in the hardware, whose key is the influence of the system data real-time interaction algorithm and efficiency on energy consumption.Through the design of advanced algorithms, the data exchange algorithm can be optimized(minimum energy consumption rate).The development and application of modern intelligent algorithm can meet the requirements.In this paper, the network wireless data transmission is well designed based on the boost algorithm.With the context of path optimization on minimum tree model and the short-circuit model, the transmission efficiency is improved and the energy consumption rate is reduced.The interface is prepared and the model is simulated by MATLAB platform with a view to developing a good robustness and high precision real-time interactive platform of wearables devices of internet of things. WBAN principle of sensing According to the real-time adaptive dynamic optimization algorithm, the transmission efficiency can be optimized on the basis of wearable devices hardware system.This algorithm, through the iterative evolution, can capture the external signal in real time and make a vector operation, accelerating the learning rate.This kind of Agent network architecture is divided into three operational processes: print vector collection , self-learning strengthen cluster , and the self judgment cluster after learning.After print the vector collection, we map the self-learning strengthen cluster, i.e., the computing layer of network.The self-learning strengthen cluster delivers the displacement modal of the vectorization set to the Agent matrix and integrate it via optimization.Finally, we judge the convergence of self judgment cluster and chat with the outside via boost algorithm in order to achieve highly efficient transmission and feedback. According to the self-learning set, the adaptive ability in the unsupervised mode can be obtained.We use the boost algorithm to optimize the maximum threshold field, and set the external incentive function and relevant self learning momentum factor to optimize the mode state of tensor e output.With the help of the previous genetic effect and the next genetic effect, we set the maximum convergence criterion.When the result is convergent or reaches the set threshold value, one optimal learning is completed.The principle of algorithm is shown in Figure .3. where t r is the feedback threshold and  is the penalty term. 
The feedback mapping can be expressed as a corresponding formula, and by means of the π-reading method its state tensor can be obtained.

The design of BP Adaboost

With the help of the multi-layer mapping of Adaboost, we obtain a high-robustness, high-accuracy physical-layer cluster via tensor operations at each level. The operating mechanism is as follows: the rank tensor space mapping (x, y) is extracted and integrated, with the initial mapping weight being 1/m. The convergence solution is obtained by finite iteration. When updating the iterative algorithm, the mapping weight function is remapped in each step, i.e., over the set {f_1, f_2, ..., f_n}. By solving the functional, a high-robustness, high-accuracy level-mapping functional group can be obtained (Chen, 2015; Liu, 2014).

Assuming that a BP function is used as the base vector, a dynamic combination is made by Adaboost. The algorithm flow chart is shown in Figure 4. The algorithm operates as follows (a minimal code sketch is given after the figure list below):

(1) Initialization of the sample tensor. We select m dimensions from the tensor at random and set the initial weights, where g(t) and y are, respectively, the mapping matrix and the expected matrix.
(2) Training and mapping the layered tensor. The layered tensor is divided into t groups, and the normalized mapping is obtained by training.
(3) Update of the proportion. The error ε_t of each mapping is labeled.
(4) Strategy adjustment, carried out according to the adjusting formula.
(5) Obtaining the weighted strong mapping. T mature mapping tensors are obtained after reaching the specified number of iterations, i.e., the strong functional.

SOLVING THE MODEL

The three-dimensional shape of the human body is complex. With the help of a simplifying method, we obtain the node-abstraction complement graph. Through the extraction of key points, we illustrate the topology graph and theoretically investigate the configuration of the wearable devices. The topology graph is depicted in Figure 5. We wirelessly link the key points according to the WBAN network; the connected graph is illustrated in Figure 6. Based on the boost algorithm, it is not difficult to optimize the combined mapping physical base group. With the help of self-learning of the momentum factor, the energy loss in the Internet of Things can be reduced (the optimal value of the momentum factor after learning is 0.8). The more energy the nodes accept, the better; after learning, when the momentum factor is 0.8, the accepted energy is maximal and the loss is minimal. Based on the minimum tree model, it is likewise not difficult to optimize the connection of the wearable devices. We use the set protocol for real-time dynamic data transmission, and we generalize the Prim algorithm by means of a graphical user interface program, with the optimized connection diagram shown in Figure 7. Figure 8 shows the energy consumption along the path.

Figure 1. Diagram of the network architecture. In Figure 1, S is a vector set, the model is the Agent transmission vector operation cluster, and the output is a feedback adaptive matrix.
Figure 2. Algorithm model of the Agent.
Figure 3. Diagram of the basic principle.
Figure 4. The algorithm flow chart.
Figure 5. The topology graph of key points.
Figure 6. Network connection diagram.
Figure 7. Diagram of the optimized wireless connection path.
Figure 8. Diagram of the momentum-factor learning curve.
Figure 9. Diagram of the accepted energy by nodes.
Figure 10. The mapping relationship between the number of sink-path coupling nodes and the network survival cycle.
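The weighting scheme in steps (1)-(5) above is the standard Adaboost recipe. The sketch below makes it concrete in Python; it is an illustration only: decision stumps stand in for the paper's BP (neural-network) base mappings, labels are assumed to be in {-1, +1}, and the MATLAB simulation itself is not reproduced.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, T=20):
    """X: (n_samples, n_features); y: labels in {-1, +1}."""
    y = np.asarray(y)
    m = len(y)
    w = np.full(m, 1.0 / m)             # step (1): uniform weights 1/m
    ensemble = []
    for _ in range(T):                  # step (2): train the next weak mapping
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        eps = float(np.sum(w[pred != y]))        # step (3): weighted error
        eps = min(max(eps, 1e-10), 1.0 - 1e-10)  # guard against 0 or 1
        alpha = 0.5 * np.log((1.0 - eps) / eps)
        w *= np.exp(-alpha * y * pred)           # step (4): strategy adjustment
        w /= w.sum()                             # renormalize the weights
        ensemble.append((alpha, stump))
    return ensemble                     # step (5): the weighted strong mapping

def adaboost_predict(ensemble, X):
    score = sum(alpha * f.predict(X) for alpha, f in ensemble)
    return np.sign(score)
```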
2018-06-02T06:06:07.845Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "41aa70b2b823eb61c5f6beaf75c91cca8010090b", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.21311/001.39.12.20", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "41aa70b2b823eb61c5f6beaf75c91cca8010090b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
255441220
pes2o/s2orc
v3-fos-license
Ultrasensitive and amplification-free detection of SARS-CoV-2 RNA using an electrochemical biosensor powered by CRISPR/Cas13a

Graphical abstract

Introduction

The Coronavirus disease 2019 (COVID-19) outbreak swiftly expanded across the world and became a worldwide health emergency due to its high rate of transmission. Early diagnosis and prompt medical intervention for those at higher risk can mitigate serious complications from COVID-19. An ongoing theme of the COVID-19 pandemic is the need for accurate, rapid, and cost-effective methods of detection in infected individuals. However, particular biomarkers, mostly nucleic acids and proteins, are required for reliable detection of COVID-19 infection [1]. As a result, current diagnostics generally depend on polymerase chain reaction (PCR) and antibody-based technologies [2,3]. Although these approaches are precise, sensitive, and specific, they require costly equipment and reagents, centralized diagnostic services, and highly skilled staff, all of which pose obstacles, particularly in underserved areas [4]. Therefore, there is high demand for innovative approaches that tackle the challenges associated with conventional detection strategies and provide straightforward, time-efficient, and cost-effective procedures for detecting COVID-19 biomarkers.

The development of a mobile phone-based CRISPR-Cas13a assay for direct detection of SARS-CoV-2 from nasal swab RNA was reported by Fozouni et al. The assay achieved a sensitivity of 100 copies per milliliter and detected pre-extracted RNA from a set of positive clinical samples in a short period of time [25]. Although these methods are sensitive and capable of detecting COVID-19 quickly, most of them require several tedious preparation steps, such as nucleic acid amplification (NAA) and/or reverse transcription (RT), as well as primer design. As a result, developing universal detection methods for SARS-CoV-2 RNA that not only achieve PCR-like sensitivity but also avoid NAA-related issues is crucial.

In this study, a NAA-free CRISPR/Cas13a-based multiplexed electrochemical biosensor (E-CRISPR) is reported for detecting SARS-CoV-2 RNA sequences (i.e., the S and Orf1ab genes) collected from clinical samples. The high sensitivity of the biosensor arises from the enzyme's collateral activity as well as the intrinsic sensitivity of the electrochemical technique, whereas the specificity results from a target-specific CRISPR RNA (crRNA) in the CRISPR/Cas13a complex, which directs Cas13a to the targeted RNA sequence [22]. To construct the E-CRISPR, a nonspecific thiolated reporter RNA (reRNA), whose other terminus was labeled with methylene blue (MB) or ferrocene (Fc), was immobilized on a gold-nanostructured electrode via sulfur-gold chemistry and then backfilled with 6-mercapto-1-hexanol (MCH) to prevent nonspecific adsorption. Following that, the sensing surface was exposed to the Cas13a-crRNA-target RNA assembly. The Cas13a cleavage capability is triggered in the presence of the target RNA, resulting in cleavage of the redox probe-labeled reRNA and a decrease in the electrochemical signal. The robust turnover nature of Cas13a allows thousands of reporter cleavages per single target RNA binding, leading to signal amplification and, eventually, quantitative readout of SARS-CoV-2 RNA [19,22]. In the absence of the target sequence, the Cas13a cleavage activity is inhibited, and the redox probe-labeled reRNA remains intact, retaining the electrochemical signal.
Even across highly related RNA sequences differing by a single nucleotide substitution, the E-CRISPR can detect SARS-CoV-2 genes with excellent specificity and with sensitivities of 2.5 and 4.5 ag/µL (26.2 and 53.5 copies/µL for the S and Orf1ab genes, respectively) within an hour. Furthermore, the assay was tested on clinical samples and found to be in good agreement with qRT-PCR results. We believe that this multiplexed, NAA-free biosensing system will be widely used in the diagnosis of viral infections and a variety of genetic diseases. The method developed here offers the following benefits: (i) multiplexed detection, to avoid producing false-negative results; (ii) NAA-free detection, to evade NAA-related issues while maintaining PCR-like sensitivity; (iii) high specificity and the ability to distinguish between closely related RNA target sequences differing by a single nucleotide substitution; and (iv) a low LOD that meets the sensitivity requirement and could potentially be used to detect SARS-CoV-2 RNA targets in the early stages of the disease, when the viral gene load is low.

Instrumentation

Field emission-scanning electron microscopy (FE-SEM) imaging was performed with a Carl Zeiss-Sigma instrument (Carl Zeiss, Germany) at an accelerating voltage of 20 kV. X-ray photoelectron spectroscopy (XPS) scanning was performed with a ThermoFisher Scientific instrument (K-alpha+) using an Al Kα (mono) anode at 150 W in a 10⁻⁷ Pa vacuum. Electrochemical experiments were carried out using CHI 1030C and CHI-660E (CH Instruments, USA) electrochemical analyzers. For the analysis of clinical samples, a QuantStudio™ 5 Real-Time PCR System (Applied Biosystems, Massachusetts, USA) was used.

Electrochemical measurements

All electrochemical measurements were performed in a three-electrode system using dual screen-printed electrodes (SPGEs) on the CHI instruments. Cyclic voltammetry (CV) was carried out in a K₃[Fe(CN)₆] solution. Chronocoulometry experiments were performed at a potential step of 0.5 V (+0.1 to −0.4 V) with a pulse width of 0.5 s and a pulse interval of 0.0025 s. SWV measurements were recorded in SWV measuring buffer at a potential range from −0.6 to +0.5 V, a frequency of 15 Hz, an amplitude of 2.5 × 10⁻² V, and a step potential of 5 × 10⁻³ V. All measurements were performed at least in triplicate at room temperature (RT).

RNA in vitro transcription

T7 transcription from synthesized DNA oligos was used to create the S and Orf1ab RNA sequences. Prior to in vitro transcription, a T7 promoter was added to each DNA template sequence using a PCR primer set (Table S2). The S and Orf1ab gene DNA sequences were then transcribed using the AmpliScribe™ High Yield Transcription kit as directed by the manufacturer. In brief, all reaction components except the AmpliScribe T7 RNA Polymerase were first brought to RT and then combined and mixed in the following order: 1 μg template DNA with the appropriate promoter, 6.5 μL sterilized nuclease-free water, 2 μL AmpliScribe T7 10X reaction buffer, 1.5 μL of 1 × 10⁻¹ M of each nucleotide (ATP, CTP, GTP, UTP), 2 μL of 1 × 10⁻¹ M DTT, and 0.5 μL RiboGuard RNase Inhibitor. Following that, 2 μL of AmpliScribe T7 RNA Polymerase was added and mixed. The resulting mixture was then incubated for 3 h at 42 °C with intermittent inversion. After the reaction was complete, 5 × 10⁻² M EDTA was added to the mixture to remove the magnesium pyrophosphate formed. The mixture was then treated with RNase-free DNase I to remove the DNA template.
Eventually, the RNA products were purified using TRIzol LS reagent. The purified RNA transcripts were kept at −80 °C for future use.

Fabrication of biosensing surface

Dual SPGEs with two working zones were employed for fabricating the biosensing surface. The SPGEs were cleaned and activated using previously described procedures prior to construction of the biosensor [9]. These electrodes were then used as the substrate for nanostructured gold electrodeposition and reRNA capture. The electroplating conditions were optimized at 120 s, a concentration of 3 × 10⁻² M, and a plating potential of 0.2 V. Following electrodeposition, thiolated reRNA (2 × 10⁻⁶ M) was immobilized onto both gold-nanostructured working surfaces overnight at 4 °C in a humid environment. It is worth noting that the thiolated reRNA was activated using TCEP solution (1 × 10⁻² M) for 1 h at RT in darkness. The reRNA-modified gold-nanostructured electrodes were then thoroughly washed with WB, dried under a flow of nitrogen gas, and passivated using MCH solution (1 × 10⁻⁵ M in 1X PBS, pH 7.4) for 10 min. Following the MCH treatment, the biosensor was washed with WB for 5 min. The biosensing surface was then dried with nitrogen gas before being treated with the CRISPR system; it can be stored in 1 × 10⁻² M Tris buffer containing 1 × 10⁻¹ M NaCl at 4 °C for a short period.

Cas13a-crRNA assembly

LwCas13a was expressed from the pC013-Twinstrep-SUMO-huLwCas13a vector (Addgene, Massachusetts, USA) and purified by Ni-NTA affinity chromatography (Fig. S7). Cas13a-crRNA duplex assembly was performed in a tube containing 1.25 × 10⁻⁷ M purified Cas13a and 6.25 × 10⁻⁸ M crRNA in nuclease-free assay buffer. The tube was then placed in an incubator at 37 °C for 10 min to allow the assembly reaction to proceed. Subsequently, for RNA target detection, variable amounts of S or Orf1ab gene target RNA were added to the Cas13a-crRNA duplex and incubated for 10 min at 37 °C. Next, the reaction mixture was introduced to the reRNA-modified sensor to carry out the collateral activity at 37 °C for 3 h. After the on-chip CRISPR reaction, the sensor was thoroughly washed using WB.

Biosensor performance assessment

Calibration curves were obtained in the presence of both COVID-19 RNA oligos (S and Orf1ab genes) over a range of 1 × 10⁻¹⁷ to 1 × 10⁻¹¹ M. Experiments with one or more missing components were also carried out as controls; the control samples lacked one or more of the following components: (1) LwCas13a, (2) crRNA, (3) RNA target, and (4) reRNA. The specificity of the biosensor was evaluated using synthesized mismatched SARS-CoV-2 RNA sequences (5 × 10⁻¹³ M), as well as synthesized Influenza-A RNA sequences (5 × 10⁻¹³ M). Finally, the performance of the biosensor was evaluated for detection of the COVID-19 RNA fragments in clinical samples. For electrochemical detection, a 1 × 10⁻² M Tris buffer containing 1 × 10⁻¹ M NaCl was applied as the electrolyte. SWV was used to record the signal changes before and after introduction of the Cas13a-crRNA-target complex, using a potential range from −0.6 to +0.5 V, a frequency of 15 Hz, an amplitude of 2.5 × 10⁻² V, and a step potential of 5 × 10⁻³ V. The signal retention J/J₀ was obtained by comparing the resulting current density before (J₀) and after (J) the introduction of the Cas13a-crRNA-target complex to the biosensor.
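A small sketch of the signal-retention calculation defined above, i.e., J/J₀ over paired replicate SWV readings. The numeric values are illustrative placeholders, not measured data.

```python
import numpy as np

def signal_retention(j_before, j_after):
    """Mean J/J0 and sample SD over paired replicate peak current densities."""
    j0 = np.asarray(j_before, dtype=float)   # before the CRISPR step (J0)
    j = np.asarray(j_after, dtype=float)     # after the CRISPR step (J)
    ratios = j / j0
    return ratios.mean(), ratios.std(ddof=1)

# hypothetical triplicate peak current densities (arbitrary units)
mean_ret, sd = signal_retention([12.1, 11.8, 12.4], [4.9, 5.2, 4.7])
print(f"J/J0 = {mean_ret:.2f} +/- {sd:.2f}")
```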
Standard PCR method

RNA samples extracted from 39 anonymized respiratory clinical specimens were used as the RNA targets for the CRISPR-based electrochemical biosensor. Samkwang Medical Laboratories (Seoul, South Korea) provided the SARS-CoV-2-positive samples in the form of purified RNA under IRB code number S-IRB-2020-029-09-17. They were ready-to-use samples that did not require pre-treatment before use. The isolated RNA samples were subjected to RT-qPCR analysis for comparison with the electrochemical results. Two different primer-probe sets were used in the clinical SARS-CoV-2 RNA detection assays. The primer sets for the S and Orf1ab genes were designed by Park et al. [37]. All primer sets were synthesized by Integrated DNA Technologies. RT-qPCR assays were performed using TaqPath™ 1-Step RT-qPCR Master Mix (ThermoFisher Scientific, Applied Biosystems™, USA) on a QuantStudio 5 Real-Time PCR System. Each 20 μL reaction mixture contained 5 μL of RT-qPCR master mix, 1 μL of each 10 μM forward and reverse primer, 2 μL of 100 nM probe, 1.2 μL of RNase-free water, and 2 μL of template. The thermal cycling conditions were 15 min at 50 °C for cDNA synthesis, 2 min at 95 °C for reverse transcriptase inactivation and pre-denaturation, and 45 cycles of 5 s at 95 °C and 30 s at 55-60 °C.

Concept and construction of E-CRISPR for the detection of SARS-CoV-2

Scheme 1 represents the overall workflow of E-CRISPR for the detection of synthetic, in-vitro-transcribed (IVT) SARS-CoV-2 RNA sequences as well as SARS-CoV-2 RNA sequences extracted from patient samples. This platform takes advantage of the collateral cleavage activity of the CRISPR-Cas13a assay and electrochemical biosensing simultaneously for detecting SARS-CoV-2 RNA sequences. Given that LwCas13a's collateral activity favors uracil (U) and adenine (A) cleavage [18,23], a reRNA sequence containing U/A ribonucleotides was designed and used for the biosensing. The collateral cleavage activity was investigated based on the Cas13a-crRNA duplexes targeting the S and Orf1ab target RNAs. First, a triplex of Cas13a-crRNA-target RNA was created by combining S and Orf1ab target RNAs with their corresponding Cas13a-crRNA duplexes. Following that, the Cas13a-crRNA-target RNA was incubated on the biosensing surface. Only in the presence of a target RNA that can complement the corresponding guide region positioned in the crRNA is the Cas13a collateral activity triggered and the ssRNA cleavage process initiated. The cleavage process removes the redox labels (MB and Fc) from the biosensing surface, causing the electrochemical signal to decrease.

Evaluation of the optimized conditions for the E-CRISPR biosensing platform

Due to the low quantity of SARS-CoV-2 RNA sequences in clinical samples, the detection sensitivity is vital, and it can be improved by optimizing various parameters affecting the biosensing performance.

Scheme 1. The E-CRISPR working principle and its main components. The redox probe-conjugated reRNA-modified biosensor is exposed to the enzymatically activated Cas13a-crRNA-target RNA triplex. Activated Cas13a cleaves the reRNA, resulting in the release of the redox probe from the reRNA and, eventually, a decrease in the electrochemical signal.

To create a gold-nanostructured surface, the dual SPGEs were electrodeposited with a gold solution. Nanostructure formation increases the surface-to-volume ratio, allowing for more reRNA capture.
Cyclic voltammetry was used to investigate the active surface area of the smooth and gold-nanostructured electrodes (GN/SPGE) (Fig. S1a). Although both electrodes exhibit a similar cathodic peak due to electrochemical reduction of the gold oxide formed during the anodic scan, the reduction peak current of the nanostructured electrode is significantly higher than that seen on the smooth gold surface because of its larger surface area. FE-SEM was used to examine the surface roughness of both the smooth gold electrode and the GN/SPGE. The GN/SPGE surface micrograph shows a nano-flaked structure, whereas the gold substrate shows a smooth structure, consistent with the electrochemical results (Fig. S1b, c). Additionally, the chemical composition of the GN/SPGE surface and its elemental states were investigated by XPS. Based on the XPS survey scan in Fig. S1d, the gold element clearly dominates, with an Au atomic percentage of 93.06%. Fig. S1e showed that the GN/SPGE surface was partially oxidized. The high-resolution spectrum of the Au 4f core could be characterized by three pairs of Au 4f7/2 and Au 4f5/2 spin-orbit couplings. The most significant pair, with binding energies (BEs) of 84.9 and 88.9 eV, corresponded to elemental gold (Au⁰), whose atomic percentage, calculated from the relative peak areas, was 85.3%, whereas the other pairs corresponded to the two stable gold oxide states, Au⁺ (BEs of 86 and 89.8 eV) and Au³⁺ (BEs of 87.8 and 91.3 eV), accounting for atomic percentages of 9.2% and 5.5%, respectively.

To achieve optimal reRNA immobilization on the biosensing surface, various concentrations of MB/Fc-labeled reRNA were immobilized on the gold-nanostructured electrode and the electrochemical signals of the redox probes were monitored. The MB/Fc oxidation currents increased continuously as the concentration of reRNA increased up to 2 × 10⁻⁶ M and then plateaued (Fig. S2). As a result, a concentration of 2 × 10⁻⁶ M reRNA was chosen for subsequent experiments. Furthermore, the length of the immobilized reRNA was evaluated (Fig. 1). Because of the difference in exposed length, we assumed that different lengths of reRNA would result in different cleavage efficiencies. Therefore, different lengths of reRNA were tested at the same concentration using the same CRISPR-Cas13a reaction conditions. As Fig. 1 shows, in the presence of a short reRNA the electrochemical oxidation current is larger than that of a long reRNA owing to the short electron-transfer distance between the electrode surface and the redox probe. Consequently, short reRNAs produce a high background current, whereas long strands produce a low background current. As a result, the signal changes for long reporters are comparable with those of short strands, and the reRNA length has an insignificant effect on the signal variations. Finally, because of its greater reproducibility, the 14 nt reRNA was chosen for additional experiments. In the following phase, the effect of various passivation agents of varying lengths was investigated in order to achieve optimal cleavage activity and thus improved signal changes. As shown in Fig. 1, the greatest signal changes were obtained in the presence of MCH, so MCH was chosen as the preferred passivation agent. A passivation agent with a long carbon chain reduces the electrostatic repulsion between reRNA strands and confines their steric movement, which can lower the redox tag's electron-transfer kinetics, resulting in decreased signal changes.
A short-length passivation agent, on the other hand, would be unable to lift the reRNA to the upright position and would not sufficiently confine the steric movement of the reRNA, resulting in loosely oriented reRNA strands and, as a result, a high background signal. In the final step of optimizing the parameters affecting the biosensing surface performance, the biosensor's shelf life was evaluated under humidified conditions at 4 °C (Fig. S4). The electrochemical signal was stable for nearly three days, which is satisfactory for clinical applications [15].

The Cas13a enzyme's collateral activity is crucial for the E-CRISPR biosensing platform to transduce the electrochemical signal and directly affects the detection sensitivity. Therefore, for electrochemical detection of SARS-CoV-2 RNA, the potential factors influencing the collateral activity were investigated (e.g., the LwCas13a concentration for the cleavage process and the catalytic activity time). First, different concentrations of Cas13a-crRNA (at a 2:1 ratio) were tested in response to the same concentration of the S or Orf1ab gene (1 × 10⁻⁸ M). As shown in Fig. S5a, signal retention decreased as the Cas13a-crRNA concentration increased from 2.5 × 10⁻⁸ to 1.25 × 10⁻⁷ M. At concentrations above 1.25 × 10⁻⁷ M, signal retention increased because Cas13a's activity towards the nonspecific reRNA was reduced by Cas13a's large size, which hindered its access to the reRNA. As a result, the optimal concentration for the Cas13a-crRNA duplex collateral activity was determined to be 1.25 × 10⁻⁷ M. Furthermore, the collateral cleavage activity time was evaluated, since reRNA cleavage is a time-dependent process (Fig. S5b). Extending the incubation time to 3 h resulted in more cleaved reRNA and thus less signal retention, whereas a longer incubation period had no further effect on signal retention. This could be because the solution contains a low concentration of target RNA sequences, resulting in an insufficient number of active Cas13a enzymes; the total catalytic activity of Cas13a is then inadequate for further improvement in signal changes at very low concentrations of the target genes. The effect of incubation temperature on the collateral cleavage activity was also investigated. The signal retention of the cleavage process was highest at 25 °C and decreased at 37 and 42 °C, demonstrating that the higher the temperature, the more efficient the cleavage activity. However, signal retention at 37 and 42 °C was comparable, indicating that increasing the temperature above 37 °C did not further increase the cleavage activity. As a result, 37 °C was chosen as the optimal temperature for the CRISPR cleavage process, in accordance with the published literature [24].

E-CRISPR for detecting SARS-CoV-2 RNA sequences

Based on the optimized conditions for the biosensing surface and the collateral activity, the E-CRISPR platform's performance was evaluated for simultaneous, multiplexed, highly sensitive, and specific detection of the S and Orf1ab genes. A wide dynamic range spanning six orders of magnitude, from 1 × 10⁻¹⁷ to 1 × 10⁻¹¹ M, was achieved, with LODs of 2.5 and 4.5 ag/µL (26.2 and 53.5 copies/µL) for the S and Orf1ab genes, respectively. To obtain calibration curves, Cas13a-crRNA-target RNA solutions containing various concentrations (1 × 10⁻¹⁷ to 1 × 10⁻¹¹ M) of the S and Orf1ab genes were prepared, incubated at 37 °C for 3 h, and then applied to the reRNA-modified gold-nanostructured electrode (Fig. 2).
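The calibration data produced this way are fitted with a four-parametric sigmoidal (logistic) curve, as described in the next paragraph. Below is a sketch of such a fit with scipy; the retention values are placeholders rather than the paper's data, and the initial-guess parameters are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(logc, bottom, top, log_ec50, hill):
    """Four-parameter logistic in log10(concentration) space."""
    return bottom + (top - bottom) / (1.0 + 10 ** ((logc - log_ec50) * hill))

logc = np.arange(-17.0, -10.0)                    # 1e-17 ... 1e-11 M
jj0 = np.array([0.97, 0.93, 0.84, 0.70, 0.52, 0.38, 0.30])  # placeholders
popt, _ = curve_fit(four_pl, logc, jj0, p0=[0.3, 1.0, -14.0, 1.0])
print(dict(zip(["bottom", "top", "log10_EC50", "hill"], popt)))
```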
The measured faradaic current changes from SWV for both the S and Orf1ab genes were then fitted to four-parametric sigmoidal regression curves. LODs of 2.5 and 4.5 ag/µL (26.2 and 53.5 copies/µL) were achieved for the S and Orf1ab genes, respectively. It is worth noting that the LODs obtained by the E-CRISPR platform are comparable to, and even outperform, previously published LODs for SARS-CoV-2 RNA strands (Table 1). Overall intra- and inter-assay variability of less than 10% was also achieved for the E-CRISPR (Fig. S6).

Clinical measurements and standard method

RNA samples extracted from nasopharyngeal swab samples of 23 COVID-19-positive patients and 16 healthy people were analyzed to test the feasibility of the E-CRISPR platform for SARS-CoV-2 detection in real samples (Fig. 3). For electrochemical detection, RNA sequences isolated from clinical samples were used as the target RNAs in the E-CRISPR platform. Each sample was tested using three different multiplexed biosensors. The signals obtained from COVID-19-positive patients for the S and Orf1ab genes were normalized to their blank counterparts. As illustrated in Fig. 3, the CRISPR-powered biosensor clearly discriminated COVID-19-positive patients from negative ones. All negative samples exhibited an electrochemical signal similar to the blank sample (J/J₀ > 0.85), whereas positive samples had a lower signal, with the majority of them having a J/J₀ < 0.7.

Fig. 2. E-CRISPR analysis of the S and Orf1ab genes. Calibration curves for the (A) S gene and (B) Orf1ab gene. A four-parametric logistic fit was used to fit the data, yielding LODs of 2.5 and 4.5 ag/µL for the S and Orf1ab genes, respectively. The error bars represent the standard deviation (SD) for n = 3 replicates.

The cleavage of the MB/Fc-labeled reRNA in the presence of the Cas13a-crRNA-target triplex and the removal of the redox labels from the biosensing surface cause the significant drop in the signal. The cleavage phenomenon is controlled by the stage of the disease and the viral RNA load of the clinical samples: in the advanced stage of the disease, the higher viral RNA load leads to more cleavage of the MB/Fc-labeled reRNA and, eventually, a lower electrochemical signal. Our CRISPR-powered biosensing system's LODs of 2.5 and 4.5 ag/μL for the S and Orf1ab genes, respectively, are lower than the reported viral RNA loads in clinical samples [52]. In addition to an adequate LOD, the relatively short turnaround time and multiplexed detection of our proposed biosensor may make it a suitable candidate for point-of-care applications. Finally, the CRISPR-powered biosensing system's results were compared with those of the gold standard, RT-qPCR. The electrochemical signals correlated well with the Ct values, indicating that the biosensor can be used as an alternative for COVID-19 detection assays.

Specificity and control test

To investigate the CRISPR-powered biosensor's specificity, various target RNAs differing from the S and Orf1ab genes by a single nucleotide or by several nucleotides, as well as synthetic Influenza-A RNA, were tested (Fig. 4a). For this, a concentration of 5 × 10⁻¹³ M of each strand was used. A variety of RNA strand combinations were also used, including the complementary S and Orf1ab genes (Sg-Og); Sg-Og plus single-mismatched S and Orf1ab genes (Sg-Og_SMM); Sg-Og plus multi-mismatched S and Orf1ab genes (Sg-Og_MMM); and Sg-Og plus Influenza-A (IAV) (Sg-Og_IAV). The final concentration of each strand in solution was 5 × 10⁻¹³ M.
It is worth noting that all sample solutions contained the Cas13a and the crRNAs complementary to the S and Orf1ab strands. For both S and Orf1ab genes containing single- or multi-base mismatches, the Cas13a collateral activity is not triggered because the mismatched targets do not sufficiently hybridize to the crRNA, and consequently the enzyme's activity cannot be activated. No signal changes were observed for IAV RNA either, because the noncomplementary bases prevent hybridization of the IAV target strand with the crRNA, so the enzyme's cleavage activity cannot be initiated. In contrast, the S and Orf1ab genes produced low electrochemical signals. The signal reduction is caused by perfect hybridization of the S and Orf1ab genes with the corresponding crRNAs, which then activates the enzyme's cleavage activity. Samples containing the S and Orf1ab genes together with non- or partially complementary (i.e., IAV, SMM, and MMM) strands showed a similar pattern and yielded a low electrochemical signal, demonstrating the high specificity of the CRISPR platform. Control experiments were also carried out to validate the E-CRISPR performance in the absence of the target RNA and/or other biosensing assay components. As shown in Fig. 4b, if one or more components (e.g., target RNA, crRNA, Cas13a) are missing, Cas13a's trans-cleavage activity is not triggered and it cannot cleave the reRNA, resulting in no change in the electrochemical signal. This evidence shows that the E-CRISPR platform can detect SARS-CoV-2 RNA sequences only when all of the components are present (Fig. 4b).

Conclusions

A CRISPR/Cas13a-powered electrochemical biosensor was developed for highly sensitive, specific, rapid, multiplexed, and nucleic acid amplification-free detection of SARS-CoV-2 RNA fragments. Excellent figures of merit, namely the sensitivity, specificity, and relatively short turnaround time, as well as the feasibility for clinical samples, were experimentally demonstrated. The CRISPR platform delivered ultralow limits of detection of 2.5 and 4.5 ag/µL for the S and Orf1ab genes, respectively, which meets the sensitivity requirement. Its excellent specificity provided the capability of differentiating target strands among related RNA target sequences, and we foresee this technology becoming a powerful tool for detecting SARS-CoV-2 RNA targets in the early stage of the disease. We are therefore working to improve the proposed method's specifications for point-of-care applications (e.g., replacing the CHI-660E potentiostat with a portable one, integrating a microfluidic device with the biosensor to continuously accumulate the RNA strands onto the sensing surface, etc.) so that it can be used not only in central laboratories but also as a near-patient test anywhere.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

No data was used for the research described in the article.
Heavy metal co-resistance with antibiotics amongst bacteria isolates from an open dumpsite soil

Heavy metal co-resistance with antibiotics appears to be synergistic in bacterial isolates via similar mechanisms. This synergy has the potential to amplify antibiotic resistance genes in the environment, which can be transferred into clinical settings. The aim of this study was to assess the co-resistance of heavy metals with antibiotics in bacteria from a dumpsite, in addition to physicochemical analysis. Sample collection, physicochemical analysis, and enumeration of total heterotrophic bacteria counts (THBC) were all carried out using standard existing protocols. Identified bacterial isolates were subjected to antibiotic sensitivity testing using the Kirby-Bauer disc diffusion technique, and the resulting multidrug-resistant (MDR) isolates were subjected to a heavy metal tolerance test using the agar dilution technique with increasing concentrations (50, 100, 150, 200 and 250 μg/mL) of the study heavy metals. THBC ranged from 6.68 to 7.92 × 10^5 cfu/g. Of the 20 isolates subjected to antibiotic sensitivity testing, 50% (n = 10) showed multiple drug resistance, and these were B. subtilis, B. cereus, C. freundii, P. aeruginosa, Enterobacter sp., and E. coli (n = 5). At the lowest concentration (50 μg/mL), all the MDR isolates tolerated all the heavy metals, but at 250 μg/mL, apart from cadmium and lead, all test isolates were 100% sensitive to chromium, vanadium and cobalt. The control isolate was resistant only to cobalt and chromium at 50 μg/mL and was sensitive to the other heavy metals at all concentrations. The level of co-resistance shown by these isolates is a cause for concern.

Introduction

The therapeutic paradigm changed with the introduction of antibiotics into clinical use [1,2], which have continued to save lives around the globe [3]. Sadly, these gains are challenged globally by one of the greatest public health issues of our time, antibiotic resistance [3-5]. The range of infections caused by infectious agents and the development of antimicrobial resistance have outpaced the development of newer antibiotics [3,4]. At an alarming rate, the spectrum of antibiotics effective for treating infectious diseases continues to narrow due to antibiotic resistance. The acquisition of resistance to antibiotics gives a microorganism a survival advantage over sensitive ones [6]. The mechanisms of acquiring antibiotic resistance vary and include drug inactivation, the use of efflux pumps, and even modification of targets [7]. The issues emanating from the misuse of antibiotics in medicine and agriculture are twofold: antibiotic pollution and an abundance of antibiotic resistance genes (ARGs) [8]. Estimates indicate that antibiotic resistance carries different economic burdens in different climes [9]. In the United States of America (USA), ARGs are estimated to cost about $20 billion and $35 billion in direct and societal costs, respectively. In Europe, the cost is placed at £1.4 billion yearly, while in developing countries estimates are hard to come by as health care is largely privately funded [9]. More complicated is the fact that these ARGs have been shown to co-evolve with heavy metal resistance genes [10,11]. In trace amounts, heavy metals are needed by microbes for proper functioning, but at higher concentrations they become toxic [12,13].
Due to the ever-increasing human population, industrial activities continue to accumulate these metals in several ecosystems, and as a result the autochthonous microbes have adapted means of handling these heavy metals [13]. Some of these techniques include the use of efflux pumps, complexing the metals, and using them as electron acceptors [14,15]. Studies have shown that co-resistance of antibiotics with heavy metals operates via similar functional and structural strategies that are either plasmid- or chromosomally borne [16,17]. Antibiotic resistance genes occur naturally in various environments in low abundance. However, studies have shown that their abundance increases in the presence of several pollutants such as heavy metals, crude oil and sewage [10,11,18]. Where this happens, these ARGs can move from one microorganism to another via horizontal and vertical mechanisms, triggering multi-drug resistance and compounding the clinical outcomes of infectious diseases. Furthermore, these ecosystems can act as reservoirs for ARGs crossing over from environmental into clinical settings [11,19]. There are concerns that metal-polluted ecosystems, such as the open dumpsites used in the management of solid wastes in developing countries, are becoming more common these days [20,21]. Solid wastes are often poorly handled in developing countries such as Nigeria and are managed using open dumpsites, which are more common than the more sanitary landfills [21]. These solid wastes are composed of degradable and non-degradable fractions [19,20]. Dumpsites receive large amounts of wastes daily from cities, and they are thus capable of accumulating heavy metals as well as other pollutants [19,20,23-26]. These and other factors have the capacity to shape the microbial community of an open dumpsite soil and its leachate [19]. The abundance of plasmid-bearing bacteria has been shown to be comparatively higher in contaminated soils [17], as has that of resistance plasmids in non-agricultural soils [27]. Heavy metal and antibiotic cross-resistance in impacted soil has been reported [10]. These environments could be reservoirs for antibiotic resistance genes and human pathogens [19,28], where these genes are turned on at low concentrations of antibiotics and have the potential to be transferred to other hosts [28]. Antibiotic resistance genes have been discovered in all kinds of environments, even amongst so-called pristine and controlled environments like compost, supporting the fact that these genes have their origin in the environment [1,8,29,30]. Yet the role of environmental factors in the spread of antimicrobial resistance is largely overlooked. Several studies have established antibiotic and heavy metal co-resistance in polluted environments [26,31], but similar studies are few for dumpsite environments. The aim of this study was to evaluate the co-resistance of heavy metals amongst multi-drug resistant bacterial isolates from an open dumpsite soil.

Study site

Soil samples were collected from an open dumpsite popularly called Lemna dumpsite. The dumpsite is located at coordinates 4°13′-5°15′N and 8°15′-8°21′E in Calabar Municipality, the capital city of Calabar, Cross River State, Nigeria. The study area is characterized by tropical wet and dry seasons, high annual rainfall of 3500-4000 mm, mangrove vegetation and an estimated run-off of 90%. The dumpsite receives huge quantities of wastes on a daily basis from domestic and industrial areas in the state.
It covers a total area of approximately 3,265 m² and is less than 1 km from the Ikot Effanga Mkpa stream, a source of untreated water for many households around the study site (see Fig. 1).

Collection of samples

Soil samples from three different locations (3 m apart from each point) within the dumpsite were collected in triplicate, totaling 9 samples, as previously reported [26]. From each sampling point, 100 g of soil was collected using a sterilized hand trowel from a depth of 0-15 cm after removing the topsoil. The collected samples were first placed in oven-sterilized aluminum sample plates and then into sterile plastic containers before being transported to the laboratory for further analysis. Heavy metals (vanadium, chromium, nickel, lead, cobalt, cadmium and copper) and commercial antibiotic discs (Hardy Diagnostics, USA) were obtained from Globus Scientific Store in Calabar, Cross River State.

Analysis of physical and chemical parameters

Physicochemical parameters including pH, electrical conductivity, THC, total moisture, TOC, sodium, phosphorus, magnesium, potassium and calcium were evaluated as previously reported [32,33].

Heavy metals analysis

Collected soil samples were dried and passed through a 2 mm sieve. Exactly 5 g of each soil sample was digested with concentrated nitric acid and transferred to a 100 mL Teflon beaker. Thereafter, 10 mL of ultrapure concentrated HNO3 (Merck) was added and the sample was heated to 100, 150, 210 and 280 °C on a hot plate for 0.5, 0.5, 0.5 and 2 h, respectively, with a DK-20 heating digester. Finally, 2 mL of 1 N HNO3 was added to the residue and the solution was evaporated again on the hot plate, continuing until every sample was completely digested. After cooling, a further 10 mL of 1 N HNO3 was added. The solution was then diluted and filtered through a 0.45-1.0 µm nitrocellulose membrane filter. The concentration of heavy metals in the soil was determined using an inductively coupled plasma atomic emission spectrometer (ICP-AES, Jobin Yvon JY-24) [34]. An international certified reference material (CRM 029) was used and analyzed at the beginning and end of each batch of samples to assess the accuracy and precision of the analytical method. Heavy metal concentrations determined in the standard reference materials were compared with certified values, and the instrument performance during analysis was monitored using internal standards. The calculated recoveries ranged from 90.2% to 108%, with regression coefficients (r²) ranging from 0.91 to 0.97. The limits of detection were 0.052 (Pb), 0.018 (Cd), 0.015 (Mn), 0.025 (Ni), 1.790 (Zn), 0.079 (Cr), 0.02 (Fe) and 0.01 (Co) mg/kg. Concentrations of heavy metals were expressed in mg/kg dry weight [35].

Enumeration of THBC

Total heterotrophic bacterial counts (THBC) were enumerated as previously reported [36] (see the worked example below). After serial dilution, exactly 0.1 mL of the desired dilution was used for enumeration on nutrient agar to which 50 μg/mL of nystatin had been added to inhibit fungal growth; plates were incubated at 37 °C for 24 h and bacterial counts were recorded after 24 h of incubation. The same volume was also plated out on Eosin Methylene Blue (EMB) agar and incubated at 37 °C for 24 h.

Purification and identification of isolates

Distinct colonies were picked from both agars and sub-cultured twice onto freshly prepared nutrient agar plates for purification. Pure isolates were characterized using the Gram reaction and biochemical tests as reported previously [37].
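As a worked example of the plate-count arithmetic behind the THBC enumeration above, here is a minimal sketch under the standard assumption that 1 g of soil per mL was suspended in the first dilution tube; the colony count and dilution are illustrative, not the study's raw data:

```python
# A minimal sketch of converting spread-plate colony counts to cfu/g,
# matching the 0.1 mL plating volume described above.
def cfu_per_gram(colonies, dilution_factor, volume_plated_ml=0.1):
    # cfu/g = colonies / volume plated (mL) * dilution factor,
    # assuming 1 g of soil was suspended per mL of the initial dilution.
    return colonies / volume_plated_ml * dilution_factor

# Illustrative example: 79 colonies on a plate from the 10^-3 dilution.
print(f"{cfu_per_gram(79, 1e3):.2e} cfu/g")
# -> 7.90e+05, the order of magnitude reported for the dumpsite samples
```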
Heavy metal resistance test

Isolates showing multiple drug resistance (resistance to at least 2 antibiotics) were selected for the heavy metal resistance test, which was done as previously reported [26,39]. Briefly, a loopful of a 12-16 h old bacterial culture in tryptic soy broth was inoculated onto Mueller-Hinton agar plates bearing different concentrations (50, 100, 150, 200 and 250 μg/mL) of the test heavy metal (chromium, vanadium, cobalt, cadmium and lead). Incubation was done at 37 °C for 24 h. After incubation, plates showing growth were regarded as resistant, while those without growth were taken as sensitive; a scoring sketch is given below. The most sensitive isolate from the sensitivity test was used as a control against the heavy metals.

Data analysis

Replicate readings for the physicochemical parameters were analyzed using one-way analysis of variance (ANOVA), with significance set at p < 0.05.
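To make the scoring of the tolerance test above concrete, here is a minimal sketch with a hypothetical data layout rather than the study's records: growth is tallied as resistant, no growth as sensitive, per metal and concentration.

```python
# A minimal sketch (hypothetical observations, not the study's records) of
# scoring the agar-dilution tolerance test described above.
from collections import defaultdict

# (isolate, metal, concentration in ug/mL) -> grew on the plate?
observations = {
    ("E. coli 1", "Cd", 50): True,   ("E. coli 1", "Cd", 250): False,
    ("B. subtilis", "Cd", 50): True, ("B. subtilis", "Cd", 250): True,
    ("B. cereus", "Pb", 50): True,   ("B. cereus", "Pb", 250): False,
}

tally = defaultdict(lambda: [0, 0])  # (metal, conc) -> [resistant, total]
for (isolate, metal, conc), grew in observations.items():
    tally[(metal, conc)][0] += int(grew)
    tally[(metal, conc)][1] += 1

for (metal, conc), (resistant, total) in sorted(tally.items()):
    pct = 100.0 * resistant / total
    print(f"{metal} at {conc} ug/mL: {pct:.0f}% resistant (n={total})")
```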
Results

The results of the physicochemical and heavy metal analyses are presented in Tables 1 and 2. From Table 1, the parameters examined were pH, electrical conductivity, total hydrocarbon content (THC), total moisture, total organic carbon, phosphorus, magnesium, potassium, calcium, silt, clay, and sand. The pH values ranged from 6.50 to 7.60 and were highest in DS2. The EC values ranged from 13.20 to 16.60 µS/cm, with DS3 having the highest value. The THC was highest in DS2, with a value of 9.50 mg/kg, and lowest in DS3, with a value of 0.95 mg/kg. Total organic carbon (TOC) was highest in DS1, followed by DS3 and then DS2, with values of 5.0%, 4.5% and 3.5%, respectively. Potassium values ranged from 0.50 to 0.90 mg/kg. Calcium, on the other hand, gave much higher values, with DS1 and DS2 at 8.50 and 8.00, respectively. The amounts of sand, silt and clay in the samples, as presented in Table 1, varied significantly from one sampling location to another. Across all the heavy metals examined in the dumpsite, DS2 recorded the highest values, namely 22.67, 4.70, 0.80, 1.35, 5.05 and 0.37 mg/L for iron, zinc, cadmium, lead, manganese, nickel and chromium, respectively. Fe and Zn recorded their lowest values in DS3, while Cd had its lowest value of 0.20 mg/L in DS1. When compared to the World Health Organization (WHO) [40] permissible standards, Fe, Cd and Pb were higher in all the study samples. Furthermore, the values of Ni in DS2 and DS3 were higher than the WHO [40] limits, but not in DS1. A similar trend was observed for Co, whose values were higher than the WHO limits [40]. However, the Cr values in DS2 were higher than the WHO [40] acceptable limit, but not those of DS1 and DS3. Table 3 shows the total heterotrophic counts for the various dumpsite samples: the microbial counts were 7.92 × 10^5, 6.91 × 10^5 and 6.68 × 10^5 cfu/g for DS1, DS2 and DS3, respectively. The counts were lower on EMB, at 2.84 × 10^4, 2.36 × 10^4 and 3.04 × 10^4, respectively. The bacterial isolates obtained in our study were B. subtilis, B. cereus, S. marcescens, Enterobacter sp., V. cholerae, C. freundii, Yersinia sp., Salmonella sp. and Shigella sp. In addition, E. coli isolates were obtained and were 10 in number. Some of these isolates, especially E. coli, P. aeruginosa, B. subtilis and B. cereus, were obtained in all three dumpsite samples (Table 4). In our study, 20 isolates were subjected to antibiotic sensitivity testing. A total of 50.00% (n = 10) showed multiple drug resistance, that is, were resistant to at least two of the test antibiotics, as shown in Tables 5 and 6. From Table 5, Shigella sp., Salmonella sp., Vibrio cholerae, S. marcescens and Yersinia sp. were 100% sensitive to the test antibiotics, whereas B. subtilis, B. cereus, C. freundii, Enterobacter sp. and P. aeruginosa showed multiple drug resistance. The test isolates in Table 5 were 100% susceptible to ceftriaxone and chloramphenicol, while those in Table 6 showed 100% susceptibility only to ciprofloxacin. Table 7 shows the heavy metal resistance profile of the multi-drug resistant isolates; the profile was concentration dependent. At 50 μg/mL, all the isolates showed high resistance: 100, 90, 90, 100 and 100% for Cr, V, Cd, Co and Pb, respectively. At 100 μg/mL, resistance was 90, 70, 90, 90 and 90% for Cr, V, Cd, Co and Pb, respectively. At 150 μg/mL, the isolates showed 60, 60, 80, 50 and 60% resistance, respectively, to the test heavy metals. At 250 μg/mL, however, the isolates were most sensitive: apart from Cd and Pb, to which 10% and 20% of the isolates remained resistant, respectively, the isolates were 100% sensitive to the rest of the heavy metals. The control isolate was sensitive to most of the heavy metals at the lowest concentration, although it showed resistance to Cr and Co; unlike the test MDR isolates, it showed 100% sensitivity at all other concentrations.

Discussion

Dumpsites are usually rich in organic wastes in solid and liquid forms [21]. It is therefore not surprising that our soil samples were abundant in bacteria. Osazee et al. [41] reported lower bacterial counts in dumpsite and control soil samples, of 1.8 × 10^4 and 1.7 × 10^3 cfu/g, respectively. However, much higher THBC counts, ranging from 7.4 × 10^6 to 1.2 × 10^7 cfu/g, were reported earlier for dumpsites in Bwari, Abuja [42]. Counts higher than ours were also obtained from a dumpsite in Port Harcourt by William and Hakam [43], who reported THBC ranging from 4.4 × 10^7 to 1.2 × 10^8 cfu/g; our THBC ranged from 6.68 × 10^5 to 7.92 × 10^5 cfu/g. In this study, E. coli had a prevalence of 50%. Serratia sp., Klebsiella sp., Pseudomonas sp. and Bacillus sp. have been reported as overlapping species in both polluted and pristine soil samples [26]. Furthermore, Osazee et al. [41] reported similar isolates, namely Bacillus sp., Pseudomonas sp., Aeromonas sp., Enterobacter sp., Klebsiella sp., and Staphylococcus sp., and William and Hakam [43] reported similar isolates in addition to Streptococcus and Staphylococcus species. Dumpsites often accumulate pollutants that can alter the physicochemical parameters of the receiving environment as well as microbial diversity and function. In an earlier study, a pH range of 6.7-7.6 was reported by Bassey et al. [21] for the same study site; they also reported electrical conductivity values ranging from 14.30 to 15.6 µS/cm. Both the pH and electrical conductivity values were comparable to, though slightly higher than, our findings. The TOC, Na, P, Mg and Ca values reported in our study agreed with those of previous studies by Bassey et al. [21] and Bassey et al. [22] for the same study site. However, our findings are lower than those previously reported by Adeyemi et al. [44] and Narty et al.
[45] for dumpsite leachates and soil samples, respectively. In another earlier study, higher values of EC, Pb, Fe, Zn, Cr, Cu and TOC were reported for a Benin dumpsite than for our study sites, with values that ranged from 164.00 to 540 µS/cm, 0.025 to 0.015, 3.09 to 2.41, 1.99 to 1.47, and 0.079 to [40]. Antibiotic resistance is a complex global problem [46-49]. Resistance to antibiotics lowers their effectiveness and increases the economic burden on individuals, public health and society at large [50]. Antimicrobial drug resistance is driven by misuse, and its genes can be transferred to other hosts in the environment [5,48]. In our study, 50% of the test isolates showed multidrug resistance to the test antibiotics, and these are largely isolates that are often implicated in causing diseases in humans in clinical settings. They showed resistance to the beta-lactam, quinolone and aminoglycoside classes of antibiotics. It has been suggested that beta-lactamases evolved millions of years ago, implying their presence even before the arrival of antibiotics in clinical usage [8,14] and suggesting that bacterial resistance to antibiotics is evolutionary and seems inevitable and unstoppable. It is important that we understand how resistance genes spread from one bacterium to another [51] and from the environment into clinical settings [19]. Resistance to antibiotics can be acquired via mutations, which can be induced by very low concentrations of antibiotics in the environment, via horizontal gene transfer of resistance genes, or intrinsically, as in Gram-negative bacteria, whose outer membrane makes them resistant to several antibiotics [1,51]. This could be the reason different isolates showed resistance to several of the antibiotics used in this study, such as pefloxacin, augmentin, amoxicillin, ceftriaxone, gentamycin, septrin, and streptomycin. ARGs in Gram-positive and Gram-negative isolates, as observed in this study, come with high mortality and morbidity rates and with infections that are difficult to treat and manage [47].

(Key to Tables 5 and 6: CPX = ciprofloxacin, SXT = septrin, S = streptomycin, CN = gentamycin, CEP = ceftriaxone, OFX = ofloxacin, AM = amoxicillin, PEF = pefloxacin, CH = chloramphenicol, AU = augmentin; "-" represents no inhibition, i.e., resistance.)

Antibiotic resistance has reached a global dimension and is no longer confined to clinics. Azam et al. [6] identified a pan-drug-resistant E. coli MRC11 that showed resistance to 20 out of 21 antibiotics in their study. Furthermore, they showed that the addition of certain heavy metals, even at high concentrations, did not increase the susceptibility of the isolates to these antibiotics. In our study, 5 out of 10 of the E. coli isolates showed MDR to the test antibiotics; however, higher concentrations of the heavy metals were toxic to all the test isolates. The discharge of untreated municipal sewage and industrial wastes creates means for the selection, multiplication and spread of resistance amongst bacteria [6]. Wesgate et al.
[52] showed that exposure to low concentrations of triclosan, but not of chlorhexidine or a hydrogen peroxide-based biocidal agent, drives ARGs. Apart from these pollutants, heavy metal resistance has been shown to be co-selected with antibiotic resistance in environmental settings. In an earlier study, S. aureus, E. coli and P. aeruginosa were reported as the isolates most resistant to antibiotics; of these, all but S. aureus showed MDR in our study. Furthermore, the MDR isolates in that study showed abundant to moderate growth with iron and zinc at higher concentrations [39]. Compared with those findings, B. cereus, B. subtilis, and E. coli in our study all showed complete resistance to Pb, and one isolate of E. coli showed complete resistance to Cd; the rest of our isolates were sensitive at higher concentrations of the heavy metals. The sources of metals such as mercury, cadmium, copper, and zinc vary and include solid wastes [20,21] as well as agriculture and aquaculture [11,35]. As these metals accumulate in the environment, at certain critical levels they trigger co-selection mechanisms with antibiotic resistance [53]. Chen et al. [54] investigated heavy metals and ARGs in a copper tailing dam in China and found that genes coding for arsenic and macrolide resistance were the most abundant, even though copper and lead were more abundant than arsenic in concentration. Furthermore, the abundance of the heavy metal resistance genes correlated positively with Cd, suggesting that this metal plays an essential role in the selection of heavy metal resistance genes. Our study shows that at the highest concentration, 10% of the isolates showed resistance to Cd while 20% showed resistance to Pb. Antibiotic resistance and tolerance of heavy metals in the environment are a growing global public health concern [19,55]. In an earlier study, a total of forty (40) aerobic bacteria isolated from sediment and water were subjected to various antibiotics including carbenicillin, gentamicin, kanamycin, chloramphenicol, and nalidixic acid [55]; the results indicated that 37.50% of the isolates showed resistance to one or more antibiotics, while 22% showed complete sensitivity. By comparison, 50% of our isolates displayed multi-drug resistance to our test antibiotics. Furthermore, those authors subjected a total of 29 isolates to different concentrations of various heavy metals and found that all their isolates were tolerant and able to grow at the various concentrations used in their study. In our study, the isolates likewise grew at and tolerated the lower concentrations, although most became sensitive at the higher concentrations.

Conclusion

Antibiotic resistance genes have been found in all kinds of environments, including those that are pristine, and have the potential to cross into clinical settings with huge public health significance. In our study, we aimed to evaluate the co-resistance of heavy metals with antibiotics amongst isolates from dumpsite soil. Our results indicated the presence of heavy metals at concentrations higher than those permitted by the WHO. The bacterial isolates in our study are among those frequently implicated in human infections in clinical settings. Isolates that showed multiple drug resistance were also able to tolerate high concentrations of the heavy metals utilized in the study. These findings suggest that pollutants like heavy metals could have their tolerance genes co-evolving with ARGs in a synergistic manner.
Furthermore, our MDR isolates showed heavy metal resistance that was dependent on the concentration of the heavy metals. These findings are a clear cause for concern.

Author contribution statement

Uwem Edet, Ph.D: Conceived and designed the experiments; performed the experiments; analyzed and interpreted the data; contributed reagents, materials, analysis tools or data; wrote the paper. Ini Ubi Bassey, Ph.D; Akaninyene Joseph, Ph.D: Performed the experiments; analyzed and interpreted the data; contributed reagents, materials, analysis tools or data; wrote the paper.

Funding statement

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Data availability statement

Data are included in the article/supplementary material/referenced in the article.

Additional information

No additional information is available for this paper.

Declaration of interests statement

The authors declare no competing interests.
Extraction optimization, structure features, and bioactivities of two polysaccharides from Corydalis decumbens

Two polysaccharides (CPS1 and CPW2) from Corydalis decumbens were obtained to develop insights into natural medical resources. Optimal extraction conditions for total sugars were determined using response surface methodology, the polysaccharides were purified using a combination of ethanol precipitation and anion-exchange chromatography, and structural features were analyzed by scanning electron microscopy, transmission electron microscopy, and Congo red assay. The bioactivities were estimated in terms of antioxidant and anti-inflammatory effects. Total sugars were extracted with an experimental yield of 32.74% under optimum conditions. CPS1 and CPW2 were purified with yields of 12.01% and 8.23%, respectively. CPS1 was a unique polysaccharide with a molecular weight (Mw) of 360 kDa consisting of glucose, galactose, mannose, and arabinose in a ratio of 4.9:2.0:1:1.9, while CPW2 was composed of glucose with an Mw of 550 kDa. CPS1 possessed a four-helix conformation, whereas CPW2 was identified as a linear molecule without branched or entangled chains. The mRNA expressions of TNF-α (71.80%), IL-1β (56.55%), IL-6 (43.98%), and COX-2 (91.88%) in LPS-stimulated RAW 264.7 cells were significantly inhibited by 75 μg/mL CPS1 (P < 0.0001), while CPW2 showed lower inhibitory effects than CPS1. Compared with CPW2, CPS1 showed stronger scavenging abilities for hydroxyl (EC50 = 520.46 μg/mL), ABTS (EC50 = 533.99 μg/mL), and superoxide (EC50 = 1512.06 μg/mL) radicals. Overall, CPS1 with its four-helix conformation exhibited more outstanding bioactivities than CPW2 without entangled chains.

Introduction

Corydalis decumbens is an herb belonging to the Papaveraceae family, and the species is distributed across different provinces in China [1]. Its dried stem tubers are bitter and pungent and have been utilized as a traditional Chinese medicine for thousands of years [2]. Modern pharmacological studies have shown that C. decumbens has anti-inflammatory, anti-cerebral infarction, anti-arrhythmia, and brain nerve protection effects [3,4]. It has been reported that C. decumbens comprises mainly alkaloids, such as protopine, tetrahydropalmatine and palmatine, and its health-beneficial effects have accordingly been attributed to the alkaloid ingredients [5]. Polysaccharides, as important bioactive substances, exist broadly in the water extracts of plants and may contribute markedly to the beneficial ingredients [6]. However, there has been no report concerning the chemical compositions, structural characteristics, anti-inflammatory effects, and antioxidant activities of polysaccharides from C. decumbens. The over-expression of inflammatory factors can cause multiple diseases, including intracerebral hemorrhage, Alzheimer's disease, and atherosclerosis [7,8]. Toxic substances, such as chemokines and inflammatory factors, are released and aggravate neuronal damage after the onset of intracerebral hemorrhage [9]. It has been reported that the cytokine TNF-α can increase plaque deposition, lead to neurotoxicity, and increase significantly in Alzheimer's patients [10]. It is therefore necessary to control the over-expression of inflammatory factors. Some studies have shown that natural compounds from extracts of medicinal plants have good anti-inflammatory effects attributed to their polysaccharide components [11]. For example, Wang et al.
[12] reported that the polysaccharide extracted from the Gynostemma pentaphyllum herb showed high anti-inflammatory activity by decreasing the levels of the factors TNF-α and IL-6. Likewise, polysaccharides from medicinal plants can show excellent antioxidant capacities. Gu et al. [6] found that three polysaccharides from Sagittaria sagittifolia L. exhibited strong antioxidant activities by scavenging reactive oxygen species (ROS) including ABTS, DPPH, and hydroxyl radicals. Neurons are highly susceptible to oxidative injury by ROS, and oxidative stress is also a feature of neurological diseases [13]. Therefore, the search for natural anti-inflammatory and antioxidant polysaccharide compounds remains an interesting research field. In the present study, we isolated two novel polysaccharides from C. decumbens, demonstrated their fine structural features, and related them to biological functions such as anti-inflammatory and antioxidant activities. To our knowledge, this is the first time that detailed properties of polysaccharides from C. decumbens have been reported. This study may also provide new insights into the beneficial ingredients of C. decumbens and facilitate its pharmaceutical application, especially as potential preventive-therapeutic agents for the treatment of inflammation-related chronic human diseases such as intracerebral hemorrhage and atherosclerosis.

Materials and chemicals

Dried stem tubers of C. decumbens were purchased from a local drugstore; their origin was Jiangxi Province, China. The species was identified as C. decumbens by Professor Zhihong Huang, Huaqiao University, China.

Extraction optimization of total sugars

Extraction of total sugars. Dry C. decumbens powder (1 g) was added to a set volume of distilled water at a selected temperature for a set time in each test. After the water extraction, the supernatant was collected to determine the total sugar content using the phenol-sulfuric acid method, from which the yield of total sugars (%, w/w) was obtained [14].

Monosaccharide composition analysis. Monosaccharide compositions were determined on a gas chromatography (GC) system (7890, Agilent Technologies, Palo Alto, CA, USA) fitted with an HP-5 column (30 m × 0.25 mm × 0.25 μm). The detector was a flame-ionization detector (FID), and the column was run under programmed temperatures (160-180 °C at 20 °C/min, 180-220 °C at 8 °C/min, hold 2 min).

Congo red assay. The Congo red assay was used to determine the helix conformation of the polysaccharides. 80 μmol/L Congo red solution was mixed with 0.5 mg/mL sample solution in equal volumes, followed by adding 4 mol/L NaOH to adjust the mixtures to different final concentrations (0.1-0.5 mol/L) and standing for 10 min at room temperature. UV spectra were then recorded on a UV-1800PC spectrophotometer over a range of 400-700 nm, from which the maximum absorption wavelengths (λmax) of the different solutions were acquired [20] (see the numerical sketch below).

Transmission electron microscopy (TEM) observation. TEM was used to observe the molecular morphology of the polysaccharides. 1 mg/mL polysaccharide solution (2 mL) was mixed with 1 mg/mL sodium dodecyl sulfate (SDS) solution (2 mL), and the mixed solution was incubated in an 80 °C water bath for 2 h and allowed to continue for another 2 h after being diluted with distilled water to 5 μg/mL [21]. After a single droplet of reaction solution passed through a 0.22 μm cellulose membrane was deposited on a 200-mesh carbon film and dried at room temperature, the specimen was visualized on a TEM (H-7650, Hitachi High-Technologies Corporation, Tokyo, Japan).
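As a small numerical companion to the Congo red assay described above, the following sketch, using synthetic Gaussian spectra rather than measured ones, shows how λmax is extracted from a 400-700 nm scan and how a red shift relative to the Congo red control is detected:

```python
# A minimal sketch (synthetic spectra, not measured data) of locating the
# maximum-absorption wavelength and testing for the red shift that indicates
# a Congo red-polysaccharide helix complex.
import numpy as np

wavelengths = np.arange(400, 701)  # nm, matching the 400-700 nm scan range

def lambda_max(absorbance):
    return int(wavelengths[np.argmax(absorbance)])

# Illustrative Gaussian-shaped spectra: control peaking at 498 nm and a
# complex peaking at 512 nm (values chosen only for demonstration).
control = np.exp(-((wavelengths - 498) / 40.0) ** 2)
complex_ = np.exp(-((wavelengths - 512) / 40.0) ** 2)

shift = lambda_max(complex_) - lambda_max(control)
verdict = "red shift -> helix complex" if shift > 0 else "no complex"
print(f"lambda_max shift: {shift} nm ({verdict})")
```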
Scanning electron microscopy (SEM) observation. The microstructure of the polysaccharides was investigated by grinding sample powder onto a metal platform, sputtering it with a thin layer of platinum, and observing with an SEM (S-4800, Hitachi High-Technologies Corporation, Tokyo, Japan) at a 5.0 kV accelerating voltage [22].

Analysis of anti-inflammatory property. Cell culture and cytotoxicity tests. RAW264.7 cells were cultured in a 5% CO2 atmosphere at 37 °C for 48 h in high-glucose DMEM medium containing 10% FBS and 1% penicillin. Using the MTT method, the cytotoxicity of the polysaccharides against RAW264.7 cells was tested at mass concentrations of 600, 300, 150, 75, 25 and 1 μg/mL.

Anti-inflammatory tests. Based on the results of the cytotoxicity tests, the polysaccharide samples (CPS1 and CPW2) were used at 75 and 150 μg/mL for the anti-inflammatory tests. All the anti-inflammatory tests included blank, negative control, positive control, and sample groups. After RAW264.7 cells were cultured on 6-well plates for 24 h, 75 μg/mL dexamethasone (DXMS), 150 μg/mL CPS1, 75 μg/mL CPS1, 150 μg/mL CPW2 and 75 μg/mL CPW2 were added to the positive control group and the four sample groups, respectively [23]. The experimental groups were allowed to continue for another 20 h, followed by the addition of 1 μg/mL LPS to all groups except the blank for 4 h to construct the inflammatory models.

Determination of mRNA expression levels. The total RNA for each group was extracted with a total RNA isolation system following the manufacturer's instructions; cDNA reverse transcription was then carried out using a reverse transcription system at 42 °C for 60 min and 70 °C for 15 min. The target genes, including glyceraldehyde-3-phosphate dehydrogenase (GAPDH), TNF-α, IL-1β, IL-6, and COX-2, were amplified using a real-time PCR system (Roche 480II, Applied Biosystems). The primer sequences of the target genes were provided in a previous report [24], and the cycling conditions were 95 °C for 15 s, 60 °C for 30 s, and 72 °C for 30 s (45 cycles) [23,25,26]. The molecular sizes of the amplified genes were identified by agarose gel electrophoresis. From the CT values and using the 2^-ΔΔCT method, the mRNA expression levels of the inflammatory factors relative to the GAPDH gene were obtained.
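The relative-expression arithmetic of the 2^-ΔΔCT method just described can be made explicit with a short sketch; the CT values below are illustrative placeholders, with GAPDH as the reference gene:

```python
# A minimal sketch of the 2^(-ddCT) calculation; all CT values are
# illustrative placeholders, normalized to GAPDH.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref             # sample, vs. GAPDH
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # control group, vs. GAPDH
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: TNF-alpha in an LPS + CPS1 well relative to the LPS-only group.
fold = relative_expression(ct_target=24.1, ct_ref=17.0,
                           ct_target_ctrl=22.3, ct_ref_ctrl=17.1)
print(f"TNF-alpha relative expression: {fold:.2f}")  # < 1 means suppression
```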
Analysis of antioxidant property. For the identification of antioxidant activities, scavenging assays for superoxide [27], hydroxyl [28], and ABTS [6] radicals were conducted as follows.

Superoxide radical assay. 0.1 mL of sample solution at different reaction concentrations (15-3000 μg/mL) was mixed with 1 mL of 557 μmol/L NADH, 45 μmol/L PMS, and 108 μmol/L NBT solutions, kept at room temperature for 5 min, and the absorbance was measured at 560 nm.

Hydroxyl radical assay. 0.1 mL of sample solution (10-4000 μg/mL reaction concentrations) was mixed in sequence with a mixture (0.6 mL, containing 2.67 mmol/L deoxyribose and 0.13 mmol/L EDTA), 0.4 mmol/L ferrous ammonium sulfate (0.2 mL), 2.0 mmol/L ascorbic acid (0.05 mL) and 20 mmol/L H2O2 (0.05 mL), and incubated in a 37 °C water bath for 15 min, followed by boiling for 15 min and cooling to room temperature after adding 1 mL of 1.0% thiobarbituric acid and 2.0% trichloroacetic acid. The absorbance of the reaction solutions was then measured at 532 nm.

ABTS radical assay. For the ABTS•+ solution, 2.45 mmol/L potassium persulfate solution was mixed into 7 mmol/L ABTS solution for 16 h in the dark, followed by diluting it 20 times with distilled water to obtain the working concentration. Sample solutions (1.25-5000 μg/mL reaction concentrations) were added to the ABTS•+ working solution for 10 min in the dark, and the absorbance was measured at 734 nm.

Scavenging rate. The scavenging rate for each radical was calculated from the following formula: radical scavenging rate (%) = [1 − (A1 − A2)/A0] × 100, where A1 is the absorbance of the sample, A0 is the absorbance of the control (distilled water without sample), and A2 is the background absorbance.

Statistical analysis. All the experiments were repeated three times. Data were analyzed using Origin 2017 software and reported as mean ± standard deviation (S.D.). Statistically significant differences were estimated using one-way analysis of variance (ANOVA), and values of P < 0.05 were considered statistically significant.

Extraction optimization of total sugars

The results of the single-factor investigation are demonstrated in Fig 1. Total sugar yields decreased with the increase of the liquid-solid ratio from 20 to 200 (Fig 1A) and increased with the increase of extraction time from 90 to 270 min (Fig 1B). The yields fell below 10% once the liquid-solid ratio reached 100 or when the extraction time was below 150 min, while yields increased with elevating temperature from 30 to 65 °C and decreased gradually above 65 °C (Fig 1C). In addition, the factor of extraction number was investigated. Extraction numbers from 1 to 5 gave total yields of 22.92%, 25.45%, 25.11%, 24.87% and 23.23% under one condition (60 mL/g, 60 min, 65 °C), and 12.89%, 14.13%, 14.91%, 13.87% and 12.19% under another condition (60 mL/g, 210 min, 80 °C). The results suggested that the changes in total yield with increasing extraction number were not obvious, and the extraction number was therefore excluded from the next experiment in consideration of the extraction cost. Based on the single-factor results, a Box-Behnken design was assigned and the results of the 17 experiments are listed in Table 1. Using the Design-Expert software, a quadratic regression model relating yield to the three factors was obtained. Statistical analysis showed a p-value of 0.0083 (p < 0.05), a lack of fit of 0.0889 (> 0.05), and an R² value of 0.9021 for the model, indicating that the proposed model was suitable for calculating the extraction yield of total sugars from C. decumbens. Using the model, the optimum conditions and a predicted optimum response value were obtained. Under the predicted optimum conditions (A = 60 mL/g, B = 250 min, C = 68 °C), an experimental yield of (32.74 ± 0.072)% was attained, which matched well with the expected value of 32.38%.
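As a sketch of the response-surface workflow just described, the following fits the usual quadratic Box-Behnken model and locates its optimum numerically; the 17 runs are synthesized around the reported optimum, not the actual experimental table:

```python
# A minimal sketch (synthetic runs, not the study's Table 1) of fitting a
# quadratic response-surface model and locating the optimum.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Factors: A = liquid-solid ratio (mL/g), B = time (min), C = temperature (C)
X = rng.uniform([40, 150, 55], [80, 270, 80], size=(17, 3))
A, B, C = X.T
# Synthetic yields peaking near A=60, B=250, C=68, for illustration only.
y = (32 - 0.004 * (A - 60) ** 2 - 0.0004 * (B - 250) ** 2
     - 0.02 * (C - 68) ** 2 + rng.normal(0, 0.2, 17))

def features(a, b, c):
    """Intercept, linear, interaction, and squared terms of the model."""
    return np.array([1, a, b, c, a * b, a * c, b * c, a * a, b * b, c * c])

F = np.array([features(*row) for row in X])
beta, *_ = np.linalg.lstsq(F, y, rcond=None)

def neg_yield(v):
    # Negative of the fitted surface, so that minimizing maximizes yield.
    return -float(features(*v) @ beta)

res = minimize(neg_yield, x0=[60.0, 210.0, 65.0],
               bounds=[(40, 80), (150, 270), (55, 80)])
print("Predicted optimum (A, B, C):", np.round(res.x, 1))
print("Predicted yield (%):", round(float(features(*res.x) @ beta), 2))
```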
Extraction and purification of polysaccharides

Based on hot-water extraction, crude products were obtained from C. decumbens after ethanol precipitation. Other high-Mw compounds such as nucleic acids and proteins are also extracted under hot-water conditions, which creates difficulties for the isolation of polysaccharides. To recover polysaccharides from the crude products, ethanol at different concentrations was used to separate two polysaccharide components (CP1 and CP2) into different grades, with yields of 24.82% and 17.47%, respectively. By using DEAE-Sepharose CL-6B chromatography (Fig 2A and 2B) combined with different elution conditions, two purified C. decumbens polysaccharides (CPS1 and CPW2) were further obtained from CP1 and CP2, respectively, which decreased the yields to 12.01% and 8.23%.

Homogeneity of CPS1 and CPW2

The HPSEC profiles of CPS1 and CPW2 on a Sugar KS-804 column with RID detection are demonstrated in Fig 2. For the CPS1 molecule, the eluent was 0.24-0.45 mol/L NaCl in the DEAE-Sepharose CL-6B chromatography, and a single peak at 6.21 min was observed in the HPSEC profile of CPS1 (Fig 2C). To study gradient elution conditions, a polysaccharide was initially obtained with different elution solutions (0-1 mol/L NaCl) during the DEAE-Sepharose CL-6B purification, and its HPSEC profile is presented in Fig 2D. As shown in Fig 2D, the peaks at 1.89, 4.91 and 8.93 min derived from impurity substances in the polysaccharide, by comparison with the HPSEC profile of CPS1 (Fig 2C, 6.21 min); the impurity content was calculated to be 35% according to the peak areas. These results indicated that the special elution condition used in the purification of the C. decumbens polysaccharide simply and effectively helped to obtain the single, purified polysaccharide CPS1. The HPSEC profile of CPW2 is demonstrated in Fig 2E, in which a single narrow peak at 5.95 min was observed, indicating that CPW2 was also a single component. The UV spectra of CPS1 and CPW2 are shown in Fig 2F, where smooth curves were observed for both molecules, suggesting that contaminants such as protein and nucleic acid were absent from both.

Molecular weight (Mw)

From the standard dextran data in HPSEC, a calibration curve for Mw was determined as y = −3.92x + 6.31 (R² = 0.99). Based on this calibration curve and the retention times of CPS1 and CPW2 in HPSEC by RID, the Mw of CPS1 was calculated to be 360 kDa and that of CPW2 was 550 kDa.

FTIR spectroscopic analysis

Using FTIR, the functional groups of the compounds were identified, from which the type of isolated compound could be concluded. The FTIR spectra of CPS1 and CPW2 are presented in Fig 2G, where the peaks at 3377, 2952, 1650-1651, 1440-1441, and 1017-1128 cm⁻¹ were attributed to O-H stretching, C-H stretching, C=O, C-H bending, and C-O-C bonds, respectively. All of these bonds are regarded as characteristic functional groups of carbohydrate compounds [29]. In addition, the peak at 849 cm⁻¹ in CPS1 was assigned to the α glycosidic bond, and the peak at 883 cm⁻¹ in CPW2 derived from the β glycosidic bond, suggesting different configurations for CPS1 and CPW2.

Molecular compositions

Total carbohydrate contents in CPS1 and CPW2 were 98.36 ± 1.14% and 97.86 ± 2.02%, respectively. The dominant component was neutral sugar, and there was no uronic acid in either polysaccharide. Using GC fitted with the HP-5 column, the derivatization products obtained after hydrolysis and derivatization of CPS1 and CPW2 were identified, and the GC chromatograms are presented in Fig 3. Mannitol, as an internal standard, was added to the tests and is marked with an asterisk in Fig 3. By comparison with the GC chromatograms of standard monosaccharides, we concluded that CPS1 was composed of glucose, galactose, mannose and arabinose in a molar ratio of 4.9:2.0:1:1.9, and CPW2 was composed of glucose.

SEM analysis

The surface morphology of CPS1 and CPW2 was observed using SEM, and the SEM images are presented in Fig 4 at magnifications of 300× to 20 k×.
As shown in Fig 4, the image of CPS1 at a magnification of 300× presented some irregular particles of non-uniform size. A rough surface with some holes was observed in the image at a magnification of 3000×, which indicated the existence of a branched structure in the CPS1 molecule [22]. The image at a magnification of 10 k× displayed many inhomogeneous lumps, and the agglomerations suggested that there were aggregated chains in polysaccharide CPS1 and that the structure of CPS1 was entangled [20]. These deductions were further supported by the results from the TEM and Congo red tests below. The images of CPW2 at magnifications of 300× and 2000× presented some loose particles with a relatively flaky surface. At a magnification of 20 k×, it could be observed that CPW2 was mainly composed of regular slices, all stacked together. The schistose surface of CPW2 was even and without holes in the high-magnification image, indicating that the CPW2 molecule has a linear structure without branched or entangled chains, which was further confirmed by the TEM results.

TEM analysis

TEM is a useful tool for identifying microscopic morphology, which is important for characterizing the fine structure of polysaccharides. After polysaccharides are dispersed by SDS solution, it is easy to observe the molecular morphology using TEM [30]. As shown in Fig 5, the TEM image of CPS1 at a magnification of 5000× displayed many subunits (labeled with arrows in Fig 5), with one subunit connected to another. When magnified to 10 k×, the molecular morphology of CPS1, with four single chains, was visible, and the chains tended to form an entangled structure. By analogy with the triple-helix conformation, CPS1 molecules thus contained more than three entangled chains. Side chains were also observed in CPS1 molecules, by which one subunit with four-helix chains combined with another. These results matched well with the above SEM deductions. For the CPW2 molecule, TEM images showed that CPW2 possessed many single chains without entangled or hairy chains, similar to the results from the SEM analysis. As described in previous references [6,31,32], a triple-helix structure in polysaccharide molecules is helpful for improving biological activities such as antitumor, immunoregulation, and anti-inflammation activities; the structural similarity of the four-helix conformation of CPS1 to the triple helix therefore suggests that it could have similarly outstanding bioactivities.

Helix structure

Using the Congo red test, the triple-helix structure of polysaccharides can be simply evidenced [31]. After polysaccharides possessing a helix conformation are mixed with Congo red solution, a special complex is formed, and the λmax value of the complex is higher than that of the Congo red control. However, the helix conformation is damaged by strong alkali (NaOH), and the λmax value of the Congo red-polysaccharide complex then shifts to a lower wavelength. The Congo red test is also applicable for identifying the four-helix conformation, as described below. Congo red tests of CPS1 and CPW2 were conducted, and the results are presented in Fig 6. Compared with the Congo red control, the λmax values of the Congo red-CPS1 complexes showed large red shifts, whereas those of the Congo red-CPW2 mixtures did not, suggesting that CPS1 molecules possess a helix conformation while CPW2 does not [20].
For CPS1 molecules, λmax values decreased gradually under stronger NaOH conditions (more than 0.2 mol/L), and a decline in the red-shift effect was observed, owing to the destruction of the helix structure in the CPS1 molecules. The results from the Congo red tests agreed well with the above SEM and TEM analyses.

Anti-inflammation effects

The inhibitory effects of CPS1 and CPW2 on cell proliferation were analyzed using the MTT method, and the dosages of CPS1 and CPW2 in the subsequent anti-inflammation tests were set to final treatment concentrations of 75 and 150 μg/mL, which did not influence the proliferation of RAW 264.7 mouse macrophage cells. The relevant inflammation factors TNF-α, IL-6, IL-1β, and COX-2 were used to evaluate the anti-inflammatory effects of the CPS1 and CPW2 molecules in LPS-induced RAW 264.7 cells, and the results from the real-time PCR tests are demonstrated in Fig 7A and 7B. The amplification curves of the GAPDH, TNF-α, IL-6, IL-1β, and COX-2 genes were S-shaped, and the melting-peak curves presented a single peak for each gene (marked with gene names in Fig 7B), indicating the validity of the real-time PCR tests and good primer specificity for the amplified genes. Electrophoresis diagrams of the amplified genes are shown in Fig 7C, where the molecular weights of the amplified region of each gene were all between 100 bp and 250 bp, consistent with the expected results. Using the 2^-ΔΔCT method, the suppression effects of CPS1 and CPW2 on TNF-α, IL-6, IL-1β, and COX-2 mRNA expression in LPS-induced RAW 264.7 cells were estimated, and the results are shown in Fig 7D-7G. Compared with the blank groups, all of the mRNA expression levels in the negative groups were significantly increased by LPS (P < 0.0001), indicating that all the inflammation models had been successfully constructed. The inhibitory effects of CPS1 and CPW2 on LPS-induced mRNA expression of the inflammation factors were obtained by comparing data from the sample and negative groups, and the results are summarized in Table 2: the suppression effects of CPS1 on TNF-α, IL-1β, IL-6, and COX-2 mRNA expression were 71.80% (P < 0.0001), 56.55% (P < 0.0001), 43.98% (P < 0.0001) and 91.88% (P < 0.0001) at the 75 μg/mL final concentration, respectively. The CPW2 molecule at the 75 μg/mL final concentration was also able to suppress the mRNA expression of TNF-α (50.07%, P < 0.05), IL-6 (11.07%, ns) and COX-2 (77.55%, P < 0.0001), while CPW2 showed lower inhibitory effects at the test concentrations compared to CPS1. In addition, at the experimental concentrations, the inhibitory effects of CPS1 on TNF-α and COX-2 mRNA expression showed no significant difference (P > 0.05) from the positive groups. These results showed that CPS1, with its four-helix structure, possessed more significant anti-inflammation effects than CPW2 without entangled chains.

Antioxidant activities

In the present study, different methods, including superoxide, hydroxyl, and ABTS radical scavenging assays, were used to estimate the antioxidant activities of CPS1 and CPW2, and the results of the antioxidant tests are presented in Fig 8. Vitamin C (Vc) was assigned as the positive control. As shown in Fig 8A-8C, the scavenging activities of CPS1 and CPW2 against all the radicals, including superoxide, hydroxyl and ABTS, were dose-dependent.
For ABTS radicals, CPS1 showed a good scavenging capacity, exhibiting a trend similar to that of Vc, as demonstrated in Fig 8A. In detail, 72.91% of ABTS radicals could be inhibited by CPS1 at a 7500 μg/mL treatment concentration, while the scavenging activity of CPW2 at the same dosage was 49.25%, from which we concluded that CPS1 had a stronger ABTS radical scavenging ability than CPW2. From the scavenging activities, the EC50 value of CPS1 for ABTS radical scavenging was calculated to be 533.99 μg/mL. The hydroxyl radical scavenging capacity of CPS1 was more than 50% at a final concentration of 1000 μg/mL, while the scavenging activity of CPW2 at the same dosage was less than 25%; the EC50 value of CPS1 for the hydroxyl radical scavenging effect was determined to be 520.46 μg/mL. Based on the scavenging tests for superoxide radicals, the EC50 value of CPS1 was calculated as 1512.06 μg/mL, which was better than that of CPW2 (less than 50% scavenging activity at all treatment concentrations). In addition, CPS1 (EC50 = 520.46 μg/mL) exhibited a stronger hydroxyl radical scavenging ability than Vc (EC50 = 2513.06 μg/mL, P < 0.01). In summary, CPS1 with its four-helix structure showed more excellent antioxidant activities than CPW2, and it even outperformed Vc in terms of hydroxyl radical scavenging ability. Hydroxyl radicals, normally termed the most active radicals, can cause serious damage to biomolecules in cells and give rise to cytotoxicity and cancer [33].
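To illustrate how an EC50 is obtained from dose-dependent scavenging curves like those above, here is a minimal sketch using log-linear interpolation on illustrative points (not the measured data):

```python
# A minimal sketch (illustrative points, not the measured data) of reading
# an EC50 off a dose-dependent scavenging curve by log-linear interpolation.
import numpy as np

conc = np.array([15.0, 60.0, 250.0, 1000.0, 3000.0])  # ug/mL
scavenging = np.array([8.0, 22.0, 41.0, 63.0, 80.0])  # percent

# Interpolate the concentration giving 50% scavenging on a log10(conc) axis;
# np.interp needs the x-coordinates (scavenging) to be increasing.
ec50 = 10.0 ** np.interp(50.0, scavenging, np.log10(conc))
print(f"EC50 ~ {ec50:.0f} ug/mL")
```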
Conclusions

In this study, the optimal extraction conditions for total sugars from C. decumbens were obtained. Two polysaccharides (CPS1 and CPW2) were isolated and purified from C. decumbens, and they possessed different structural characteristics and bioactivities. The Mw of CPS1 was 360 kDa, and the CPS1 molecule possessed a four-helix conformation with side chains and comprised glucose, galactose, mannose, and arabinose. The CPW2 molecule was composed of glucose with an Mw of 550 kDa, and its molecular microstructure displayed linear chains without branched or entangled conformations. In terms of anti-inflammatory activity, CPS1 exhibited more outstanding suppression effects on TNF-α, IL-1β, IL-6, and COX-2 mRNA expression in LPS-stimulated RAW 264.7 cells than CPW2 (P < 0.0001), and the inhibitory activities of CPS1 for TNF-α and COX-2 mRNA expression showed no significant difference from the positive group (P > 0.05). More excellent scavenging abilities for hydroxyl and ABTS radicals were also evidenced for CPS1 than for CPW2, and CPS1 (EC50 = 520.46 μg/mL) showed a stronger capacity for scavenging hydroxyl radicals than Vc (EC50 = 2513.06 μg/mL). For polysaccharides, bioactivities are likely to be influenced by structural features such as Mw, monosaccharide composition, and chain conformation [34,35]. The chain conformation might play the most important role among these structural features, and it has been reported that a triple-helical conformation in polysaccharide molecules may be related to outstanding bioactivities [32]. However, there have been no reports concerning polysaccharides from C. decumbens or a four-helical structure of polysaccharides. CPS1 possessed the four-helical structure, which may contribute to its good bioactivities. These results provide new information relating the four-helical structure to higher anti-inflammatory and antioxidant attributes and give insights into the potential application of polysaccharides from C. decumbens in the field of medicine.
A Switched-Capacitor-Based 7-Level Self-Balancing High-Gain Inverter Employing a Single DC Source

This paper discloses a novel switched capacitor (SC)-based 7-level inverter with a single DC source. The proposed inverter has the ability to self-balance the capacitor voltages without using a closed-loop voltage balancing circuit. Two capacitors are charged equally by the input source owing to continuous series-parallel charging and discharging over a full cycle. The proposed 7-level SC inverter requires fewer switches, driver circuits, diodes, and capacitors than most recently developed topologies. Furthermore, four of the eight switches operate at the fundamental frequency, which simplifies the control scheme. A fundamental-frequency switching scheme is used to control the output of the inverter. The self-balancing and voltage-boosting features of the proposed structure are validated on the MATLAB/Simulink platform and verified experimentally.

Introduction

Nowadays, switched-capacitor-based multilevel inverters (SC MLIs) play an important role in DC-AC power conversion due to their excellent performance [1]. The SC MLI configuration integrates well with renewable energy (RE) sources and electric vehicles, offering better performance [1,2]. For the generation of a staircase output voltage waveform with less distortion, the multilevel inverter (MLI) structure is well suited to enhancing power quality. Numerous types of MLIs have been designed by researchers, among which the three classical configurations are diode-clamped MLIs (DC MLIs), flying-capacitor MLIs (FC MLIs), and cascaded H-bridge inverters (CHB MLIs). These conventional MLIs have various advantages over 2-level inverters; on the other hand, they suffer from both shared and distinct limitations. Conventional MLIs are useful in various industrial applications for producing specific output voltages of up to five levels, but when producing a higher-level voltage waveform they require a large number of devices: DC MLIs require a large number of diodes and DC-link capacitors, FC MLIs require more capacitors, and CHB MLIs require more DC power supplies. Hence, conventional MLIs have demerits such as requiring more DC sources and switches, with a corresponding increase in the volume, size, and cost of the inverter [3]. In recent years, various researchers have focused on topological developments in MLI configurations to solve the capacitor voltage imbalance problem. Numerous reduced-device-count MLI structures have been proposed [4,5]. However, these configurations lack self-boosting ability and rely on complex support algorithms, and thus auxiliary circuits, as in [5], have been proposed to mitigate the capacitor voltage imbalance problem. Exploiting the advantages of the switched-capacitor MLI (SC MLI) approach, a 7-level topology has also been presented in [6]. At the same time, the number of switches was reduced in [7] by using only ten switches to produce a 7-level output voltage. Subsequently, several SC MLIs have been disclosed in the literature; some of them require a higher switch count or more capacitors, or suffer from higher voltage stress or low voltage gain. This has motivated the development of a new compact module structure. In this paper, a novel 7-level SC MLI is designed using only 8 switches and a single DC source. This configuration can generate a 3-times-boosted staircase output using only one DC source.
Various switching techniques have evolved recently for the control of MLIs [8, 9]. Different techniques such as the sinusoidal switching pulse technique with multi-triangular carriers [9], the multi-vector space technique [9, 10], and selective harmonic elimination pulse-width modulation (SHE PWM) [10] are widely used. Overall, the SHE PWM control technique is superior: it operates at a low switching frequency, is easy to control, and eliminates selected harmonics from the output voltage [10]. In this paper, the SHE PWM control technique is used to set the switching angles of the inverter and to obtain the output voltage. The proposed inverter is suitable for renewable and sustainable energy applications, where a low input-side DC voltage requires stepping up. The operation of the proposed topology is tested by MATLAB/Simulink simulation and verified experimentally.

Principle of the Proposed 7-L SC MLI

The proposed 7-L SC MLI topology consists of 8 switches S1 to S8, 3 diodes D1, D2, and D3, and two capacitors C1 and C2. The input voltage source Vin is used as the input of the inverter and Vo is the output voltage. Figure 1 represents the single-phase 7-level switched-capacitor-based multilevel inverter (7-L SC MLI) with a single DC source.

Operation of the 7-L SC MLI. Two capacitors and 8 switches are employed to produce the 7-level staircase output voltage waveform: six bipolar levels and a zero level. Using the input voltage source Vin, the structure produces ±Vin, ±2Vin, ±3Vin, and 0. All the switches include antiparallel diodes, and the operational analysis below takes an inductive load into account. The different modes of operation that generate the output levels of the SC MLI are as follows:

Mode I (+Vdc). The output level +Vdc is obtained when switches S1, S2, S3, S7, and S8 are OFF and S4, S5, and S6 are conducting. Diode D3 is in forward conduction and capacitor C2 is charged.

Mode II (+2Vdc). In this mode, switches S1, S4, S7, and S8 are OFF and the remaining switches S2, S3, S5, and S6 are conducting. Due to this, capacitor C2, which was earlier charged to the input voltage magnitude, now discharges, while capacitor C1 is charged at the same time.

Mode III (+3Vdc). In this mode, switches S2, S3, S4, S7, and S8 are OFF and the remaining switches S1, S5, and S6 are conducting. Both capacitors discharge in series with the DC source to produce the maximum output voltage in this mode.

Mode IV (−3Vdc). In this mode, switches S2, S3, S4, S5, and S6 are OFF and the remaining switches S1, S7, and S8 are conducting. The operation of the capacitors is similar to that at the maximum positive level: both capacitors discharge simultaneously with the DC source.

Mode V (−2Vdc). In this mode, switches S1, S4, S5, and S6 are OFF and the remaining switches S2, S3, S7, and S8 are conducting. Due to this, capacitor C2 discharges in series with the input source to produce the second negative output level.

Mode VI (−Vdc). In this mode, switches S1, S2, S3, S5, and S6 are OFF and the remaining switches S4, S7, and S8 are conducting. Only the DC source is accountable for generating the output −Vdc in this mode. Capacitor C2 is charged at the same time and C1 is idle.

Mode VII (0Vdc).
In this mode, switches S1, S2, S3, S4, S5, and S7 are OFF and the remaining switches S6 and S8 are conducting. Either the two upper switches of the bridge circuit or the two lower switches can be triggered to generate the zero-level output. The switching scheme of the 7-L SC MLI with its seven different voltage levels (±Vdc, ±2Vdc, ±3Vdc, and 0Vdc) and the charging and discharging periods of the capacitors are shown in Table 1.

Self-Voltage Balancing Analysis. From the operational analysis in Section 2.1, it is clear that capacitor C1 is charged during +2Vdc and −2Vdc, whereas capacitor C2 is charged during +Vdc and −Vdc. In addition, capacitor C1 is discharged during +3Vdc and −3Vdc, whereas capacitor C2 is discharged during +2Vdc, +3Vdc, −2Vdc, and −3Vdc. Therefore, symmetrical charging and discharging operation is attained. Also, the capacitors are charged in parallel connection with the DC source and discharged in series connection with the source into the load. It is also noteworthy that the parasitic resistance is kept low and each capacitor gets sufficient time to charge and discharge within one fundamental cycle. Owing to this, the voltage across each capacitor is naturally maintained at the input DC source magnitude throughout the circuit operation. This validates the self-balancing nature, and an appropriate capacitance is chosen taking into account the maximum discharging time while allowing the least voltage ripple, as follows:

C_n ≥ ΔQ_c / ΔV_c = ΔQ_c / (k · V_in), with ΔQ_c = (1/ω) ∫ from θ_n to (π + Φ) of I_op · sin(θ − Φ) dθ,

where ΔQ_c is the amount of charge discharged from the capacitor, ΔV_c and k (7-8%) are the ripple voltage and percentage ripple of the capacitor C_n, I_op is the maximum value of the load current that flows through the capacitors, θ_n is the starting instant of discharging of a capacitor, Φ is the load power factor angle, and ω is the angular frequency (2πf).

Modulation Technique. The objective of MLIs with low switching frequencies is to produce staircase voltage waveforms. The sequence of each switching function can be chosen to minimize the total harmonic distortion (THD); this is called selective harmonic elimination (SHE) pulse-width modulation (PWM). There are also high-frequency modulation techniques, which produce a lower THD at the cost of higher switching losses; examples are sinusoidal pulse-width modulation (SPWM) with triangular carriers [11, 12] and space vector modulation. The advantage of SHE PWM [13-16] is its reduced switching frequency. SHE PWM is commonly used in large power inverters, in which switching losses can become very large if the switching frequency increases; an optimal selection of switching functions eliminates selected harmonics from the output voltage while allowing the inverter to operate at a low switching frequency. The Newton-Raphson method and the resultant theory technique are also used to solve the transcendental equations that determine the switching angles [16]. The former approach requires a good initial guess and yields only a few sets of solutions. In this article, the SHE PWM technique is used to generate the optimal switching angles for the 7-L SC MLI. The synthesized voltage waveform of the 7-L SC MLI is shown in Figure 2. This method utilizes multiple switchings for each output voltage step, enhancing the quality of the output voltage waveform; hence, it is appropriate for high-power converters.
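To make these two design steps concrete, the following is a minimal Python sketch (not the authors' implementation) that solves the 7-level SHE transcendental equations with a Newton-Raphson-type solver and evaluates the capacitor sizing criterion reconstructed above. The per-unit fundamental constraint (cos α1 + cos α2 + cos α3 = 3·Mi) is the standard quarter-wave-symmetry SHE formulation; the initial guess, load current, and discharge instant are assumed values.

```python
import numpy as np
from scipy.optimize import fsolve

def she_residuals(alphas, mi):
    a1, a2, a3 = alphas
    return [
        np.cos(a1) + np.cos(a2) + np.cos(a3) - 3.0 * mi,   # per-unit fundamental = Mi
        np.cos(5 * a1) + np.cos(5 * a2) + np.cos(5 * a3),  # null the 5th harmonic
        np.cos(7 * a1) + np.cos(7 * a2) + np.cos(7 * a3),  # null the 7th harmonic
    ]

def switching_angles(mi, guess=(0.2, 0.55, 1.0)):
    # Newton-Raphson needs a good initial guess; real solutions exist only
    # over a limited Mi range, hence the look-up table mentioned in the text.
    return np.sort(fsolve(she_residuals, guess, args=(mi,)))

def min_capacitance(i_op, theta_n, phi, f=50.0, k=0.075, v_in=65.0):
    # C_n >= dQ_c / (k * V_in), integrating the load current over the longest
    # discharge interval [theta_n, pi + phi] of the fundamental cycle.
    w = 2.0 * np.pi * f
    theta = np.linspace(theta_n, np.pi + phi, 2000)
    dq = np.trapz(i_op * np.sin(theta - phi), theta) / w   # charge in coulombs
    return dq / (k * v_in)

if __name__ == "__main__":
    for mi in (0.7, 0.8):
        print(f"Mi = {mi}:", np.degrees(switching_angles(mi)).round(2), "deg")
    c_min = min_capacitance(i_op=2.0, theta_n=0.4, phi=0.0)
    print(f"C_min = {c_min * 1e6:.0f} uF")  # same order as the 2200 uF chosen below
```

The angle sets produced over the feasible range of Mi are what would populate the look-up table described next.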
The multiple sets of solutions of the nonlinear transcendental equations are stored in the form of a look-up table.

Comparative Estimation

In order to evaluate the advantages of the proposed SC MLI relative to recently developed MLIs [17-27], Table 2 presents the different performance parameters, i.e., the required number of switches (No_sw), number of diodes (No_d), number of capacitors (No_c), boosting performance, and capability of handling an inductive load. The MLIs in [17, 23] require a smaller number of switches, but the first has no boosting ability and the second is unsuitable for low-power-factor loading. The MLIs introduced in [18, 19] both generate a 7-L output voltage with an equal number of switches, but the boosting gain (G) is only 1.5 times the input. In [20], the circuit is capable of boosting the voltage 3 times but still requires more switches. In [21, 22], the boosting gain is 3 while generating a 7-L output, but more switches are required. In [24-27], more switches are required to produce a 7-L output compared to the proposed structure. The MLI in [25] has its voltage gain limited to 1.5 and requires more switches and capacitors. In [26], diodes are not required, whereas the structure in [27] requires high-voltage-rating switches. The proposed 7-L SC MLI topology boosts the voltage to 3Vin, and no sensors are required for balancing the capacitors. In every fundamental cycle, symmetrical charging and discharging are achieved, and the number of components, including active switches, is low. It is noteworthy that the voltage stress in the proposed structure is considerably increased while maintaining a high-gain output. However, as there has recently been much development in power electronics components, switches with high voltage ratings are readily available and can be utilized for the proposed circuit design. In fact, a trade-off is essential between the voltage rating (stress) and the voltage gain of the MLI. The performance comparison justifies the optimality and compactness of the developed SC MLI in terms of the number of components while attaining a high-gain output. Therefore, the proposed 7-L single-input MLI is highly suitable for single-phase renewable energy applications.

Simulation and Experimental Results

The 7-L SC MLI consists of IGBT switches, capacitors, and diodes as shown in Figure 1; it is simulated in the MATLAB/Simulink platform and also verified experimentally. A single DC source of 65 V is considered for validation. The output frequency of the inverter is 50 Hz, and the loads used during the test are an R-load (90 Ω) and RL-loads (90 Ω-120 mH and 180 Ω-200 mH). The 2200 μF capacitor rating is selected based on the longest discharging period. The SHE PWM technique discussed in Section 2.2 is applied to control the inverter.

Table 1: Switching states of the 7-L SC MLI with capacitor charging (↑) and discharging (↓) periods (columns: voltage level; conducting devices).

Figure 3(a) shows the PWM pulses across the switches at Mi = 0.9, and the standing voltage across each of the switches is illustrated in Figure 3(b), in which the switches block only positive voltage (they are unidirectional). Figure 4 shows the results of the 7-L SC MLI with the voltage THD under different Mi conditions. At a lower Mi value, the output is almost a 5-level output, and at a higher Mi, a clear 7-level output is synthesized.
Both capacitor voltages are maintained as desired, and the output current follows the output voltage due to the purely resistive load. It is clearly observed from the results that SHE yields the optimal switching angles for minimum THD at a low switching frequency. The THD (%) is reduced as the lower-order harmonics (5th and 7th) are removed at a higher Mi. The proposed MLI configuration is thus suitable for use with a small filter and in applications such as solar, wind, and hybrid energy sources. Figure 5 shows the dynamic operational ability of the proposed MLI. Under a sudden change in load, Figure 5(a) shows that the load current changes smoothly from purely resistive to sinusoidal-like under inductive loading. The capacitor voltage ripples are also very small, as can be verified from the results.

A major, inevitable issue in SC circuits is the high current spikes (inrush current) during the capacitor charging process; all the SC MLIs published to date share this concern. Nevertheless, a recently developed structure in [24] employs a quasi-resonant cell on the input side that addresses the issue with the capacitor current. By selecting suitable values of a small inductor and capacitor (L_in and C_in), the quasi-resonant cell limits the capacitor inrush current during charging. The proposed topology, as presented in this manuscript, cannot completely eliminate the current spikes, though they can be reduced by connecting the quasi-resonant cell as in [24]. The resonant cell inductance is chosen considering the equivalent series resistance (R_eq) in the charging loop together with the equivalent capacitance (C_eq), as follows: L_in > (R_eq)^2 · C_eq / 4, which keeps the charging loop underdamped.

Simulation Analysis. Figures 5(c) and 5(d) show the capacitor currents along with the output voltage without and with the quasi-resonant cell, respectively. It is clear that the charging currents are quite high (≈50 times) in the normal operation of SC-type MLIs, even when the parasitic resistance in the charging path is considered. With the resonant cell, the capacitor current is drastically reduced without affecting the 7-L operation of the MLI. In the future, much effort needs to be devoted to addressing the issue of SC current spikes.

Experimental Analysis. The operation of the proposed structure is further verified experimentally on a low-scale 0.3 kW prototype. MOSFETs (IRF840) and diodes (MUR460) are used to build the prototype. An Arduino-based controller is used to control the switches, and the switching pulses are processed through a TLP250-based driver circuit. The driver amplifies the pulses from the controller and also isolates the control circuit from the power board. Different loads, both resistive and inductive, are taken into account to verify the operation of the proposed MLI. Figure 6 shows the test setup of the proposed 7-L circuit. Figure 7(a) shows the output under 90 Ω loading, and Figure 7(b) depicts an output clearly matching the simulation under dynamic, varying inductive loading. Under a change in load, the load voltage is stable and only the load current varies. Figure 7(c) depicts how the output voltage pattern changes as the switching angles change with the modulation index. It is noteworthy that the fundamental 7-L output is still achievable under a very low modulation index; the maximum positive voltage level is still obtainable.
Furthermore, under variation of the operating frequency, the 7-L output is still obtained, as shown in Figure 7(d); the capacitor voltage ripple changes smoothly under frequency doubling. The results verify the smooth operation of the proposed circuit, the self-balancing of the capacitor voltages, and the low ripple under severe dynamic operating conditions. Furthermore, the power loss of the proposed circuit is evaluated for the individual components considering 90 Ω-120 mH loading. The conduction and switching losses of the switches (Pc-s, Ps-s) and diodes (Pc-d, Ps-d) are illustrated in Figure 8(a). In general, the three major power loss components in the proposed SC MLI are the switching loss (Ps), the conduction loss (Pc), and the ripple power loss (Prip). The overall losses under different ratings of the MLI are depicted in Figure 8(b); the power rating is varied by changing the loading. The total power loss is about 9.8 W for 0.2 kW output and 17.4 W for a 0.3 kW power rating. The maximum evaluated efficiency is 96.51%, which may vary further with devices of different ratings.

Conclusion

In this paper, a 7-L SC MLI structure is designed based on the switched-capacitor concept. The proposed 7-L SC MLI structure requires only one DC source and a smaller number of switches. The capacitors are self-balanced, and the inverter generates an output voltage with three times the input voltage amplitude. The size of the capacitors can be optimized for high-frequency operation. A comparison is carried out with several MLIs from the literature in view of the number of devices, total standing voltage (TSV), boosting capacity, and capability of handling different loads, which verifies the optimality and advancement of the proposed structure. The switching operation is based on the SHE PWM technique, which ensures very low-loss operation at the fundamental frequency. Simulation and experimental results validate the suitability of producing a high-gain 7-level output under different operational modes. The proposed structure is highly applicable to low- and medium-power energy conversion applications.

Data Availability

The data used in this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

Yatindra Gopal conceptualised the study, developed the methodology, wrote the original draft, and validated the study. Kaibalya Prasad Panda performed the experimental validation and wrote the manuscript. Akanksha Kumari wrote the original draft, performed formal analysis, and validated the study. Julio C. Rosas-Caro supervised the study, performed formal analysis, and reviewed and edited the manuscript.
Epigenetic and genetic deregulation in cancer target distinct signaling pathway domains

Cancer is characterized by both genetic and epigenetic alterations. While cancer driver mutations and copy-number alterations have been studied at a systems-level, relatively little is known about the systems-level patterns exhibited by their epigenetic counterparts. Here we perform a pan-cancer wide systems-level analysis, mapping candidate cancer-driver DNA methylation (DNAm) alterations onto a human interactome. We demonstrate that functional DNAm alterations in cancer tend to map to nodes of lower connectivity and inter-connectivity, compared to the corresponding alterations at the genomic level. We find that epigenetic alterations are relatively over-represented in extracellular and transmembrane signaling domains, whereas cancer genes undergoing amplification or deletion tend to be enriched within the intracellular domain. A pan-cancer wide meta-analysis identifies WNT and chemokine signaling as two key pathways where epigenetic deregulation preferentially targets extracellular components. We further pinpoint specific chemokine ligands/receptors whose epigenetic deregulation associates with key epigenetic enzymes, representing potential targets for epigenetic therapy. Our results suggest that epigenetic deregulation in cancer not only targets tissue-specific transcription factors, but also modulates signaling within the extra-cellular domain, providing novel systems-level insight into the potentially distinctive roles of genetic and epigenetic alterations in cancer.

In contrast to genomic alterations, it is only more recently that studies have begun to explore how cancer-related DNAm aberrations map onto signaling pathways and protein interactomes. For instance, some previous studies have shown that cancer-associated DNAm changes tend to cluster in such PPI networks, allowing interactome hotspots of differential DNAm, or of simultaneous differential DNAm and mRNA expression, to be identified (6,17,18). In the context of aging, it has been found that age-associated DNAm drift occurs preferentially at genes of exceptionally low connectivity that occupy peripheral network positions, in stark contrast to other age-related genes, including longevity- and disease-associated genes (19). A similar pattern was observed by Cheng (20) in the context of differentially methylated genes associated with cancer survival. However, no study has yet conducted an in-depth comparison of the systems-level properties of epigenetic versus genetic alterations in cancer. The recent TCGA pan-cancer resource (21) now allows for such an in-depth comparison. Specifically, we decided to conduct a pan-cancer wide analysis at a systems-level, using a highly curated PPI network, in order to address the following unexplored questions. First, do network topological properties of functional DNAm aberrations in cancer differ from those of functional somatic copy-number alterations (SCNA) or those of cancer driver mutations? Second, do epigenetic and genetic driver alterations target different domains within the cell's signaling hierarchy? Third, are there specific signaling pathways that are preferentially deregulated epigenetically, irrespective of cancer type?

DNA methylation data. For the 10 cancer types mentioned above, DNAm data generated with the Illumina Infinium HumanMethylation450 BeadChip array were downloaded from the TCGA data portal. Probes with missing data (i.e. NAs) in more than 30% of the samples were removed. The remaining probes with NAs were imputed using the k-nearest neighbors (knn) (k=5) imputation procedure (33).
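As a concrete illustration of this filtering and imputation step, the sketch below drops probes with more than 30% missing values and imputes the rest with a k-nearest-neighbours imputer (k = 5). It uses scikit-learn's KNNImputer as a stand-in for the R routine cited in (33), and the probes-by-samples orientation is an assumption.

```python
import numpy as np
from sklearn.impute import KNNImputer

def filter_and_impute(beta, max_na_frac=0.30, k=5):
    """beta: probes x samples matrix of DNAm beta values, with NaN for NAs."""
    na_frac = np.isnan(beta).mean(axis=1)       # fraction of NAs per probe
    kept = beta[na_frac <= max_na_frac]         # drop probes with >30% NAs
    # Each remaining NaN is filled from the k nearest probes (rows) by
    # Euclidean distance over the mutually observed samples.
    return KNNImputer(n_neighbors=k).fit_transform(kept)

beta = np.array([[0.10, np.nan, 0.20],
                 [0.80, 0.90, np.nan],
                 [np.nan, np.nan, np.nan]])     # toy example; 3rd probe is dropped
print(filter_and_impute(beta))
```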
Subsequently, BMIQ was used to correct for the type II probe bias (34).

Somatic copy number data. For the 10 cancer types mentioned above, we downloaded TCGA level-3 copy number segmentation data, which were generated from the Affymetrix SNP 6.0 platform. We selected those files with probes sorted according to the hg19 reference genome, and with probes mapping to germline CNVs removed prior to segmentation. When calling alterations, thresholds were set based on the median of the log2 ratio for each array +2 or −2, computed using the top 50% of the probes (ordered by their log ratios), for calling gains and losses, respectively. The median of the log2 ratio +4 or −4 was used to call amplifications and deletions, respectively. In order to identify the genes and features altered by copy number aberrations, we searched for overlaps of segments with gene regions. The complete gene coordinates were given by hg19 using the R package TxDb.Hsapiens.UCSC.hg19.knownGene. A patient-by-gene call matrix was generated following a procedure similar to (2) to capture different perspectives of gene alterations, with values representing discrete copy number states. For each patient p and each gene g, we identified segments s that overlap g and assigned C(p, g) the copy number state of s. If gene g overlaps or is broken by a set of segments, S = s1, ..., sk, where k ≥ 2, the copy number state of the segment with maximal severity ('Neutral' < 'Gain/Loss' < 'Amplification/Deletion') was assigned, where ties were broken, in samples exhibiting both a loss and a gain, according to the maximal absolute value of the segmented mean. The density distributions of the non-zero entries of the copy number call matrices across the 10 cancer types are shown in Supplementary Figure S1.

Mutation data. For the ten cancer types mentioned above, all mutation annotation format files were downloaded from the TCGA.

Defining differentially methylated and differentially expressed genes

To assign a DNAm value to a given gene, we use the average value of probes mapping to within 200 bp of the transcription start site (TSS) of this gene. If no probes map to within 200 bp of the TSS, we use the average of probes mapping to the first exon of the gene. If such probes are also not present, we use the average of probes mapping to within 1500 bp of the TSS. Justification for this procedure is provided in (18). Probes mapping to the gene body are not used. Using this gene-based methylation value, we then compute moderated t-statistics using an empirical Bayes framework (32). The same empirical Bayes procedure was applied to the gene expression data. Methylation differences with a false discovery rate (FDR) <0.05 and with an absolute difference in mean methylation beta levels between the two groups of more than 0.1 were considered statistically significant. Gene expression differences with FDR <0.05 and with a log2 fold change between the two groups of more than 1 were considered statistically significant. Using both t-statistics for each gene, we then selected genes with opposite signs of the t-statistics, which indicates an anti-correlation between DNAm and mRNA expression, and further divided them into two groups based on the directionality of differential methylation: a hypermethylated group (HyperM) and a hypomethylated group (HypoM). Genes in each group were ranked according to the integrative statistic, as described in (18).
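A minimal sketch of this selection rule is given below, assuming per-gene moderated t-statistics, FDR values and effect sizes have already been computed; the column names and the sign convention (positive t = higher in cancer) are assumptions, and the regression-based filter described next is not included.

```python
import pandas as pd

def split_hyper_hypo(df, dbeta=0.1, lfc=1.0, fdr=0.05):
    """df columns (assumed): t_dnam, fdr_dnam, delta_beta, t_expr, fdr_expr, log2fc."""
    sig = (
        (df.fdr_dnam < fdr) & (df.delta_beta.abs() > dbeta) &
        (df.fdr_expr < fdr) & (df.log2fc.abs() > lfc) &
        (df.t_dnam * df.t_expr < 0)          # anti-correlation: opposite signs
    )
    hyper = df[sig & (df.t_dnam > 0)]        # hypermethylated and underexpressed
    hypo = df[sig & (df.t_dnam < 0)]         # hypomethylated and overexpressed
    return hyper, hypo
```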
Finally, we further filtered the gene lists using a multivariate regression framework of gene expression against DNAm and CNV as covariates (and using both normal and cancer samples), to select genes exhibiting a significant anti-correlation between mRNA expression and DNAm. This was done to ensure (i) that the anti-correlation between differential mRNA expression and differential DNAm is due to the same set of tumours, and (ii) that the anti-correlation between DNAm and mRNA expression cannot be explained by concomitant alterations at the CNV level.

Finding somatic copy number altered genes

After deriving the copy number call matrix, as described above, we developed a procedure to identify those SCNAs that are associated with a corresponding change in gene expression. Gaussian distributions were fitted to the log2 expression values for each gene and for each cancer type, using maximum likelihood estimates of the mean and variance. Based on this distribution, we derive a simplified vector for each gene, where samples with expression in the 5% left tail were marked as underexpressed and samples with expression in the 5% right tail were marked as overexpressed. Therefore, for each gene in each sample, we have information as to whether it defines an amplification/deletion and overexpression/underexpression event. From this, we generate two binary matrices: one is an Amplification/Overexpression call matrix, where a matrix entry is assigned 1 if it is both an amplification and an overexpression event, and 0 otherwise; the other is a Deletion/Underexpression call matrix, where a matrix entry is assigned 1 if it is both a deletion and an underexpression event, and 0 otherwise. For each call matrix, we select genes that have at least one non-zero entry and then rank the genes by the number of non-zero entries in decreasing order. The resulting two ordered gene lists correspond to SCN-gained and overexpressed genes (Amplification) and SCN-deleted and underexpressed genes (Deletion). Finally, we further filtered the gene lists using a multivariate regression framework of gene expression against DNAm and CNV as covariates (and using both normal and cancer samples), to select genes exhibiting a significant correlation between mRNA expression and CNV. This was done to ensure (i) that the correlation between differential mRNA expression and SCNA is due to the same set of tumours, and (ii) that the correlation between SCNA and mRNA expression cannot be explained by concomitant alterations at the DNAm level.

Finding significantly mutated genes

We use the MutSigCV software (35) to identify significantly mutated genes; it takes DNA replication time, expression and chromatin state into account when estimating the background mutation rate.
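The expression-tail calling used for the SCNA lists above can be sketched as follows; the discrete copy-number state encoding is an assumption, and scipy's norm.fit supplies the maximum-likelihood mean and variance.

```python
import numpy as np
from scipy.stats import norm

def tail_calls(log2_expr, alpha=0.05):
    """log2_expr: expression vector across samples for one gene.
    Returns boolean (underexpressed, overexpressed) masks from the 5% tails."""
    mu, sd = norm.fit(log2_expr)                       # MLE of mean and std
    lo, hi = norm.ppf(alpha, mu, sd), norm.ppf(1 - alpha, mu, sd)
    return log2_expr < lo, log2_expr > hi

def amp_overexpr_matrix(expr, cn_state, amp_code=2):
    """expr, cn_state: genes x samples arrays; cn_state == amp_code marks
    amplification (assumed encoding). Entry is 1 when a sample is both
    amplified and overexpressed for that gene."""
    calls = np.zeros_like(cn_state, dtype=int)
    for g in range(expr.shape[0]):
        _, over = tail_calls(expr[g])
        calls[g] = (cn_state[g] == amp_code) & over
    return calls
```

The Deletion/Underexpression matrix is built symmetrically from the left tail and the deletion code.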
Finding epigenetically regulated tissue-specific genes

We generated the epigenetically regulated tissue-specific gene lists for each of the 10 tissue types by comparing the DNAm data as well as the gene expression data of normal samples from one tissue type with those of the other nine tissue types, using an empirical Bayes framework (32). Methylation differences with FDR <0.05 and at least a 30% mean methylation difference between the two groups were considered statistically significant. Gene expression differences with FDR <0.05 and a log2 fold change between the two groups of more than 2 were considered statistically significant. Using both t-statistics for each gene, we then selected genes with opposite signs of the t-statistics, which indicates an anti-correlation between DNAm and mRNA, and further divided them into two groups based on the directionality of differential methylation: a hypermethylated group (HyperM) and a hypomethylated group (HypoM). Genes in each group were ranked according to the integrative statistic, as described in (18). The significance of the overlap between the epigenetically regulated tissue-specific gene lists and the epigenetically regulated cancer-altered gene lists for each cancer type was evaluated using a one-tailed Fisher's exact test.

Protein interaction network (PIN)

We used the March 2015 version of the Pathway Commons (PC2) database (36) to build the PIN. In detail, this was built by integrating the Human Protein Reference Database (HPRD), the National Cancer Institute Nature Pathway Interaction Database (NCI-PID), the IntAct interactome database and the Biological General Repository for Interaction Datasets (BioGRID). Protein interactions included stable interactions, like those defining protein complexes, as well as transient interactions, like the post-translational modifications and enzymatic reactions found in signal transduction pathways. We focused on the largest connected component of genes with Entrez ID identifiers, which amounted to a connected network of 15 728 nodes and 1 910 396 interactions. This PIN was further pruned by removing edges that were not consistent with the signaling domain hierarchy structure (see below for the definition of signaling domains). Thus, only edges with corresponding end nodes in the following signaling domain combinations were allowed: EC-EC, EC-MR, MR-IC and IC-IC, where EC = extracellular, IC = intra-cellular and MR = membrane-receptor. This resulted in a reduced PIN of 10 726 nodes and 1 306 162 interactions (maximally connected component). The sparsity (i.e. the fraction of edges to the total number of possible edges) of this PIN is 0.023.

Definition of signaling domains

Following (37), we annotated genes into five distinct signaling domains: growth modulators (GM), secreted factors (SF), membrane receptors (MR), intracellular receptor substrates (ICRS) and intracellular non-receptor substrates (ICNRS). These assignments were made using the main cellular localization data of the corresponding proteins, as given in the HPRD database. Specifically, we first defined an extra-cellular domain as all those GO-terms containing the following terms: 'Extracellular', 'Cell junction', 'Synapse', 'Dendrite', 'Secreted', 'Synaptic vesicle'. The IC class was subdivided into the ICRS and ICNRS subclasses, according to whether the IC-annotated protein interacts with an MR (if yes, then ICRS) or not (ICNRS). Similarly, the EC class was subdivided further into the GM and SF subclasses, according to whether the EC-annotated protein interacts with an MR (SF) or not (GM). Because genes may be annotated to multiple signaling domains, for some analyses we used a coarse-grained 2-domain assignment, whereby a gene annotated to both the extracellular and transmembrane domains was allocated to 'EC', and a gene annotated to both the transmembrane and intracellular domains was allocated to 'IC'.
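A minimal sketch of this domain-consistent pruning, assuming a networkx graph whose nodes carry a coarse-grained 'domain' attribute (EC, MR or IC):

```python
import networkx as nx

# Allowed endpoint-domain combinations: EC-EC, EC-MR, MR-IC, IC-IC.
ALLOWED = {frozenset({"EC"}), frozenset({"EC", "MR"}),
           frozenset({"MR", "IC"}), frozenset({"IC"})}

def prune_pin(g):
    """Remove edges inconsistent with the signaling hierarchy, then return
    the maximally connected component."""
    bad = [(u, v) for u, v in g.edges()
           if frozenset({g.nodes[u]["domain"], g.nodes[v]["domain"]}) not in ALLOWED]
    g.remove_edges_from(bad)
    giant = max(nx.connected_components(g), key=len)
    return g.subgraph(giant).copy()
```

Note that a frozenset of two identical domains collapses to a singleton, so EC-EC and IC-IC edges are matched by the singleton entries.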
Comparison of shortest path distances among genes in the different alteration groups

To assess the inter-connectivity of the genes in each alteration group in the PIN, we compared the distributions of the shortest path lengths between every pair of genes within an alteration group and within each cancer type. We selected the top 100 ranked genes of each alteration type, including (i) hypermethylated and underexpressed genes (HyperM), (ii) hypomethylated and overexpressed genes (HypoM), (iii) SCN-gained and overexpressed genes (Amplification), (iv) SCN-deleted and underexpressed genes (Deletion) and (v) mutated genes (Mutation). The shortest path length was estimated for each gene pair in the top-ranked list, and the average shortest path lengths were compared between different alteration groups within one cancer type. The comparison was done by computing P-values using a one-tailed paired Wilcoxon test between any two different alteration types and for each of the 10 cancer types separately. For HyperM versus the other four groups, we tested whether the HyperM group has a significantly larger average shortest path length than the other four groups. For HypoM versus the other three groups (except HyperM), we tested whether the HypoM group has a significantly larger average shortest path length than the other three alteration groups. For Mutation versus Amplification/Deletion, we tested whether the Mutation group has a significantly larger average shortest path length than the Amplification/Deletion groups. For Amplification versus Deletion, we tested whether the Amplification group has a significantly larger average shortest path length than the Deletion group.

Comparison of signaling domain distribution within the PIN

We performed the enrichment analysis of signaling domains for each alteration group by comparing the number of observed genes in each domain with the number of expected genes in each domain. This expected number is the number of genes in an alteration group multiplied by the percentage of genes in each signaling domain. Here we combine the extracellular and transmembrane domains into one large domain (EC+MR) (see the subsection on signaling domain definitions), and use the intracellular domain as the other domain. The odds ratio (OR) and P-values were calculated using the one-tailed Fisher's exact test. These analyses were done in two different ways: (i) using all the significant genes in each alteration group, and (ii) selecting the same number of top-ranked genes for each alteration group, chosen as the minimum number over all five groups as determined in (i).

Comparison of signaling domain distributions in specific pathways

We downloaded signaling pathway information from MSigDB (38). For each signaling pathway, we calculated the number of genes undergoing functional DNAm or SCN alterations which mapped into either the extracellular (EC) or intracellular (IC) domain. A P-value was computed using a one-tailed Fisher's exact test to determine whether genes with functional DNAm aberrations were enriched in the EC domain compared with SCNAs. A meta-analysis P-value was computed by Fisher's combined test for each signaling pathway across the 10 cancer types, and signaling pathways with meta-analysis P-values below 0.05 were deemed to exhibit a significant differential signaling domain distribution between functional DNAm and SCN alterations. To identify signaling pathways that exhibit functional DNAm alterations preferentially in the extracellular domain, we calculated the number of genes undergoing functional DNAm alterations in the extracellular and intracellular domains, respectively, and compared them to the numbers of genes in these domains not exhibiting functional DNAm alterations. A P-value was computed using a one-tailed Fisher's exact test, and a meta-analysis P-value was computed using Fisher's combined test for each signaling pathway across the 10 cancer types.
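The per-pathway enrichment test and the Fisher combined meta-analysis can be sketched as follows; the 2x2 table layout and the example counts are assumptions.

```python
from scipy.stats import fisher_exact, combine_pvalues

def ec_enrichment_p(dnam_ec, dnam_ic, scna_ec, scna_ic):
    """One-tailed Fisher's exact test that DNAm-altered genes in a pathway
    are relatively more extracellular than SCN-altered genes."""
    table = [[dnam_ec, dnam_ic],
             [scna_ec, scna_ic]]
    return fisher_exact(table, alternative="greater")[1]

def meta_p(per_cancer_pvalues):
    """Fisher's combined probability test across cancer types."""
    return combine_pvalues(per_cancer_pvalues, method="fisher")[1]

# Hypothetical counts for one pathway across three cancer types:
ps = [ec_enrichment_p(12, 3, 4, 10),
      ec_enrichment_p(8, 2, 3, 7),
      ec_enrichment_p(5, 4, 2, 6)]
print(meta_p(ps))
```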
Construction of putative DNAm- and SCNA-driven cancer gene lists

We downloaded TCGA Illumina 450k DNAm, SCNA and RNA-Seq gene expression data for a total of 10 cancer types for which there were reasonable numbers of normal samples ('Materials and Methods' section, Supplementary Table S1). We asked if functional DNAm alterations in cancer are distinguishable from functional SCNAs in the context of how they map onto a highly curated PPI network ('Materials and Methods' section). This analysis was performed by considering four separate classes of putative cancer driver genes: (i) genes which exhibit a hypermethylated promoter and underexpression in cancer, (ii) genes which exhibit a hypomethylated promoter and overexpression in cancer, (iii) genes with SCN loss and underexpression in cancer and (iv) genes with SCN gain and overexpression in cancer. The identification of these putative cancer driver gene sets used state-of-the-art methods, which have previously been used to successfully identify known driver genes at both the SCN and DNAm levels ('Materials and Methods' section) (6,18,39). For instance, we applied the method used in the breast cancer METABRIC study of Curtis et al. (39) to identify SCN cancer drivers in the TCGA breast cancer set, revealing a highly significant overlap of the TCGA-derived driver list with the one derived from METABRIC (Supplementary Figure S2). In the case of DNAm, we ignored genes where the promoter DNAm change was not anti-correlated with the gene expression change, since positive correlations represent the minority of associations (18) and are less likely to be linked causally (40). We note that this approach of focusing on anti-correlated patterns between promoter DNAm and gene expression was used by us previously, successfully identifying a causal driver of endometrial cancer, the causal association of which was validated experimentally (6,18). Besides imposing stringent levels of statistical significance, we also demanded that differences in DNAm and mRNA expression between normal and cancer be at least 10% and larger than 2-fold, respectively ('Materials and Methods' section). Since DNAm and SCN variation can simultaneously affect gene expression, our selection procedure further filtered genes according to statistical significance in multivariate regression models with mRNA expression as the response variable and DNAm and SCNA as predictors ('Materials and Methods' section). The resulting sets and numbers of genes for each of the four putative cancer driver classes in each of the 10 TCGA cancer types are listed in Supplementary Tables S2-5. The ranges in the number of genes in each class across cancer types were 16-642 (HyperM), 94-368 (HypoM), 205-1877 (Amplification) and 161-1242 (Deletion). We note that for a given TCGA cancer type, the overlap between these four gene lists was minimal, especially between the DNAm and SCNA groups (Supplementary Figure S3).

Functional DNA methylation alterations in cancer exhibit lower interactome connectivity compared to corresponding SCNAs and mutations

In order to objectively compare the topological properties of these different putative cancer driver gene classes in a PPI network, we need to select a given identical number of top-ranked genes from each class.
For a given TCGA cancer type, we thus mapped the top 100 ranked genes from each class onto our PPI network of 10 726 nodes and 1 306 162 edges ('Materials and Methods' section). For each set of top-ranked genes, we studied the distribution of their connectivities/degrees (i.e. the number of nearest neighbors of each gene in the list) in the network. In each of the 10 cancer types we observed a highly statistically significant difference (Wilcoxon rank-sum test P < 1e-5), with genes undergoing differential methylation and differential expression exhibiting a significantly lower connectivity compared to genes undergoing simultaneous SCN and gene expression alterations (Figure 1; Supplementary Figures S4 and S5). We also observed statistically significant differences between the hypermethylated/underexpressed gene class and the hypomethylated/overexpressed class in five cancer types, with the former exhibiting lower connectivity (Table 1). Next, we obtained the distribution of the shortest path lengths between all gene pairs within each of the four classes and asked if their distributions differed. This assesses how close the corresponding genes in each set are to each other in the network. We also included the top 100 ranked genes based on mutational frequency ('Materials and Methods' section, Supplementary Table S6). This analysis showed that, across all 10 cancer types, genes undergoing differential methylation and differential expression generally exhibited longer shortest path lengths, compared to genes undergoing simultaneous SCN and gene expression alterations, or to frequently mutated genes (Supplementary Figure S6 and Supplementary Table S7). We note that the longer shortest path lengths exhibited by genes undergoing differential methylation and differential expression are consistent with their lower node connectivity.

Figure 1. Functional epigenetic alterations exhibit lower interactome connectivity than their SCNA counterparts. Left panels: for two cancer types (COAD and HNSC), the PPI network is depicted (interactions have been suppressed) with nodes (genes/proteins) colored according to the type of functional alteration and with the radial distance from the center indicating their connectivity (nodes in the center have higher connectivity and connectivity decreases radially outward). We defined for each cancer type four types of functional alterations at the gene level, comprising the 100 top-ranked (i) hypermethylated and underexpressed genes (HyperM), (ii) hypomethylated and overexpressed genes (HypoM), (iii) copy-number-gained and overexpressed genes (Amplification) and (iv) CN-deleted and underexpressed genes (Deletion). Classes (i)+(ii) are shown here as one group (DNAm), indicated in magenta, while classes (iii)+(iv) represent another group (CNV), indicated in cyan. Right panels: boxplots of the connectivity (degree) for the same two groups of genes. The P-value is from a Wilcoxon rank-sum test comparing the connectivity of genes exhibiting simultaneous differential methylation and differential expression (i.e. classes (i)+(ii)) versus the connectivity of genes exhibiting simultaneous CNV and differential expression (classes (iii)+(iv)). Analogous plots for all other cancer types are shown in Supplementary Figure S4.
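The two network comparisons above (degree and pairwise shortest-path length) can be sketched as follows, assuming a networkx graph and gene lists as inputs; scipy's mannwhitneyu serves as the one-tailed rank-sum test, and the exact pairing scheme used in the paper is not reproduced.

```python
import itertools
import networkx as nx
from scipy.stats import mannwhitneyu

def degrees(g, genes):
    return [g.degree(x) for x in genes if x in g]

def pairwise_sp(g, genes):
    """All pairwise shortest-path lengths within a gene set (reachable pairs)."""
    genes = [x for x in genes if x in g]
    out = []
    for a, b in itertools.combinations(genes, 2):
        if nx.has_path(g, a, b):
            out.append(nx.shortest_path_length(g, a, b))
    return out

def lower_connectivity_p(g, dnam_genes, scna_genes):
    # One-tailed: DNAm-altered genes have lower degree than SCNA-altered genes.
    return mannwhitneyu(degrees(g, dnam_genes), degrees(g, scna_genes),
                        alternative="less").pvalue

def longer_paths_p(g, dnam_genes, scna_genes):
    # One-tailed: DNAm-altered genes have longer pairwise shortest paths.
    return mannwhitneyu(pairwise_sp(g, dnam_genes), pairwise_sp(g, scna_genes),
                        alternative="greater").pvalue
```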
Functional epigenetic and genetic cancer alterations map preferentially into different signaling pathway domains

Important properties such as gene expression variance are known to vary according to a gene's signaling domain (37). Following Komurov (37), we henceforth categorized all genes of our PPI network into five signaling hierarchy classes: (i) growth modulators (GM), (ii) secreted factors (SF), (iii) membrane receptors (MR), (iv) intracellular receptor substrates (ICRS) and (v) intracellular non-receptor substrates (ICNRS) ('Materials and Methods' section). We validated our signaling domain associations with an independent gene-family annotation from the Molecular Signatures Database (MSigDB) (38), which showed that GMs and SFs were mostly growth factors and cytokines, MRs were mostly cell surface differentiation markers and receptor tyrosine kinases, and ICRS were mostly kinases, whilst transcription factors dominated the ICNRS class (Supplementary Figure S7). Each of the five previously considered gene classes (Supplementary Tables S2-6) was then mapped onto these signaling domains. Combining GM and SF into an extra-cellular (EC) domain class, as well as ICRS and ICNRS into an intra-cellular (IC) category, and further combining the EC class with the transmembrane (MR) class, we observed a striking difference between the patterns of enrichment of the various driver alterations in relation to whether they mapped to the IC or EC+MR classes (Figure 2). Specifically, in 9/10 cancer types, we observed that genes undergoing hypermethylation and underexpression were significantly (Fisher test P < 0.05) more likely to map to the EC+MR signaling domain than to the IC domain (Figure 2A). This pattern was generally stronger for hypomethylated and overexpressed genes, with 10/10 cancer types exhibiting significance at the P < 0.05 level (Figure 2B). In contrast, SCNAs revealed exactly the opposite pattern, with deleted underexpressed genes mapping more likely to the IC domain in 9/10 cancer types (Figure 2C) and with amplified overexpressed genes doing so also in 9/10 cancer types (Figure 2D). Genetic mutations did not reveal a consistent pattern of differential enrichment among signaling domains (Figure 2E). Meta-analysis P-values confirmed that all of these associations were highly significant across cancer types (Supplementary Table S8). To ensure that these results were not biased by different numbers of genes in each molecular alteration group, we repeated the analysis setting the number of genes in each group to be the same (the smallest number among all five gene groups), confirming that the results are robust (Supplementary Figure S8 and Supplementary Table S9). Although the previous results were obtained on sets of genes that exhibit differences between normal and cancer tissue, we asked if the enrichment of epigenetically altered genes within the EC+MR class also holds for epigenetically regulated tissue-specific genes in a given normal tissue type. To assess this, we derived for each tissue type a set of genes which were hypermethylated and underexpressed, or hypomethylated and overexpressed, in that tissue compared to the other nine tissue types considered here ('Materials and Methods' section). We observed that these tissue-specific DNAm and mRNA expression altered genes exhibited significant overlap with the previously derived cancer
altered genes (Supplementary Figure S9) and that these tissue-specific genes were therefore also enriched among the EC+MR signaling domain class (Supplementary Figure S10 and Supplementary Table S10). These data are consistent with the view that tissue-specific genes are often differentially expressed in cancer and that this deregulation is associated with epigenetic alterations (41).

Pan-cancer wide analysis identifies signaling pathways exhibiting differential signaling domain enrichment of epigenetic versus genetic alterations

In order to identify specific signaling pathways which exhibit differential signaling domain enrichment between DNAm and SCNAs, we computed the number of genes undergoing functional DNAm or SCN alterations in each major signaling pathway domain and for all major signaling pathways ('Materials and Methods' section). For each cancer type and signaling pathway, we obtained a P-value to test for enrichment of functional DNAm alterations in the extracellular domain. In a meta-analysis over all 10 cancer types, specific signaling pathways emerged as exhibiting a consistent differential enrichment pattern across cancer types (Table 2). Among the most highly ranked pathways, we found G-protein coupled receptor (GPCR) signaling, immune system and chemokine signaling, and JAK-STAT signaling (Table 2). WNT signaling, a hotspot of age-associated differential DNAm in normal tissue (17), was also one of the highest ranked pathways, attaining significant P-values in 7/10 tumor types (combined Fisher test P < 0.0001) (Table 2). Focusing on the canonical WNT signaling pathway, we confirmed a clear differential enrichment across signaling domains, with most of the epigenetic alterations occurring in the extra-cellular domain (Figure 3A).

Figure 2. Functional epigenetic alterations preferentially target genes in the extra-cellular/transmembrane domains, with SCNA counterparts preferentially mapping to the intra-cellular domain. Top row: barplots comparing the observed and expected numbers of genes for two different signaling domains (extracellular + transmembrane receptor: EC+MR, and intra-cellular: IC) for the HyperM group (hypermethylated and underexpressed genes) across the 10 TCGA cancer types. P-values are from a one-tailed Fisher's exact test; the alternative hypothesis is that the odds ratio of finding more genes mapping to the EC+MR domain is >1. Second row: as before, but for the HypoM group (hypomethylated and overexpressed genes); the alternative hypothesis is that the odds ratio of finding more genes mapping to the EC+MR domain is >1. Middle row: as before, but for the Deletion group (SCN-deleted and underexpressed genes); the alternative hypothesis is that the odds ratio of finding more genes mapping to the EC+MR domain is <1. Second-to-last row: as before, but for the Amplification group (SCN-gained and overexpressed genes); the alternative hypothesis is that the odds ratio of finding more genes mapping to the EC+MR domain is <1. Last row: as before, but for the Mutation group (mutated genes); the alternative hypothesis is that the odds ratio of finding more genes mapping to the EC+MR domain is >1.
Table 1. One-tailed Wilcoxon rank-sum test P-values comparing the connectivity (degree) distributions of the top-ranked 100 genes between the molecular alteration groups, for each of the 10 TCGA cancer types. For each cancer type, the alternative hypothesis being tested is that the connectivity of the gene class in the column is smaller than that of the gene class indicated in the row.

Aggregating the numbers of alterations across all 10 cancer types further confirmed a strong differential enrichment within the WNT signaling pathway, the chemokine signaling pathway and the JAK-STAT signaling pathway (Figure 3B). We note that many of the identified signaling pathways, including WNT signaling, exhibited an enrichment toward functional DNAm alterations in the extracellular domain regardless of the genomic pattern of alteration (Supplementary Table S11). Importantly, we did not observe any signaling pathway to be significant if we tested for a reverse enrichment pattern, i.e. one with more functional DNAm alterations in the intra-cellular domain, either in comparison to genes undergoing functional SCNAs or not (data not shown), further supporting the view that cancer cells exhibit a preference for extracellular and transmembrane genes to undergo epigenetic deregulation. Besides WNT signaling, chemokine signaling is also thought to play a major role in cancer progression, by upsetting the balance between a favorable Th1-type and an adverse Th2-type immune response (42-44). Mapping the functional alterations across cancer types onto a global chemokine signaling pathway confirmed a striking differential enrichment, with functional epigenetic deregulation occurring mostly in the extracellular domain (Figure 4A). We verified for individual chemokines and chemokine receptors that the patterns of epigenetic deregulation were highly consistent between cancer types (Figure 4B, empirical P < 0.001), demonstrating that these patterns of deregulation transcend the tissue/cancer type. Two important epigenetic enzymes which are universally overexpressed in cancer (45), and which are known to influence DNAm levels, are DNMT1 and EZH2 (46). Given their role in suppressing specific Th1-type chemokines in ovarian cancer (47), we asked if the DNAm patterns of epigenetically deregulated genes in the chemokine signaling pathway were significantly correlated (or anti-correlated) with expression of either DNMT1 or EZH2, and how this varied across cancer types. Interestingly, this revealed that some chemokines and chemokine receptors were generally always either correlated or anti-correlated with expression of these two enzymes across cancer types (Figure 4C), pointing toward universal patterns of co-expression with key epigenetic enzymes. Interestingly, chemokines or chemokine receptors exhibiting consistent hypermethylation and underexpression in cancer exhibited expression patterns across tumors that were more likely to be consistently negatively correlated with expression of EZH2 or DNMT1 (or both) (Figure 4B and C). In contrast, for those exhibiting consistent hypomethylation/overexpression in cancer, their expression across tumors was more likely to be consistently positively correlated with EZH2 or DNMT1 (or both). We note that, of the many genes showing a significant and consistent correlation with expression of EZH2/DNMT1, about half (e.g.
CCL14, CCL15, CXCL9, CXCL10, CXCL11, CXCR3, CXCR6) did not have any 450k probe mapping to their TSS200, first exon or TSS1500 regions, not allowing DNAm changes around the promoter to be assessed.

DISCUSSION

Here we have conducted a systems-level comparative analysis of functional DNAm and SCN alterations, including mutations, in cancer. Our three key findings, (i) that functional DNAm alterations exhibit a significantly lower connectivity compared to functional SCNAs and mutations, (ii) that functional DNAm alterations tend to target genes in the extracellular and transmembrane domains and (iii) that there exist specific signaling pathways (e.g. chemokine and WNT signaling) which exhibit such preferential epigenetic deregulation in the extracellular domain independently of cancer type, shed novel insight into the potentially distinctive roles of these alteration types in cancer. Previous studies have shown that DNAm changes in cancer and aging are enriched for bivalently and PRC2-marked genes, which in turn are highly enriched for developmental transcription factors (TFs) (48-52). These TFs occupy peripheral positions in a PPI network like the one considered here, which does not include explicit regulatory protein-DNA interactions. However, the fact that TFs map to the periphery of our PPI network does not imply that functional DNAm alterations in cancer would also occupy peripheral positions, because most of the PRC2-marked TFs undergoing promoter DNAm in cancer are not altered at the expression level (as they are generally not expressed in the normal tissue to begin with) (53). Thus, it is not a foregone conclusion that the subset of genes undergoing epigenetic deregulation in cancer would necessarily mark nodes of low connectivity. Indeed, our second key finding indicates that the lower connectivity of functional DNAm alterations in cancer is driven mainly by genes encoding growth modulators and secreted factors. Interestingly, a similar enrichment for genes in the extracellular domain was also observed for tissue-specific genes whose tissue-specific expression level is strongly associated with the degree of DNAm at their promoter. We observed that this similar enrichment can be explained by the considerable overlap between the epigenetically regulated tissue-specific genes and those genes undergoing simultaneous differential methylation and expression in cancer, consistent with previous findings (41). Indeed, one of the main cancer hallmarks is a lack of differentiation, so it should not be surprising that tissue-specific genes are preferentially altered in cancer. Hence, our observation that functional DNAm alterations in cancer are enriched within the extracellular domain can be partially explained by the corresponding enrichment of tissue-specific genes. The enrichment of functional DNAm alterations within the extracellular domain, in contrast to SCNAs (which were over-represented in the intracellular space) and to genetic mutations (which did not exhibit any differential enrichment pattern), was highly consistent across cancer types, attesting to its biological significance. Our third key finding showed that there exist specific signaling pathways which are more prone to epigenetic deregulation in their extracellular signaling domain, irrespective of cancer type. This included two signaling pathways of critical importance in carcinogenesis: WNT and chemokine signaling (44,54).
We stress that our observation that these specific pathways are prone to epigenetic deregulation irrespective of cancer type is, to the best of our knowledge, an entirely novel insight. It is important, because evidence is mounting that epigenetic alterations, or genetic modulation of epigenetic regulators, also contribute to carcinogenesis (1,6,55,56). Like genetic mutations and somatic copy-number changes, epigenetic alterations also accrue in normal cells as a function of age and as a function of exposure to cancer risk factors. However, because the epigenome is more easily modulated than the genome, the epigenome is the prime candidate to mediate the effects of environmental exposures (57). These exposures are, by definition, cell-extrinsic, mediated by alterations in the environmental niche in which the adult stem cells of the underlying tissue reside. It is therefore plausible that cellular adaptation to extra-cellular stresses would involve a mechanism that targets the proteins that mediate the extra-cellular signals. Although signal transduction is a complex biological process, involving proteins at every layer of the signaling domain hierarchy, it can be argued that the most direct means of adapting to specific extra-cellular signals is through modulation of extracellular factors and, to a lesser degree, of transmembrane receptors. Indeed, it has already been demonstrated that expression variability, as assessed across a large number of different normal tissue types, is maximal for genes whose main cellular localization at the protein level is in the extra-cellular domain (37). Many expression markers of specific cell types also map to the cell surface. As we have shown here, the subset of genes in the extracellular and transmembrane domains which become functionally altered in cancer appear to do so preferentially through alterations in DNAm. Thus, epigenetic deregulation of extracellular signaling domain genes in cancer may reflect the adaptation of cancer cells to a selection process driven by specific environmental stresses. This interpretation is strongly supported in the case of the WNT signaling pathway, as many previous studies have demonstrated that WNT activity in epithelial stem cells is controlled by cell-extrinsic factors and that modulation of WNT activity affects the sensitivity of cells to DNA damage, thus linking epigenetic deregulation, which may happen early in carcinogenesis, to an increased predisposition to acquire genetic alterations (58-61). The role of the immune response in controlling the risk of distant metastasis, and hence of clinical outcome in cancer, is well established (62,63). A long-standing observation, supported by analysis of gene expression data, is that a T-helper-1 type immune response is generally associated with a favorable prognosis, in contrast to an opposing macrophage polarization program which promotes an unfavorable T-helper-2 type response (42,43). The important role of epigenetics in shaping the type of immune response in the tumor microenvironment was recently demonstrated by Peng et al. (47), where it was shown how epigenetically mediated silencing of specific Th1-type chemokines could promote ovarian cancer progression and lessen the therapeutic efficacy of programmed death-ligand 1 checkpoint blockade. Unfortunately, the specific chemokine ligands considered by Peng et al. (e.g. CXCL9 and CXCL10) do not have 450k probes mapping to their promoters, and our study also excluded ovarian cancer due to the lack of an appropriate normal reference.
Nevertheless, our pancancer wide analysis of the chemokine signaling pathway revealed a striking pattern of epigenetic deregulation, with several chemokines/chemokine receptors exhibiting consistent hypermethylation and underexpression in cancer, also exhibiting expression patterns (across tumors) that correlated negatively with either DNMT1 or EZH2 (or both). It is very likely that these chemokine genes play a tumor suppressor role in cancer, and their consistent negative correlation with expression of epigenetic enzymes such as DNMT1 or EZH2 suggests that their underexpression may be under epigenetic control. For instance, our analysis identified promoter hypermethylation and underexpression of the ligand CXCL12 in six cancer types, and previous studies have reported epigenetically induced silencing of this gene in breast cancer (64), colon cancer (65) and non-small cell lung cancers (66). Hypermethylation of CXCL12 in non-small cell lung cancer has also been reported to be a poor prognostic marker (66). Another interesting chemokine ligand is CXCL14, which we observed to be hypermethylated and underexpressed in two cancer types (breast and colon), but which exhibited a distinctive anti-correlative expression pattern with EZH2/DNMT1 in most cancer types. Supporting this, epigenetic silencing of CXCL14 has been found to promote progression of breast (67), colorectal (68) as well as gastric cancer (69). In stark contrast to DNAm, functional SCNAs appear to preferentially target genes in the intra-cellular domain, affecting central processes such as the cell-cycle. Kinases, phosphatases and other intra-cellular receptor substrates are characterized by a significantly higher level of signaling promiscuity and centrality. Disruption of genes in this signaling domain may contribute to increased cellproliferation, but largely also toward increased cellular resistance and robustness (70,71). In summary, this work exposes a deep, subtle difference between functional epigenetic and genetic alterations in cancer, suggesting that these molecular alterations may contribute in distinct ways to the carcinogenic process.
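As a sketch of the anti-correlation screen described above, the snippet below flags genes whose expression correlates negatively with DNMT1 or EZH2 across tumors; the input file, gene list and significance cutoff are assumptions for illustration, not the study's actual pipeline.

import pandas as pd
from scipy.stats import pearsonr

# Hypothetical genes x samples expression matrix (rows indexed by gene symbol).
expr = pd.read_csv("tumour_expression.csv", index_col=0)
candidates = ["CXCL12", "CXCL14", "CXCL9", "CXCL10"]

for enzyme in ["DNMT1", "EZH2"]:
    for gene in candidates:
        if gene in expr.index:
            r, p = pearsonr(expr.loc[gene], expr.loc[enzyme])
            if r < 0 and p < 0.05:   # illustrative cutoff
                print(f"{gene} vs {enzyme}: r = {r:.2f}, P = {p:.2g}")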
2018-04-03T01:50:39.290Z
2016-11-28T00:00:00.000
{ "year": 2016, "sha1": "c8d2360f3d84f6a348152558dbbdeeab5e15034c", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/nar/article-pdf/45/2/583/9939695/gkw1100.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c8d2360f3d84f6a348152558dbbdeeab5e15034c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
233409692
pes2o/s2orc
v3-fos-license
Accuracy and reliability of measurements obtained with a noncontact tono-pachymeter for clinical use in mass screening We evaluated the reliability and accuracy of the noncontact CT-1P tono-pachymeter (Topcon, Japan) in terms of intraocular pressure (IOP) and central corneal thickness (CCT). One hundred sixty-three healthy participants and 33 patients with open angle glaucoma were enrolled. IOPs were measured by CT-1P (T-IOP) and Goldmann applanation tonometer (G-IOP), and CCTs were measured by the CT-1P (T-CCT) and an ultrasound pachymeter (US-CCT). Both CCT instrument-adjusted (T-IOP-C) and unadjusted T-IOPs (T-IOP-NC) were included. Pearson correlation coefficients and biases assessed with Bland–Altman analysis with 95% confidence interval (CI) were calculated for reliability evaluation. Intrasession repeatability was excellent for both T-IOP (intraclass correlation coefficient [ICC] 0.91) and T-CCT (ICC 0.98). Intersession reproducibility was also excellent for T-CCT (ICC 0.94). T-IOP-NC and T-IOP-C both showed significant correlations with G-IOP (r = 0.801, P < 0.001 and r = 0.658, P < 0.001, respectively). T-CCT was also strongly correlated with US-CCT (r = 0.958; P < 0.001). T-IOP-NC and T-IOP-C both showed a positive bias (1.37 mmHg, 95% CI [1.14, 1.61] and 2.77 mmHg, 95% CI [2.49, 3.05], respectively). T-CCT showed a negative bias of −17.3 µm (95% CI [−18.8, −15.8]). With cautious interpretation, the CT-1P may offer good feasibility for IOP and CCT measurement in screening centers. The Topcon CT-1P (Topcon Inc., Tokyo, Japan) is a fully automated, noncontact tono-pachymeter that provides noncontact measurements of IOP. CCT can also be measured using the pachymetry feature, and the instrument automatically provides a CCT-adjusted IOP. There are a few previous reports that have documented the reliability of the CT-1P and its comparison with other IOP or CCT measurement methods. According to Bang et al., a significant positive correlation was shown between the IOP values obtained with GAT and the CT-1P, but the IOP measured with CT-1P tended to be higher than that measured with GAT (mean bias = 0.48 mmHg) 13 . In terms of CCT, the CT-1P tono-pachymeter tended to underestimate CCT measurements with respect to those of the Scheimpflug system, anterior segment optical coherence tomography (AS-OCT) device, and US pachymetry 14 . To the best of our knowledge, however, there are no reports investigating the repeatability and reproducibility of this tono-pachymeter. The purpose of this study was to evaluate the repeatability, reproducibility, and accuracy of the CT-1P with regard to IOP and CCT measurements and to compare these measurements with those obtained from GAT and US pachymetry. Results Subject demographics. A total of 196 eyes from 196 subjects were included in this study. Among them, 163 subjects were healthy controls enrolled from a glaucoma screening program, and 33 glaucoma patients were recruited from the Glaucoma Outpatient Clinic at the Seoul National University Hospital (SNUH) Healthcare System Gangnam Center (HSGC). The primary open angle glaucoma (POAG) group (mean age: 55.0 ± 8.0 years) was significantly older than the normal group (mean age: 51.7 ± 11.2; P = 0.046). Eighty-six women (43.8%) and 110 men were recruited; the sex distribution was not significantly different between the two groups (P = 1.000).
There were no significant differences in the IOP measured by the CT-1P without (T-IOP-NC) or with correction for CCT (T-IOP-C), the IOP measured by GAT (G-IOP), the CCT measured by the CT-1P (T-CCT), or the CCT measured by US pachymetry (US-CCT) between the two groups. A detailed description of the clinical characteristics of the study population is provided in Table 1. Repeatability and reproducibility. The mean and standard deviation (SD) of the first and second T-IOP-NC values acquired were 15.1 ± 2.8 mmHg and 15.2 ± 2.8 mmHg, respectively. The mean and SD of the first and second CCTs acquired were 517.1 ± 35.9 μm and 516.7 ± 36.5 μm, respectively. The ICC values for T-IOP-NC and T-CCT were 0.91 (95% CI [0.89, 0.92]; P < 0.001) and 0.98 (95% CI [0.98, 0.98]; P < 0.001), respectively. Both T-IOP-NC and T-CCT showed excellent intrasession repeatability. Considering the physiological IOP fluctuations, intersession reproducibility was evaluated only for T-CCT. For the 140 patients who had undergone T-CCT measurement 3 times, intersession reproducibility was excellent (ICC 0.94, 95% CI [0.93, 0.95]; P < 0.001). The coefficient of variation (CoV) for CCT was 1.47%, which was excellent. The Bland-Altman plots and scatterplots of T-IOP-NC and T-CCT for intrasession repeatability are provided in Supplemental Figures S1 and S2. Comparison of the CCT and IOP values measured by CT-1P with the gold standards. A strong correlation was shown between T-IOP-NC and G-IOP (r = 0.801, P < 0.001; Fig. 1A). A significantly positive but moderate correlation was found between T-IOP-C and G-IOP (r = 0.658, P < 0.001; Fig. 1B). The correlation of the CCT value between the CT-1P and ultrasound pachymetry was found to be very strong (r = 0.958, P < 0.001; Fig. 2). The mean values of T-IOP-NC and G-IOP were 15.3 ± 2.7 mmHg and 13.9 ± 2.6 mmHg, respectively. The IOP acquired by the CT-1P was significantly higher than G-IOP (P < 0.001). In most cases (136 eyes, 69.4%), the IOPs measured by the CT-1P were higher than those obtained by GAT. The mean values of T-CCT and US-CCT were 516.7 ± 36.0 µm and 533.9 ± 36.3 µm, respectively. The CCT acquired by the CT-1P was significantly lower than US-CCT (P < 0.001). T-CCT was lower than US-CCT for nearly all subjects (193 eyes, 98.5%). A Bland-Altman plot comparing the T-IOP-NC and G-IOP readings (Fig. 3A) showed reasonable agreement between the methods. The mean IOP difference was 1.37 mmHg (95% CI [1.14, 1.61]), and the 95% limit of agreement (LoA) was −1.89 to 4.65 mmHg. These differences did not vary proportionally to the mean of the two measurement values. In 76.5% of subjects, the IOP difference between the two tonometry readings was ≤ 2 mmHg, and in 88.8% of subjects, it was ≤ 3 mmHg. When comparing the T-IOP-C and G-IOP, the mean bias was found to be 2.77 mmHg (95% CI [2.49, 3.05]), and the 95% LoA was −1.14 to 6.69 mmHg (Fig. 3B). That is, the CCT-adjusted T-IOP tended to overestimate G-IOP more than T-IOP-NC did. In the Bland-Altman plot between the T-CCT and US-CCT measurements, T-CCT was generally lower than US-CCT. The mean CCT difference was −17.3 µm (95% CI [−18.8, −15.8]), and the 95% LoA was −37.9 to 3.30 µm (Fig. 4). In 86.7% of subjects, the relative error (CCT difference/US-CCT) was ≤ 5%. Note that the majority of the dots are located above the identity line, meaning that T-IOPs were generally higher than G-IOPs.
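For readers who want to reproduce this kind of agreement analysis, a minimal sketch of the Bland-Altman computation (bias, its approximate 95% CI, and the 95% limits of agreement) is shown below; the paired readings are made-up values, not the study data.

import numpy as np

t_iop = np.array([15.0, 16.5, 14.2, 18.1, 13.3])   # illustrative CT-1P readings (mmHg)
g_iop = np.array([13.5, 15.2, 13.0, 16.4, 12.1])   # illustrative GAT readings (mmHg)

diff = t_iop - g_iop
bias = diff.mean()                          # mean difference between methods
sd = diff.std(ddof=1)                       # SD of the differences
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd   # 95% limits of agreement
ci_half = 1.96 * sd / np.sqrt(len(diff))    # normal-approximation 95% CI of the bias
print(f"bias = {bias:.2f} mmHg (95% CI +/- {ci_half:.2f}), LoA = {loa_low:.2f} to {loa_high:.2f} mmHg")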
T-CCT was generally lower than US-CCT, but in most cases (86.7%), the difference was within the clinically acceptable range (≤ 5%). NCTs such as the Topcon CT-1P have some advantages for mass screening compared to GAT, such as not requiring corneal anaesthesia 5 . However, to investigate the feasibility of the device for glaucoma screening, the measurements from the device need to show small test-retest variability. First, regarding IOP, the intraobserver repeatability of the CT-1P was comparable to that of other noncontact tonometers 15,16 and was even better than that of the iCare rebound tonometer (ICC 0.73-0.82) 17,18 . The CCT measurement also showed excellent repeatability and reproducibility, which was similar to those of other instruments. The repeatability for CCT was reported to be 0.992 for optical low-coherence reflectometry (Lenstar LS 900; Haag Streit, Köniz, Switzerland) 19 . A previous study comparing partial coherence interferometry (PCI) and 3 US pachymeters demonstrated that the ICC for intraobserver variability was 0.999 for PCI versus 0.987-0.995 for US 20 . In Pearson correlation analysis, a significant, strong correlation was observed between T-IOP-NC and G-IOP and between T-CCT and US-CCT. The correlation coefficient of the CT-1P for IOP was similar to those of previously introduced NCTs 13,21 . The correlation between T-CCT and US-CCT is also comparable to that of a previous study (r = 0.857) 22 . This suggests that the IOP and CCT observed by the Topcon CT-1P can provide significant predictions for G-IOP and US-CCT. Many ocular or systemic factors, including tear film height, astigmatism, corneal thickness, corneal hysteresis, or arrhythmia, affect IOP measurement 23 . Considering such variabilities, IOP measurement error may be clinically acceptable up to 3 mmHg 24 . In the study population, 76.5% showed IOP differences between the two tonometry readings ≤ 2 mmHg, and 88.8% showed IOP differences ≤ 3 mmHg. Based on this result, we could assume that IOP measurements from the CT-1P showed good agreement with those obtained with GAT. However, IOP measurements with the CT-1P tended to be slightly higher than G-IOP (mean bias = 1.37 mmHg). This finding is consistent with a previous report 13 , in which the mean difference between T-IOP and G-IOP was 0.48 ± 2.12 mmHg. In healthy subjects, the mean IOP measured by the CT-1P (15.4 ± 2.7 mmHg) was similar to previously reported values (15-16 mmHg on average, SD 2.5-3.0 mmHg) 6,25 . T-IOP-C was higher than T-IOP-NC in most cases, which was accompanied by a larger mean bias relative to G-IOP. This is attributable to the underestimation of CCT by the CT-1P. In addition, the correlation between T-IOP-NC and G-IOP was stronger than that of T-IOP-C and G-IOP. It is well known that IOP measurements are affected by CCT 6,7 . Adjusted IOP is calculated according to the formula: Adjusted IOP = measured IOP + (Standard CCT − measured CCT) × Coefficient of adjustment 26 . (For illustration only, since the standard CCT and coefficient are instrument-specific constants not reported here: with an assumed standard CCT of 545 µm and a coefficient of 0.02 mmHg/µm, a measured CCT of 517 µm would add (545 − 517) × 0.02 ≈ 0.6 mmHg to the reading.) Therefore, the dependency on CCT has been eliminated in T-IOP-C, explaining its weaker correlation with G-IOP compared with T-IOP-NC. The mean difference between T-CCT and US-CCT was relatively larger than that between T-IOP and G-IOP and had a distinct tendency toward underestimation. We found two previous works in line with our finding of CCT underestimation by the CT-1P compared to US pachymetry 14,22 . There are a few possible reasons for these differences.
Since the probe has to reach the cornea perpendicularly, topical anesthetics are needed for US pachymetry, and the anesthetic can affect the CCT measurement by at most 10 μm 12,27 . Furthermore, the US-CCT value can depend on the assumed speed of sound in the tissue, which varies with the measurement environment, and on the examiner's level of experience 28 . In contrast, the noncontact tono-pachymeter avoids contact with the cornea and uses light reflection through the front and back of the cornea. Because the principles the CT-1P uses to delimit the front and back surfaces of the cornea differ from those of the US pachymeter, this discrepancy may play a role in the difference in the measured CCT. Although the absolute error was within the acceptable range in the majority of cases, clinicians should be careful when interpreting T-CCT. There are some limitations in this study. First, all the subjects were from a Korean population, and the number of glaucoma patients was relatively smaller than the number of normal subjects. We determined that the reliability of this tono-pachymeter did not significantly differ between the two groups; however, further research with more glaucoma patients is warranted. Second, the repeatability was evaluated with only a single observer in a short period of time. The measurements from different technicians may show larger variability. Last, we did not consider the CCT fluctuation when calculating the intersession reproducibility. CCT also has diurnal fluctuations like IOP, although the amount of variability is reported to be small 29,30 . In conclusion, the Topcon CT-1P noncontact tono-pachymeter showed good repeatability and agreement with GAT and ultrasound pachymetry. With cautious interpretation, it can be a useful tool in health screening centers. Methods Subjects. The present study included subjects from the Gangnam Eye Cohort Study, an ongoing cohort study conducted by SNUH HSGC. Detailed information on this cohort has been published elsewhere 31 . The present study was approved by the Institutional Review Board (IRB) of SNUH (IRB No. 1906-141-1043) and followed the tenets of the Declaration of Helsinki. Written informed consent was obtained from all subjects. The study population comprised healthy subjects who had participated in a glaucoma screening program at the SNUH HSGC and patients diagnosed with POAG at the Glaucoma Outpatient Clinic of SNUH HSGC. The inclusion criteria for healthy subjects were participation in a glaucoma-screening program at the SNUH HSGC during the period from January 2017 to December 2018 with age > 40 years at the time of the first exam. Individuals identified for exclusion showed (1) a secondary cause of glaucomatous optic neuropathy, (2) ocular or systemic disease that may cause visual field (VF) loss or other optic disc abnormalities, and (3) a history of intraocular surgery other than uncomplicated cataract surgery. One eye was randomly chosen from each patient for statistical analysis. Glaucoma-screening program. The glaucoma-screening examination comprised IOP measurement by a noncontact tono-pachymeter (model CT-1P; Topcon Inc., Tokyo, Japan) along with GAT (model AT900; Haag-Streit, Köniz, Switzerland) and fundus photography by a nonmydriatic fundus camera (model TRC-NW8, Topcon Inc., Tokyo, Japan). The fundus photographs were evaluated by an experienced ophthalmologist (HJC) for suspicious findings such as glaucomatous optic nerve head (ONH) changes or retinal nerve fiber layer (RNFL) defects.
Subjects with suspected glaucomatous optic neuropathy, suspected RNFL defects, or IOP > 21 mmHg were referred for definite glaucoma examination. IOP and CCT measurements. In a fixed sequence, all of the subjects were examined with a CT-1P noncontact tono-pachymeter (NCT) and ultrasound pachymeter (Pocket II; Quantel Medical, Clermont-Ferrand, France), followed by GAT to obtain IOP and central corneal thickness (CCT) measurements. Since corneal compression during GAT acquisition can induce an increase in aqueous outflow, which might affect subsequent IOP measurements, Goldmann IOP was obtained after NCT acquisition 32,33 . NCT and CCT measurements were made by the same experienced ancillary staff. IOP measured with the Topcon CT-1P (T-IOP) was recorded both with and without the instrument's adjustment for CCT. CCT was recorded from the CT-1P and from the US pachymeter for comparison. GAT measurements were taken with an AT900 according to standard procedures. One drop of 0.5% proparacaine hydrochloride eye drops (Paracaine, Alcon Laboratories Inc., Fort Worth, TX, USA) was instilled before acquisition, and a fluorescein strip was applied to the inferior conjunctival fornix. To avoid errors introduced by the topical anesthesia, G-IOP was obtained five minutes after eyedrop instillation 34 . All GAT measurements were obtained by the same experienced ophthalmologist (HJC), and the mean of the three GAT measurements was used for analysis. Each tonometer was calibrated according to the manufacturer's guidelines prior to its use in this study. Between each instrumentation application, the subjects were allowed a five-minute rest period to allow aqueous outflow to recover. Diagnosis of glaucoma. A diagnosis of glaucoma was made based on both structural changes (e.g., glaucomatous optic disc cupping or RNFL defects) and the presence of glaucomatous VF loss on standard automated perimetry (SAP) 35 , which was defined as the consistent presence of a cluster of 3 or more nonedge points on a pattern deviation plot with P < 5%, including one or more with P < 1%, a pattern standard deviation (PSD) < 5% or glaucoma hemifield test results outside the normal limits 36 , and on the presence of glaucomatous optic disc cupping (e.g., neuroretinal rim thinning, notching, excavation) or RNFL defects. VF defects had to be repeatable on at least 2 consecutive reliable tests (false positives/negatives < 15%, fixation losses < 15%) 37 . The appearance of the optic disc on optic disc photography and the RNFL on red-free RNFLP were evaluated by two glaucoma specialists (JL, HJC) who were blinded to all other information on the eyes. If the opinions on the diagnosis of glaucoma differed, the final judgment was made by consensus. The control subjects had an IOP ≤ 21 mmHg with no history of increased IOP, absence of glaucomatous disc appearances or RNFL defects, and a normal VF on SAP. Statistical analysis. Unpaired t-tests and chi-square tests were performed to compare baseline clinical characteristics between healthy and glaucomatous eyes. A paired t-test was used for comparison of IOP and CCT measurements acquired from different types of equipment. Intraclass correlation coefficients (ICCs) were calculated to evaluate the intrasession repeatability of the T-CCT and T-IOP measurements and the intersession reproducibility of the T-CCT measurement. Considering the long-term IOP fluctuation, intersession reproducibility was analyzed only for T-CCT.
Intersession reproducibility was calculated for the subjects who underwent T-CCT measurements three times within an interval of 6 months. Pearson correlation analysis and Bland-Altman analysis were used to assess the correlations and agreement. For the Bland-Altman plots, the bias with 95% confidence interval (CI) was calculated for T-IOP relative to G-IOP. The intersession reproducibility of CCT from three visits was additionally evaluated by the CoV, a normalized SD (CoV = SD/mean × 100%). A smaller CoV means better reliability; instruments with a CoV < 10% are generally regarded as having high repeatability, and a CoV < 5% indicates very high repeatability 38 . All statistical analyses were performed using R version 3.4.0. The data are presented as the mean ± standard deviation, and the level of statistical significance was P < 0.05. Data availability The dataset generated during the current study is available from the corresponding author on reasonable request.
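As an illustration of the reliability statistics named above, the sketch below computes a one-way random-effects ICC and a per-subject CoV for a subjects-by-sessions matrix of repeated CCT measurements. The paper does not state which ICC model was used (and it used R, not Python), so the one-way ICC(1,1) form here is an assumption, and the simulated data are placeholders.

import numpy as np

def icc_oneway(x):
    # x: (n subjects, k sessions) matrix; one-way random-effects ICC(1,1).
    n, k = x.shape
    subj_means = x.mean(axis=1)
    msb = k * ((subj_means - x.mean()) ** 2).sum() / (n - 1)       # between-subject MS
    msw = ((x - subj_means[:, None]) ** 2).sum() / (n * (k - 1))   # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)

def mean_cov_percent(x):
    # CoV as a normalized SD: per-subject SD/mean, averaged, expressed in %.
    return float((x.std(axis=1, ddof=1) / x.mean(axis=1)).mean() * 100)

rng = np.random.default_rng(0)
# 140 subjects x 3 sessions of simulated CCT: large between-subject spread,
# small within-subject (session-to-session) spread.
cct = 520 + 30 * rng.standard_normal((140, 1)) + 5 * rng.standard_normal((140, 3))
print(f"ICC = {icc_oneway(cct):.2f}, mean CoV = {mean_cov_percent(cct):.2f}%")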
2021-04-28T06:16:58.253Z
2021-04-26T00:00:00.000
{ "year": 2021, "sha1": "4d569365f5fb5cba9954d9a9ea85728df66beaec", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-021-88364-8.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "a8f4946b0f27a04a913474341343ef03ca8ba423", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235463953
pes2o/s2orc
v3-fos-license
Transition from Methylphenidate to Atomoxetine: reasons for switching and clinical outcome Aims Attention Deficit Hyperactivity Disorder (ADHD) is a behavior disorder originating in childhood comprising a constellation of features including inattention, impulsivity, and hyperactivity. The National Institute of Clinical Excellence (NICE) Guidelines 2018 recommends methylphenidate as a first line pharmacological agent for treatment of children aged 5 years and over with ADHD. Lisdexamfetamine, dexamfetamine and atomoxetine are recommended in this order if methylphenidate is not tolerated or if symptoms did not respond to separate 6-week trials. Our aim was to assess the transition from methylphenidate to atomoxetine, the reasons for switching and its clinical outcome in order to make recommendations to current practice regarding treatment of ADHD. Method The study examined a total of 53 children between 0 and 16 years of age who were being treated for ADHD with atomoxetine at CYPS till September 2018. Data were collected from patients' files retrospectively by using a proforma based on the NICE guidelines 2018 ADHD: diagnosis and management. Result Out of 53 patients on atomoxetine in September 2018, 49 were included in the study. Results recorded side-effects as the main reason for switching from methylphenidate to atomoxetine. Unwanted side-effects were documented in 71.7% of patients, of whom 57.9% exhibited more than 1 side-effect, with the two commonest side-effects documented being weight loss and decreased appetite. The audit highlighted the fact that the correct dose of atomoxetine was only administered in 17.2% of children, with 56.9% of patients being given a higher dose than recommended. Initial weight was not documented in 19% of cases and hence the ideal dose could not be calculated. Overall, atomoxetine was shown to be an effective treatment. Out of the 40 patients documented to have hyperactivity, this symptom was decreased in 82.5%, whilst 82.9% were shown to have increased concentration. 35 patients had documented impulsivity and this was decreased in 62.9% of cases. 11 patients had documented anxiety, with 72.7% being treated effectively with atomoxetine. 31% of patients had documented side-effects, with 16% of these being tics. 20% of patients required augmentation. Conclusion The results indicate that the majority of doctors at CYPS in Malta adhered to the NICE guidelines 2018 and atomoxetine was proven to be efficacious as a second line drug in the treatment of ADHD. However, better adherence to NICE guidelines is required when it comes to the calculation of appropriate dosage. Our prediction is that, had dose recommendations according to weight been adhered to, fewer side-effects might have been documented. Aims. Dementia is a progressive condition imposing significant costs on health and social care services. In December 2017, there were 456,739 people on GP registers with a formal diagnosis of dementia. Making the right choice of anti-dementia medication with essential monitoring is one important aspect of care. Thus, the aim of this audit was to identify if current practice at Mossley Hill inpatients and outpatients service for older adults in Liverpool was in accordance with the NICE Guideline NG97 (Dementia: assessment, management and support for people living with dementia and their carers).
Additionally, we aimed to evaluate whether Memantine was commenced according to BNF/SPC recommendations about e-GFR and whether this was documented on patient records, as well as to highlight areas of improvement. Method. An audit was carried out for all patients for whom Memantine was initiated, between June and August 2019. Sixty-nine patients were identified through trust Pharmacy records. Data were collected retrospectively, reviewing local electronic records (ePEX, RIO) and GP referrals. This included age, sex, diagnosis, indication for starting Memantine, decision context, prescriber, documentation of renal function status and communication of decision to the GP. Findings were compared to NICE guidance NG97 and presented at the local audit meeting with a view to recommend strategies for improvement. Result. Results indicated that most of the patients were female (64%) with the most common diagnosis being Alzheimer's disease (75%). Recurrent reasons for initiating Memantine were: contraindication for AChE treatment (25%); illness progression on AChE (22%); and severe dementia on initial presentation (23%). Usually, the decision to start Memantine treatment was made in MDT or after prescriber clinical review. In 68% of the reviewed cases, renal function status was documented. Patients' GP was informed of medication change in 86% of cases. Conclusion. To conclude, in the majority of cases Memantine initiation was in line with NICE guidance. However, documentation can be improved, so as to facilitate future audit. We recommended creating a checklist for prescribing Memantine that could be integrated within the electronic records system.
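Returning to the ADHD audit above, its central dose check is weight-based; the sketch below shows one way such a check could be encoded, assuming the commonly cited paediatric atomoxetine thresholds (about 0.5 mg/kg/day initially and 1.2 mg/kg/day maintenance for children up to 70 kg) and an arbitrary 10% tolerance band. Both the thresholds and the tolerance are assumptions to be verified against current BNF/NICE guidance, not a clinical rule.

def classify_atomoxetine_dose(weight_kg, daily_dose_mg, maintenance=True):
    # Assumed targets: ~0.5 mg/kg/day initial, ~1.2 mg/kg/day maintenance (<= 70 kg);
    # verify against current BNF/NICE guidance before any real use.
    target = (1.2 if maintenance else 0.5) * weight_kg
    if daily_dose_mg > target * 1.1:      # assumed 10% tolerance band
        return "higher than recommended"
    if daily_dose_mg < target * 0.9:
        return "lower than recommended"
    return "within recommended range"

print(classify_atomoxetine_dose(30.0, 50.0))   # 50 mg/day for a 30 kg child -> "higher than recommended"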
Audit: lithium monitoring for psychiatric inpatients and community patients during the initiation phase Aims. Measure compliance with standards requiring baseline work up before Lithium therapy is commenced and subsequent Lithium level monitoring during the initiation phase. Method. All inpatients and outpatients who were started on Lithium between 2018 and 2019 within the Leicestershire Partnership NHS Trust were included. Case notes of 128 patients were retrieved from the electronic system and an audit proforma was completed to ascertain adherence to auditing standards as per BNF and trust guidelines. Parameters monitored were full blood count (FBC), renal function test including serum electrolytes, thyroid function test, and BMI before commencing Lithium, and serum Lithium periodically after. ECG was needed for those patients with cardiovascular illness. Data were systematically compiled and analyzed descriptively using Microsoft Excel. Result. A total of 128 patients were included in the study. 111 (86.71%) had FBC, 118 (92.19%) had renal function test and electrolytes, 114 (89.06%) had thyroid function test while 99 (77.34%) had their BMI/weight recorded before initiating Lithium. 26 out of 36 patients with cardiovascular disorder had their ECG recorded. After Lithium was commenced, 108 (84.37%) had serum Lithium tested a week later, while only 89 (69.53%) had lithium monitored weekly. Trust guidelines recommend weekly monitoring for up to 4 weeks after a stable dose is reached. This was monitored in only 16 out of 128 patients. Conclusion. Most of the patients had blood tests done before being commenced on Lithium. However, it was observed that serum Lithium was not adequately monitored at regular intervals after dose escalations. These findings indicate that there has to be greater awareness of the trust and BNF guidelines with regards to Lithium monitoring. Hyponatraemia monitoring in those prescribed antidepressants -an audit from an inpatient older adult ward Aims. To assess follow-up of sodium levels for in-patients prescribed antidepressants in practice, compared to the standard of 3-monthly sodium levels for all patients who are prescribed antidepressants and at risk of hyponatraemia. Method. A list of the 20 most recently discharged patients from Meridian Ward, an older-adult functional inpatient ward, was prepared by the team administrator on 6th May 2020. We audited the entire duration of our patients' stay on Meridian Ward (we did not include periods of their admission when they were on other wards) using the electronic notes system, Carenotes.
We also checked the electronic biochemistry results system, ICE, for sodium results, and the discharge summary for mentions of fluid restriction, medications and handover to the GP of sodium checking. We also checked scanned drug charts to see if they were on antidepressants and other implicated drugs. For people with episodes of hyponatraemia, in order to retrieve further information we looked at the discharge summary and searched the activity notes for the following terms: "Hyponat", "sodium", "fluid restrict" and "Low na". We regarded the following conditions as risk factors for hyponatraemia: cardiac, malignancy, respiratory, hypothyroid, renal, hepatic and stroke. We regarded the following medications as risk factors: opioids, diuretics, carbamazepine, theophylline, antipsychotics, NSAIDs, PPIs, ACE-I, ARBs, amiodarone, domperidone and sulphonylureas. Result. 14 of the 20 patients were taking antidepressants. Of those, 13 were eligible for regular sodium monitoring due to risk factors; 11 of these had 3-monthly sodium levels during admission; for only 2 of these did we make a plan for the GP to continue to monitor the sodium level in the community. 3 had an episode of hyponatraemia; the implicated antidepressants were sertraline plus mirtazapine, mirtazapine (a very serious episode which caused a seizure) and sertraline. For 2 of them an appropriate plan was made; 1 was without a plan (a mild hyponatraemia with nothing documented in the notes). Conclusion. During their admission to Meridian Ward, 85% of patients taking antidepressants who had risk factors for hyponatraemia had three-monthly sodium levels in line with the trust guidance. However, only two patients (15%) had a plan for further sodium levels in the discharge summary sent to the GP. This highlights a need for improved awareness of risk factors for hyponatraemia and, in particular, improved communication with general practitioners who are going to take over prescribing of antidepressant medications. Recommendations. 3-monthly Na levels for all patients with risk factors, i.e. on any antidepressant prescribed PLUS any one of:
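The audit's monitoring rule (any antidepressant PLUS at least one risk factor) can be sketched as a simple flag; the sets below reproduce the risk factors listed in the Method above, while the function itself is illustrative rather than part of the audit.

RISK_CONDITIONS = {"cardiac", "malignancy", "respiratory", "hypothyroid",
                   "renal", "hepatic", "stroke"}
RISK_DRUGS = {"opioid", "diuretic", "carbamazepine", "theophylline",
              "antipsychotic", "NSAID", "PPI", "ACE-I", "ARB",
              "amiodarone", "domperidone", "sulphonylurea"}

def needs_three_monthly_sodium(on_antidepressant, conditions, drugs):
    # Flag patients needing 3-monthly Na levels: antidepressant plus any risk factor.
    return on_antidepressant and bool((conditions & RISK_CONDITIONS) or (drugs & RISK_DRUGS))

print(needs_three_monthly_sodium(True, {"renal"}, set()))   # -> True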
2021-06-18T13:17:01.130Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "4921c76385a3b80f4779ec80825b5b6f7aed298a", "oa_license": "CCBYNCND", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/EF2EFD570A551461EE0B39F325E58E4B/S205647242100226Xa.pdf/div-class-title-transition-from-methylphenidate-to-atomoxetine-reasons-for-switching-and-clinical-outcome-div.pdf", "oa_status": "GOLD", "pdf_src": "Cambridge", "pdf_hash": "4921c76385a3b80f4779ec80825b5b6f7aed298a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
253649722
pes2o/s2orc
v3-fos-license
Outcomes of Iris-Claw IOL Implantation in Patients with Marfan's Syndrome in Jordan Objective The management of ocular complications of Marfan's syndrome, especially ectopia lentis, is challenging. In this study, we present the effectiveness and the safety of iris-claw intraocular lens (IOL) implantation along with lensectomy for these patients. Also, we compare the practice of implanting these IOLs either in the anterior chamber or retropupillary. Methods Retrospectively, we included all patients with Marfan's syndrome who underwent lensectomy with iris-claw IOL implantation as a result of ectopia lentis. The patients were categorized into two groups: anterior chamber iris-claw IOL and retropupillary iris-claw IOL. The clinical and demographic data, the visual outcome and postoperative complications were compared. Results Eighteen eyes of 10 patients were included in the study. The mean age of the patients was 19.1 years. Six patients were males. The iris-claw IOL was implanted anteriorly in 13 eyes. The visual outcome was comparable between both groups and most patients achieved improvement in visual acuity. In addition, the postoperative complications developed similarly in both groups. However, all cases of IOL disenclavation (6 cases) developed in the anterior group. The age of the patient was found to be the most significant factor affecting the occurrence of IOL disenclavation. Conclusion Iris-claw IOL (either anteriorly or retropupillary) is an effective and relatively safe method of treating ectopia lentis in patients with Marfan's syndrome. In younger patients, anterior iris-claw IOL is safer than retropupillary iris-claw IOL as the risk of disenclavation is higher in younger patients. Introduction Marfan's syndrome (MFS) is a genetic disorder of connective tissue associated with mutation in fibrillin-1, an important component of the elastic microfibril of ciliary zonules. It is an autosomal dominant connective tissue disorder. Early diagnosis is of crucial importance owing to the life-threatening complications of cardiovascular pathology. [1][2][3] The diagnostic criteria of MFS are included in the revised version of the Ghent criteria. According to this scoring system, a score of ≥7 points (of a maximum total of 20 points) is considered diagnostic of MFS. 4,5 In the absence of a family history of MFS, MFS is diagnosed in the presence of aortic root dilatation combined with ectopia lentis, or a causative FBN1 mutation. In the presence of a family history, MFS is diagnosed with the demonstration of ectopia lentis, or a systemic score ≥7 points, or aortic root dilatation. 4,5 Accordingly, ocular manifestations (especially ectopia lentis) are a cornerstone in the diagnosis of MFS. The main ocular features of MFS include ectopia lentis, myopia and retinal detachment. [6][7][8][9] Ectopia lentis, defined as displacement or subluxation of the crystalline lens, is the most common, occurring in 50-80% of patients with MFS. Ectopia lentis in MFS patients is usually bilateral, symmetric and non-progressive. It may vary from a mild asymptomatic dislocation seen only with dilation of the pupil to significant subluxation that places the equator of the lens in the pupillary axis. Also, the severe forms of ectopia lentis include crystalline lens dislocation into the anterior chamber, which may lead to pupillary block or chronic angle-closure glaucoma.
3,[7][8][9][10] Posterior dislocation may be hazardous for the retina, with a risk of retinal detachment, chronic vitritis and chorioretinitis. 11 For mild cases, functional visual acuity may be obtained with refractive aids. However, in cases of severe ectopia lentis, unstable refractive status, glaucoma, or endothelial cell loss, surgery is recommended. 12,13 The optimal surgical approach is still controversial and may vary with the individual experience of the surgeon or with individual features of the patient. Lensectomy with iris-claw intraocular lens (IOL) "Artisan ® " implantation has been studied in these patients, with several reported advantages such as good visual outcome, fewer complications, and easy placement. The iris-claw IOL may be implanted in the anterior chamber or retropupillary. 8,[14][15][16][17][18][19][20] In this study, we evaluate the practice of iris-claw IOL implantation in patients with MFS who experienced ectopia lentis in our educational institution. Also, we compare the effectiveness and safety of anterior and retropupillary iris-claw IOLs in these patients. Methods Patients Retrospectively, we evaluated the characteristics of 18 eyes of 10 patients with Marfan's syndrome who underwent iris-claw "Artisan ® " IOL implantation as a result of severe ectopia lentis from January 2014 to December 2021. After the approval of the Institutional Review Board at Jordan University of Science and Technology (JUST), the study was conducted at King Abdullah University Hospital (KAUH), a tertiary educational center for ophthalmic services which is affiliated with JUST. Using the paper-based and electronic records, demographic data (age, sex), past medical history, and the preoperative optical parameters were collected. Furthermore, the operative details, visual outcome and postoperative complications were evaluated. The included study population comprised those patients with Marfan's syndrome who underwent lensectomy with a primary iris-claw IOL implantation as a result of ectopia lentis. These patients fulfilled the revised Ghent criteria for Marfan's syndrome. The exclusion criteria comprised patients with insufficient preoperative or postoperative data, patients with traumatic lens subluxation or ectopia lentis due to causes other than Marfan's syndrome, and patients with previous ocular surgery. The included cases of ectopia lentis were defined as crystalline lens subluxation with the lens border crossing the pupillary axis, or lens dislocation into the anterior chamber or vitreous. The patients were divided by location of implantation into 2 main groups: the anterior iris-claw IOL and retropupillary iris-claw IOL. The outcome was compared between both main groups using different measures. First, the mean change in visual acuity was compared preoperatively and postoperatively during all follow-up visits. Second, postoperative complications were compared and included irregular iris shape (new postoperative irregularity or aggravated preoperative irregularity), iris tissue loss, iris-claw IOL decentration or tilt, spontaneous or traumatic disenclavation, clinical signs of endothelial cell loss (including long-term corneal edema and the development of bullous keratopathy), pigment dispersion, postoperative high intraocular pressure (IOP) which affected the vision and required the use of antiglaucoma agents or glaucoma surgery, and retinal detachment.
All data were retrieved from visits preoperatively and at 1 week, 1 month, 3 months, 1 year, and on the last follow-up visits postoperatively. Perioperative Setting Visual acuity was assessed with Snellen decimal projectors and converted to LogMAR visual acuity. Visual acuities of counting fingers, hand motion, light perception or no light perception were converted according to the study of Schulze-Bonsel et al. 21 IOP was measured by Goldmann tonometry, and anterior and posterior segment examinations were performed through slit-lamp biomicroscopy with the required non-contact hand-held lenses. The ophthalmic examination was done by well-trained residents and confirmed by the attending consultant ophthalmologists. The IOL power was measured either by ultrasonic biometry (Digital A/B scan 5500; Sonomed Inc., Lake Success, NY, USA) or by IOL Master when needed. The Sanders-Retzlaff-Kraff (SRK-T) formula was used for the selection of the IOL power (other formulas were utilized, such as the Haigis formula in patients with high myopia and Holladay II and Hoffer Q for patients with short axial length). The optical parameters included the iris-claw IOL power (using an A-constant of 115 for the anterior iris-claw and 117 for the retropupillary iris-claw IOL), keratometry readings, and axial length. Emmetropia was targeted in the eyes of patients > age 10, while hyperopia was the aim in younger patients, with values dependent on patient age (for ages 7-10: +0.5 D; ages 5-7: +1 D; and ages 3-5: +2 D). The biometry was done under general anesthesia in pediatric patients. Six consultant surgeons performed the operations and chose to implant the IOL either anteriorly or retropupillary depending on their individual experience. The same standardized surgical technique and guidelines were applied in both groups. The lens used in this study was the Artisan ® aphakia IOL (Ophtec BV, Groningen, The Netherlands), which is a polymethyl methacrylate IOL with an 8.5-mm length, 1.04-mm maximum height, and 5.4-mm optical zone width. All operations were performed under either general or local anesthesia. Two corneal side ports were performed at the 3 and 9 o'clock positions. After performing the lensectomy with the vitreous cutter (either limbally through the anterior chamber in most cases or through pars plana sclerotomies), acetylcholine 1% (Miochol ® -E) was injected intracamerally through the paracentesis for miosis. A 5.5-mm corneal incision was made at 12 o'clock. For retropupillary implantation, the iris-claw IOL was inserted upside down (with its convex surface facing posteriorly), rotated by an Artisan lens forceps to a horizontal position, and centered over the pupil. The optic of the reversed iris-claw IOL was held securely using a special forceps. Next, the two haptics were gently slid behind the iris. With the other hand, a long micro-spatula was used through the side ports to tuck iris tissue into the claw. For anterior implantation, the convex surface was placed anteriorly, and the iris was enclavated at the midperiphery between the claw haptics. The corneal incision was closed and secured with three simple buried interrupted 10-0 nylon sutures. In 3 cases, the procedure was combined with pars plana vitrectomy. A peripheral iridotomy (PI) was done in some cases. Postoperative therapy included antibiotic, steroid and nonsteroidal anti-inflammatory eye drops for 1 month.
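As an aside on the acuity handling described in this section, a minimal sketch of the decimal-to-LogMAR conversion is shown below; the numeric stand-ins for the off-chart categories are placeholders only, since the study took its actual values from Schulze-Bonsel et al. 21 and those should be read from that paper directly.

import math

# Assumed placeholder LogMAR values for off-chart acuities; the study's actual
# values come from Schulze-Bonsel et al. and are not reproduced here.
OFF_CHART = {"CF": 1.9, "HM": 2.3, "LP": 2.7, "NLP": 3.0}

def to_logmar(acuity):
    if isinstance(acuity, str):
        return OFF_CHART[acuity]
    return -math.log10(acuity)   # e.g. decimal 0.5 -> 0.30 LogMAR

print(to_logmar(0.5), to_logmar("CF"))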
Selective corneal suture removal according to corneal astigmatism was performed 6 to 8 weeks postoperatively. Many patients, especially those with retinal pathologies, underwent prophylactic laser retinopexy. Statistical Analysis Extracted data were entered into a spreadsheet. Statistical analysis was performed using IBM SPSS v.22 (Armonk, New York, USA). Data were expressed as frequency (percentage) for nominal data and mean ± standard deviation (SD) for continuous data. Statistical significance between the study groups was determined using the Chi-square test for categorical variables and Student's t-test for continuous variables. P ≤ 0.05 was considered statistically significant. General Characteristics Eighteen eyes of 10 patients with MFS who underwent lensectomy with iris-claw IOL implantation due to ectopia lentis were included in this study. Of the 10 patients, 6 (60%) were males. The mean age of the patients was 19.1 years. Of the 18 eyes, the left eye was involved in 9 (50%) of the cases. The iris-claw IOL was implanted in the anterior chamber in 13 (72.2%) of the cases. The mean follow-up time for the patients was 31.3 months (standard error 6.4, maximum 65 months, minimum 14 months). Table 1 summarizes the general characteristics of the included patients. In 15 cases, the lensectomy was done through the anterior chamber along with anterior vitrectomy. In the remaining 3 cases, a pars plana lensectomy along with pars plana vitrectomy was utilized and the eyes were kept flat under air. Peripheral iridotomy was created in 14 eyes (77.8%). Most patients achieved an improvement in visual acuity at the last follow-up visit. Regarding the postoperative complications, disenclavation of one or both haptics of the IOL was the most significant and most commonly encountered complication. It occurred in 6 eyes (33.3%), either spontaneously or by trauma. Irregular pupil shape and iris tissue loss each developed in 5 (27.8%) eyes. Only two eyes of the same patient developed high IOP readings, which were controlled by antiglaucoma medications. Table 2 shows the detailed outcome for every patient. Retropupillary versus Anterior Artisan There was no difference between retropupillary and anterior iris-claw IOL in terms of sex, age, laterality and previous ocular diseases. Regarding the associated surgical procedures, PI was performed more often in the anterior chamber group (91.8% for the anterior location versus 40% for retropupillary).
The visual outcome was not statistically different between the two groups (Table 3). However, the retropupillary group achieved better visual improvement. At the 1-year postoperative time point, the mean change in BCVA in the retropupillary group was −0.600 LogMAR, which corresponds to an improvement in visual acuity of about 30 letters. On the other hand, the mean change in visual acuity was −0.357 LogMAR in the anterior group, which corresponds to about 18 letters of improvement. In addition, the development of postoperative complications was comparable between the two groups, with no statistically significant differences. Irregular pupil shape developed in 5 cases, all of them in the anterior chamber implantation group. Iris tissue loss developed in 4 cases in the anterior group and in 1 case in the retropupillary group. High IOP developed in 2 eyes in which the iris-claw IOL was implanted retropupillary. Regarding the disenclavation of the iris-claw IOL, all 6 cases developed in the anterior group rather than the retropupillary group, where disenclavation carries the risk of the IOL dropping into the vitreous cavity. In 4 cases (out of 6), the disenclavation was traumatic in nature. The other 2 cases were spontaneous, although unnoticed trauma cannot be ruled out. The disenclavation was successfully managed by iris-claw IOL repositioning and fixation. Factors Affecting the Occurrence of Iris-Claw IOL Haptics Disenclavation It was revealed that laterality, location of the iris-claw IOL, and the combined procedures did not affect the occurrence of disenclavation of the haptics. Regarding sex, 5 cases developed in male patients and 1 in a female patient, but the difference was not statistically significant. The only factor that was demonstrated to affect the development of disenclavation was the age of the patients (P = 0.005). The mean age for patients with previous disenclavation was 7.8 years versus 24.7 years for patients without disenclavation. Notably, patient 3 experienced disenclavation more than once, affecting both eyes. Discussion This retrospective study compares the implantation of iris-claw IOL anteriorly versus retropupillary in patients with MFS who had ectopia lentis. It showed that the visual outcome was comparable between both groups, with a slight advantage for the retropupillary group in final visual acuity. In addition, both locations were safe with few side effects. However, in young patients, it was preferable to implant the IOL in the anterior chamber as the risk for disenclavation is higher. Cleary et al reported that anterior chamber iris-claw IOL is safe and effective in the correction of aphakia in children following lensectomy for ectopia lentis. 17 They reported their results on 3 patients with MFS. Aspiotis et al performed lensectomy with anterior chamber iris-claw IOL in 5 patients with MFS and ectopia lentis, and they reported that the BCVA improved 4 Snellen lines and endothelial cell counts remained constant during six months of follow-up. 22 Moreover, Sminia et al performed lensectomy with iris-claw IOL in the anterior chamber for two patients and followed them for 12 years with good visual outcomes and no serious complications. 23 Cevik et al reported outcomes of anterior chamber Artisan iris-claw lens implantation in children with non-traumatic ectopia lentis. 16 They concluded that the Artisan provides good results in terms of improving uncorrected and corrected vision but involves a high incidence of postoperative complications, especially lens dislocation and retinal detachment. 16 In a case series by Rabie et al, the authors evaluated the outcome of lensectomy and iris-claw IOL in the anterior chamber for 12 eyes of nine patients with MFS, and only one case of retinal detachment and one case of IOL disenclavation were reported in this series during 44.5 months of follow-up. 20 Catala-Mora et al studied the effectiveness and safety of anterior iris-claw IOL for ectopia lentis in MFS patients, and they concluded that this technique is both safe and effective, improving vision in pediatric patients with severe ectopia lentis. 24 Gonnermann et al studied the posterior iris-claw IOL in patients with MFS-related ectopia lentis in 13 eyes, and they reported good visual outcomes, low endothelial cell loss, and low complication rates. 19 Ectopia lentis is the most common ocular sequela of MFS, with a reported frequency varying from 50% to 80% across studies. Ectopia lentis in MFS results from fibrillin abnormalities affecting the suspensory zonules, which hold the crystalline lens posterior to the iris.
These abnormalities lead to zonular weakness and, in turn, subluxation of the crystalline lens ("ectopia lentis"), usually in the superior-temporal direction. 6,12 MFS results from autosomal-dominant heterozygous mutations in the FBN1 gene, which in turn result in insufficiency of fibrillin-1. This leads to disruption of the microfibrillar and structural architecture of the extracellular matrix. 25 Over 800 pathogenic mutations in FBN1 have been discovered. It was proposed that missense mutations in cysteine residues comprise a significantly higher proportion of mutations in fibrillin-1. In addition, it was found that mutations in the first 15 exons at the 5′ end are causative of ectopia lentis. 26,27 This portion of the protein is thought to be integral to homodimer formation of the fibrillin-1 molecules, which eventually leads to polymers of fibrillin-1 and thus microfibrils. The mutations in FBN1 result in abnormal distribution and structure of microfibrillar bundles in the capsule of MFS patients, particularly at the site of zonular attachment. 28 Subsequently, iris atrophy and iridodonesis can develop. Many new surgical techniques have been developed for ectopia lentis, with advantages and a good safety profile compared with practice in previous decades, when surgery for ectopia lentis was associated with serious intraoperative and postoperative complications that resulted in poor visual outcomes. 11 Surgery for ectopia lentis in MFS is challenging for two main reasons: first, the capsular insufficiency that develops from ciliary zonular weakness; second, the difficulty of choosing an IOL implant. 12 Choices for IOL implant include iris-claw IOL either in the anterior chamber or retropupillary, anterior chamber IOL, posterior chamber scleral-fixation IOL, and scleral fixated capsular tension rings. 12,13 Regarding the anterior chamber IOL, these IOLs are made in a flexible open-loop pattern. They sit deep in the anterior chamber and lack stability in MFS patients, which leads to excessive movement with resultant corneal decompensation, peripheral anterior synechiae, and glaucoma. 12,29 Scleral-fixation posterior chamber IOL is an optimal choice for implantation, which can avoid the corneal complications of the anterior chamber IOL with good visual outcome. 12,30 However, Asadi and Kheirkhah published a series on scleral-fixation IOL for 25 eyes of MFS children and showed a high incidence of complications, including transient intraocular hemorrhage in 13 eyes, transient choroidal effusion in 2 eyes, late endophthalmitis in 1 eye, retinal detachment in 1 eye, and late IOL dislocation in 6 eyes. 31 As mentioned, iris-claw IOL is an optimal and excellent option for MFS patients with an acceptable rate of complications regardless of its location. In their randomized trial, Hirashima et al studied 31 eyes of 16 patients with ectopia lentis due to MFS. They categorized the patients into two groups, a retropupillary group and an anterior chamber group. They found that the improvement in visual acuity was similar in both groups. Although IOL disenclavation tended to occur more frequently in the retropupillary group, the difference was not significant. 32 In our study, the improvement in visual acuity was similar in both groups. However, IOL disenclavation (as a result of iridodonesis) was seen more often in the anterior group. We think that the age of the patients plays the most important role in determining the possibility of IOL disenclavation.
This study is not without limitations. First, the retrospective nature of the study, with possible data inaccuracy and insufficiency, is an important limitation. Second, the small sample size limits the statistical power of the analysis. Third, variable IOL calculation methods and different surgeon handling may affect the outcome even with similar standardized protocols. Fourth, the lack of intraoperative images is another weakness. Fifth, endothelial cell count is one of the important factors when comparing anterior and retropupillary iris-claw IOLs. Unfortunately, the measurement tools were not available at our institution. Lastly, the rate of disenclavation being higher in the anterior group may be due to the younger age of this group (selection bias). In conclusion, MFS patients are prone to various ocular complications, including ectopia lentis. Iris-claw IOL (regardless of its location) is one of the optimal choices for their ocular complications, especially if it can be managed by the surgeon. Retropupillary and anterior chamber iris-claw IOLs are comparable in terms of visual outcome and postoperative complications in these patients. However, in younger patients, we would prefer to implant the iris-claw IOL anteriorly, as the risk of disenclavation is higher. More randomized trials and reviews are needed to confirm these results.
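For illustration, the age comparison behind the disenclavation finding above (mean 7.8 versus 24.7 years, P = 0.005) amounts to a two-sample t-test; the ages below are invented stand-ins rather than the study's patient-level data, and the study does not state whether equal variances were assumed, so Welch's form is used here.

from scipy.stats import ttest_ind

ages_disenclavation = [5, 6, 8, 8, 9, 11]                        # illustrative ages (years)
ages_stable = [15, 18, 22, 24, 26, 27, 28, 29, 30, 31, 32, 35]   # illustrative ages (years)

t, p = ttest_ind(ages_disenclavation, ages_stable, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, P = {p:.4f}")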
Epidemiology of Esophageal Cancer in Ardabil Province During 2003-2011
Firouz Amani, Saeid Sadeghieh Ahari, Lyla Akhghari

Introduction
Esophageal cancer (EC) is the third most prevalent cancer of the digestive system and the 8th most common cancer worldwide (4% of the total); it is related to factors such as geographical latitude, age, occupation, and birth place. The south-eastern margin of the Caspian Sea is one of the areas with the highest incidence of esophageal cancer in the world (Semnani et al., 2004). In Iran, an estimated total of 6500 EC cases occur annually, of which 5800 patients die from EC within the same year (Pourfarzi et al., 2011).

Ardabil Province is located in northwestern Iran, an area 70 km inland from the western Caspian coastline, with an area of about 17953 km2. According to the 2010 census, the population of Ardabil province is 1,272,214 (1.7% of the total population of Iran). Esophageal cancer is the second most prevalent cancer affecting both males and females, and upper digestive tract cancers are the leading cause of 43% of deaths in Ardabil province (Babaeei et al., 2009). The demographic and histopathologic pattern of esophageal cancer in the northwestern region of Iran differs from the histopathologic pattern in western countries (Pedram et al., 2011).

The incidence rate of esophageal cancer differs throughout the world. The clinical and epidemiological pattern of esophageal cancer in the South-East of Iran also differs partially from other parts of the country, which is important for both clinicians and health policy makers (Mashhadi et al., 2011).
Esophageal cancer is common in the area extending from the western and southern coasts of the Caspian Sea to the east and north of China, including Iran, Central Asia, Siberia, and Mongolia. The disease has also spread to other regions such as Finland, Iceland, Curacao, the southeastern part of Africa, and the northwestern part of France (Yomralioglu et al., 2009). In Globocan 2008 (Ferlay et al., 2010), it was found to be the eighth most common cancer worldwide, with 481,000 new cases, and the sixth most common cause of death from cancer, with 406,000 deaths (5.4% of the total). More than 80% of the cases and deaths occur in developing countries, and Central and East Asia have particularly high rates (Igissinov et al., 2010). Indeed, developing countries carry the biggest burden of esophageal cancer, with more than eight out of ten (83%) cases diagnosed there in 2008 (Ferlay et al., 2011).

In North America and Western Europe, this cancer is more common among black people and among men. The disease generally manifests after the age of 50, and it appears to be more common in societies with severe economic and social deprivation. Approximately 10% of esophageal cancers occur in the upper thoracic (cervical) esophagus, about 35% in the middle third, and 55% in the lower third of the esophagus (Jalali et al., 2005).

There are two main histological types of esophageal cancer: squamous cell carcinoma (SCC), which is associated with tobacco smoking and alcohol, and adenocarcinoma (AC), which is related to reflux disease and excess bodyweight (Carrao et al., 2004; Freedman et al., 2007; Boyle et al., 2008). SCC accounts for the vast majority of esophageal cancers diagnosed in low- and middle-income countries (Boyle et al., 2008). Squamous cell carcinoma comprised more than 99% of all esophageal cancers in our patients, and this histological type is the prominent type in the Northeast of Iran (Anvari et al., 2011).

In summary, the prognosis of esophageal cancer in the North West of Iran is poor. Therefore, reduction in exposure to risk factors and early detection should be emphasized to improve survival (Mirinezhad et al., 2012). Given the evident high prevalence of digestive system cancers in Ardabil province and the presence of risk factors affecting the prevalence of esophageal cancer there, this study investigated the epidemiology of esophageal cancer in Ardabil province during the study years.

Materials and Methods
This is a cross-sectional descriptive study of 661 patients in Ardabil province from March 2002 to May 2011. The necessary data were collected from the Ardabil cancer registry center using a checklist including age, gender, birthplace, residency (where patients had spent most of their lifetime), job, marital status, risk factors (cigarette smoking, alcohol drinking, and opium), tumor pathology, education, clinical symptoms, and family history of cancer. The data were analyzed with SPSS 10 software.
Results
Six hundred and sixty-one patients were registered as cases of esophageal cancer, 65.1% males and 34.9% females, giving a male-to-female ratio of 1.9:1. The incidence of cancer in males was 1.9 times higher than in females, which was statistically significant (p=0.0001) (Figure 1). 281 cases (42.5%) were urban and 380 cases (57.5%) were from rural areas; the incidence in rural areas was 1.4 times that in urban areas, which was statistically significant (p=0.0001) (Figure 2). Of all patients, 522 (79%) were married and 552 (83.5%) were illiterate. Among all esophageal cancer cases, 252 (38.1%) were farmers and 252 (38%) were smokers. Digestive symptoms were reported in 648 patients (98%) and dysphagia in 500 cases (75.6%). Reflux symptoms were mentioned in 437 patients (66.1%), and bleeding was reported in 79 cases (12%). A family history of cancer was mentioned in 117 cases (17.7%). The tumor type was SCC in 455 cases (68.8%) and adenocarcinoma (AC) in 188 cases (28.5%); the remainder were unspecified. In 286 affected people (43.3%), the cancer involved the middle third of the esophagus (Table 2).

Discussion
In the present study, 65.1% of affected people were male and 34.9% were female, suggesting that males were more affected than females, which is consistent with the results of previous studies conducted in Ardabil province and on the Caspian shores as well as in other countries (Wild et al., 2003; Babaeei et al., 2009; Igissinov et al., 2012). In the study by Jalali et al., the ratio of affected rural to urban population was 2.3:1 (Jalali et al., 2005); in the present study, about 60% of affected people lived in rural areas, which echoes the findings of previous studies (Babaeei et al., 2009; Pourfarzi et al., 2011). In contrast, the study by Hajian in Babolsar city found that more than 50% of affected people lived in cities (Hajian et al., 2003). In the current study, 38.1% of affected people were smokers, which is in line with previous research from Japan, a country with a high incidence of gastric and esophageal cancers, reporting that the risk of cancer in smokers is up to 20 times that of nonsmokers. This percentage is high in comparison with the results of research conducted on the Caspian shores (7.9%) and can be considered one of the risk factors for esophageal cancer in Ardabil province. In the present study, 38.1% of patients were farmers and 28.3% were homemakers, consistent with a study from the southern Caspian shores indicating that the majority of affected men were farmers and the majority of affected women were homemakers (Renehan et al., 2008). In this study, 60% of patients were rural, 83.5% were illiterate, and about half were farmers, suggesting that the cancer is highly prevalent in low socioeconomic classes, which echoes the results of studies done in Iran and of reference books indicating that cancer prevalence is mostly observed in lower classes (Wild et al., 2003).
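As a quick arithmetic check of the sex ratio reported above (the original analysis was performed in SPSS; the count of 430 males out of 661 is back-calculated from the reported 65.1% and is therefore approximate), a minimal Python sketch:

from scipy.stats import binomtest

males, total = 430, 661                       # approx. counts implied by 65.1% male
print(round(males / (total - males), 1))      # male-to-female ratio, about 1.9
print(binomtest(males, total, p=0.5).pvalue)  # test against an equal sex split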
In the present study, 68.8% of patients had SCC, 3.8% had AC, and 27.4% had other kinds of esophageal cancer, which differs from studies conducted on the southeastern Caspian shores reporting 37.7% SCC, 10.4% AC, and 15.9% other types (Semnani et al., 2004; Anvari et al., 2011; Ghanaei et al., 2012). The proportion of SCC was higher in this study than in some similar studies conducted in other provinces. Regarding tumor site, the middle part of the esophagus was involved in 44.4% of cases, the lower part in 30.9%, the junction in 20.9%, and the upper thoracic esophagus in 6.9%. Pourfarzi reported involvement of the lower part in 44.1%, the middle part in 29.5%, and the upper part in 22.4% of cases (Pourfarzi et al., 2011). In the reference books, tumor involvement of the lower, middle, and upper parts of the esophagus is reported as 55%, 35%, and 10%, respectively (Freedman et al., 2007), whereas in our cases the middle part of the esophagus was the most common tumor site. The order of involvement was the middle third, the lower third, the upper thoracic esophagus, and finally the junction, and the predominance of the middle third is consistent with previous studies done in Ardabil province (Wild et al., 2003). Overall, the results showed that the prevalence and annual incidence rate of esophageal cancer in Ardabil province are lower than the national figures, and that the cancer is more prevalent in males than in females, a pattern similar to other regions (Mashhadi et al., 2011; Mirinezhad et al., 2012).

Table 1. Selected Demographic Characteristics of EC Patients
Figure 1. Incidence of Cancer. A) City and Sex. B) City and Residence Place
Training and support to improve ICD coding quality: A controlled before-and-after impact evaluation

The health sector in South Africa (SA) uses the World Health Organization (WHO)'s International Statistical Classification of Diseases and Related Health Problems, 10th revision (ICD-10) codes for epidemiological surveillance as well as patient billing. The proposed National Health Insurance (NHI) policy states that diagnosis-related groups (DRGs) will be the mechanism through which provincial and national hospitals purchase services from the national health authority.[1] The formulation of DRGs, which can roughly be summarised as the average cost for similar health conditions, depends on accurate and complete ICD coding. Comorbidity and complications increase the costs of managing health conditions at hospital level. The omission of ICD codes from patient records would therefore result in under-costing DRGs and under-resourcing of hospitals. Morbidity profiles of hospitals would also be incomplete, rendering hospital admission data a poor proxy for the burden of disease in the communities of the hospital's drainage area.

While private hospitals have dedicated coders to produce comprehensive sets of ICD codes for patient encounters, clinicians at public hospitals are required to code the diagnoses of all inpatients themselves. However, a review of the first 18 months of the NHI pilot described the implementation of the ICD system in the public sector as unsatisfactory and in need of strengthening.[1] In response to this challenge, the Western Cape Government: Health (WCGH) department commissioned a software application for discharge summaries, the electronic Continuity of Care Record (eCCR), to assist clinicians with ICD coding by integrating ICD code browsers, notes and basic coding rules. However, as another SA study pointed out, the mere introduction of an electronic system may not produce the desired results without engaging and supporting the intended users of the system.[2] A review of the eCCR pilot showed that while ICD coding coverage was far better than in previous years, the data quality was still inadequate for billing and surveillance purposes:[3] while 74% of the patient discharge records' primary ICD codes were accurate, only 45% of records had complete sets of the required codes during a pilot at a central hospital in 2013. This study followed the recommendations from that pilot for additional training, oversight of junior clinicians and co-ordination of competing processes.

Increasing demands on clinicians make it difficult for them to commit to costly, time-consuming accredited ICD coding courses, although such programmes have been shown to have a positive impact on data quality.[4,5] Independently of this research, the WCGH introduced a package of support interventions at one of two central hospitals where the eCCR was implemented. The package included orientation to the eCCR, on-site training in the fundamentals of ICD coding, senior review of discharge summaries prepared by junior staff, access to an in-house-developed online ICD coding training course, and on-site support from a case manager designated to support eCCR users in ICD coding.
There is little literature on the impact of training and support on ICD coding quality. Previous research used inter-observer reliability as the standard for quality, but did not appraise codes against the original patient record.[6,7] In general, research into ICD coding targets dedicated coders and focuses on efficiency and productivity.[7]
Objective
A retrospective evaluation of the impact of the ICD coding support package, comparing data quality before and after the introduction of the package at an intervention site and a control site, each a tertiary-level hospital in the Western Cape Province of SA. The study formed part of a larger evaluation of the eCCR and ICD coding in the Western Cape.

Study design
This was a quasi-experimental study in which the quality of ICD-10 data in the eCCR was assessed before and after the implementation of training and support at the intervention site. ICD-10 data quality was also assessed at a control site to determine the change in data quality over and above changes that may have occurred naturally without the intervention.

Study setting and population
The study was conducted in the internal medicine departments of two central hospitals in the Western Cape. Patient records and data from the eCCR were reviewed for patients who were discharged over periods of 2 months: baseline 1 August - 30 September 2014, and post-intervention 1 November - 31 December 2014. During these periods, it was required that all patients admitted to general internal medicine wards at both hospitals receive discharge summaries prepared using the eCCR.

Sample size
The two-sided Fisher's exact test statistic was used to calculate the sample size for this study. The significance level of the test was targeted at p<0.05. It was hypothesised that ICD quality in the intervention group might improve by 15%, while the control group might change by 5%. Group sample sizes of 160 each were required to achieve 80% power to detect a difference in differences (DID) in group proportions of 0.10 from baseline to post-intervention. Each group sample size was increased by 10% to account for the possibility that original patient records might be missing, bringing the group sample size to 176. After the eCCR database had been cleaned, 352 records were randomly selected from the intervention and control sites in proportion to the total discharges at baseline and post-intervention. Before and after sample sizes were weighted according to the total number of patients discharged with the eCCR in each of the study periods, so that folders had an equal chance of being randomly selected at each site.

Data collection
Data were extracted from the eCCR database, original patient records and the human resource management information system. The ICD codes from the eCCR were checked by one investigator (RD) against original patient records at both the intervention and the control sites to maximise the consistency with which the outcome variables were generated. Data quality checks were performed on a 10% random sample of the data to the satisfaction of a co-author (GW). Similar to a method described by Chute et al.,[8] and as used in the pilot eCCR study,[3] the primary ICD code for each patient record was reviewed and classified as a match, a partial match or a mismatch against the original patient record. The accuracy of primary ICD codes alone was reported in this study because this is the dominant cost driver in the formulation and selection of DRGs.
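The sample-size logic described under 'Sample size' can be approximated by simulation; the assumed group proportions below (0.05 vs. 0.15, i.e. a 0.10 difference) are illustrative placeholders, since the underlying proportions used by the authors are not reported:

import numpy as np
from scipy.stats import fisher_exact

def simulated_power(p1, p2, n, sims=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(sims):
        x1, x2 = rng.binomial(n, p1), rng.binomial(n, p2)
        # two-sided Fisher's exact test on the 2x2 outcome table
        _, p = fisher_exact([[x1, n - x1], [x2, n - x2]])
        hits += p < alpha
    return hits / sims

print(simulated_power(0.05, 0.15, 160))  # roughly 0.8 under these assumptions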
The narratives in the discharge summary from the original patient record were used to determine the relevant clinical concepts of the admission episode. This assumed that clinicians summarised the most relevant clinical information in the patient episode. The eCCR discharge summaries were checked for any clinical information that should have been coded as primary or secondary diagnoses. The ordering of the codes was not used to determine coding quality, except where it influenced the primary ICD code. The technical terms relating to ICD coding in the context of this study are defined in Table 1.

Data from the patient records and eCCR were recorded on predesigned data collection forms and then entered directly into a piloted, preformatted Excel 2013 spreadsheet (Microsoft, USA) by the principal investigator (RD). A 10% sample of randomly selected folders was checked by a co-investigator, an expert in ICD coding (GW), to ensure consistency in the application of the rules used by the investigator to derive the outcome data.

Table 1. Definitions of technical terms used in this study
Primary diagnosis: 'The main condition is defined as the condition, diagnosed at the end of the episode of healthcare, primarily responsible for the patient's need for treatment or investigation. It is the "main condition treated". If there is more than one "main condition treated", then the most clinically severe or life-threatening condition should be selected. There can only be one primary discharge diagnosis per patient admission.'[3]
Secondary diagnosis: 'Additional conditions that affect patient care or may co-exist with the primary diagnosis in terms of requiring: clinical evaluation; or therapeutic treatment; or diagnostic procedures; or extended length of hospital stay; or increased nursing care and/or monitoring. This includes any comorbidity that the patient may have. There may be multiple secondary diagnoses per patient.'[3]
Clinical concept: 'A clinical concept is any diagnosis, procedure, risk factor, modifier, morphological reference or contextual circumstance that can be represented as an ICD code. ICD codes are therefore not restricted to diagnoses.'[3]
Diagnostic codes: 'All coded clinical concepts that were coded as primary, secondary and complication ICD codes.'[3]

Inclusion criteria
Records of inpatients who were discharged, using the eCCR, from the general internal medicine departments at the two central hospitals between 1 August 2014 and 30 September 2014 for the baseline period, and between 1 November 2014 and 31 December 2014 for the post-intervention period, were included.

Exclusion criteria
Records of patients who died in hospital prior to discharge, and records of patients for whom the original paper or scanned electronic patient record could not be found after three requests on separate dates, were excluded.

Measurement tools
The International Statistical Classification of Diseases and Related Health Problems, 10th revision (SA version, January 2014), derived from and licensed to SA by the WHO, was used as a reference for checking the accuracy and completeness of ICD codes.
[9] The instructional notes from the Centers for Disease Control, USA, as well as additional notes specific to SA, were used to assist in the appraisal of ICD coding quality. These resources were integrated into the eCCR and were therefore available to clinicians at the intervention site during the study period. Patient data were collected from folders and clinician characteristics from human resources records. The investigators were not blind to the study site or pre-/post-intervention period when assessing the outcomes in this study.

Statistical analysis
The record of a patient admission was the unit of analysis. If a primary ICD code was classified as a match, as described above, it was regarded as accurate. If all the relevant clinical concepts were represented by at least partially matching ICD codes, a record was regarded as complete. The term ICD coding quality is used to refer collectively to primary ICD code accuracy and coding completeness, in order to reduce repetitive statements concerning these two outcome variables. Data were imported from Excel into Stata version 13.1 (StataCorp, USA) for analysis. Categorical variables were described with proportions and 95% confidence intervals (CIs). Means and 95% CIs and medians and interquartile ranges were calculated for continuous and count variables, respectively. The before and after groups and the intervention and control groups were treated as four independent groups in the analysis. To test for statistically significant differences in patient characteristics between the groups, the χ2 statistic was used for categorical data, one-way analysis of variance for normally distributed continuous data, and the Kruskal-Wallis test for nonparametric data.

The impact of the intervention support package was determined by calculating the difference between ICD coding quality pre- and post-intervention at the control site, and then subtracting this from the difference between ICD coding quality pre- and post-intervention at the intervention site, i.e. the DID.[10,11] Other than inspection for CI overlap, significance testing could not be performed on the DID calculation, as the outcome measurements were on the overall performance of the independent groups as four distinct units rather than on the individual patient records. As further recommended by Rohrer et al.,[10] the odds of the outcome variables in the post-intervention group were determined using firstly logistic regression, which produced crude odds ratios (ORs), and secondly multiple logistic regression, which produced adjusted ORs to account for group differences and patient and clinician characteristics.[10] The associations between ICD coding quality and characteristics of both the patient and the discharging clinician have been demonstrated in previous research.[3] Based on those findings and the assumption that these factors would modify likely ICD code quality, we adjusted the regression model for the patient's age, gender, comorbidity and length of stay in hospital, the clinician's rank, the time period relative to the intervention, and the study site. The 95% CIs and p-values for the ORs were also reported, p<0.05 being regarded as statistically significant. Clinicians prepared varying numbers of summaries; this introduced a cluster design effect that was adjusted for in the analysis.
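To illustrate the kind of cluster-adjusted logistic model described above, the following sketch fits a logistic regression with clinician-clustered standard errors on synthetic data (the original analysis used Stata 13.1; all variable names and effect sizes here are invented for the example):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 352
df = pd.DataFrame({
    "post": rng.integers(0, 2, n),        # 0 = baseline, 1 = post-intervention
    "site": rng.integers(0, 2, n),        # 0 = control, 1 = intervention
    "age": rng.normal(50, 15, n),
    "comorbid": rng.poisson(2, n),
    "clinician": rng.integers(0, 40, n),  # clustering unit
})
logit_p = -1.5 + 1.9 * df["post"] * df["site"] - 0.3 * df["comorbid"]
df["complete"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

res = smf.logit("complete ~ post * site + age + comorbid", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["clinician"]}, disp=False)
print(np.exp(res.params))  # adjusted odds ratios

The post-by-site interaction term plays the role of the intervention effect in a regression formulation of the DID design.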
Ethics approval
The study was approved by the Health Research Ethics Committee at Stellenbosch University (ref. no. S13/08/137) and was conducted according to accepted and applicable national and international ethical guidelines and principles, including those of the international Declaration of Helsinki, October 2008. Ethics approval included a waiver of patient consent for the patient record review. Permission was obtained from the Provincial Health Research Committee to proceed with the research and to access data from routine systems (ref. no. 2013/RP/140). Patient identifiers were removed prior to analysis and reporting.

Included records
None of the 352 records requested from the intervention and control sites had missing folders. None of the patients had died prior to discharge, and therefore all records were included in the analysis. There were no missing data.

Patient and clinician characteristics and associations with ICD coding quality
Descriptive characteristics of patients and clinicians are shown in Tables 2 - 3. Although there appeared to be a greater proportion of females at the intervention site than at the control site, this was not statistically significant (p=0.52). There were no statistically significant differences between the groups for the patient characteristics of age (p=0.31), length of stay (p=0.41) and comorbidity (p=0.30). There were, however, statistically significant differences between the groups in terms of the rank of the discharging clinicians (p<0.01). These differences are also apparent in Tables 2 - 3. None of the associations between patient characteristics and ICD code accuracy were statistically significant in the crude and adjusted analyses. While the association with the clinician rank of 'specialist' appeared significant (Table 4), this is likely to be a spurious finding, as only one specialist's discharge summaries were sampled for this study (Tables 2 - 3). The odds of a record being encoded completely decreased by 33% for every additional comorbid condition in the patient (Table 5).

Impact of training and support on ICD coding quality: DID results
The number of records with accurate primary ICD codes improved slightly from 71.1% (95% CI 61.1 - 80.4) at baseline to 79.1% (95% CI 69.4 - 86.4) after the intervention at the intervention site, while the accuracy of records at the control site remained essentially unchanged. The DID in primary ICD code accuracy between the intervention and control sites was only 6.6% (Fig. 1). While the percentage of records with complete codes at the control site improved only slightly, from 22.5% (95% CI 14.5 - 34.2) to 25.5% (95% CI 17.9 - 35.0), the intervention site improved considerably, from 27.1% (95% CI 18.5 - 37.7) to 68.1% (95% CI 57.8 - 77.0), which translates into a DID of 38.0% (Fig. 2).

Fig. 2. Percentages of records with complete sets of ICD codes at the intervention site compared with the control site before and after the training and support intervention.
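The DID arithmetic behind the completeness figures above can be verified directly (percentages taken from the text):

def did(pre_treat, post_treat, pre_ctrl, post_ctrl):
    # change in the intervention group minus change in the control group
    return (post_treat - pre_treat) - (post_ctrl - pre_ctrl)

print(round(did(27.1, 68.1, 22.5, 25.5), 1))  # 38.0 percentage points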
Multiple regression results
Relative to the baseline period, patient records at the intervention site had an adjusted OR of 6.6 (95% CI 3.5 - 16.2) of having a complete set of ICD codes for an admission episode after the introduction of the training and support package for ICD coding. This includes adjustment for patient characteristics, clinician characteristics, time period relative to the intervention and study site. However, for the same scenario, the adjusted OR of 1.9 (95% CI 0.97 - 3.6) for accuracy of the primary ICD codes was not statistically significant.

Discussion
The results of this study describe the impact of a training and support intervention on the completeness and accuracy of discharge ICD codes generated in an electronic discharge summary application for clinicians. Potential confounding factors that could influence the impact of the intervention were also taken into account.

Despite use of the same criteria and measurement tools, the ICD code quality at both sites during the baseline period of this study was notably lower than that found in research conducted 1 year previously, in which accuracy and completeness were reported as 74% and 45%, respectively.[3] This may be because the novelty of using an electronic application for discharges had worn off, or because the researchers in the previous study had under-estimated the Hawthorne effect. Surprisingly, the relationships described between ICD coding quality and patient and clinician characteristics in the previous research were not evident in this study. Besides the strength of association being very weak and non-significant, there was little change between the ORs in the crude and adjusted analyses of these characteristics (Tables 4 - 5). The significant association with the rank of specialist should be interpreted cautiously, as there was only one specialist at the intervention site whose discharge summaries were sampled for this study. The significant associations move from strongly positive to strongly negative between the crude and adjusted analyses, suggesting instability in this finding.

Indeed, exposure to the training and support intervention had the strongest association with ICD coding completeness. However, the intervention package did not make much difference to the accuracy of primary ICD codes. Given the inherent limitations of the ICD system described by Chute et al.,[12] accuracy in the region of 75% may be as good as it gets for a clinical setting where discharges are mostly prepared by the most junior clinicians who, besides not having been trained to expert level in ICD coding, are still learning to diagnose and manage complex cases in a tertiary hospital. Other research has suggested that it may be impossible ever to achieve 100% accuracy and completeness in ICD coding, owing to the design of the ICD system.[6,12,13] None of the disease classification systems is able to capture all clinical concepts that are of interest to clinicians.[8,13] Differences between the descriptors of the ICD coding system and everyday clinical terminology also contribute to inaccurate and incomplete coding.[12,14] As has been the case in previous research, this study showed that increasing comorbidity had a negative association with the quality of ICD codes, possibly owing to the challenge of finding the correct terminology for the ICD descriptors in the look-up browser for each additional clinical concept that required encoding.
[3] The addition of SA synonyms for the American terminology used in the ICD code descriptors and help notes to the eCCR since the 2013 study did not seem to have a significant effect on data quality, though there were reports of improved user experience in the qualitative component of the larger eCCR evaluation study (Dyers et al., unpublished data).

While these results are encouraging in terms of a system-strengthening intervention to improve ICD coding quality, the quality achieved may still not be of an acceptable standard for the purposes of revenue retrieval and compliance with financial prescripts. Twenty percent of inaccurately coded patient records may negatively impact DRG costing, resulting in underfunding of the services purchased by hospitals from the national health authority as proposed in the NHI policy. Despite the notable impact of the intervention of 38% on the completeness of ICD coding, there is still room for improvement by 32% to ensure that all the required clinical concepts are encoded. For the purpose of initial DRG formulation, this may require the use of expert encoders. Repeated training interventions may also progressively improve coding quality.

Study limitations
There may have been patient, clinician and service confounding variables that were not adjusted for. Although there was a risk of measurement bias in this study due to the investigators not being blinded to the retrospective 'assignment' of patients to the intervention and control groups, efforts were made to apply the same coding rules to all groups consistently through a single observer, i.e. the principal investigator (RD), who had no particular interest in the performance of the intervention package.

The order of ICD codes was not considered for this study. The observed improvement in performance may therefore still not have been according to international coding standards. The use of only one clinical discipline at two central hospitals limits the generalisability of these results. However, this retrospective quasi-experimental evaluation forms part of a province-wide quality improvement cycle from which local policy-makers can draw lessons, while being mindful of the caveats to the findings. While there were imbalances in clinician numbers and characteristics between the groups over the study periods due to clinician rotations and varying team numbers, addressing these by intervening in the work environment or randomising patient and clinician assignment to balance the number of discharges per clinician would have created an artificial scenario and produced results that could never be achieved in the real working world. This research made use of a pragmatic approach to assessing the impact of a system-strengthening intervention, in which the complexity of the actual healthcare delivery setting was deliberately retained. This resulted in meaningful findings for translation into policy. However, it is acknowledged that the two central hospitals have different histories, university links and cultures, the potential role of which in the findings cannot be completely excluded.

Recommendations
It may not be affordable for managers to introduce the entire training and support package in all clinical departments in all hospitals, the most expensive component of the package being the case manager. However, policy-makers should consider scaling up the less costly components, such as the orientation programme, senior review of discharge summaries prepared by junior staff and access to the online ICD course.
In addition, it may be worthwhile to explore the affordability and cost-effectiveness of incrementally introducing on-site support by designated case managers in clinical areas that treat complex patients and where in-hospital costs are high, e.g. secondary, tertiary and high-care units for obstetrics, paediatrics, general surgery and internal medicine. This may have short-term cost benefits in these areas that could also spill over into other clinical areas in the medium term as clinicians rotate through the various disciplines in their training. As the more stable members of clinical teams, i.e. the senior clinicians, become more comfortable with ICD coding, the improvement in data quality may be sustained and possibly improved in the long term.

Hospital managers are advised to pursue the use of 'checklists, alerts, and predictive tools; embedded clinical guidelines that promote standardized, evidence-based practices; electronic prescribing and test-ordering that reduces errors and redundancy; and discrete data fields that foster use of performance dashboards and compliance reports'.[15] This should form part of ongoing quality improvement processes for hospital data in general, and not just for ICD coding, so that there is coherence and efficiency in the generation of all health service data.

Additional research and innovative monitoring mechanisms that include larger samples of patient records and health facilities over longer periods of time are recommended to obtain a more reliable picture of ICD coding quality.

Conclusion
Despite the inherent limitations of this non-randomised study design, this research provides sufficient pragmatic evidence that training and support had a substantial positive impact on ICD coding quality in an SA hospital setting. Additional research is required to explore the long-term impact, sustainability and cost-effectiveness of this intervention package to support clinicians in generating good-quality data for hospital inpatients.

Table 3. Post-intervention characteristics of patients and clinicians (CI = confidence interval; IQR = interquartile range)
Table 4. Crude and adjusted ORs (also adjusted for clustering) between patient/clinician characteristics and accuracy of primary ICD codes
Fig. 1. Percentages of records with accurate primary ICD codes at the intervention site compared with the control site before and after the training and support intervention.
Exploring Risk and Resilient Profiles for Functional Impairment and Baseline Predictors in a 2-Year Follow-Up First-Episode Psychosis Cohort Using Latent Class Growth Analysis

Being able to predict functional outcomes after First-Episode Psychosis (FEP) is a major goal in psychiatry. We therefore aimed to identify trajectories of psychosocial functioning in a FEP cohort followed up for 2 years, in order to find premorbid/baseline predictors for each trajectory. Additionally, we explored the diagnosis distribution within the different trajectories. A total of 261 adults with FEP were included. Latent class growth analysis identified four distinct trajectories: Mild impairment-Improving trajectory (Mi-I) (38.31% of the sample), Moderate impairment-Stable trajectory (Mo-S) (18.39%), Severe impairment-Improving trajectory (Se-I) (12.26%), and Severe impairment-Stable trajectory (Se-S) (31.03%). Participants in the Mi-I trajectory were more likely to have higher parental socioeconomic status, less severe baseline depressive and negative symptoms, and better premorbid adjustment than individuals in the Se-S trajectory. Participants in the Se-I trajectory were more likely to have better baseline verbal learning and memory and better premorbid adjustment than those in the Se-S trajectory. Lower baseline positive symptoms predicted a Mo-S trajectory vs. Se-S trajectory. Diagnoses of Bipolar disorder and Other psychoses were more prevalent among individuals falling into the Mi-I trajectory. Our findings suggest four distinct trajectories of psychosocial functioning after FEP. We also identified social, clinical, and cognitive factors associated with more resilient trajectories, thus providing insights for early interventions targeting psychosocial functioning.

Introduction
Psychosocial functioning refers to the ability to perform daily living activities such as work, studies or recreational activities, and to establish satisfying interpersonal relationships with others [1]. In the last 50 years, psychiatry has progressively moved from deficit-based care (which focuses on symptomatic remission) to a model oriented towards functional recovery, meaning that helping the patient to meet his/her personal goals has become as critical as achieving symptomatic remission [2,3]. In fact, it is increasingly accepted that functional outcomes are more meaningful measures of treatment response than scores on scales rating only psychiatric symptoms [4], and more aligned with what the patient ultimately expects from treatment [5]. Therefore, full functional remission is currently a preeminent goal in psychiatry.

Prior evidence suggests that achieving full functional recovery shortly after first-episode psychosis (FEP) is a stronger predictor of long-term full functional remission than symptomatic remission [6,7]. This underscores the need to find early and modifiable factors associated with functional impairment from the early stages onwards. Although multiple studies have investigated putative predictors of poor psychosocial functioning after FEP [8], most have approached this question using a dichotomous outcome, that is, presence vs. absence of functional impairment. The real picture seems far more complex, though, given the highly divergent outcomes in psychosocial functioning that individuals can experience after FEP, which encompass varying degrees of functional difficulty and different evolutions over time.
Some patients will experience an early functional recovery, others might exhibit severe functional difficulties from illness onset, and some subgroups might experience (persistent or transitory) mild to moderate functional impairment that still has a negative impact on their daily life. Hence, the real challenge is to predict early in the course of the disease which individuals will fall into each of these trajectories, in order to design earlier and more tailored treatments for social and personal recovery [2,9-11]. Statistical methods like latent class growth analysis (LCGA) can help to provide a more accurate picture of the heterogeneous course of psychosocial functioning that can be observed following FEP, as they allow considering different outcomes of the same characteristic simultaneously [12,13]. To our knowledge, only a few studies so far have applied these statistical techniques to assess functional outcomes in FEP samples [14-16], and none of them has considered simultaneously sociodemographic variables, clinical features and an extensive set of cognitive domains, all of them previously related to poor functional outcomes [17]. Therefore, our main aim was to identify distinct trajectories of functional impairment over the 24-month follow-up of a FEP cohort and to assess putative predictors of these trajectories, with a special focus on resilient trajectories. As a secondary objective, we aimed to explore the diagnosis distribution within the different trajectories.

Participants
The current study is based on data from the project 'Phenotype-genotype and environmental interaction. Application of a predictive model in first psychotic episodes' (PEPs study), a multicenter, longitudinal, naturalistic follow-up study [18]. A total of 16 centers throughout Spain participated in this study; fourteen of them were members of the Biomedical Research Networking Center for Mental Health (CIBERSAM) [19] and two were collaborating centers [18]. The study was conducted in accordance with the ethical principles of the Declaration of Helsinki. It was approved by the ethics committees at each participating center (project identification code: 2008/4232). All participants or their legal guardians signed an informed consent after receiving a full explanation of the study's procedures. The detailed protocol of the PEPs study has been published elsewhere [18,20]. Briefly, a total of 335 subjects with FEP were recruited by the participating centers from April 2009 to April 2012. Individuals were included in the PEPs study if they were between 7 and 35 years old, presented first lifetime psychotic symptoms for at least one week in the last 12 months, were fluent in Spanish, and were willing to sign the informed consent. Intellectual disability according to the Diagnostic and Statistical Manual of mental disorders, 4th edition (DSM-IV) criteria [21], a history of head trauma with loss of consciousness, and the presence of an organic disease with mental repercussions constituted exclusion criteria. Patients had been under antipsychotic treatment for less than 12 months at study entry. Follow-up assessments were conducted at 2 months, 6 months, 12 months and 24 months following inclusion.

Baseline Sociodemographic Data
Sociodemographic data were collected from all participants at baseline, including sex, age, ethnicity, educational level, marital status, current living situation, occupation, and parental socioeconomic status (SES).
Parental SES was determined using the Hollingshead Two-Factor Index of Social Position [22]. Personal and family history of somatic and psychiatric disorders was also compiled. History of drug misuse was evaluated using the adapted version of a Multidimensional Assessment Instrument for Drug and Alcohol Dependence scale [23]. The Family Environment Scale (FES), a self-report instrument, was used to assess the patients' perception of the social climate within their families [24,25].

Baseline Clinical and Functional Assessment
For all subjects in the study, diagnosis was established by experienced mental health professionals using the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I) [21,26]. Psychopathology was evaluated using the Spanish validated versions of the Positive and Negative Syndrome Scale (PANSS) [27,28], the Young Mania Rating Scale (YMRS) [29,30], and the Montgomery-Åsberg Depression Rating Scale (MADRS) [31,32]. Premorbid adjustment was estimated by means of the retrospective Premorbid Adjustment Scale (PAS) [33]. The Functioning Assessment Short Test (FAST) [1,34] was used to determine psychosocial functioning. It comprises 24 items, which evaluate six specific functioning domains: autonomy, occupational functioning, cognitive functioning, financial issues, interpersonal relationships, and leisure time. The scale seeks to identify changes or difficulties in functioning attributable to the illness. FAST scores range from 0 to 72. According to the cut-off classification proposed by Bonnín et al. [35], FAST scores > 40 are indicative of severe functional impairment, scores between 21 and 40 indicate moderate functional impairment, scores between 12 and 20 indicate mild impairment, and scores ≤ 11 reflect no functional impairment. The scale has been shown to be sensitive to change and has been validated for FEP [36]. In all the aforementioned scales, higher scores are indicative of greater clinical severity or functional impairment. History of traumatic life events was assessed with the Spanish version of the Trauma Questionnaire (TQ) [37,38]. Duration of untreated psychosis (DUP), defined as the number of days elapsed between the onset of positive psychotic symptoms and the initiation of the first appropriate treatment for psychosis, was also registered. It was estimated using the Symptom Onset in Schizophrenia (SOS) inventory [39].
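As a concrete reading of the Bonnín et al. cut-offs for the FAST total score described above, a small helper function (the function name is ours, not part of the scale):

def fast_category(total_score: int) -> str:
    # Cut-offs for the FAST total score (possible range 0-72)
    if total_score > 40:
        return "severe impairment"
    if total_score >= 21:
        return "moderate impairment"
    if total_score >= 12:
        return "mild impairment"
    return "no impairment"

print(fast_category(35))  # moderate impairment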
2-Month Follow-Up Neuropsychological Assessment
Participants were likewise evaluated using a comprehensive neuropsychological battery encompassing most of the cognitive domains proposed by the National Institute of Mental Health MATRICS consensus [40]. The evaluation was performed by trained neuropsychologists in the first two months after inclusion in the study, to avoid the interference of acute psychopathological manifestations with the neurocognitive assessments. The neuropsychological assessment comprised the following cognitive domains: (1) estimated Intelligence Quotient (IQ), calculated from performance on the Vocabulary subtest of the Wechsler Adult Intelligence Scale (WAIS-III) [41]; (2) executive function (Stroop Color-Word Interference Test [42], Wisconsin Card Sorting Test (WCST) [43] and Trail Making Test (TMT), form B [44]); (3) attention (Continuous Performance Test-II (CPT-II) [45]); (4) processing speed (TMT, form A [46], and the categorical (Animal Naming) and phonemic (F-A-S) components of the Controlled Oral Word Association Test (COWAT) [47]); (5) verbal memory (Spanish version of the California Verbal Learning Test, the Test de Aprendizaje Verbal España-Complutense (TAVEC) [48]); (6) working memory (Digit Span and Letter-Number Sequencing subtests of the WAIS-III [41]); and (7) social cognition (Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) [49,50]). The neuropsychological battery is described in further detail in the PEPsCog study [51].

Identification of Functional Trajectories: Latent Class Growth Analysis
LCGA was used to identify distinct functioning trajectories over the 24-month follow-up. In the current analysis, individual class membership was assigned on the basis of FAST total scores measured at five time points over the two-year follow-up period, namely at baseline and at the 2-, 6-, 12-, and 24-month follow-ups. We only included in the analysis individuals over 18 years old, as the FAST scale has only been validated in adult samples, and with information on the FAST scale in at least two follow-up assessments. This left a sample of 275 adult participants. Each model was rerun 100 times using different start values to avoid convergence to local maxima [52]. To accommodate expected fluctuations over time, we estimated linear and quadratic terms. In order to determine the optimal number of trajectory classes, models with increasing numbers of latent classes (from 1- to 4-class models) were fitted to the data, and the best-fitting model was selected according to the following goodness-of-fit indices: Akaike's Information Criterion (AIC), Bayesian Information Criterion (BIC), sample-size-adjusted BIC (aBIC), and entropy. Lower values of AIC, BIC, and aBIC suggest a more parsimonious model, while higher entropy also indicates better model fit. Entropy ranges from 0 to 1 and is a summary indicator of the accuracy with which models classify individuals into their most likely class; values approaching 1 indicate clear delineation of classes [53]. Interpretability and parsimony of the model were also taken into consideration in the final selection. LCGA analyses were performed in R version 3.6.3, using the 'lcmm' package ([54]; https://cran.r-project.org/web/packages/lcmm/index.html).

Identification of Baseline Predictors of Functional Trajectory Membership
To identify putative baseline predictors of trajectory membership, the estimated latent classes (i.e., the estimated trajectory groups) derived from LCGA were imported into SPSS, version 23 (SPSS Inc., Chicago, IL, USA), for a three-step analysis. First, we created seven cognitive composites to be used as putative baseline predictors, using data from the two-month follow-up neurocognitive assessment. To do so, patients' raw scores on each neuropsychological task were standardized to z-scores based on the performance of the whole sample. The selection of the tasks within each cognitive domain was based on previous works from the PEPs group [51,55,56].
Afterwards, z-scores of the different tests were summed and averaged to create the following seven cognitive composites: (1) the processing speed composite, based on the word-color task of the Stroop Test and the TMT-A; (2) the working memory composite, which included the Letter-Number Sequencing and Digit Span WAIS-III subtests; (3) the verbal learning and memory index, composed of the total trials 1-5 list A, short free recall, short cued recall, delayed free recall, delayed cued recall, and recognition scores of the TAVEC; (4) the executive function composite, calculated from the number of categories and perseverative errors of the WCST, the Stroop Interference Test, and the TMT-B; (5) the attention composite, based on several measures of the CPT-II, such as commissions and reaction time; (6) the verbal fluency composite, composed of the Category Fluency (Animal Naming) and F-A-S Tests of the COWAT; and (7) the social cognition composite, which included the Emotional Management branch of the MSCEIT. Whenever extreme scores were detected in the performance of the aforementioned tests (i.e., more than four standard deviations (SD) above or below the mean), the scores were truncated to z = +/- 4. Since higher scores in the CPT-II, WCST perseverative errors, and TMT-A and -B indicate poorer performance, z-scores obtained from these measures were reversed before constructing the corresponding composite scores (a sketch of this procedure is given at the end of this section).

Second, candidate predictors (i.e., baseline sociodemographic and clinical variables as well as the created cognitive composites) were compared between trajectory classes using Kruskal-Wallis and chi-square tests, as appropriate. The Kruskal-Wallis test was selected for continuous variables since they did not follow a normal distribution, as assessed visually and by the Kolmogorov-Smirnov test. When applicable, post-hoc comparison analyses with Bonferroni correction for multiple comparisons were performed to further clarify the presence of significant differences between trajectory classes. Third, variables found to be statistically significant in the post-hoc analysis in at least two pairwise comparisons were entered into a multinomial regression model to determine which candidate factors independently predicted trajectory membership, adjusting for age and sex. For the PANSS scale, only the PANSS positive and negative subscales were entered as independent variables, to avoid multicollinearity. Significant putative predictors for the multivariable model were identified using a stepwise backwards elimination process [57], with sex and age entered as fixed factors. The identified latent classes were used as the dependent variable. Since we were interested in exploring predictors of resilient trajectories, we selected the most impaired group as the reference category.

Diagnosis Distribution within the Identified Functional Trajectories
Lastly, to explore whether the diagnosis distribution differed within each functional trajectory and how it changed over time, we used chi-square tests to compare the proportions of individuals with a diagnosis of Schizophrenia, Bipolar disorder, Schizoaffective disorder, and Other psychoses (including psychotic disorder not otherwise specified, brief psychotic disorder, schizophreniform psychosis, delusional disorder, and substance-induced psychosis) in each of the predicted functional trajectories at baseline, 1-year and 2-year follow-up. The level of statistical significance for all analyses was set at p < 0.05.
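The composite construction described in the first step above can be sketched as follows; the column names are invented for illustration, the TMT-A score is sign-reversed because longer completion times mean poorer performance, and z-scores are truncated at +/- 4 SD as in the text:

import numpy as np
import pandas as pd

def cognitive_composite(df, cols, reverse=()):
    z = (df[cols] - df[cols].mean()) / df[cols].std()
    z = z.clip(lower=-4, upper=4)   # truncate extreme scores at +/- 4 SD
    for col in reverse:             # tasks where higher raw scores are worse
        z[col] = -z[col]
    return z.mean(axis=1)           # average the task z-scores

rng = np.random.default_rng(0)
data = pd.DataFrame({"stroop_word_color": rng.normal(45, 10, 100),
                     "tmt_a_seconds": rng.normal(40, 12, 100)})
data["processing_speed"] = cognitive_composite(
    data, ["stroop_word_color", "tmt_a_seconds"], reverse=["tmt_a_seconds"])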
Diagnosis Distribution within the Identified Functional Trajectories

Lastly, to explore whether diagnosis distribution differed within each functional trajectory and how it changed over time, we used chi-square tests to compare the proportion of individuals with a diagnosis of Schizophrenia, Bipolar disorder, Schizoaffective disorder, and Other psychoses (including psychotic disorder not otherwise specified, brief psychotic disorder, schizophreniform psychosis, delusional disorder, and substance-induced psychosis) in each of the predicted functional trajectories at baseline, 1-year, and 2-year follow-up. The level of statistical significance for all analyses was set at p < 0.05.

Sample Characteristics and Attrition Analysis

The final sample included 261 participants. A total of 14 individuals were not considered for the analyses, since information on their FAST scores was available at only one time point; they were therefore treated as drop-outs. The baseline characteristics of the final sample are presented in Table 1, and a comparison between drop-outs and non-drop-outs at baseline, 12-month, and 24-month follow-up can be found in Supplementary Table S1. The median age of the final sample was 25.05 years (interquartile range: 9), and 33% of the participants were female. Among the subjects who dropped out of the study, there was a lower proportion of Caucasian participants and of participants with a family history of psychiatric disorders. Subjects who dropped out also reported substance misuse at baseline more frequently.

Latent Classes of Functional Trajectories

After examining the fit indices, entropy, parsimony, and interpretability of the models, the 4-class model including the quadratic term was selected as optimal for our data (Table 2). Entropy was acceptable for the 4-class model (0.76), as were the posterior mean class-membership probabilities (0.81 for Class 1, 0.92 for Class 2, 0.82 for Class 3, and 0.84 for Class 4). This suggests that, with the 4-class model, individuals were likely to be correctly assigned to their respective latent class. Table 2 presents the goodness-of-fit statistics of the latent class growth analyses with one- to four-class solutions of psychosocial functioning trajectories [58], with the selected model shown in bold.
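For reference, fit indices and classification quality of this kind can be read directly off the fitted lcmm objects. This sketch assumes m1 through m4 were fitted as in the earlier example; note that the "entropy" option of summarytable is an assumption that holds only in recent versions of the package.

```r
# goodness-of-fit comparison across the 1- to 4-class solutions
summarytable(m1, m2, m3, m4,
             which = c("G", "loglik", "AIC", "BIC", "SABIC", "entropy", "%class"))

# posterior classification table and mean class-membership probabilities
postprob(m4)
```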
The mean FAST scores at each assessment point of individuals grouped according to their predicted trajectory are presented in Figure 1. One group showed mild impairment at baseline and no impairment by the end of the follow-up, and was referred to as the Mild impairment-Improving trajectory (Class 1; n = 100 (38.31%)). Another group, termed the Moderate impairment-Stable trajectory (Class 2; n = 48 (18.39%)), exhibited moderate functional impairment at baseline and throughout the follow-up. A third group presented with severe functional impairment at baseline that improved over the follow-up, and was referred to as the Severe impairment-Improving trajectory (Class 3; n = 32 (12.26%)). The last group, termed the Severe impairment-Stable trajectory, displayed severe-to-moderate functional impairment throughout the follow-up (Class 4; n = 81 (31.03%)). Thus, 50.57% of the sample showed a trajectory characterized by functional improvement/recovery ("Improving trajectories"), while 49.42% exhibited persistent functional impairment during follow-up ("Stable trajectories").

Baseline Predictors of Trajectory Membership

The comparison between the four psychosocial functioning trajectories on sociodemographic, clinical, and neuropsychological variables is presented in Table 3. The baseline variables found to be statistically different between groups in at least two pairwise comparisons were: parental SES, alcohol use, PANSS positive, PANSS negative, PANSS general, PANSS total, Young total, MADRS total, PAS total, verbal learning and memory, and working memory. As previously stated, for the PANSS scale, only the positive and negative subscales were entered as independent variables in the multinomial regression model. The multinomial regression analysis (final model: Nagelkerke R² = 53%, χ² = 140.26; df = 24; p < 0.001) indicated that parental SES, baseline scores on the PANSS positive subscale, the PANSS negative subscale, the MADRS, and the PAS, as well as verbal learning and memory, contributed to differentiating among the four functional trajectories (Table 4). Specifically, subjects falling into the Mild impairment-Improving group were more likely to have a medium-high parental SES (OR: 4.14, 95% CI 1.

Exploring Diagnosis Distribution among Functional Trajectories throughout the Follow-Up

The diagnosis distribution within each functional trajectory at baseline, one-year follow-up, and two-year follow-up is depicted in Figure 2. Diagnosis distribution significantly differed between trajectory groups at baseline (n = 261; χ² = 19.9; p = 0.02), at 1-year follow-up (n = 202; χ² = 42.6; p < 0.001), and at 2-year follow-up (n = 156; χ² = 28.5; p = 0.001). A higher proportion of patients with a diagnosis of Schizophrenia was found among individuals falling into the Severe impairment-Stable and Moderate impairment-Stable trajectories compared to the Mild impairment-Improving trajectory. Conversely, the diagnoses of Bipolar disorder and Other psychoses were more frequent among individuals falling into the Mild impairment-Improving trajectory compared to the Severe impairment-Stable trajectory. Abbreviations: Mi-I: Mild impairment-Improving; Mo-S: Moderate impairment-Stable; Se-I: Severe impairment-Improving; Se-S: Severe impairment-Stable.
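These comparisons are standard chi-square tests of independence on diagnosis-by-trajectory contingency tables. In the sketch below, the per-diagnosis counts are invented for illustration (only the column totals are constrained to match the reported class sizes); it also reproduces the small-expected-cell warning behind the caution raised in the Limitations.

```r
# hypothetical baseline counts: rows = diagnoses, columns = trajectories
tab <- matrix(c(40, 15, 12, 45,
                25,  8,  6, 10,
                 8,  5,  4,  9,
                27, 20, 10, 17),
              nrow = 4, byrow = TRUE,
              dimnames = list(c("Schizophrenia", "Bipolar", "Schizoaffective", "Other"),
                              c("Mi-I", "Mo-S", "Se-I", "Se-S")))
chisq.test(tab)  # warns when expected cell counts fall below 5
```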
Post-Hoc Mediation Analysis

Given that previous work on FEP samples has suggested that premorbid adjustment may influence psychosocial functioning through verbal memory and negative symptoms [59], we tested how the identified predictors interact to impact functioning in our sample. To do so, we examined mediation using a regression-based bootstrapping approach [60]. Analyses were performed with PROCESS [61], with age and sex introduced as covariates (see Appendix A for a more detailed explanation). The model used to explore mediation between predictors of the Severe impairment-Improving vs. Severe impairment-Stable trajectories indicated that better premorbid adjustment positively impacts verbal learning and memory, which in turn increases the probability of belonging to the Severe impairment-Improving trajectory (indirect effect = −0.011; 95% CI, −0.030 to −0.001). However, our results indicate complementary partial mediation, since both the direct and indirect effects were significant and pointed in the same direction [62]. Regarding mediation between predictors of the Mild impairment-Improving vs. Severe impairment-Stable trajectories, we established that parental SES partially mediates its effects through premorbid adjustment and through baseline negative symptoms (indirect effect = −0.249; 95% CI, −0.551 to −0.086).

Discussion

In this study, we used LCGA to investigate trajectories of psychosocial functioning following FEP. In line with previous studies using the same approach [14][15][16], our results indicate a heterogeneous pattern of psychosocial functioning in the first years after FEP. Specifically, we found four distinct functional trajectories. The largest group in our sample showed mild functional impairment at baseline and experienced functional recovery shortly after FEP. The second largest group experienced severe functional impairment at baseline, which persisted, although somewhat attenuated, throughout the study period. A third group displayed moderate and persistent functional impairment throughout the 24-month follow-up. Finally, a minority of patients exhibited severe functional impairment at baseline, which subsequently improved almost to the point of no functional impairment by the end of the follow-up. Importantly, around 50% of the sample exhibited a marked functional improvement by the end of follow-up. Baseline factors associated with functional improvement were medium-high parental SES and less severe negative and depressive symptoms (for individuals in the Mild impairment-Improving trajectory), better scores in the verbal learning and memory domain (for individuals in the Severe impairment-Improving trajectory), and better premorbid adjustment (for both the Mild impairment-Improving and Severe impairment-Improving trajectory groups). Less severe positive symptoms at baseline predicted a Moderate impairment-Stable trajectory vs. a Severe impairment-Stable trajectory. These results are in agreement with previous studies in FEP and chronic psychiatric samples, where parental SES [17,63], negative [14,64,65] and depressive symptoms [66,67], verbal memory [64], and premorbid adjustment [14,68] were predictors of functional outcomes.
To our knowledge, however, this is the first study to simultaneously analyze such a large panel of potential predictors of mid-term psychosocial functioning trajectories identified using an LCGA approach, including sociodemographic, clinical, and neurocognitive variables, and to further examine the interactions between the identified predictors. Regarding diagnosis distribution among classes, our findings are in keeping with previous research [20,69]. All diagnoses were represented in the four trajectories, yet the proportion of patients with a diagnosis of Schizophrenia was higher among individuals showing persistent functional difficulties, whereas a higher proportion of patients with Bipolar disorder or Other psychoses fell into the group showing the most favorable functional trajectory. Although these results need to be interpreted with caution due to participant drop-out during follow-up, we found the same pattern at the 12-month and 24-month follow-ups. In our study, medium-high parental SES appeared as one of the main predictors of the trajectory characterized by mild functional impairment at first assessment followed by an early functional recovery. The association between higher parental SES and better functional outcomes is probably a complex one. Indeed, our mediation analysis suggests that parental SES partially mediates its influence on functionality through premorbid adjustment and negative symptoms. However, other factors not included in the mediation analysis also seem to play a role. For instance, families with a higher SES might provide more cognitive stimulation to their offspring [70], for example by involving them in more intellectual, artistic, or cultural leisure activities, hence enhancing their cognitive reserve, which has been associated with better functional outcomes [56,71,72]. In fact, we found that subjects within the Mild impairment-Improving trajectory reported being involved in more social and recreational activities than the Severe impairment-Improving trajectory group, as reflected by higher scores on the Active-recreational orientation subscale of the FES. These families may likewise have more resources to identify the first psychotic symptoms and enable earlier engagement with mental health services [73]. Higher parental SES could also translate into more family support or greater means to provide care in the post-FEP period [74]. In any case, our results emphasize the need for social interventions that promote and educate on mental health and facilitate access to mental health services in the pre- and post-FEP periods [75,76], as has been done in Australia through the headspace initiative (https://www.headspace.org.au). Several studies have consistently reported a relationship between verbal learning and memory and functional outcomes, in both affective and non-affective samples [51,67,[77][78][79]. For instance, more preserved verbal learning before enrolling in functional remediation, a psychological therapy specifically targeting functional impairments, is associated with better long-term functional outcomes after this therapy [80]. Negative symptoms are also well-known predictors of poor functional outcomes [81][82][83], and the interrelationship between negative symptoms and cognition as predictors of functionality has been a matter of intense debate and study in prior works [84,85]. In the study by Milev et al.
[64], performed in a sample of 99 subjects followed for seven years after FEP, verbal memory appeared as a strong predictor of global functioning in univariate logistic analysis. However, when the effect of verbal memory was examined together with negative symptoms in a multivariate multinomial logistic regression, negative symptoms took precedence over verbal memory as a predictor of global functioning, and the latter was no longer significant. In their three-year follow-up study, Simons et al. [86] likewise found that the association between performance in most cognitive domains, including verbal memory, and long-term social functioning was fully mediated by negative symptoms. Finally, Jordan et al. [59] showed that verbal memory predicted the length of negative symptom remission in FEP patients, which in turn predicted better functional performance. According to this evidence, negative symptoms might play a more predominant role in predicting functional outcomes than verbal memory. That might explain why, when comparing the groups exhibiting significantly different severity of negative symptoms at baseline (i.e., Severe impairment-Stable vs. Mild impairment-Improving), negative symptoms but not verbal memory appeared as a predictor of a poorer functional trajectory. In contrast, when comparing groups with similar negative symptoms at baseline (i.e., Severe impairment-Stable vs. Severe impairment-Improving), more preserved verbal memory arose as a significant predictor of better functional recovery. Consequently, our findings confirm the importance of negative symptoms as a treatment target for functional recovery and suggest that assessing performance in verbal learning and memory might be especially useful as a differential predictor of future functional outcome in FEP subjects presenting with severe functional impairment and similar negative symptoms. Conversely, for subjects showing mild negative symptoms at baseline, assessing verbal memory and learning might not provide additional information on their functional prognosis. Better premorbid adjustment also appeared as a predictor of a more favorable functional trajectory in our analysis, in keeping with prior evidence [81,87]. As suggested by Hodgekins et al. [14], the persistence of functional impairment after FEP in subjects with poorer premorbid adjustment might simply reflect a functional disability that was already present before the onset of the full-blown psychotic episode, rendering it difficult for these patients to achieve functional remission; hence the importance of intervening early in the course of the disease with specific interventions designed to improve functionality [75,88,89]. Considering that the effects of premorbid adjustment on psychosocial functioning might be partially mediated by verbal learning and memory, as further supported by Jordan et al. [59], individuals at high risk for affective and non-affective psychosis who exhibit poor social adjustment (and especially those with low parental SES) might benefit from an adapted version of functional remediation, which improves functionality but also enhances verbal memory [90,91]. Randomized clinical trials in early-stage samples will be needed to test the real benefit of early functional remediation interventions (ideally adapted to high-risk samples) on long-term psychosocial outcomes.
To date, evidence from randomized clinical trials is only available on the effect of cognitive remediation in individuals at ultra-high risk for psychosis; it points to a positive impact on cognitive measures, including verbal memory, but less clear effects on psychosocial functioning [92]. Finally, our results indicate that less severe depressive symptoms at baseline are associated with a Mild impairment-Improving trajectory. Persistent depressive symptoms have been shown to worsen functional prognosis after FEP [93,94]; however, in our study we were evaluating the putative predictive role of baseline depressive symptoms, and it may therefore be that our findings simply reflect a less severe clinical presentation in the Mild impairment-Improving trajectory compared to the Severe impairment-Stable trajectory. Additionally, the Severe impairment-Stable trajectory was characterized by more severe negative symptoms, and we cannot rule out some overlap between scores on the MADRS and the PANSS negative subscale [95]. A similar explanation can be applied to our finding that lower baseline scores on the PANSS positive subscale predicted a Moderate impairment-Stable trajectory compared to the Severe impairment-Stable trajectory: this may reflect that the differences in functionality observed between the two groups at the first assessment were driven by more severe psychotic symptoms at baseline. Future work with a greater sample size, including variables not available in this study (such as cognitive reserve scores or biological markers) and taking into account longitudinal factors that can also influence functioning (such as persistent substance abuse or therapeutic non-compliance), would be needed to confirm and refine our findings. Furthermore, our finding that all diagnoses are represented in all trajectories supports the idea that there are transdiagnostic subgroups that are alike in clinical presentation and outcomes. According to previous research [96], these subsets of patients might represent specific biotypes that are not governed by classical diagnostic criteria. Therefore, future studies analyzing whether patients falling into resilient vs. persistent functional trajectories are characterized by a differential set of biomarkers would be valuable for developing precise models of risk stratification of functional impairment. For now, our results already suggest that more preserved verbal learning and memory could be used as a marker of functional resilience in those FEP patients with a more severe clinical and functional presentation.

Limitations

The current study presents several limitations. Firstly, as this was a sub-analysis of a prior study not primarily designed for the purpose of the present work, the sample size might be too small and the follow-up too short to capture all the potential trajectories of psychosocial functioning. Secondly, trajectory "naming" is a subjective process; in our case, it was based on what we considered the most important information to be extracted from the observed trajectories, and some might not agree with the chosen labels. Nevertheless, we consider our approach pragmatic and clinically useful, as it delineates two subsets of patients: those at risk of sustained functional difficulties and those who are more resilient, that is, showing more improvement during follow-up.
Thirdly, we focused on baseline predictors and did not take into account variables such as treatment compliance or substance abuse during follow-up, which might also contribute to functional outcomes in the period after FEP. Fourthly, as the study design was established prior to 2009, specific scales for negative symptoms, such as the Brief Negative Symptom Scale (BNSS) [97] or the Clinical Assessment Interview for Negative Symptoms (CAINS) [98], were not used. The same applies to cognitive reserve, with scales such as the CRASH not being available at that time [99]. Lastly, results regarding diagnosis distribution need to be interpreted with caution due to the small sample size in some of the diagnostic categories, which may render the χ² results invalid.

Conclusions

In our study, we identified four trajectories of psychosocial functioning following FEP, two of them indicative of a persistent functional impairment course and two describing a more resilient course. Additionally, our findings give some clues about putative factors that might mediate functional resilience, such as better socioeconomic status and premorbid adjustment, less severe negative symptoms, and more preserved verbal learning and memory. They also highlight that final functional outcomes result from the additive effects of a variety of factors. Hence, an integrative approach from the very early stages is needed to target functional impairments, especially among those in a more vulnerable psychosocial situation.

Appendix A. Post-Hoc Mediation Analyses

Given that previous work on FEP samples has suggested that premorbid adjustment may influence psychosocial functioning through verbal memory and negative symptoms [59], we tested how the identified predictors interact to impact functioning in our sample. To do so, we examined mediation using a regression-based bootstrapping approach [60]. Analyses were performed with PROCESS [61]. Before beginning the analyses, two dummy variables for trajectory membership were created: one including only the Mild impairment-Improving and Severe impairment-Stable trajectories, and another including only the Severe impairment-Improving and Severe impairment-Stable trajectories. First, we used PROCESS model 4 to test a simple mediation model with trajectory membership (Severe impairment-Improving vs. Severe impairment-Stable, with the Severe impairment-Stable trajectory as the reference category) as the outcome variable (Y), baseline PAS score as the predictor variable (X), and baseline verbal learning and memory as the mediator variable (M) (Figure A1). Age and sex were included as covariates. The data are consistent with the claim that better premorbid adjustment positively impacts verbal learning and memory, which in turn increases the probability of belonging to the Severe impairment-Improving trajectory (indirect effect = −0.011; 95% CI = −0.030 to −0.001). The mediation partially explains the effect of premorbid adjustment on trajectory membership; in addition, premorbid adjustment influences class membership independently of the proposed mechanism (b = −0.05, p = 0.002). Hence, we infer complementary partial mediation [62].
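Although the analyses were run with the SPSS PROCESS macro, the same regression-based bootstrap of the indirect effect can be sketched in R. This is a minimal illustration only; the data frame `dat` and the column names `pas`, `vlm`, `traj`, `age`, and `sex` are hypothetical, and PROCESS's exact resampling settings are not reproduced.

```r
library(boot)

# indirect effect a*b: X (pas) -> M (vlm) -> Y (traj, binary), adjusting for age and sex
indirect <- function(d, i) {
  b <- d[i, ]
  a_path <- coef(lm(vlm ~ pas + age + sex, data = b))["pas"]
  b_path <- coef(glm(traj ~ pas + vlm + age + sex,
                     family = binomial, data = b))["vlm"]
  a_path * b_path
}

set.seed(123)
res <- boot(dat, indirect, R = 5000)   # 5,000 bootstrap resamples
boot.ci(res, type = "perc")            # percentile CI for the indirect effect
```

If the percentile interval excludes zero while the direct path from pas remains significant with the same sign, the pattern corresponds to the complementary partial mediation reported above; the serial model reported next chains two such mediator equations in the same way.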
Second, we used a serial mediation model to assess mediation between predictors of the Mild impairment-Improving vs. Severe impairment-Stable trajectories. In this model, trajectory membership (Mild impairment-Improving vs. Severe impairment-Stable, with the Severe impairment-Stable trajectory as the reference category) was the outcome variable (Y), and parental SES was the predictor variable (X). Baseline PAS score (M1) and baseline PANSS negative subscale score (M2) were included, in this order, as mediator variables. The total MADRS score was not considered as a mediator, as no association with parental SES was found in a preliminary analysis. We established a serial mediation from parental SES through premorbid adjustment and then baseline negative symptoms to trajectory membership (indirect effect = −0.249; 95% CI: −0.551 to −0.086). In addition, parental SES had an indirect effect on class membership through premorbid adjustment alone (indirect effect = −0.525, 95% CI: −1.097 to −0.171) and through baseline negative symptoms alone (indirect effect = −0.504, 95% CI: −1.076 to −0.135). Finally, there was a direct effect of parental SES on trajectory membership (b = −0.932, p = 0.029), indicating complementary partial mediation.
Introduction

Organisations are increasingly under threat from attackers attempting to infiltrate their computer systems by exploiting the behaviour of human users (Sasse et al., 2001). One means by which this can be achieved is via targeted, fraudulent emails that aim to persuade employees to click on malicious links, download malicious attachments, or transfer organisational funds or other sensitive information. This practice is commonly known as spear phishing (Workman, 2008). A 2016 Cyber Incident Report (Verizon, 2016) highlighted that over 2,000 organisations experienced a data breach in 2015, with the highest number experienced by organisations in the financial sector (a total of 795). The same report also showed that approximately 1 in 10 employees of such organisations clicked on links or opened attachments contained within sanctioned phishing email tests. One way in which organisations attempt to raise awareness of spear phishing emails amongst their staff is through the use of simulated phishing tests. This involves the organisation sending simulated, targeted phishing emails to a number of employees and monitoring the resultant 'click-rate' (i.e., the proportion of employees who click on malicious links within the email). Such emails, whether sent as part of simulated phishing tests or by actual fraudsters, use a range of influence techniques to encourage people to respond quickly and without consideration. These include instilling a sense of urgency or limited availability and exploiting compliance with authority figures (Atkins and Huang, 2013; Cialdini, 2007; Stajano and Wilson, 2011). Examples of influence techniques used in spear phishing emails are shown in Table 1. When such attacks are successful, they can result in substantial reputational damage, monetary losses, or operational impacts for the organisation involved (e.g., Landesman, 2016; Piggin, 2016; Zetter, 2016). It is this threat that has contributed to the rise of anti-phishing training games, formal phishing simulation tests, and interface design initiatives to increase employee awareness and assist in the effective management of phishing risks within the workplace (Abawajy, 2014; Dodge et al., 2007). Despite an increased focus on training and awareness approaches, a 2016 report produced by security training firm PhishMe highlighted that employees continue to be vulnerable to phishing attacks, with an average response rate of approximately 20% (Computer Fraud and Security, 2016; PhishMe, 2016). This includes responses to both spear phishing and generic phishing emails. This report, which was based on the analysis of over 8 million simulated phishing emails, also highlighted that 67% of employees who respond to simulated phishing attacks are repeat victims and are therefore likely to respond to phishing emails more than once. The continuing vulnerability of many organisations to phishing attacks has led the UK National Cyber Security Centre to recently release specific guidance for organisations regarding how they can defend themselves from the phishing threat (NCSC, 2018a). The hierarchical nature of many workplaces and employees' limited time mean that they are likely to be particularly susceptible to the authority and urgency influence techniques highlighted by Cialdini (2007) and Stajano and Wilson (2011).
Elements of the particular work context in which a spear phishing email is received (such as receiving an urgent request whilst being particularly busy or distracted) are also likely to exacerbate susceptibility. However, difficulties in accessing data related to susceptibility within workplace settings have severely limited current understanding of these factors. There is therefore much to be gained from investigating the role of both influence techniques and work-related contextual factors using applied data sources. This will not only aid theoretical development, but also assist in advancing practical interventions. The present paper uses data from two organisations that routinely handle sensitive information to address this limitation, using a novel approach that enables existing theoretical concepts to be considered, and new ones identified, in relation to applied workplace settings. The paper is structured as follows. First, we briefly consider current theoretical approaches and research findings relevant to susceptibility to spear phishing emails. We then present two studies conducted in organisational settings. In Study One, we take a novel approach to the examination of message-related factors (specifically, the presence of authority and urgency influence techniques) by examining historic data from simulated phishing tests within organisation A. In Study Two, we undertake a qualitative exploration of wider susceptibility factors related to the individual recipient and the context that they are in (including how familiar they are with the message sender, whether they are expecting a particular communication, and their awareness of the potential risk of spear phishing) by exploring employee perceptions of susceptibility within the work environment using a focus group methodology in a second organisation (organisation B). Although Williams et al. (2017a) discuss the potential role of these various aspects on susceptibility to online influence in their theoretical review, there is limited empirical evidence to date. The current studies take a first step in addressing this gap. We conclude by considering these findings in relation to the potential expansion of current theories. We also consider potential contributions to practical applications, including interface design, employee training and awareness, and decision support systems.

Theoretical justification

Over the last decade, researchers have attempted to identify the primary factors that may impact individual susceptibility to phishing emails. This has led to the development and application of a range of theoretical frameworks, including the Integrated Information Processing Model of Phishing Susceptibility (IIPM; Vishwanath et al., 2011), the Suspicion, Cognition, and Automaticity Model (SCAM; Vishwanath et al., 2016), and Protection Motivation Theory (PMT; Rogers, 1975). Although these models show a degree of overlap, they have rarely been studied together, despite the fact that all of the highlighted elements are likely to influence susceptibility to spear phishing. For instance, PMT has been more commonly applied to generic security behaviour and examines individual perceptions of threat and perceived ability to manage such threats. Conversely, the SCAM incorporates individual knowledge, beliefs, and habits in relation to phishing susceptibility specifically. Finally, the IIPM focuses primarily on the information processing style that is used when a phishing email is encountered.
These models have also not been extensively studied using organisational data. Exploring the role of all of these aspects within organisational settings provides a unique opportunity to understand the full range of factors that may influence susceptibility in the workplace. We further consider each of these models in relation to our study aims below.

1.1.1. The integrated information processing model of phishing susceptibility (IIPM)

The IIPM suggests that the likelihood that an individual will respond to a phishing email is influenced by the content of the email, such as the influence techniques that it contains, the use and accuracy of email signatures, and the sender address (Vishwanath et al., 2011). Specifically, the model claims that people's limited attentional resources are monopolised by the presence of particular influence techniques such as urgency (e.g., an urgent deadline). This increases the likelihood that people will rely on relatively automatic forms of information processing (known as heuristic processing) when deciding how to respond, and will not engage in more in-depth consideration of the legitimacy of the email (known as systematic processing; Eagly and Chaiken, 1993; Harrison et al., 2016a; Kahneman, 2011; Luo et al., 2013; Vishwanath et al., 2011). As a result, authenticity cues within the email (i.e., features a person uses to determine legitimacy), such as an incorrect sender address, are more likely to be overlooked. The relative role of particular influence techniques in shaping individual susceptibility to phishing remains uncertain, however (Oliveira et al., 2017). For instance, when comparing participant responses to genuine, phishing, and spear phishing emails that contained authority, scarcity, or social proof influence techniques, Butavicius et al. (2015) found greater susceptibility to emails that contained authority cues. Williams et al. (2017b) also manipulated the presence of authority cues within fraudulent software updates whilst keeping the presence of urgency cues constant, and found that participants were particularly susceptible to updates containing authority cues. However, in a field experiment in which different phishing messages were sent to more than 2,600 participants, the presence of authority influence techniques was not found to increase click-rates (Wright et al., 2014). In their analysis of participants' self-reported reasons for responding to fraudulent updates, Williams et al. (2017a) further highlighted the role of other message-related cues, such as how familiar participants were with the particular update message (i.e., whether they had received similar messages before) and whether they were expecting a particular communication. To our knowledge, the relative role of such influence techniques has yet to be explicitly examined within workplace settings, despite the fact that particular influence techniques may be differentially relevant, and therefore have different effects, in work contexts. In Study One, therefore, we explicitly investigate whether the presence of authority and urgency techniques influences employee susceptibility to simulated spear phishing emails within the workplace. We extend this in Study Two by examining employee discussions of the message-related factors that they report as making them more or less likely to respond to an email that they receive.
1.1.2. The suspicion, cognition and automaticity model (SCAM)

The SCAM claims that the extent to which heuristic processing strategies are used when evaluating emails varies according to characteristics of the individual recipient (Vishwanath et al., 2016). These differences primarily relate to individual beliefs regarding online risk (Barnett and Breakwell, 2001; Bromiley and Curley, 1992), which encompass the degree of experience, efficacy, and subject-specific knowledge that people have (Downs et al., 2006; Canfield et al., 2016; Pattinson et al., 2012; Sun et al., 2016), although the relationship between these factors remains unknown. A reliance on heuristic processing is considered more likely to occur when an individual's ability or motivation to engage in more in-depth processing of information is reduced (Eagly and Chaiken, 1993). Therefore, individuals with a greater awareness of the risks of online activity, and of phishing specifically, are considered more likely to engage in deeper processing of the information contained within emails, such as authenticity cues. Conversely, those with a lower awareness are considered more likely to engage in superficial, heuristic forms of processing. Finally, individuals' established habits of behaviour in relation to email communications are also considered to influence the degree of suspicion that they have towards emails that they receive (Vishwanath, 2015; Vishwanath et al., 2016). It is not clear, however, to what extent such constructs apply within a work context. For instance, people's beliefs regarding online risk may differ when they are at work compared to when they are at home, particularly if there are differences in how they may be impacted personally by any potential breach and in the degree of IT support available to them if they unintentionally respond to a phishing email. Similarly, the extent to which current training approaches provide sufficient knowledge to influence these beliefs and minimise employee susceptibility remains uncertain (Caputo et al., 2014). Finally, any potential relationship between these constructs and the information processing strategy that is used may be further influenced by wider aspects of the work environment, such as employees facing the additional challenge of being busy, distracted, or having other urgent primary goals competing for their time (Miarmi and DeBono, 2007; Sivaramakrishnan and Manchanda, 2003; Vohs et al., 2008). In Study Two, therefore, we explore the potential role of all of these factors within workplace settings. Specifically, we examine the extent to which these factors are reflected in employee perceptions of their own susceptibility to spear phishing. Such work is vital if the full range of potential interventions, including technical, training, process, and design solutions, is to be effectively exploited within organisations (Irvine and Anderson, 2006).

1.1.3. Protection motivation theory (PMT)

Protection motivation theory (Rogers, 1975) has been used to highlight the role of individual perceptions of online threats, and of perceived ability to cope with such threats, in relation to security behaviour more generally (e.g., Ng et al., 2009; Tsai et al., 2016).
PMT states that the likelihood of an individual engaging in protective behaviour is influenced by their perceptions of the particular threat (i.e., the perceived severity of the threat and their vulnerability to it) and the degree to which they feel able to enact the necessary behaviours to protect themselves (known as self-efficacy). PMT has recently been applied to the phishing domain. For example, a survey of 547 individuals conducted by Wang et al. (2017) demonstrated that people's 'phishing threat perceptions', combined with their (perceived) ability to detect phishing emails, influenced their resultant coping strategies: namely, whether they focused on more effective, task-focused strategies, such as finding out more information and learning new skills to manage the threat, or on more maladaptive, emotion-focused strategies, such as avoiding thinking about the issue. These coping strategies in turn influenced their ability to distinguish between legitimate and phishing emails. The potential influence of threat perceptions on responses to phishing emails was also discussed by Conway et al. (2017), who conducted a series of semi-structured interviews with employees regarding their experiences of information security and phishing. Their analysis suggested that highly visible security procedures reduced perceived vulnerability to online threats in the workplace, resulting in less secure behaviour. Within organisational settings, a number of technical and other support mechanisms may be in place to assist users on information security matters. For instance, the use of automated system alerts, specific phishing warnings circulated via email, and IT phishing-reporting mechanisms may all reduce perceived vulnerability and enhance self-efficacy in the workplace. However, there is very limited research exploring how people conceive of these mechanisms, the extent to which they may influence perceptions of vulnerability and self-efficacy, and whether employees consider them beneficial in helping them to cope effectively with the spear phishing risk. We explore the role of such factors in Study Two.

Study one

The primary aim of Study One was to examine whether the presence of authority and urgency cues within simulated spear phishing emails differentially impacted employee susceptibility to these emails within a work context. Although phishing emails can make use of a range of influence techniques (Cialdini, 2007; Stajano and Wilson, 2011), the use of authority and urgency cues within phishing emails is known to be particularly commonplace (Akbar, 2014; Atkins and Huang, 2013). Authority cues mimic organisations or individuals that are respected and have a degree of authority in relation to the recipient. Urgency cues place people under a degree of time pressure to encourage them to respond quickly. As previous work has shown that the presence of authority and urgency cues within phishing messages can increase susceptibility in other contexts (Butavicius et al., 2015; Williams et al., 2017a), we predicted that these effects would extend to a workplace setting.

Hypothesis 1. The presence of urgency cues within simulated spear phishing emails will be related to an increased likelihood of responding to these emails.

Hypothesis 2. The presence of authority cues within simulated spear phishing emails will be related to an increased likelihood of responding to these emails.
Method

Historic phishing simulation data from a large UK public sector organisation (with >50,000 employees) that interfaces with members of the public and routinely handles sensitive information was analysed. This data was collected by the organisation and provided to the researchers in the form of aggregate responses to nine simulation emails that were sent to all employees of the organisation (approximately 62,000 individuals) over a 6-week period in early 2015. These simulation emails were sent from fictitious organisations and were specifically designed to closely mimic actual phishing emails that had targeted the organisation. Each employee received two of these simulation emails. A limitation of using these applied datasets was that we were unable to ensure that all simulation emails were sent to the same number of employees. Further, we did not have access to participants' demographic information. Table 2 shows the number of recipients for each of the nine emails. An example simulated phishing email is shown in Fig. 1. All emails were addressed to the individual recipient (e.g., 'Dear John') and contained a corresponding logo related to the fictitious organisation. As is common in phishing emails, each email also contained a link within the text that recipients were encouraged to click in order to respond to the email content. If recipients clicked on the link, they were automatically directed to an internal, educational website that informed them that they had clicked on a link within a phishing simulation and provided access to further voluntary online training and awareness-raising materials. Each of the nine simulation emails was rated by two independent raters according to the degree to which the email included authority and urgency influence techniques. The content that was provided to and assessed by the raters focused on information within the email body itself. This included the logo, the text of the email body, and the email signature (as shown in Fig. 1). Raters were blind to the response rate (the 'click-rate') for each of the emails. Specifically, emails were rated on a scale of 1-3 (1 = not at all; 2 = slightly; 3 = very much), and raters were provided with standardised definitions to assist them:

• To what extent does the email contain urgency-based influence techniques? Definition: The e-mail states that the receiver has a limited amount of time to respond if they wish to engage with the e-mail content, such as being time-limited, urgent or scarce. For example, 'this link will expire 24 h after this notification has been read by you.'

• To what extent does the sender represent an authority figure or institution? Definition: The email contains cues that suggest that the sender has a degree of authority in relation to the recipient, such as the power to enforce compliance or give orders. For example, an email claiming to be from a senior figure within the organisation that requests individuals comply with a request.

Inter-rater reliability was assessed using Cohen's kappa (Dewey, 1983) and demonstrated good agreement between the two raters (k = 0.745, p < .001). For each phishing email, the score for each influence technique was calculated as the mean of the two raters' scores. These ratings are shown in Table 2.
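For illustration, agreement statistics of this kind can be computed in a few lines of R; the rating vectors below are hypothetical stand-ins for the two raters' 1-3 scores on the nine emails, not the study's actual ratings.

```r
library(irr)

# hypothetical authority ratings for the nine emails from raters r1 and r2
ratings <- data.frame(r1 = c(3, 2, 3, 1, 2, 3, 1, 1, 2),
                      r2 = c(3, 2, 2, 1, 2, 3, 1, 1, 2))

kappa2(ratings)     # Cohen's kappa for two raters, with z statistic and p-value
rowMeans(ratings)   # per-email mean rating, as reported in Table 2
```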
In order to reduce the likelihood that any differences found between emails were related to other factors, such as the perceived authenticity of the email, all nine emails were also rated on the same 1-3 scale according to (a) the extent to which the layout of the email appears genuine, (b) the extent to which the content of the email appears genuine, and (c) the extent to which the email is considered trustworthy. For each of these aspects, all nine emails were rated > 1, with the majority > 2 (except email seven, which had a mean rating of 1.5 for layout, and emails five and six, which both had a mean rating of 1.5 for trustworthiness).

Results

Click-rate data was analysed according to the particular simulation email. Collapsed across email type, there was a mean click-rate of 19.44% (range = 6.00%-35.00%; SD = 11.85%), which reflects the average response rate of 20% highlighted in the recent PhishMe report (Computer Fraud and Security, 2016; PhishMe, 2016). Due to a lack of data regarding which two emails each employee received, each data point was treated as coming from a separate participant. For each of the four techniques, emails with a mean rating > 1 were labelled as 'technique present' and those with a mean rating of 1 were labelled as 'technique not present'. To examine the relationship between the presence of authority and urgency cues and mean click-rate, a binomial logistic regression was conducted in R, with authority and urgency technique (present vs. not present) as the predictor variables and response (link clicked vs. link not clicked) as the dependent variable. The results demonstrated that both authority and urgency were associated with an increased likelihood of clicking on the email link (authority: Wald z-statistic = 72.68, df = 1, p < .001, OR = 3.42, CI [3.31, 3.53]; urgency: Wald z-statistic = 39.12, df = 1, p < .001, OR = 1.84, CI [1.79, 1.91]). This supports both Hypothesis 1, that the presence of urgency cues would be related to an increased likelihood of responding to emails, and Hypothesis 2, that the presence of authority cues would be related to an increased likelihood of responding to emails. For every one-unit increase in authority rating, the log odds of clicking on the email link increased by 1.23; for every one-unit increase in urgency rating, the log odds increased by 0.61. Finally, examining the difference between the residual deviance and the null deviance allows the performance of the model based on these predictor variables to be compared with a null model. The predictor variables were found to significantly reduce the residual deviance (null deviance = 122,136, residual deviance = 112,742, p < .001) compared to the null model, suggesting that they both contribute to model performance. These results are discussed in detail in Section 4: Discussion.
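To make the analysis concrete, the following is a minimal sketch of a model of this form in R, using counts aggregated per email condition; the counts shown are invented placeholders, not the study's data.

```r
# per-condition aggregates (hypothetical numbers for illustration only)
emails <- data.frame(
  authority   = factor(c("present", "absent", "present", "absent")),
  urgency     = factor(c("present", "present", "absent", "absent")),
  clicked     = c(2100,  900, 1400,  420),
  not_clicked = c(4900, 6100, 5600, 6580)
)

fit <- glm(cbind(clicked, not_clicked) ~ authority + urgency,
           family = binomial, data = emails)

exp(cbind(OR = coef(fit), confint.default(fit)))  # odds ratios with Wald 95% CIs
anova(fit, test = "Chisq")                        # deviance reduction vs. the null model
```

With presence/absence coded as factors, the exponentiated coefficients on the two indicator terms correspond to the odds ratios reported above.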
Email content is only one aspect likely to influence response behaviour, however. Since individual and situational factors could not be examined using the available phishing simulation data, further investigation was required to explore the potential contribution of these wider factors to employee response behaviour. Study Two was conducted to examine these factors, using a focus group methodology to explore employee perceptions of what influences their response behaviour.

Study two

The aim of Study Two was twofold. First, to examine whether factors external to the phishing message itself, such as aspects related to the individual recipient or the context in which they are operating, are likely to impact susceptibility to spear phishing within the workplace. Second, to examine whether specific factors identified in current theoretical models of phishing susceptibility (e.g., the IIPM: Vishwanath et al., 2011; the SCAM: Vishwanath et al., 2016; PMT: Rogers, 1975; further detail on specific factors is provided in Section 3.1.4: Thematic analysis) correspond with employee perceptions of their own susceptibility within the workplace. To address these aims, we employed a qualitative focus group methodology to explore employee perceptions of susceptibility to spear phishing emails. Specifically, six focus groups were conducted across two organisational sites of a second organisation (further details are provided in Section 3.1.3: Participants). These focused on examining employee perceptions of (a) the factors that impact susceptibility to spear phishing emails at work, (b) how they manage this susceptibility within the work environment, and (c) the perceived efficacy of current training approaches. In particular, we explored the role of additional susceptibility factors external to the actual influence techniques used, such as habitual email behaviours related to work routines, phishing-related knowledge, and beliefs regarding phishing risk (Ng et al., 2009; Tsai et al., 2016; Vishwanath et al., 2016).

Materials

A standardised question plan was developed to explore employee perceptions of their own susceptibility to spear phishing and how they manage suspicious emails at work. This enabled us to investigate responses in relation to current models of susceptibility to phishing (e.g., Rogers, 1975; Vishwanath et al., 2011; Vishwanath et al., 2016). This question plan was used as the basis for all focus groups and covered the following areas:

1. What factors make you more or less suspicious of an email that you receive?
2. What factors make you more or less likely to respond to a targeted phishing email?
3. What factors make you more or less likely to report an email that you receive as potentially fraudulent?
4. What do you think about current training regarding phishing?
5. Anything else you would like to add regarding your interaction with targeted phishing emails?

Although the primary emphasis of the focus groups was on exploring susceptibility to targeted 'spear phishing' emails, participants did make reference to generic phishing emails at various points, particularly when considering what made them trust an email. Where relevant, these points are highlighted in the results section.

Procedure

A qualitative approach was taken to enable perceptions and experiences to be captured and analysed according to the presence of theoretically driven themes (for further details, see Section 3.1.4: Thematic analysis). This approach allowed us to explore susceptibility factors in greater depth, as well as to identify aspects of current training that could be improved. The study was granted ethical approval by the University's Research Ethics Committee (Ref. FBL.15.11.015). Focus groups were held on-site in a private meeting room. Two researchers were present at each focus group, with one facilitating the session and the second taking written notes. Each focus group was recorded using a Dictaphone and transcribed following the session.
Any identifying information or reference to particular organisational systems was removed on transcription. Participants were provided with full details of the research prior to the focus groups and gave informed written consent at the beginning of the focus group session. It was made clear prior to the session that participation was voluntary and that participants could leave at any time without having to give a reason. Participants were also informed of general focus group etiquette prior to the start of the focus group: (a) we were interested in hearing their open and honest thoughts, (b) there were no right or wrong answers, (c) what was said in the room should not be discussed outside of it, and (d) the session would be tape-recorded, but individuals would remain anonymous in transcription and reports. Contact details of the researchers were also provided to enable participants to contact them in the future if required.

Participants

Thirty-two employees of an international organisation operating within the engineering and management sector (>10,000 total employees) participated in six focus groups conducted across two organisational sites within the UK in April 2016. Each focus group contained 4-6 participants. Participants were recruited via internal communications inviting employees to take part in a voluntary focus group conducted by university researchers to explore people's perceptions and experiences of targeted phishing emails within the workplace. Participants consisted of twelve males and twenty females and represented administrative, engineering, and project management job roles. Further demographic information was not available to the researchers.

Thematic analysis

Thematic analysis is a qualitative method that allows for interpretation of material to identify potential themes and patterns within the data (Berg, 2006). We adopted a hybrid approach, which included both inductive and deductive thematic analyses (Fereday and Muir-Cochrane, 2006). This involved three main stages: (1) a priori themes were defined according to the study objectives and in line with the previous phishing susceptibility literature (e.g., the SCAM; Vishwanath et al., 2016), (2) subcategories (codes) within each theme were defined, and (3) emergent subcategories were inductively derived from the data. To illustrate, "4. Knowledge and Training" was predefined (and coded as a primary theme), and the corresponding categories were alphabetised, for instance, "4.a. Technical Understanding". Emergent subcategories are highlighted with the corresponding codes, for instance, "4.b. understanding the security centre (emergent)". The thematic framework is outlined as follows:

1. Trust or suspicion. Definition: Concepts and perceptions related to factors that make someone consider that an email is likely to be legitimate or that make them doubt its authenticity. Based primarily on the research of Vishwanath et al. (2011) and Williams et al. (2017b). Codes: (a) determining authenticity; (b) familiarity; (c) expectations; (d) work context.

2. Perceptions of spear phishing risk. Definition: Concepts and perceptions related to people's perceived vulnerability to spear phishing within the work context, and the perceived severity if this occurred. Based primarily on Protection Motivation Theory concepts (e.g., Rogers, 1975; Tsai et al., 2016). Codes: (a) exposure to external emails (emergent); (b) centralised inboxes (emergent); (c) risk awareness.
How susceptibility is managed
Definition: Factors related to the mechanisms that people use to help them manage spear phishing emails in the workplace. Based primarily on discussions with organisational security personnel.

Two independent coders, who were both present in the focus groups, analysed the dataset. Inter-rater reliability was assessed using Cohen's kappa and demonstrated good agreement between the two raters (k = 0.890, p < .001). There were 21 instances where the two coders coded the same information differently. These discrepancies were resolved through discussion between the coders and a third individual who was not present in the focus groups but had knowledge of the research area.
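To make the agreement statistic reported above concrete, the following is a minimal sketch of how Cohen's kappa can be computed for two raters, assuming Python with scikit-learn. The two code sequences are hypothetical placeholders standing in for the theme codes each rater assigned to the same set of transcript extracts; they are not data from this study.

```python
# Minimal sketch of an inter-rater reliability check using Cohen's kappa.
# The label sequences below are hypothetical placeholders, not study data.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["1a", "1b", "2c", "4a", "3b", "1c", "4d", "2a"]  # hypothetical codes
rater_2 = ["1a", "1b", "2c", "4a", "3a", "1c", "4d", "2a"]  # hypothetical codes

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.3f}")  # values above ~0.8 are commonly read as strong agreement
```

In the same spirit, a kappa of 0.890 across the full dataset, with only 21 disagreements subsequently resolved by discussion, indicates that the coding frame was applied consistently by both raters.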
Theme 1: trust or suspicion

In all of the focus groups, the majority of participants had received some form of phishing email, although these were often related to a personal context rather than the work environment. These experiences often reflected more generic phishing scams, whereby the content of the email and sender address were highlighted as containing a number of 'suspicious' cues that were generally easy to identify, such as receiving emails that claimed to be from a legitimate organisation but that came from a personal email address: "at home you get ones like 'inland revenue at google.com'." (FG6, P1). The majority of factors that were identified as impacting trust of an email were applicable to both a home and work context, with only some specific aspects reflecting the particular work environment. These factors are discussed in more detail below.

(a) Determining authenticity

Particular aspects of an email that are used to determine authenticity were highlighted a total of 32 times across all six of the focus groups, focusing primarily on actions such as hovering over the hyperlink and examining the sender address for errors, thus demonstrating a degree of knowledge and awareness of how to identify fraudulent emails. For example, "the easiest way I find is to click on the email address it comes from" (FG1, P2). Similarly, the presence of spelling errors was consistently highlighted as a suspicious cue, "I had one from Barclays before, it had the Barclays logo and everything and I think on the first or second paragraph they spelt Barclays wrong, so…" (FG6, P2). In addition to these more specific elements, subjective judgments of something feeling 'not quite right' were also considered, particularly when other aspects of the email appeared legitimate. For example, "something I always just can't figure out, you know, human nature, is when they look fine, almost too perfect, and there's something about them, but it doesn't look like spam at all, it's just a lovely, perfectly worded email, brilliantly laid out and then you catch a feeling and think 'why am I even thinking about this?', most emails you don't even question, but you get that feel, bad vibe from it" (FG6, P2). A greater requirement to base decisions on these subjective feelings was explicitly highlighted in roles where more traditional cues, such as sender address, could not be relied upon. For example, "I suppose that is it though, where they've come from. I mean I work in procurement and you get legitimate enquiries wanting to be a supplier and all that and it's often necessary to open the email to check that, that sort of content, you can't just go by the address that it's come from necessarily… but they've usually got something not quite right in them, haven't they, which rings alarm bells" (FG5, P5).

The majority of these elements were considered relevant to both spear phishing and generic phishing emails, although the topic of the email was considered most relevant for generic phishing (e.g., whether it represented a typical '419' scam offering vast sums of money).

(b) Familiarity

Relative familiarity with the sender or topic of emails that are received was mentioned 10 times across five of the focus groups. For instance, being unfamiliar with the sender of the message was considered an important cue by some participants, "I suppose I'm a bit paranoid, if I don't know the person who sent it to me, even if it looks genuine, if there's an attachment then I don't open it" (FG2, P2), although this was more qualified in others, "if it's an unknown sender, I might be suspicious" (FG5, P2). New employees who were not yet familiar with the individuals that they would typically be liaising with were also highlighted as potentially being more susceptible to phishing emails, particularly those emails that established members of staff would consider relatively easy to identify. For example, "when I first came here, I was, because I wasn't familiar with what the companies were that were going to email me necessarily I was just sort of clicking on anything … but it was just because I wasn't familiar with the companies that we were dealing with" (FG4, P2). Despite the use of familiarity as a potential cue to the legitimacy of an email, the potential risks of familiar senders were also highlighted by one participant, "they can be the hardest ones to spot sometimes, if they're from a friend or contact and they've actually been hacked haven't they and sometimes they can be the tricky ones to work out" (FG1, P2).

(c) Expectations

Communications that were expected or considered routine were also highlighted as less likely to trigger suspicion, being mentioned 26 times across all six focus groups. For instance, one participant discussed receiving an email at "two minutes to midnight on a Saturday and we just thought, you know, so we just sent it straight to [IT Security] here at the time and said, you know, we never, no one would ever send us an email at that time in the morning with this sort of heading on it" (FG2, P3). However, the presence of expectations regarding communication norms and what a legitimate message typically 'looks like' could also lead to issues in itself. For instance, difficulty in identifying fraudulent emails that exploit these expectations and routines was explicitly highlighted by one participant in relation to a colleague who had received a spear phishing email regarding an unpaid invoice from a legitimate email account: "it's a company she deals with, we've currently got problems with accounts payable … and actually why would she not believe that it was true" (FG1, P4). Particular expectations regarding communication norms were also highlighted as leading to difficulties in international working environments. For instance, different email styles and communication norms across different countries could make it more difficult to differentiate legitimate emails from spear phishing emails: "I mean there are some places, you do get, you get some emails from America and they write in a different way and it does make it difficult sometimes to sort of spot the difference" (FG6, P5).

(d) The work context

The role of the work context in influencing responses was highlighted 13 times across all of the six focus groups.
The impact of being busy on the depth of information processing that was possible was highlighted in the comments of one participant, "I think that you're still likely to click on something because we're all really busy and I think that you sort of scan stuff don't you, and if you see something attached you might just click and think 'oh well, I'll have a look at the other information', you don't always have lots of time" (FG1, P4). Similarly, another participant stated "Yes, if it was out of my sphere of what I was doing, say I was doing, I don't know [project related task] and I got an e-mail about something else I'd think, 'why do they want to know that?' but again if you're very, very busy then I might just click on it by accident" (FG2, P1). This issue was also highlighted by one participant as being particularly relevant in smaller businesses, who may not have the IT support and reporting infrastructure to allow people to easily verify the legitimacy of emails if they do have concerns, "everyone's way too busy, so you know 'I haven't got time to check that' so, I don't know, they [larger businesses] may have people who it might be their entire job to check these emails which is great, but then, for other people, a smaller business, where they don't have that, they don't have that kind of support" (FG6, P2).

Theme 2: perceptions of spear phishing risk

The extent to which participants were exposed to spear phishing emails within the work context varied substantially, with some participants reporting that (to their knowledge) they had never received a phishing email of any kind whilst at work, whilst others reported receiving targeted emails on a regular basis. This exposure appeared to be impacted by the extent to which individuals received external emails within their job role and the use of centralised inboxes. Those with greater exposure to spear phishing also demonstrated a greater degree of awareness regarding how to report phishing emails, as well as the risk of being targeted by spear phishing emails within the workplace. These factors are discussed in more detail below.

(a) Exposure to external emails

Exposure to external emails was discussed eight times across five of the focus groups. If individuals did not regularly receive external emails, then receiving such an email was highlighted as a primary trigger for suspicion in the work context:

"P3: we shouldn't also get ones from outside influences
P2: no
P3: the external ones, for example, we shouldn't get on a day to day basis, it should be from [internal] personnel and that would flag it up for me…
P1: yeah, [internal] are the only ones we should be getting, unless you're doing a task outside
P3: yeah".

For employees who regularly received external emails, it was considered more difficult to determine the authenticity of an email. As stated by one participant, "ours will be from everywhere, because we buy an awful lot of stuff from outside companies, [organisation] and what have you, I've noticed more and more emails are coming through which I just put as junk, junk, junk, but yeah and we have so much coming through that it could be easy to click on something" (FG2, P4).
Such difficulties were also highlighted by a call-centre-based employee, "we get 200-300 emails a day, so knowing when to click on something and when not to click on something is quite hard because we get purchase orders coming through and we've got to click on the attachment" (FG5, P1). The substantial variation in exposure to spear phishing across job roles was reflected in employees' relative awareness of the relevant processes and procedures, such as how to report a suspected phishing email, with those who regularly encountered potentially fraudulent emails or regularly reported emails as suspicious appearing to be more familiar with the reporting process. As highlighted by one participant who did not regularly encounter 'suspicious' emails, "to be honest I wouldn't know, I don't generally get phishing emails at work and I wouldn't know who to report it to, the IT department I guess" (FG1, P2).

(b) Centralised inboxes

Job roles that involved use of a centralised inbox were also highlighted in two of the focus groups as increasing exposure to potential phishing emails: "we get them, sort of, every day because we have several centralised inboxes, so we'll get a phishing email every single day" (FG5, P1). Similarly, "I mean I haven't had it here, but when I worked in a different building we had our co-owned little email address, I used to get quite a few, you know, sort of unsolicited ones and I thought, always forwarded them on and they came back to me and said no it was ok but thank you…" (FG1, P5). These emails could include both generic phishing and targeted phishing emails. For the latter, other cues that would traditionally raise suspicion, such as an unexpected contact, were also deemed to be lacking due to the unsolicited nature of some messages, thereby increasing the reliance on external verification and reporting procedures. For instance, "it's very difficult for us because I think our inbox allows every single thing you could imagine come through, whereas, personal [personal inbox], I've only ever had one come through on that, but our centralised one we have to allow anyone to pop an email in that so it gets quite difficult" (FG5, P1).

(c) Risk awareness

Differences in perceptions and awareness of risk were referenced 16 times across four of the focus groups. This was explicitly highlighted by one focus group participant, "I don't think it's something that is well understood … I think it's mixed, you've got people who are very clear about it and you've got people who aren't so clear… or aren't so clear on what their regular routines and habits and ways of dealing with emails might cause… I think that there's a spectrum of awareness around it" (FG3, P1). Differences were also highlighted according to perceptions of risk and vulnerability within a personal (i.e., home) context compared to a work context. In particular, the work environment was perceived as more secure, with enhanced technical controls making it less likely that suspicious emails would be encountered (although this was dependent on the job role), and the provision of specialist support to reduce the impact if a phishing email was responded to.
For instance, one participant questioned "are we distinguishing between work and home perhaps, because I think at work it's not so prevalent because this should be a better system in place for it hopefully" (FG5, P2), whilst another highlighted "that's a good thing about work though, as it's a good system so a lot of them get blocked, so we don't generally get much spam or kind of stuff through the work email address, with my [personal] account I get a big problem with that" (FG1, P1). In contrast, perceptions of vulnerability in a personal context appeared to vary substantially, with some participants demonstrating a high degree of confidence in their ability to manage phishing emails of all types and others feeling much more vulnerable. For example, "see I feel quite comfortable with them at work, they'll let me know not to open it, at home… you just don't know what to do" (FG4, P4).

Theme 3: how susceptibility is managed

When discussing the potential risk from spear phishing within the workplace, employees highlighted a number of assistance mechanisms and aides that they used to manage this risk in their day-to-day environment. These ranged from online warnings and email banners to reporting mechanisms and discussion with peers.

(a) Warnings and banners

The perceived benefit of technical-based aids for focusing attention and invoking suspicion was highlighted nine times across all six focus groups, with particular reference to the use of email banners to encourage users to engage in more systematic consideration of the email. For example, "now they have an external email banner, which helps because it does make you think to look at it more and don't click on anything" (FG2, P4). The provision of security alerts was also considered to increase awareness of particular threats, providing a means to match emails received with a mental representation of a known phishing threat and making it less likely that such emails would be considered genuine.

"P4: they tend to send like an alert saying if you get something from this specific address or saying this, some people have been targeted … which is quite useful, so you can at least see the body of the text and go 'ah yeah, if I see something like that'…
P3: yeah, that's a good thing
P2: yeah, that's all I generally see, are alerts saying watch out for this". (FG1)

(b) Reporting

The use of reporting procedures to determine the legitimacy of emails was discussed 40 times across all six focus groups. The ease of reporting potential phishing emails, and the provision of timely and reliable feedback in relation to the legitimacy of these emails, was highlighted as helping employees make the correct decision regarding emails that they were uncertain about. For instance, "there's a spam reporting email that they've got set up, you just attach it to that, send it off, wait for it to come back telling you whether it is or not" (FG2, P4) and "yeah, we just send them off to another email address and then it comes back to us saying it wasn't malicious or whatever" (FG5, P1). Receiving consistent and timely feedback was seen as vital to make sure that people did not consider their reporting actions a waste of time and thus be less likely to report emails in the future, "yeah, if you're not getting any feedback at all then you'll stop forwarding them on, make you think 'are they paying any attention?'"
(FG1, P2) and "I guess I might be more resistant to send it off if I think it takes two days to get back and for them to say 'oh you can open it' and then I'm two days behind in my work" (FG6, P2). A number of participants also highlighted other factors that may reduce the likelihood of reporting, such as a fear of potential negative repercussions, "I just know people who haven't wanted to report things because they thought they would get into trouble for clicking on something" (FG2, P2), not considering it important, "if I see something I'm not expecting to get I'd probably just delete it without opening it up. I don't think I'd report it to anyone" (FG5, P3), and the potential time involved, "the first time I got one I thought, in the back of my mind, there was a 'we're meant to report all spam, aren't we' but I had to go on [intranet] and Google, well, search, for how to do it. It's not the easiest thing to find on [intranet], if it's something that happens so rarely you're not going to remember" (FG6, P3). (c) Peer verification The role of peer support in verifying emails, such as speaking to colleagues, sharing tips or getting advice from others with regards to decision-making in conditions of uncertainty, was also discussed 11 times across five of the focus groups. In particular, the extent that other members of staff also received a particular email appears to influence decision-making. "P3: so when you pass it around, you say 'have you seen that email' and you say 'yeah, what did you do with it?', 'delete it', 'I'm sending it on', you know that comes round in the office quite often, you know in an office with [x] people sitting around and all of a sudden you all get this email and you think it must be phishing if we've all got it. P6: but then, if it was just you then you might be less sure, you know 'have you got this?' 'no', it might be phishing, but it might not, so you'd still be on your guard but less so I guess, I don't know". (FG2) This social verification was considered particularly relevant for new staff, who may be uncertain regarding communication norms within their job role, and for those who did not regularly receive suspicious emails. For instance, one participant recounted receiving a particular email in the office, "I was like, hey [name], I've got an email, this is exciting it's from [name department] and he was like, 'don't click it' and I was like 'oh, sorry" (FG6, P4). Similarly, another participant recounted a similar incident, "I said to my colleague, 'oh, I don't really understand', and she said, 'oh my god, don't open it, don't open any attachments, send it on to the spam', so I was like 'oops, thank you" (FG4, P2). However, staff groups who do not have access to such informal support mechanisms, such as remote workers or those working off-site when an email is received, may be at particular risk in this regard. (d) Avoidance Avoiding engaging in activities that may increase the risk of falling victim to a phishing attack, such as refusing to click on links within any email received, was also highlighted as a means of reducing susceptibility across three of the focus groups. However, this strategy could only be used if the email or link was not perceived as necessary for work activities. For example, "I don't click on anything if I can help it. I don't click on anything, even if it looks legitimate, unless I feel I need to do that for my work… how do you know, I mean, how do you know it's safe?" (FG3, P6). 
In scenarios of goal conflict, therefore, where an email is considered important or necessary for a work task, such strategies may prove difficult to enact.

Theme 4: knowledge and training

Finally, a number of factors were highlighted regarding the degree of knowledge that employees have about both spear phishing and phishing in general, including how and why user information may be gained and used, how security systems manage the phishing risk, and the perceived effectiveness of training in this area.

(a) Technical understanding

A number of issues were raised in focus group discussions that reflected uncertainty regarding what spear phishing encompasses, how personal information may be gathered and used in spear phishing attacks, and potential trajectories of impact within a system if such an email is responded to (i.e., once a link has been clicked or user credentials entered). Overall, this was referenced 12 times over five focus groups. For instance, "regards to everybody in the company, what, if you asked the question to someone in the company, what is phishing, … they'd say 'I know it's something to do with emails', but what is, you know, what is spam, 'well, they're all the same aren't they?' well, they're not, you know, so maybe we need to tell people exactly what each thing is and the key things to looking out for them" (FG2, P3) and "So, I don't think that people actually know what happens if you do accidentally click on something" (FG3, P4). Aiding individuals to gain greater understanding of both the consequences of their potential actions and how these consequences can be mitigated at each stage was considered important, "I think it might be worth when they're doing the training taking a hypothetical scenario saying right Miss A is sitting at her desk and she clicks on this, this is what it's opening up, this is where it's going to, this is what it's leading to, this could be the consequences, so you can see how one click… could almost bring down a company. I don't think people realise just how consequential it can be" (FG4, P4).

(b) Understanding the security centre

Uncertainty regarding technical security systems, including how these work and how they are operated, was highlighted seven times across four of the focus groups. For instance, uncertainty regarding the vulnerabilities of technical systems was highlighted by one participant: "I think I have an expectation now that we're a [particular type of company] our IT department should be able to deal with most sort of, attempted attacks on our systems, so I sit there thinking well I don't need to worry about it too much … for me, they should be the ones maybe where the investment is to try and stop as many as they can, because by the time they get to us it's kind of failed all of the different sort of checks that must be in place" (FG1, P4). Uncertainty regarding the degree to which processes can be, and are, automated was also considered. For instance, "what I don't know is what the process is for, once you've reported it, is there a physical person that has to check it, or is there some sort of automated system, because depending on, if I was reporting sort of one a day I think I might feel I'm overloading this poor person with all these emails" (FG6, P3).
(c) Information overload

In order to enhance and maintain employee awareness of spear phishing attacks, communication materials may be regularly circulated to employees via a range of mechanisms (e.g., posters in corridors, information on noticeboards, intranet articles, etc.). However, when combined with the vast array of other information that must also be routinely circulated to employees, such as health and safety information and site-specific news, this was considered easy to miss or forget. The issue of an overload of protective information of various sorts was highlighted five times across five of the focus groups.

"P4: we do get bombarded with quite a lot of different things about security and health and safety
P5: see that's the thing, which all come through on email so it's clogging up your email, making it worse, so you just randomly go through thinking 'that'll do'
P4: and also on the noticeboards, I think we're probably a bit blind to it …
P2: I think that in our specific area we do get a lot of security things so, yeah, some of it might get missed or forgotten". (FG1)

(d) Perceptions of training

The efficacy of current training was considered 26 times across the six focus groups. When considering current training, the majority of participants perceived this in relation to a 'tick-box' exercise, with individuals completing online modules either when they are short on time or overloaded by information from other courses (e.g., during the induction period). This was considered to result in the information 'not going in', suggesting that training content is not sufficiently processed to ensure that it can be easily recalled when required.

"P1: they [people] just want to get their pass and then forget everything about it, that's the main thing
P3: yeah
P5: most people just go to the assessment at the end and never actually… and don't bother actually reading it all". (FG1)

Current training approaches were generally considered to be too static and unresponsive to the changing cyber domain. For instance, "the variation is not that great, you know everybody's saying on the news and everything else, this is getting worse and worse and worse, but the questions are the same as we had last year" (FG2, P1). Overall, participants highlighted a number of suggestions for improving current training approaches in order to more effectively address perceived susceptibility factors. These included:

• Providing greater detail
• Regularly updating content
• Allocating specific time to complete training outside of the primary job role
• Using a range of interactive methods (particularly discussion-based activities)
• Ensuring personal relevance.

Potential implications of these suggestions for practitioners are discussed within Section 4.2: Implications for Designers and User Communities.

Discussion

These studies explored the factors that influence susceptibility to spear phishing emails within the workplace. Study One used historic phishing simulation data to examine the impact of message factors, specifically the presence of authority and urgency influence techniques within the email, on susceptibility to phishing within an ecologically valid context. In line with our hypotheses, significantly higher click rates were found for phishing simulations that contained authority and urgency cues.
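As an illustration of the kind of comparison summarised above, the sketch below shows how click rates for simulations with and without a given influence cue could be tested with a chi-square test of independence, assuming Python with SciPy. The counts are hypothetical placeholders and this is not a reproduction of the Study One data or of the authors' actual analysis.

```python
# Minimal sketch: comparing click rates between simulated phishing emails
# that do and do not contain an influence cue (e.g., authority).
# All counts below are hypothetical placeholders, not study data.
from scipy.stats import chi2_contingency

# rows: cue present / cue absent; columns: clicked / did not click
table = [[420, 1580],   # hypothetical: authority cue present
         [240, 1760]]   # hypothetical: authority cue absent

chi2, p, dof, expected = chi2_contingency(table)
click_rates = [row[0] / sum(row) for row in table]
print(f"click rates: {click_rates[0]:.1%} vs {click_rates[1]:.1%}")
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.4f}")
```

A significant result from a test of this form would indicate that the proportion of recipients clicking differs between the two sets of simulations, which is the pattern reported for the authority and urgency cues in Study One.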
Study Two was then conducted to examine the potential role of factors external to the message itself on employee susceptibility, with a particular focus on perceptions of spear phishing risk, degree of knowledge, and work-related routines and norms. This allowed the influence of context, specifically that of the work environment, to be explored in a novel and practically relevant way. Applying concepts from theoretical models to data collected in an organisational setting not only provides a degree of validation within applied contexts, but also aids theoretical development through the identification of additional concepts of interest. Overall, the primary factors highlighted in current theoretical models of phishing susceptibility were supported. A number of additional factors specific to work contexts were also identified, including degree of exposure to external emails, the use of centralised inboxes, information overload within the work environment, and the role of social and technical support in enhancing perceptions of self-efficacy. This provides a basis for the further development of current theoretical approaches (Ng et al., 2009; Tsai et al., 2016; Vishwanath et al., 2011; Vishwanath et al., 2016), as well as a number of practical recommendations relating to interface design, employee training and awareness, and decision support systems. These are discussed in more detail below.

Understanding message-related factors: the IPPM

When considering what makes people susceptible to phishing emails, the IPPM claims that influence techniques contained within such emails (such as an urgent deadline) distract people's limited attentional resources away from important authenticity information (such as the accuracy of the sender address; Vishwanath et al., 2011). Within the reported research, we found that the presence of common influence techniques within spear phishing emails (specifically, urgency and authority cues) contributed to increased susceptibility within a workplace setting, likely by encouraging the use of heuristic processing strategies (Vishwanath et al., 2011). Since the IPPM was primarily developed and tested within a university population, these findings provide novel evidence that such concepts also apply within a workplace domain where employees have previously received a base level of cyber security training. Interestingly, when participants within Study Two considered the factors that they believe influence their email response behaviour, the specific influence techniques highlighted in Study One (i.e., authority and urgency) were not mentioned. Instead, participants focused predominantly on whether they were familiar with the message sender, whether they were expecting the communication, and the presence of authenticity information, such as a correct sender address. To a degree, this is unsurprising, since it has previously been acknowledged that familiar information is more likely to be considered legitimate due to increased accessibility of that information within memory processes (Begg et al., 1992; Polage, 2012). However, although it is possible that participants implicitly considered aspects related to authority influence techniques when discussing authenticity information, this was not stated and does not account for other influence techniques, such as urgency. This suggests that individuals may be unaware of their vulnerability to the influence techniques commonly contained within spear phishing emails, representing a gap in current knowledge.
As a result, although employees identified authenticity cues as a means to identify spear phishing emails, these strategies may fail if the email contains well-crafted influence techniques or if they are operating within a pressured cognitive context at the time that the message is received (Williams et al., 2017a). This is in line with the heuristic processing propositions of the IPPM and the findings of Study One. Within current models, the degree of correspondence between what an individual perceives to influence their judgements (i.e., sender address, etc.) and what is actually found to influence them (i.e., the presence of authority or urgency cues) is not currently specified. The findings of our study suggest that there is currently a dissonance between these two aspects. Susceptibility to influence techniques may thus be driven by either a lack of understanding regarding how such cues can be used within spear phishing emails or a lack of understanding of our own vulnerability to such cues. Future work should directly address these possibilities; in particular, whether greater knowledge of influence techniques reduces their persuasive effect, or whether such techniques are capable of encouraging heuristic processing irrespective of the degree of awareness that an individual has.

Risk beliefs, knowledge and habits: the SCAM

The SCAM proposes that the beliefs that an individual has regarding online risk and their established habits of behaviour influence the information processing style that is employed when an email is encountered (Vishwanath et al., 2016). Within Study Two, work-related norms and routines, online risk beliefs, and degree of knowledge were all explicitly highlighted by employees as likely to influence their susceptibility to spear phishing. A number of examples of spear phishing emails exploiting habits and routines were identified. These were likely to encourage a reliance on heuristic processing strategies, effectively slipping under the radar and leading to individuals clicking automatically whilst engaged in their usual job routine. For example, an email matching prior expectations was considered to increase the likelihood that a quick 'scanning' of email content would occur. Emails were also identified that mimicked familiar senders or 'usual' subjects, making it less likely that they would 'stand out' from other communications that are received (Grill-Spector et al., 2006; Taylor and Fiske, 1978). Unless an email is perceived to be abnormal in some way, it is unlikely to trigger more in-depth, systematic processing and the additional time and mental resource that this involves (Vishwanath et al., 2016). Although some people reported being generally more suspicious of emails than others (in line with findings of Harrison et al., 2016b), suspicion was also often triggered by particular norms related to the individual's job role. For instance, employees who did not routinely receive external emails highlighted this as a 'red flag', whereas those who regularly received legitimate external emails had to use other triggers to guide decision making, such as whether the email countered expectations.
These decision processes appear to be primarily related to differences in the degree of exposure to phishing emails within the work context, with employees who more regularly receive spear phishing emails highlighting the use of more considered processing strategies in order to counter the perceived risk. Finally, work-related pressures that were considered a routine part of the work environment, such as being interrupted during a work task (Hodgetts and Jones, 2006), being otherwise distracted or in a rush (INFOSEC Institute, 2013; Miarmi and DeBono, 2007), or being cognitively fatigued in some way (Vohs et al., 2008), were also considered to increase susceptibility. Although some of these aspects are accounted for by the role of email habits and experience highlighted by Vishwanath et al. (2016), our findings suggest that a wider consideration of 'norms and routines' should be included within theoretical models that explicitly accounts for (a) the degree of familiarity with communication types, (b) prior expectations regarding specific communications, and (c) context-induced cognitive pressure within the work environment. For instance, cognitive pressure is likely to increase reliance on heuristic processing and therefore should increase susceptibility. Conversely, particular communication expectations within a job role could result both in decreased susceptibility to phishing emails that counter these expectations, and increased susceptibility to spear phishing emails that exploit these expectations. Systematic investigation of the relative influence of these various factors on response behaviour and how they may interact across different contexts is required.

Finally, the SCAM claims that increased knowledge and experience regarding phishing will contribute to more accurate cyber risk beliefs, thus reducing the likelihood that individuals will rely on heuristic processing strategies. Spear phishing emails represent a particularly difficult-to-spot attack, and if individuals do not have relevant experience of targeted phishing emails (whether through direct experience or education/training), they may be particularly vulnerable to emails that do not match 'typical phishing' stereotypes. This was evident in the wide range of phishing knowledge demonstrated across participants, with some demonstrating a high degree of understanding regarding how various forms of information can be used to design a spear phishing attack and others unfamiliar with potential spear phishing risks. Again, participants with greater exposure to spear phishing emails within the workplace appeared to demonstrate greater knowledge and more accurate understanding of phishing risks. When assessing the risk of becoming the victim of a spear phishing email, individuals are likely to make judgements about their vulnerability based on previous experience and information regarding the experiences of those around them (Barnett and Breakwell, 2001). As such, those who do not generally encounter phishing emails of any kind within their job role may consider themselves less vulnerable to phishing more generally. Unfortunately, this may make them less likely to consider the possibility that an email may be fraudulent if they are ever actually targeted. However, it is also possible that those who regularly receive particular types of spear phishing emails may indeed be more adept at identifying suspicious emails that match these expectations, but may be more susceptible to those that do not.
Our findings suggest that these relative differences in exposure, and their resultant impact on phishing-related knowledge, risk perceptions, and context-specific suspicion, should be further investigated and accounted for in current theoretical models (Rogers, 1975; Vishwanath et al., 2016).

Coping with spear phishing in the workplace: PMT

PMT (Rogers, 1975) posits that individuals will engage in a particular protective behaviour if they feel that the threat is sufficient and they feel able to enact the protective action (self-efficacy). Within Study Two, a number of technical and other support mechanisms and aides were identified by participants as reducing perceived risk in their day-to-day environment and helping them to cope with spear phishing in the workplace. These included technical-based aids (warnings and banners), IT reporting mechanisms, and peer verification. Interestingly, these reflected the predominant susceptibility factors highlighted by current theoretical models (Ng et al., 2009; Tsai et al., 2016; Vishwanath et al., 2011, 2016). For instance, technical design solutions often direct an individual's attention to elements of the message that may raise suspicion, such as online warnings and external email banners (Modic and Anderson, 2014). Such approaches have the potential to override greater attentional focus on the influence techniques contained within the message content. Similarly, mechanisms of verifying initial suspicions via expert feedback or discussion with peers provide feedback regarding current risks in the workplace (Woolley et al., 2010). This may be particularly beneficial for users who lack in-depth knowledge and may not feel sufficiently confident to risk not responding to an email in case it is legitimate. The presence of influence techniques that invoke compliance with authority or urgency cues may exacerbate this further, whereby signalling distrust of the email could be perceived as carrying a potential cost with regards to the message sender or scenario (ten Brinke, Vohs and Carney, 2016). Overall, the range of assistance mechanisms highlighted within Study Two portrays the range of ways in which employees attempt to manage perceived vulnerability to spear phishing at work. These findings also extend the PMT concept of self-efficacy within the online security domain by identifying particular mechanisms through which self-efficacy may be enhanced (i.e., online warnings and banners, expert feedback, peer verification). For instance, email warnings and banners may increase perceived ability to identify a spear phishing email by assisting people to identify suspicious elements of a communication (e.g., an external sender) and match incoming emails with known phishing attacks. Conversely, expert feedback and peer verification provide support for those who are not confident in their ability to detect a spear phishing attack by providing a means to independently verify any doubts or suspicions that they may have. Finally, a small number of employees also highlighted avoiding clicking on any links within emails as a risk-reduction strategy. However, this strategy could only be used if the email or link was not perceived as necessary for work activities, highlighting the precarious balance between operational and security requirements within the work environment (Kainda et al., 2010).
In scenarios of goal conflict, where an email is considered potentially important or necessary for a work task, such strategies may prove difficult to enact. This may lead to final response decisions being dependent on the particular organisational security culture within which the employee is operating (Rocha-Flores and Ekstedt, 2016), their perception of the relative risks of clicking on a particular link (Ng et al., 2009; Tsai et al., 2016; Vishwanath et al., 2016), and the extent to which the email reflects current work-related demands and pressures. As a result, such avoidance strategies may actually represent a maladaptive form of coping that is enacted when perceived self-efficacy to effectively identify a spear phishing email is low and alternative verification strategies are not considered effective or timely. Our findings suggest that the potential influence of the work context, and the assistance mechanisms that it does (or does not) provide, on perceived self-efficacy should be further investigated. In particular, survey approaches that assess self-efficacy both before and after exposure to particular assistance mechanisms, and its relationship with actual detection ability, would be beneficial.

Implications for designers and user communities

There is continuing debate regarding the utility of phishing simulations within organisational contexts, with suggestions that such approaches fail to address the complexity of phishing vulnerability and contribute to the development of negative, blame-based security cultures (National Cyber Security Centre, 2018b). The findings of this paper highlight the complex nature of susceptibility to phishing within the workplace. If effective mitigations are to be developed, it is necessary to first understand the underlying causes and mechanisms driving response behaviour. It is increasingly clear that a one-size-fits-all approach is unlikely to be sufficient, with the wider message, individual and context-related factors identified in this paper requiring attention. For instance, employees were found to display large variation in their exposure to spear phishing emails within the work environment, primarily driven by the extent to which they received external emails, with staff groups who regularly deal with external suppliers having the most experience of both receiving and reporting spear phishing and generic phishing emails. Whereas this increased exposure could lead to increased susceptibility in such groups, it could also result in enhanced awareness of the risks of spear phishing and how to deal with it, due to the regular requirement to make decisions regarding message legitimacy. Responding to these different exposure patterns may require the development of adaptive user interfaces that respond to the likely awareness of users, with those who are less regularly exposed to phishing emails requiring different system sign-posting. For instance, less aware users may respond favourably to periodic reminders of the phishing threat and how to report phishing emails. Conversely, more aware users may benefit from regular updates regarding evolving phishing tactics in order to counter any potential stereotypes that may develop through repeated exposure to similar targeted phishing emails (e.g., invoice scams).
In order to achieve this, an in-depth analysis of the potential impact of particular job roles on phishing susceptibility using existing human factors tools, such as task analysis (Kirwan and Ainsworth, 1992), may be of benefit in identifying and mitigating likely risk factors for different staff groups. To enhance and maintain employee awareness of phishing attacks, communication materials are often circulated to employees via a range of mechanisms (e.g., posters in corridors, information on noticeboards, intranet articles, etc.). However, it is well established that an individual's attentional resources are limited (see Kahneman, 1973), with restrictions on the amount of information that cognitive systems can process at any one time. When these materials are combined with the vast array of other information that must also be routinely circulated to employees, such as health and safety information and site-specific news, this can reduce the likelihood and extent that such information will (a) be noticed, and (b) be remembered or applied when making decisions regarding the legitimacy of emails. This issue is also likely to be accentuated by other aspects of the work environment, such as a perceived lack of time to undertake tasks outside of the primary job role, which can lead to additional information not being prioritised. The allocation of specific time to interact with such information, or the use of more creative, interactive methods to disseminate information, may provide an initial means to counteract this issue. The use of decision aids, such as external email banners and threat updates, was also highlighted as providing valuable assistance in drawing individuals' attention to potential risks when an email is first received. Once doubt has been invoked with regards to the legitimacy of the email, reporting mechanisms were highlighted as providing a means to verify suspicions and receive feedback on judgements. The consistent provision of such feedback was considered important to ensure that those who reported emails did not consider their actions to be a waste of time, and therefore would continue to use reporting mechanisms in the future. Whilst all personnel should have access to formal support mechanisms, access to the more informal processes that were highlighted, such as peer verification and support, is likely to be limited in certain staff groups (e.g., remote workers or those working off-site). As such, ensuring that all staff can access consistent technical feedback when required should be combined with a further consideration of potential options for supporting remote staff groups using informal online support tools, such as the development of specific internal forums or remote communication functions. Finally, a number of perceived knowledge gaps also emerged that may impact susceptibility to spear phishing attacks. Specifically, these focused on (a) the degree of understanding of the technical mechanisms involved when a potential phishing email is responded to (e.g., when a malicious link is clicked on), (b) the potential impact of responding to such an email, including the likely trajectory of such impacts and how these impacts can be mitigated, and (c) the degree of understanding of the limitations of technical solutions and security systems, such as email filters, so that users perceive themselves as a vital component of such systems, in line with socio-technical approaches (Sasse et al., 2007).
By empowering individuals with greater understanding of both the consequences of their potential actions and how these consequences can be mitigated at each stage, uncertainty regarding the phishing threat may be reduced (Ng et al., 2009; Tsai et al., 2016). Such approaches should also target understanding of how seemingly innocent information may be used in the development of a targeted phishing attack, such as providing the contact details of an employee to a social engineer or displaying information on social media. This knowledge is also directly applicable to a personal context, where employees may perceive themselves as more vulnerable due to increased exposure to a range of phishing and spear phishing emails and reduced availability of specialist support when deciding how to respond (Ng et al., 2009; Tsai et al., 2016). The degree of technical knowledge that an individual has regarding the vulnerabilities of technical systems may also impact their understanding of the importance of human users as a secondary line of defence, and therefore their perceptions of both susceptibility and responsibility with regards to spear phishing (Ng et al., 2009; Tsai et al., 2016; Vishwanath et al., 2016). By focusing on the development of collaboration with security in order to achieve mutual goals (i.e., to reduce the phishing threat), uncertainties regarding the role and operations of security functions may be reduced and a greater understanding of the vulnerabilities of security systems developed. Consideration should also be given regarding how knowledge of these areas can be encouraged via other means, such as through the design of current system interfaces that may communicate this information in a visual manner at the time that the user interacts with an email.

Limitations and future work

It should be considered that this paper only reflects data from two organisations, and therefore further work is required to build on these findings and explore the extent to which they are reflected in other organisations. This is particularly relevant for the findings of Study Two, since employee perceptions and experiences are likely to be impacted by a range of factors that may be specific to the organisation studied. Secondly, the extent to which these perceptions reflect actual employee behaviour when faced with a spear phishing email would also benefit from further investigation in a more controlled setting. However, a number of factors identified relate directly to susceptibility concepts that have previously been examined in laboratory settings or using university populations. Thirdly, it should be noted that recruitment to the focus groups was based on voluntary participation, which could have skewed the sample towards employees who were more knowledgeable or had a greater interest in this area. However, a wide range of awareness and knowledge levels and opinions related to phishing was demonstrated in discussions. Finally, the use of historic phishing simulation data in Study One represents a novel approach in exploring phishing susceptibility in applied settings and provides a unique method for exploring the impact of message-related factors on response behaviour. However, this meant that it was not possible to counterbalance the particular influence techniques used within phishing emails, making it more difficult to assess these factors as individual constructs. Similarly, it was not possible to access demographic data regarding respondent attributes (e.g., age, cyber security knowledge, job role, etc.).
The availability of only nine simulated phishing emails also reduced the number of influence techniques that could feasibly be examined. Therefore, although this study focused on the impact of only two influence techniques (authority and urgency), future work should explore the potential role of other influence techniques and message-related factors, such as the time of day that an email is received, in impacting susceptibility within the workplace. Future work exploring the extent to which employees divulge confidential information (e.g., usernames and passwords) after clicking on a link would also be beneficial to explore when people may become suspicious of attempts to elicit information. However, whether people click on malicious links alone, and how to reduce this, is still of interest to organisational security personnel. Overall, the use of such data represents a novel approach in exploring spear phishing susceptibility in applied settings, and we believe that it provides a unique method for exploring the impact of message-related factors on response behaviour, which we hope will be built upon in future work.

Conclusions

In sum, the findings of our study highlight the importance of considering the wider work context in relation to employee susceptibility to both spear phishing emails and phishing in general. Work-based norms and routines likely represent a primary factor impacting response behaviour within the workplace, influencing the development of context-specific habits, expectations and perceptions of risk. These are all likely to influence the information processing strategies that are used when a suspicious email is encountered and, in turn, the likelihood that such an email succeeds. Reflective of the combined findings of Studies One and Two, considering aspects of the email that is received, the individual who receives it, and the context in which it is encountered within theoretical approaches is vital if susceptibility within the workplace is to be truly understood. It is hoped that the findings of the current study will provide a basis for further theoretical development in this field, whilst also presenting an initial aid for user communities to consider, and begin to address, the range of potential susceptibility factors that may be present within organisational settings.

Funding

This work was part funded by the Centre for Research and Evidence on Security Threats (ESRC award: ES/N009614/1).

Declarations of interest

None.
Occlusal Enamel Complexity in Middle Miocene to Holocene Equids (Equidae: Perissodactyla) of North America

Four groups of equids, "Anchitheriinae," Merychippine-grade Equinae, Hipparionini, and Equini, coexisted in the middle Miocene, but only the Equini remains after 16 Myr of evolution and extinction. Each group is distinct in its occlusal enamel pattern. These patterns have been compared qualitatively but rarely quantitatively. The processes influencing the evolution of these occlusal patterns have not been thoroughly investigated with respect to phylogeny, tooth position, and climate through geologic time. We investigated Occlusal Enamel Index, a quantitative method for the analysis of the complexity of occlusal patterns. We used analyses of variance and an analysis of co-variance to test whether equid teeth increase resistive cutting area for food processing during mastication, as expressed in occlusal enamel complexity, in response to increased abrasion in their diet. Results suggest that occlusal enamel complexity was influenced by climate, phylogeny, and tooth position through time. Occlusal enamel complexity in middle Miocene to Modern horses increased as the animals experienced increased tooth abrasion and a cooling climate.

Introduction

Horses have long been used as a primary example of evolution through adaptation to a changing environment [1,2,3]. Horse adaptations to changing climates, specifically through dental evolution in response to an increasingly abrasive diet, have been qualitatively analyzed, but rarely investigated quantitatively [4,5,6,7]. Grass phytoliths have often been invoked as a primary driver of ungulate dental evolution [8], but recent work has suggested a much greater role for grit from drier environments and a reduced or even no role for phytoliths [9,10,11,12]. Previous work on equid adaptation to an abrasive diet focused on changes in hypsodonty and enamel microstructure [8,12,13]. Evolution of horse teeth through an increase in hypsodonty, quantified as Hypsodonty Index (HI, the ratio of mesostyle crown height to occlusal length) [14,15,16,17,18], has been documented in the Oligocene through Pleistocene fossil record, primarily for North America [19]. Increased tooth height provides more resistive enamel over an animal's lifetime. These changes have been interpreted as an adaptation to feeding in open habitats as cooling and drying climates changed woodlands to grasslands, requiring horses to adapt to increased rates of tooth wear created by environmental grit and the phytoliths of grasses [2,8,12]. Pfretzschner [13] investigated changes in equid enamel microstructure, concluding that adaptation to increased tooth wear was in place by the rise of "Merychippus" at about 19 Ma. The prisms and interprismatic matrix that make up enamel at the microscopic level stiffen enamel, and the arrangement of these prisms strengthens it with respect to mechanical stress patterns from grinding against opposing teeth and food [13]. Miocene and later equid teeth are marked by complex, sinuous bands of enamel on their occlusal (chewing) surface (Fig. 1). These bands have taxonomically distinct patterns, with workers suggesting that members of the equine tribe Hipparionini have more complex enamel bands than members of the tribe Equini [4,5]. Previous workers have observed qualitatively that occlusal enamel increases in complexity over the evolutionary history of horses [5].
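As a concrete illustration of the two indices discussed above, the following minimal Python sketch computes the Hypsodonty Index as defined in the text, together with a size-normalized enamel complexity measure. The normalization shown (enamel band length divided by the square root of occlusal surface area) and all measurement values are assumptions of this sketch, not definitions or data taken from this section.

```python
import math

def hypsodonty_index(mesostyle_crown_height_mm: float, occlusal_length_mm: float) -> float:
    """HI: ratio of mesostyle crown height to occlusal tooth length (as defined in the text)."""
    return mesostyle_crown_height_mm / occlusal_length_mm

def enamel_complexity_index(enamel_band_length_mm: float, occlusal_area_mm2: float) -> float:
    """A dimensionless enamel complexity measure: total occlusal enamel band length
    scaled by the square root of occlusal area (an assumed normalization, chosen so
    that the index does not grow simply because the tooth is larger)."""
    return enamel_band_length_mm / math.sqrt(occlusal_area_mm2)

# Hypothetical measurements for a single upper molar:
print(f"HI  = {hypsodonty_index(55.0, 22.0):.2f}")
print(f"OEI-style complexity = {enamel_complexity_index(210.0, 400.0):.2f}")
```

Scaling by a linear measure of tooth size in this way lets teeth of different sizes be compared on complexity alone, which is the comparison the analyses below require.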
This change is suggestive because, in a similar way to increases in hypsodonty, increasing the occlusal enamel complexity of teeth should allow them to last longer simply by distributing lifetime tooth wear over a greater total resistive cutting area. Recent work has begun exploring the relationship between the complexity of ungulate occlusal enamel and abrasiveness of diet using quantitative methods [7,20,21,22]. Here we assess the evolution of enamel complexity in Miocene and later North American equids in terms of occlusal enamel complexity, specifically investigating whether enamel complexity evolves in a pattern consistent with that expected as a response to increasing dietary abrasion. Additionally, we provide the first quantitative test of the relative complexity of hipparionine and equine occlusal enamel bands.

Questions
Given current hypotheses of horse phylogeny and diversification in response to environmental changes, and the extremely large available sample size (>2,581 known North American localities with fossil equids), we can use equid occlusal enamel band length and complexity of the occlusal surface to investigate the evolution of morphology in response to an increasingly abrasive diet. These observations lead to a series of questions: Do equids change their enamel complexity from the Miocene through the Recent? If so, does complexity increase over time, as would be expected for increasing adaptation to an abrasive diet? Is there a difference in enamel complexity between equid tribes, especially Hipparionini and Equini? If the evolution of enamel complexity is consistent with dietary adaptation, are there compromises between hypsodonty and enamel complexity? If so, do the two tribes make different compromises?

Hypothesis
We hypothesize that increased abrasion in equid diets produced a selective advantage for teeth with greater resistive cutting area (occlusal enamel complexity). We will test this hypothesis by statistical analysis of enamel complexity derived from images of fossil horse teeth. If the statistical analysis shows a distinct pattern, then equids responded to increased abrasion through an increase in occlusal enamel complexity, providing an increased resistive cutting area for food processing during mastication. If the statistical analysis shows a pattern indistinguishable from random, we will be unable to reject the null hypothesis that changes in occlusal enamel complexity have no unifying adaptive significance, or that some other process we have not tested is controlling occlusal enamel complexity. Occlusal enamel complexity will vary as a consequence of phylogenetic constraint and evolutionary response to changes in ecological role through time. If our hypothesis is correct, the complexity of enamel on the occlusal surface of equid teeth should increase through time, tracking changes in the abrasiveness of diet as climates changed through the Neogene. It is possible that phylogenetic constraint (inherited developmental or other limits to adaptation) may control the compromises different lineages of horses find between hypsodonty and enamel complexity in their adaptation to tooth abrasion. If so, we would expect each tribe to show distinct differences in occlusal enamel complexity relative to its hypsodonty.
Published qualitative observations of equid tooth morphology and its relationship to diet [7,21,22] suggest to us that Hipparionini should have the most complicated occlusal enamel, followed by Equini, then the "Merychippus"-grade horses, and finally "Anchitheriinae."

Background

Evolutionary Context
Analyses of evolutionary adaptations must be conducted within the context of phylogeny [23]. Linnean taxonomy is a hierarchical naming system that was originally created in a pre-Darwinian context to describe similarity amongst organisms. Like most natural systems, phylogenetic relationships are more complicated than the initial set of categories defined by man. The current consensus on equid phylogeny includes three subfamilies, "Hyracotheriinae," "Anchitheriinae," and Equinae [5,24,25] (Fig. 2). Within Equinae, there are two sub-clades, the tribes Hipparionini and Equini, and a basal grade mostly assigned to "Merychippus." This genus has long been considered a paraphyletic taxon, maintained through convenience to include all basal equines that do not possess apomorphies of either Equini or Hipparionini. Typical "Merychippus" have an upper dentition that maintains the plesiomorphic features of the basal "Anchitheriinae," a paraphyletic grade below Equinae (Fig. 1), but also share characters with derived Equinae [5,26,27]. Hipparionini and Equini have distinct tooth morphologies as well (Fig. 1). Members of the tribe Hipparionini are hypsodont, but relatively lower crowned, and have more complicated enamel borders than their equin counterparts [4,5,24]. The two tribes of Miocene horses, Hipparionini and Equini, are diagnosed on the basis of differences in the structures formed by the folding of enamel on the occlusal surface of their teeth [4,5,6,24,25]. The shape of the occlusal pattern was shown to be an important character in equin and hipparionin phylogeny [5,24,28]. This qualitative difference leads us to ask whether complexity of occlusal enamel evolved differently between Equini and Hipparionini because of phylogenetic constraint and/or climatic pressures. Because species are phylogenetically related to differing degrees, they cannot be considered independent for statistical analysis [23]. To accommodate this dependence, Felsenstein [23] proposed the method of independent contrasts, incorporating phylogenetic relationships into regression analysis. Independent contrasts has been developed into a broad field of phylogenetic comparative methods [29,30,31], but at this point all of them require phylogenies with branch lengths derived from models of molecular evolution. Ideally, we would use one of these comparative methods to test our hypothesis in the context of phylogeny, but current methods require known branch lengths and have yet to be adapted to fossil-based morphological phylogenies [32,33,34]. We will accommodate phylogenetic interdependence amongst the fossil horses by using nested variables in a multi-way analysis of variance (ANOVA). In this way, we are able to model phylogeny using the hierarchical taxonomic system as a proxy for phylogeny [7]. Using these nested variables in an ANOVA is not ideal for phylogeny, because it does not completely take the topology of a phylogenetic tree into account, but as a coarse approximation it functions for this scale of analysis.
Measures of Complexity
Species and other higher taxonomic groups in horses are primarily diagnosed by qualitative characters; in fact, a majority of equid diagnoses rely upon differences in the pattern of occlusal enamel [4,24]. A complicated enamel pattern should have a longer occlusal enamel length, thus producing more enamel per unit surface area on the occlusal plane. Famoso et al. [7] introduced a numerical method to quantitatively measure and test the differences in enamel complexity in ungulates, a unitless value called the Occlusal Enamel Index (OEI):

OEI = OEL / √(True Area)

where OEL is the total length of enamel bands exposed on the occlusal surface as measured through the center of the enamel band, and True Area is the occlusal surface area constructed as a polygon following the outer edge of the occlusal surface, including any cementum that may exist outside of the enamel, where cementum on the lingual side is part of the occlusal surface while that on the buccal side is not (Fig. 3). The True Area is not an occlusal length multiplied by width, but is instead representative of the area actually contained within the curved occlusal boundaries of the tooth. We measure True Area as a 2D projection, so we do not account for any increases in area that might arise from topography on the occlusal surface of the tooth. Because most equid teeth are on the low-relief end of the mesowear spectrum [34], this projection will have little effect on our current study; however, studies that extend this methodology to high-relief teeth might find improvements from a 3D approach. Analyzing images of teeth in the computer allows us to use the more precise true area instead of the more traditional technique of multiplying the measured length and width of the occlusal surface. True area is a proxy for body size, so OEI removes the effects of absolute scale on complexity; however, the effects of body size are not completely removed, as OEI does not adjust for size-related differences in complexity, i.e., allometry [7]. Becerra et al. [36] have introduced a similar enamel complexity metric, applying it to rodents. The enamel index (EI) is calculated as:

EI = OEL / (True Area)

OEI differs from EI in how the occlusal area is treated. OEI produces a unitless metric while EI does not, producing values in units of 1/length, so consistent length scales would have to be used to maintain comparability among analyses. Becerra et al. [36] found evidence to suggest that selective pressures from regional habitats, in particular vegetation, have shaped the morphological characteristics of the dentition of caviomorph rodents in South America. We use OEI for this study for three reasons: (1) we expect the unitless index to more completely account for isometric changes of enamel length with mass, (2) we want our results to be directly comparable to Famoso et al. [7], and (3) the unitless index is methodologically aligned with the unitless HI commonly used in horse paleoecology.
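To make the contrast between the two indices concrete, here is a minimal R sketch with hypothetical measurements (all numbers are invented for illustration): two teeth of identical shape but different size receive the same OEI, while EI changes with scale.

```r
# Two hypothetical teeth with identical shape; the large tooth is an
# isometric 2x enlargement of the small one, so its enamel length
# doubles while its occlusal area quadruples.
oel  <- c(small = 150, large = 300)   # occlusal enamel length (mm)
area <- c(small = 400, large = 1600)  # true occlusal area (mm^2)

oei <- oel / sqrt(area)  # unitless; identical (7.5) for both teeth
ei  <- oel / area        # units of 1/mm; halves for the larger tooth

print(oei)
print(ei)
```

Under isometric scaling, OEI stays constant, which is the sense in which the unitless index accounts for isometric changes of enamel length with size.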
Two recent studies have analyzed enamel complexity within the Order Artiodactyla, using a slightly different approach that focuses more on visible enamel band orientation. Heywood [21] analyzed molar occlusal surfaces and characterized them on the basis of length, thickness, and shape of the enamel bands, concluding that plant toughness is a primary driver of occlusal enamel form in bovids. Kaiser et al. [22] investigated the arrangement of occlusal enamel bands in the molars of ruminants with respect to diet and phylogeny, finding that larger ruminants, or those with higher grass content in their diet, have a higher proportion of enamel ridges aligned at low angles to the direction of the chewing stroke. Previous work on occlusal enamel patterns in equids has been limited to the observation that patterns change through wear stages [5,37]. Famoso and Pagnac [6] suggested that the differences in occlusal enamel patterns through wear correspond to evolutionary relationships in Hipparionini. To date, attempts at quantifying the patterns of evolutionary change in occlusal enamel complexity between and within these equid tribes have been limited by small sample sizes [6,7].

Tooth Position
Beyond the pressures of the environment, differential expression by tooth position is another aspect of enamel band evolution that may be linked to phylogeny. Famoso et al. [7] demonstrated that enamel complexity is expressed significantly differently at each tooth position. Equid P2 and M3 are easily identifiable in isolation: the P2 has a mesially pointed occlusal surface while the M3 is tapered distally. The middle four teeth (P3-M2) are more difficult to identify to position when isolated, as they have uniformly square occlusal surfaces. Premolars tend to be larger than molars within a single tooth-row, but size variation within a population overwhelms this difference for isolated teeth. As with many mammals, the majority of identifiable fossil equid material tends to be isolated teeth, as teeth are composed of highly resistant materials (enamel, dentine, and cementum) in comparison to the surrounding cranial bone. Many taxa, including Protohippus placidus, Pliohippus cumminsii, and Hipparion gratum, are known only from isolated teeth [1,5,24]. Because of their relative abundance in each tooth-row, a majority of isolated teeth tend to be the more difficult to distinguish P3 to M2. Including isolated teeth in our analysis would increase geographic and taxonomic diversity, but variation in enamel complexity amongst the tooth positions could overwhelm the signal. Optimizing the sample size in our study design makes it important to identify whether tooth position has a significant effect on OEI for P3-M3.

Materials
Our data consist of scaled, oriented digital photographs of the occlusal surface of fossil and modern equid upper dentitions. The specimens examined are listed in Table S1, and the geographic locations of repositories are indicated in the Institutional Abbreviations section. Each named museum listed in the Institutional Abbreviations section gave us permission to access their collections. Care was taken to select individuals in medial stages of wear (no deciduous premolars and no teeth in extreme stages of wear). Skulls and complete to nearly complete tooth rows were preferred because we can be more confident in taxonomic identification and tooth position. Isolated teeth were also included when more complete tooth-rows were not available for a taxon (Table S1). OEI was calculated following Famoso et al. [7] (Fig. 3).

Institutional Abbreviations

Statistical Analyses
We used one-way and multi-way analyses of variance (ANOVAs) in JMP Pro 9 to determine whether the relationship between tooth size and enamel length fit our predictions. We used a Shapiro-Wilk W test [38] to test whether OEI values were normally distributed and the Bartlett test of homogeneity [39] to determine whether the variances in OEI among groups were homogeneous.
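A minimal sketch of these assumption checks in R, assuming a hypothetical data frame `horses` with an OEI column and a tribe grouping (the study itself ran its tests in JMP Pro 9; all values below are placeholders):

```r
set.seed(1)
# Hypothetical OEI measurements for three groups (placeholder values)
horses <- data.frame(
  OEI   = c(rnorm(30, 12, 2), rnorm(30, 9, 2), rnorm(30, 6, 1)),
  tribe = factor(rep(c("Hipparionini", "Equini", "Anchitheriini"), each = 30))
)

shapiro.test(horses$OEI)                  # Shapiro-Wilk W test of normality
bartlett.test(OEI ~ tribe, data = horses) # Bartlett test of variance homogeneity

# If either assumption fails, fall back on a rank-based test; the
# Kruskal-Wallis test is the multi-group form of the Wilcoxon test and
# is reported through its Chi-square approximation.
kruskal.test(OEI ~ tribe, data = horses)
```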
If OEI is normally distributed and the variances are homogeneous among groups, then the data do not violate the assumptions of the ANOVA and a parametric test can be performed. ANOVA is generally robust to violations of both of these assumptions, particularly if the sample sizes amongst groups are similar [40]. Our sample sizes are not similar among all of our groups, so we have supplemented ANOVAs with nonparametric Wilcoxon tests [41] when one or both of these assumptions are violated. When data from all tooth positions were pooled, they did not display a normal distribution. Upon further investigation, we determined that all but one position in the tooth row was normally distributed and excluded the non-normal tooth (M3) from further analysis. As discussed below, we used nested (hierarchical) ANOVAs to account for evolutionary relatedness in our analysis. Nested ANOVAs include levels of independent factors which occur in combination with levels of other independent factors. Because ANOVAs can only provide a test of all factors together, we have included Tukey-Kramer tests where needed to investigate statistically significant groupings [40]. An analysis of tooth position was run on a subset of the data (n = 528 teeth) with known tooth position. This ANOVA allowed us to determine whether there was a tooth position or group of tooth positions with indistinguishable OEI values, allowing us to limit the number of specimens to be measured for the subsequent analyses. The results of this analysis provide a justification for selecting a consistent subset of teeth to measure. We ran a multi-way ANOVA with OEI as the dependent variable and tribe, region, NALMA (North American Land Mammal Age), and tooth position as the independent factors. P2 and M3 were excluded as they have an overall different shape and are statistically different in OEI from the teeth in the middle of the tooth-row [7]. We additionally ran a one-way ANOVA with OEI as the dependent variable and tooth position, excluding P2 and M3, as the independent factor. Tukey-Kramer tests [42] were also performed to investigate the origin of significance for independent factors. We also ran a one-way ANOVA with OEI as the dependent variable and tooth position, excluding P2 and M3, for the subset of the data belonging to the genus Equus, the genus with the largest overall sample size. Using just one genus eliminates any influence from higher-level evolutionary relationships. A one-way ANOVA with OEI as the dependent variable and tooth position, excluding P2 and M3, by tribe (just Equini, just Hipparionini, and just "Anchitheriinae") allowed us to test whether variation in tooth position was consistent at this level of lineage. Tribal affiliations were used as a proxy for phylogenetic relationships; therefore, all genera needed a tribal-level affiliation to be included in the ANOVAs. The basal members of the Equinae (members of the "Merychippus" grade) do not belong to the Hipparionini or Equini, so we applied the place-holder paraphyletic tribe "Merychippini." Similarly, for all members of the paraphyletic subfamily "Anchitheriinae," the place-holder name "Anchitheriini" was applied. Running our analyses above the genus level limits the influence of lumping and splitting at the genus and species levels, which arises from qualitative analysis of characters found in isolated elements. While working through museum collections, we found several manuscript names (nomina nuda) assigned to specimens.
We assigned these specimens to the most appropriate, currently established genus name and left the species as indeterminate. Even for published species of equids, there are ongoing controversies about the validity of names. Major problem areas include genera and species split from the paraphyletic form genus "Merychippus" [5,43,44,45,46] as well as the number and identity of Plio-Pleistocene and recent Equus species [5,42,47,48]. There has also been controversy as to the validity of the number of genera and species that belong to Hipparionini [5,15,37,43,45,49,50]. Keeping the analysis above the genus level removes any effect of taxonomic uncertainty at the generic and specific levels. Limiting the taxonomy to the tribe and above also allows a more robust sample size. Equid genera are typically diagnosed through a combination of dental and cranial characters [5,24,51,52]. Most isolated dental specimens can only be identified to genus because of the lack of diagnostic features, so a genus or tribal cutoff for our analysis allows us to access the rich supply of isolated teeth. It was necessary to combine two of the NALMAs, the Irvingtonian and Rancholabrean, to have sufficient sample size for the analyses used here. This combination is not ideal, as it eliminates a portion of the temporal resolution of our study. The Irvingtonian and Rancholabrean are both part of the Pleistocene. The Irvingtonian was not well enough sampled to analyze on its own, and by combining it with the Rancholabrean we were also able to include specimens from the Pleistocene in that temporal bin when their NALMA was not known. To accurately investigate OEI through hierarchical taxonomic relationships and changing regions through time, it was necessary to use nested terms in our analyses. Nesting tests hypotheses about differences among samples which are placed in hierarchical groups. Nested factors are usually random-effects factors, that is, factors with multiple levels of which only a random sample is included in the analysis. When applied to an ANOVA, this is considered a modified one-way ANOVA [40] where one variable is the random-effects factor and the other is considered a subsample. Including nested factors accounts for within-group variability. To make a single overall test of our hypothesis, we constructed a multi-way ANOVA with OEI as the dependent variable and tooth position, nested taxonomy (tribe within subfamily), and time (NALMA) as independent factors (listed in the Results section as Nested Multi-way Analysis of Variance). In addition, we ran three groups of one-way ANOVAs with Tukey-Kramer tests to test our hypothesis of the influence of climate and phylogeny on OEI through time. Our one-way ANOVAs use OEI as the dependent variable. Our first group of one-way ANOVAs (in Results as ANOVA 1: OEI vs. Tribe) uses tribe as the independent variable to investigate how OEI differs between lineages. Next, we used NALMA as the independent variable to examine how overall OEI changes through time (ANOVA 2: OEI vs. NALMA). Finally, we used tribe as the independent variable and separated by NALMA to explore whether the different lineages are distinct in OEI at different periods of time (ANOVA 3: OEI vs. Tribe within Each NALMA).
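The nested model can be written compactly in R's formula notation, where `subfamily/tribe` expands to subfamily plus tribe-within-subfamily. The following is a hedged sketch on synthetic placeholder data (the study fit its models in JMP Pro 9; the taxa, design, and OEI values below are stand-ins):

```r
set.seed(42)
# Synthetic design: four tribes nested within two subfamilies
taxa <- rbind(
  data.frame(subfamily = "Equinae",
             tribe = c("Hipparionini", "Equini", "Merychippini")),
  data.frame(subfamily = "Anchitheriinae", tribe = "Anchitheriini")
)
horses <- merge(taxa,  # no shared columns, so merge() yields the Cartesian product
                expand.grid(position = c("P3", "P4", "M1", "M2"),
                            NALMA = c("Barstovian", "Clarendonian", "Hemphillian"),
                            rep = 1:5))
horses[c("subfamily", "tribe")] <- lapply(horses[c("subfamily", "tribe")], factor)
horses$OEI <- rnorm(nrow(horses), mean = 10, sd = 2)  # placeholder response

# Nested multi-way ANOVA: tooth position + time + tribe within subfamily
fit <- aov(OEI ~ position + NALMA + subfamily / tribe, data = horses)
summary(fit)

# Tukey-Kramer follow-up on a one-way model; TukeyHSD applies the
# Kramer adjustment automatically when group sizes are unequal.
TukeyHSD(aov(OEI ~ tribe, data = horses))
```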
Results
All datasets were tested for the assumptions of ANOVA, Gaussian distribution and equality of variances among groups. For concision, only significant violations of these assumptions are noted.

Tooth Position
The Bartlett test of equal variance for this ANOVA showed significant differences among variances for this subset of the data.

Nested Multi-way Analysis of Variance
All independent variables are significant for OEI at the α = 0.05 level. Table 3 shows the p values for each variable.

ANOVA 1: OEI vs. Tribe
The Bartlett test of equal variance for this ANOVA was significant, so we supplemented the standard ANOVA with a Wilcoxon test for the comparison (Table 4). The Chi-square approximation of the Wilcoxon was significant (p < 0.0001), matching the ANOVA results (p < 0.0001) (Table 5). Tukey test results indicate that Hipparionini and Equini are separate from one another. "Merychippini" is between the Hipparionini and Equini. The "Anchitheriini" is in its own distinct group.

ANOVA 2: OEI vs. NALMA
The Bartlett test of equal variance for this ANOVA was significant, so we supplemented the standard ANOVA with a Wilcoxon test for the comparison. Results are presented in Table 6. The Chi-square approximation was significant (p < 0.0001). The Wilcoxon test yields similar results to the standard ANOVA (Table 7), which was also significant (p < 0.0001). The Irvingtonian/Rancholabrean stands out as a unique period of time with the highest OEI values. The Blancan has the next highest OEI values. The Recent is grouped alone with the lowest OEI values. The Clarendonian, Hemphillian, and Barstovian overlap with the Blancan and the Recent and have OEI values that are intermediate between the two groups.

ANOVA 3: OEI vs. Tribe within Each NALMA
The Bartlett test for the Hemphillian ANOVA was significant, so we supplemented the standard ANOVA with a Wilcoxon test for that interval (Table 8). All tests for NALMAs were significant (Table 9). The Barstovian (p < 0.0001) had two statistical groupings; one group is the "Merychippini" and Hipparionini, and the other is the Equini and "Anchitheriini." The Clarendonian (p < 0.0001) had the same two groups. The Hemphillian (p < 0.0001) and the Blancan (p = 0.0013) both have two distinct groups, the Hipparionini and Equini. The groupings of tribes stay the same through time.

Discussion
Tooth position does not significantly affect OEI for the middle four teeth (P3-M2) of the upper tooth row at the tribal level. Our investigation into tooth position indicates that we can safely include isolated molariform teeth in our study without taking tooth position into account if we exclude the P2 and M3. These two teeth have already been shown to be different from the other molariform teeth [7]. We also found that our data were normally distributed when the P2 and M3 were excluded. We found more variation in OEI for the P4 than for the M1, M2, and P3 according to Bartlett's test. We suggest subsequent work should focus on the M1, M2, or P3 to take advantage of this lower variance. It is important to note that the Wilcoxon test was not significant for the main body of the data. While the ANOVA was significant, this dataset violated the assumption of equal variance, so the Wilcoxon is the more appropriate test. In the end, all of our analyses of tooth position suggest that the middle four teeth are not significantly different from one another and can be used interchangeably in an analysis at this broad a level. Our investigation into tooth position also explored whether the variation in OEI for the various tooth positions was the same among horse lineages. Within each tribe, tooth position is not significant for the four square middle teeth.
OEI at a given tooth position varies significantly between tribes, suggesting that each lineage is adapting differently at each tooth. The results of our nested multi-way ANOVA indicate that time, tooth position, and nested taxonomy are all significant factors for the length of enamel in horse teeth. Each of the subsequent one-way ANOVAs allowed us to tease apart the details of the multi-way ANOVA result. Generally, OEI increases from the Miocene NALMAs to the Pleistocene, correlating with the overall cooling climate from the mid-Miocene Climatic Optimum (16 Ma) to the Recent [53]. This increase in OEI over time is compatible with our hypothesis that, as climate became cooler and drier and the abrasiveness of the equid diet increased [12], increased OEI was selected for across horse lineages. OEI in the late Miocene is lower than in the Pliocene, and the increase continues through to the Pleistocene. In the Holocene, we see a decrease in complexity to levels similar to those of the late Miocene. The increase in OEI through time matches the documented increase in HI through time [35]. OEI and HI are measures of ways in which ungulates increase the amount of enamel available for a lifetime of chewing abrasive foodstuffs, so higher values of either metric could suggest higher abrasiveness of diet [7]. The drop in complexity we observe in our Holocene sample could be influenced by the limitation in taxonomic sampling available for extant Equini. The only animals available for inclusion are influenced by conditions of artificial selection and human management, and are descended from the Old World lineage of horses, unlike the New World fossils included in our dataset. These animals do not have the same diet, behavior, or morphology as they would in the wild [54,55], so if enamel complexity is phenotypically plastic and reflects diet during tooth development, as suggested for elephantids and rodents [56,57,58], their simpler enamel may reflect the dietary conditions under domestication. This possibility warrants further investigation but is beyond the scope of our study. More likely is that the domestic and feral horses in our Holocene dataset are descended from animals in distinctly different selective regimes in the Old World; future studies with larger spatial sampling would be needed to test this hypothesis. At this point, we do not feel confident interpreting the drop in OEI from the Pleistocene to the Recent as an evolutionary change, but instead interpret it as suggestive of the biogeography of this trait. The overall analysis of OEI by tribes (ANOVA 1) strongly supports the hypothesis that the Equini and Hipparionini had distinct evolutionary responses in occlusal enamel evolution. The results of the Tukey-Kramer test very closely reflect the evolutionary relationships of the family. Hipparionini and Equini are sister taxa, and both are in distinct groups from one another. "Merychippini" includes the common ancestor of these two within the subfamily and is grouped with both the Hipparionini and Equini, as is expected in light of the phylogeny. "Anchitheriini" is the paraphyletic stem group ancestral to "Merychippini." The "Anchitheriini" is in its own group statistically and has the lowest OEI. Members of "Anchitheriini" are low-crowned, or have low HI [5], and can be interpreted as either browsers or intermediate feeders with a low percentage of abrasive material in their diet.
Browse comprises a larger portion of the diet for "Anchitheriini" than for any of the other tribes, and if diet is shaping occlusal enamel evolution, this group should have the lowest OEI, as indeed it does. We can use geography to tease apart diet and environmental change. Incorporating independent diet proxies (e.g., stable isotopes from enamel and/or microwear) combined with a regional biogeographic approach in a future study would identify the relative impact of local environmental change versus changing diets in shaping the evolution of OEI. ANOVAs for tribes by NALMAs present an interesting pattern that enhances our interpretation of occlusal enamel evolution in horses. Ancestry seems to be an important influence on enamel length: the characteristic OEI values for a group are established at its origin and persist through time. When we consider interpretations of diet for each group [25], we find an unexpected pattern: many of the Barstovian equin horses are interpreted to be grazers but have OEI values consistent with contemporaneous browsing taxa. Interestingly, HI values for these equin horses are higher than those of hipparionin horses, while the converse is true for OEI. This supports our qualitative assertion that equin horses have more hypsodont yet less complicated teeth than their hipparionin relatives. When the four tribes are present, Hipparionini and "Merychippini" are grouped together. Equini and "Anchitheriini" are also grouped. This pattern is seen only in the Barstovian and Clarendonian. Groupings may either represent tribes closely competing for resources or more evidence for the importance of phylogenetic constraint in this character. That is, the sample for "Merychippini" may be dominated by ancestral forms of Hipparionini, producing the observed connection between the tribes. We suspect this may be the case because more of the equin "Merychippus" have been split out into their own genera [5,43,44,45,46]. Conversely, it is possible that typical fossil members of Equini were more intermediate feeders and were competing with browsing "Anchitheriini" for resources. Notably, in Great Plains Clarendonian faunas, Equini and "Anchitheriini" both compose a small percentage of the relative abundance of horses [6]. The similarity in OEI and relative abundance between these two groups warrants further investigation, because previous workers have assigned the equines to grazing niches on the basis of their hypsodonty and isotopic data [25], but their OEI values would suggest that they were browsing along with their contemporaneous anchitherine relatives. In terms of species richness, Hipparionini were the most successful tribe during the Clarendonian in the Great Plains, but were eventually replaced by Equini at the end of the Blancan. "Anchitheriini" and "Merychippini" go extinct by the Hemphillian, leaving Equini and Hipparionini (Fig. 2). The two tribes are significantly different in the Hemphillian and Blancan. Hipparionini are constrained to the southern latitudes during the Blancan and are extinct by the end of the Blancan [59]. Hipparionini remain in regions closer to the equator, where the effects of climate change would not have been as strong [53,60]. In those regions, they continue to have higher OEI than their equin counterparts. The food source for hipparionines may have been restricted to warmer climates as the globe cooled, thus restricting the range of the tribe. The warmer regions may have served as refugia for North American hipparionin horses.
We can better understand the drivers of occlusal enamel complexity when we look across geography, because we can compare regional patterns unfolding under slightly different environmental changes. Adding these data would allow us to investigate changes in response to regional climate changes through time. We would like to apply these methods to other megafauna with adaptations to increased dietary abrasion, such as camels, rhinos, African large primates, and South American notoungulates. A majority of enamel complexity in equids is found in the hypsodont forms, which originate in the Barstovian and are included in this study. However, it would be interesting to extend our methods back deeper into the "Anchitheriinae" and perhaps include the Eohippus-grade equids to see whether they also reflect other metrics of changing ecology. We would also like to test differences within Plio-Pleistocene Equus (e.g., caballine and stilt-legged horses), comparing them to Hipparionini genera to see whether any equin horses independently evolved complex enamel patterns similar to those of hipparionin horses as or after those hipparionins went extinct. This way we could test whether these Equini species converged on vacated niche space left by the extinct hipparionines.

Conclusions
The results of our Occlusal Enamel Index (OEI) study suggest that the complexity of the occlusal enamel of equid teeth is influenced by a combination of evolutionary relatedness, developmental constraint (tooth position), and changing environments over time. Equini seem to have an overall lower OEI than Hipparionini, which supports the qualitative hypothesis that Equini have less occlusal enamel than Hipparionini. Our study shows that enamel band shapes are influenced by climate and evolutionary history. As climate dries through time, we see an overall increase in enamel complexity. Phylogenetic relationships also have an influence on relative enamel complexity between clades (i.e., Equini tends to have less complex enamel than Hipparionini). Our results are consistent with the hypothesis that horses increased their enamel complexity in response to increased tooth abrasion from the Miocene through the Holocene.

Supporting Information
Table S1. Raw OEI data for statistical analysis. (XLSX)
Modelling landscape management scenarios for equitable and sustainable futures in rural areas based on ecosystem services

ABSTRACT
Scenario analysis is a useful technique to inform landscape planning of social-ecological systems by modelling future trends in ecosystem service supply and distribution. This is especially critical in floodplain agroecosystems of rural areas, which are at risk of losing riparian forest corridors due to increasing land use conversion for agricultural production, and of losing other ecosystem services due to rural abandonment. However, few studies investigating the effects of land management combine social and ecological modelling in scenario analyses. We estimated the supply of 16 ecosystem services under five alternative scenarios along two gradients: agricultural intensification of the floodplain and active ecological restoration of the riparian forest. We used redundancy analyses to detect ecosystem service bundles and interviews to identify societal gains and losses associated with each management scenario. Our results show how land management influences both the supply and distribution of ecosystem services. Scenarios promoting active ecological restoration supplied more services and benefited a larger range of societal sectors than scenarios focused on provisioning services. We also found two consistent bundles across scenarios, one related to less intensive food supply and another related to outdoor activities. Interestingly, additional services were included in these bundles in the different scenarios, reflecting land management effects. Landscape-scale management promoting both the conservation of ecosystem functioning and the sustainable use of provisioning services could supply a more balanced set of ecosystem services and benefit a larger number of societal sectors, contributing to more equitable and sustainable futures in rural areas.

Introduction
Worldwide trends show accelerating rates of urbanization while rural areas undergo depopulation (United Nations 2018). These trends are consistent with shrinking rural regions across Europe, a pattern particularly strong in Northern and Mediterranean countries (ESPON 2017). For instance, more than 80% of rural municipalities in Spain shrank between 1961 and 2011 (ESPON 2017). As a result, a third of provinces in peninsular Spain currently have a population density lower than 30 inhabitants/km², dropping to less than 8 inhabitants/km² in seven provinces (INE 2019). This has resulted in two contrasting landscapes: inhabited rural areas characterized by agricultural intensification and depopulated rural areas characterized by the abandonment of agricultural practices (García-Llorente et al. 2012). It is well known that agricultural intensification can increase the supply of a few provisioning services, such as crop yield, at the expense of regulating and cultural services (Foley et al. 2005; Felipe-Lucia et al. 2014; Qiu et al. 2021), and it is also associated with biodiversity loss (Allan et al. 2015; Felipe-Lucia and Comín 2015; Newbold et al. 2015). This trade-off is especially critical in floodplain areas, which are threatened by agricultural intensification because of their nutrient-rich soils (Tockner et al. 2008). In turn, natural revegetation following the abandonment of agricultural practices can help improve some ecological functions and services, such as erosion control and water quality (Navarro and Pereira 2015; Darwiche-Criado et al. 2017).
However, rural abandonment also has important consequences for the social-ecological system, including the loss of local traditional knowledge associated with low-intensity and semi-subsistence agriculture (Gómez-Baggethun et al. 2010; Iniesta-Arandia et al. 2014a). In this context, active ecological restoration can enhance multiple ecosystem services and foster the development of rural economies via ecotourism and other nature-based activities (Aradottir and Hagen 2013). Given the varied effects of landscape management on the ecosystem, understanding the social-ecological consequences of the different options is fundamental to inform decision-making on landscape management policies. Scenario analysis is a common tool to identify the pros and cons of landscape management according to one or more criteria (Nelson et al. 2009; Kubiszewski et al. 2017; Lerouge et al. 2017). Scenario analysis is generally composed of three steps: definition of scope, design of alternative scenarios (i.e. narratives or storylines), and modelling or assessment of such scenarios (Figure 1). The first step defines the timeframe and extent of the scenario analysis (Kirchgeorg et al. 2010). In the second step, since one of the aims of scenario analyses is to visualize potential endpoints and long-term consequences of particular management decisions, the alternatives are designed to show contrasting situations together with intermediate alternatives (Arkema et al. 2015). In the third step, the selection of response variables to be measured against each scenario is critical. In landscape management scenarios, the variables assessed range from biodiversity loss (Liekens et al. 2013) to economic gains (Van Berkel and Verburg 2012). Due to their ability to account for ecological, economic and social values, ecosystem services are gaining importance as key response variables in scenario analyses (Plieninger et al. 2013; Arkema et al. 2015; Rosa et al. 2017). Most scenario analyses model a very small set of ecosystem services and neglect the interactive effects of multiple ecosystem services in the landscape in terms of trade-offs and synergies (Bennett et al. 2009; Felipe-Lucia et al. 2014; Qiu et al. 2018). This can lead to important biases in the management decisions informed by those assessments, especially if economic indicators of provisioning services outnumber other indicators and service categories (Martín-López et al. 2014). Therefore, it is important to assess the effect of future scenarios on a variety of ecosystem services, including the often-underrepresented regulating and cultural services, to ensure a better overview of the effects of land management on the ecosystem and to inform policy and decision-making. In this context, the analysis of ecosystem service bundles (i.e. ecosystem services that repeatedly appear together; Raudsepp-Hearne et al. 2010; Qiu and Turner 2013; Hanspach et al. 2014; Queiroz et al. 2015) can help aggregate the information on multiple ecosystem service indicators and facilitate landscape management by identifying patterns in ecosystem services (Meacham et al. this issue). Whereas ecosystem services are one type of response variable that can be modelled in scenario analyses, the social component of these scenarios, such as identifying winners and losers, is often neglected or overlooked (Rosa et al. 2017). Indeed, research is increasingly showing that different sectors of society, or stakeholders, may benefit or lose from alternative land management policies, which can result in strong inequalities between stakeholders (Zafra-Calvo et al. 2017; Benra et al. under review).

Figure 1. Main steps for scenario analysis to inform decision-making. Note that steps 2 and 3 are interchangeable depending on the scenario type. Response variables should be inclusive to reflect plural valuation of scenarios (e.g. ecological, social and economic variables), represented by different colours in the pie chart.
Despite its critical importance, studies integrating both components of the social-ecological system in scenario analyses, by modelling ecological as well as social response variables, are few in the literature. Here, we analyse the social and ecological effects of five alternative future scenarios for a Mediterranean agricultural floodplain (Table 1), building on previous knowledge that assessed ecosystem services in different land-use types of the floodplain and social preferences for ecosystem services across a range of stakeholders. The scenarios are based on the combination of two gradients showing typical land management trade-offs: agricultural intensification (i.e. from rural abandonment to intensive agriculture) and ecological restoration (i.e. from the current situation to the active restoration of riparian habitats). Specifically, we i) assess the ecosystem service supply of the five alternative scenarios using proxies of 16 ecosystem services, including supporting, regulating, cultural and provisioning services, ii) identify bundles of ecosystem services that are maintained across scenarios, and iii) investigate the effects of these scenarios on four stakeholder groups in terms of how they are impacted by changes in ecosystem services (i.e. winners and losers). A diagram summarizing the procedures for scenario analysis used in this study can be found in supplementary Figure S1.

Study area
The study area is the floodplain of the River Piedra (River Ebro basin) located in north-east Spain (Figure 2). Annual average temperature is 12.7°C, annual average rainfall is 450 mm, and the water flow is characterized by marked seasonal variability. The River Piedra is approx. 76 km long and the watershed covers an area of 923 km², ranging in altitude from 1100 m.a.s.l. to 600 m.a.s.l. The River Piedra floodplain ranges from 50 to 300 m wide and occupies 19.3 km², covering 12 municipalities with a total area of 532.64 km² and a population of 1539 inhabitants (Felipe-Lucia et al. 2014). The main land use types of the floodplain are dry cereal crops, abandoned croplands, irrigated cereal crops, poplar groves, urban areas, fruit orchards, and riparian forests. The upper part of the River Piedra (ca. 46 km long) is dry for most of the year due to the semiarid climate and karstic substrate, and the main land use type is dry cereal crops. The middle and lower parts of the river have a continuous flow (ca. 30 km long), and the main land use types are irrigated cereal crops, poplar groves, fruit orchards and abandoned croplands. La Tranquera reservoir, built in 1959, with a surface area of 5.60 km² and a capacity of 78.8 million m³, is located between the middle and lower lands of the River Piedra, occupying the formerly most productive lands (Felipe-Lucia et al. 2014). Remnants of riparian forests are scattered along the floodplain.

Identification of social actors
We conducted 71 face-to-face, semi-structured interviews with the main stakeholders of the study area between August 2011 and March 2012 (see locations in supplementary Figure S2).
These included the primary sector (i.e. farmers, shepherds, and workers at a fish farm; n = 16), the recreation providers (i.e. owners of or workers at restaurants, hotels, lodges, nature tour operators, adventure enterprises, and the regional-level tourist site Monasterio de Piedra; n = 13), the recreation users (i.e. retired residents, visitors, hikers, bikers, fishermen; n = 26) and institutions (i.e. local councils, regional governmental bodies for the management of water catchments and natural areas, scientific and educational institutions; n = 16). Interviewees were asked about the uses and benefits they derived from the River Piedra valley. They were also asked to rank a pre-defined list of 21 ecosystem services according to the importance of these services for their livelihoods. Interviews lasted a minimum of 30 minutes.

Scenario design
We created five scenarios framed around two typical land management trade-offs (agricultural intensification and active ecological restoration) that could exist in 20 years (Figure 2). These scenarios reflect changes in the ecosystem services ranked highest across all stakeholders and that we were able to measure (e.g. water quality, recreational activities, food production): i) Current situation (CURSIT); ii) Riparian forest conservation and active ecological restoration (CONRES), which strongly influences water quality and recreation; iii) Intensive agriculture (INTAGR), which is related to food production; iv) Riparian forest conservation and agricultural production (CONPRO), which affects most ecosystem services studied; and v) Rural abandonment (RURABA), which represents an existing trend in the study area (see Table 2 for a summary). The narrative for each scenario is detailed below and summarized in Table 2, indicating the variables included in both the ecological and social assessments or solely in the social assessment.

Current situation (CURSIT)
In the current situation, the River Piedra floodplain is mostly covered by agricultural crops (43.6% of the floodplain), including dry cereal crops in the upper lands, irrigated cereal crops and poplar groves in the middle lands, and fruit groves and orchards in the lower lands. A substantial portion of the study area is abandoned cropland (15.9%). Riparian forest (1.6%) is limited to upland river gorges and to a private natural park located in the middle lands. Tourist activities generated around this park are the main economic driver of the area. There is also a hydropower facility and a fish farm. Pastoralism in the area is rare, with some municipalities having only one or two small-scale shepherds. Companies offering recreational activities in nature are starting to develop.

Riparian forest conservation and active ecological restoration (CONRES)
Riparian forest is protected and actively restored across: i) the 5 m of 'Public Hydraulic Domain' established by the Spanish law (BOE 2008) on both riversides; and ii) Sites of Community Importance (SCI) defined by the European Commission Habitats Directive (92/43/EEC) within the River Piedra floodplain.

Intensive agriculture (INTAGR)
Agricultural production is increased by: i) cultivating formerly abandoned croplands; and ii) irrigating all cropland in the middle and lower lands. In the upper lands, only water-fed cropland is farmed. In the middle and lower lands, one-third of the agricultural lands of each municipality is planted with cereals, one-third with fruit groves and one-third with poplar groves. Chemical fertilizers and pesticides associated with irrigated cereal crops cause an increase in pollutant concentrations in the river. As a consequence, negative impacts on the river rise and fishing opportunities become limited to La Tranquera reservoir, decreasing the emerging nature tourism.
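To illustrate the land-cover bookkeeping implied by a narrative like INTAGR, here is a hedged R sketch for one hypothetical middle-lands municipality (all areas are invented, and the upper-lands rule is ignored; the actual scenario extents were derived from GIS maps, as described under Data analyses):

```r
# Hypothetical land-cover areas (ha) for one middle-lands municipality
cover <- c(abandoned = 90, irrigated_cereal = 120, poplar = 40,
           fruit = 20, riparian_forest = 5, urban = 10)

# INTAGR: abandoned cropland is brought into cultivation, and the
# agricultural area is split equally among cereals, fruit and poplar.
agri_total <- sum(cover[c("abandoned", "irrigated_cereal", "poplar", "fruit")])
intagr <- cover
intagr["abandoned"] <- 0
intagr[c("irrigated_cereal", "fruit", "poplar")] <- agri_total / 3

print(intagr)                         # scenario cover, later multiplied by per-ha supplies
stopifnot(sum(intagr) == sum(cover))  # total municipal area is conserved
```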
Riparian forest conservation and agricultural production (CONPRO)
Riparian forest is protected and actively restored across: i) the 5 m of 'Public Hydraulic Domain' established by the Spanish law (BOE 2008) on both riversides; and ii) Sites of Community Importance (SCI) defined by the European Commission Habitats Directive (92/43/EEC) within the River Piedra floodplain. In these areas, typical riparian forest species (e.g. Salix sp., Populus sp., Fraxinus sp.) are planted and maintained. Agricultural production is increased by farming abandoned cropland. In the upper lands, dry cereal crops are grown. In each municipality of the middle and lower lands, one-third of abandoned croplands is transformed to dry cereal crops, one-third to fruit groves and one-third to poplar groves. Formerly cultivated lands keep the same practices in order to maintain the existing activities of the local population (Barbastro Gil 2005; González et al. 2017). Small dams no longer used for irrigation are removed from the stream, facilitating the dispersal of fish and seeds. Restored riparian forests are open to public access, which develops nature tourism based on environmental education, trekking, birdwatching and fishing. Companies offering adventure activities in nature (e.g. climbing, rafting, kayaking) are encouraged. Traditional hydraulic infrastructures (e.g. waterwheels, a fulling mill) are restored and ethno-tourism activities increase, fostering the renovation and rental of cottages.

Rural abandonment (RURABA)
Existing investments in agriculture are retained, but there are no new investments in irrigation facilities and machinery. This means that the least productive croplands are abandoned. Thus, production is limited to dry cereal crops in the upper lands and irrigated cereal crops and poplar groves in the middle and lower lands. Riparian forests are maintained at their current extent, as natural recovery from abandoned cropland in this area takes much longer than 20 years (Moreno-Mateos et al. 2012).

Ecosystem services supply
We estimated 16 indicators of ecosystem services separately for each of the seven main land use types of the study area. The ecosystem services comprised two supporting services, seven regulating services, three provisioning services, and four cultural services (Table 1).

Supporting services
We collected soil data in three plots per land use type except in urban areas, where most of the soils were covered by impervious surfaces. Three transects, 25 m apart from each other and perpendicular to the river channel, were established in each plot. We collected three samples along each transect at 1 m, 5 m, and 15 m away from the river in July 2011 and July 2012. The organic matter layer depth (cm), excluding leaf litter, was recorded in the field with a measuring tape, and average values at each point across both years were used as an indicator of soil formation. The volume of the soil organic matter layer was calculated in cubic metres. Soils rich in organic matter are more productive (Bauer and Black 1994) and, therefore, underlie provisioning services. We estimated habitat quality in three plot replicates per land use type between July 2011 and July 2012 using the Riparian Quality Index (RQI) (González del Tánago and García de Jalón 2011).
RQI evaluates seven riverbank attributes: i) dimensions of land with riparian vegetation (average width of the riparian corridor); ii) longitudinal continuity, coverage, and distribution pattern of the riparian corridor (woody vegetation); iii) composition and structure of riparian vegetation; iv) age diversity and natural regeneration of woody species; v) bank conditions; vi) floods and lateral connectivity; and vii) substratum and vertical connectivity. RQI yields a relative score between 10 and 120, which was rescaled to between 0 and 100. Larger RQI scores indicate better performance of riparian ecological functions (González del Tánago and García de Jalón 2011).

Regulating services
For water quality, we sampled dissolved nitrate as a measure of pollutant concentration in the river (ppm). Twenty-one samples along the river were collected monthly in 2009. The sampling was designed to cover a wide range of situations representing the water quality of the study area and was repeated in specific months of 2010 and 2011 to account for possible variation in the water flow rates. Samples were kept refrigerated and analysed in the laboratory within a week using ionic chromatography (APHA 1998). Values per sample point were averaged across years and then by municipality. As this indicator reflects pollutant concentration, we used the inverse value to account for water quality (Felipe-Lucia et al. 2015b). To model this service in the different scenarios, we considered that the buffer area between the river and the agricultural crops created in the CONRES and CONPRO scenarios can reduce the water pollution caused by nitrate by 90% (Osborne and Kovacic 1993; Parkyn 2004). In turn, we assumed that the chemical fertilizers and pesticides associated with increasing irrigated cereal crops in the INTAGR scenario cause an increase of 20% in nitrate concentrations in the river (Darwiche-Criado et al. 2017). In the RURABA scenario, the limitation in agricultural production reduces the concentration of nitrate in the water flow by 40% (Darwiche-Criado et al. 2017). For available nitrogen, we followed the same soil sampling protocol as for soil formation. Nitrogen can be a limiting nutrient for plant growth and can condition the functioning of the ecosystem (Vitousek and Howarth 1991; LeBauer and Treseder 2008). Half a kilogram of topsoil (0-10 cm) was collected at each point, dried (48 hours at 60°C), sieved and milled. Total nitrogen (available nitrogen) was measured using a macro elemental analyser (Vario Macro Max CN). Average values across the two sampling campaigns were used as an indicator. For available phosphorus, we followed the same soil sampling protocol. Phosphorus can be a limiting nutrient for plant growth and can condition the functioning of the ecosystem (Vitousek et al. 2010; Lang et al. 2017). Soluble reactive phosphorus (available phosphorus) was extracted following the Olsen protocol (Olsen et al. 1954) and filtered. The extract was analysed in an ionic chromatograph. Average values across the two sampling campaigns were used as an indicator. For soil carbon storage, we followed the same soil sampling protocol. Soils are an important reservoir of carbon (Lal 2002; Olsson and Ardö 2002). Total carbon was measured using a macro elemental analyser (Vario Macro Max CN). Average values across the two sampling campaigns were used as an indicator.
For tree carbon storage, we used annual CO₂ sequestration rates by land use type from a nationwide study, which estimated the amounts of carbon stored by above- and below-ground biomass of the main Spanish plant species and woody formations (Montero et al. 2005; CITA 2008). Calculations are based on the species' annual growth and transformed into CO₂-equivalent tons per hectare using stoichiometric equations (Montero et al. 2005). We used the plant species or woody formations closest to the land cover composition of our study area (e.g. for fruit groves we used average values of apple, pear, peach, and plum groves). Herbaceous species, and therefore irrigated cereal crops and dry cereal crops, were not included because their annual CO₂ storage balance is null (CITA 2008). For abandoned croplands, only the woody formations (e.g. hawthorn) were considered (Felipe-Lucia et al. 2015b). Urban areas were not included since they usually act as a source of carbon rather than as a sink (but see Davies et al. 2011). For climate regulation, we recorded air temperature every 60 minutes over a period of 8 months (February to September 2012) using data loggers (iButton). Three devices per plot were hung from trees located at regular distances along a river transect perpendicular to the river channel. Three replicate plots were sampled in representative sites of each selected land use type. Dry cereal crops and fruit groves were not surveyed, but surrogate values from abandoned croplands and poplar plantations were used, respectively, due to their similar cover and structure. To estimate local temperature regulation, we used the mean value of the daily temperature range (DTR = maximum temperature of day x − minimum temperature of day x) (Scheitlin and Dixon 2010) per land use type. We took the inverse values (1/DTR), so larger indicator values reflect a larger role in buffering extreme temperatures (Hubbart et al. 2005; Hubbart 2011). For biological pest control, we estimated the richness of plant strata, as higher plant diversity is expected to host a larger number of insects, thus increasing the probability of biological pest control (Soliveres et al. 2016). We surveyed three plot replicates per land use type in July 2012, except in urban areas. Within each plot, three floodplain-wide transects (average transect length 57 m) perpendicular to the river channel were established 25 m apart. In each transect, we used the point-intercept method (Goodall 1952) every 10 cm to estimate species occurrence and percent cover of each plant species (i.e. number of contacts relative to the total number of points sampled). Identification of plants at the genus or species level was corroborated using a regional herbarium (i.e. the herbarium of Jaca: http://proyectos.ipe.csic.es/herbario) and a botanist expert. Then, we classified vegetation records into four types of plant strata (i.e. herb, creeper, shrub, and tree) and estimated the richness of plant strata using the vegan package (Oksanen et al. 2013) of the R software (R Core Team 2019).

Provisioning services
For food production, we calculated two indicators, namely, caloric content and economic yield. We estimated the average yield (kilograms per hectare) of each of the main land use types of our study area from the latest update of a national public database (INE 2012), updated on 30.10.2012. For irrigated crops, we averaged the yield values of irrigated wheat, barley, and corn. For dry cereal crops, we used averaged yield values of dry wheat, barley, and corn.
For fruit groves, we used average yield values of apple, pear, peach, and plum. The rest of the land use types were assigned a yield value of 0. To obtain the caloric content per hectare, we multiplied the yield (kilograms per hectare) by the crop caloric content (kilocalories per 100 grams) (Felipe-Lucia et al. 2014); a worked example of this conversion is given below. We calculated crop productivity (economic yield) based on crop yields and the index of agricultural prices provided by the regional government (Gobierno de Aragón, http://www.aragon.es) (Felipe-Lucia et al. 2015b).

For fibre production, we used the yearly aboveground dry biomass accumulation by land use type. Data were obtained from a nationwide study that estimated the annual growth rates of woody species as tons of dry biomass per hectare, according to the average timber diameter (Montero et al. 2005; CITA 2008). We adapted data from the woody species closest to the land cover composition of our study area (e.g. for fruit groves we used average data of apple, pear, peach, and plum groves). Herbaceous species - and therefore irrigated cereal crops and dry cereal crops - were not included because their annual accumulated biomass balance is null (CITA 2008), whereas for abandoned croplands, only their woody formations (e.g. hawthorn) were considered (Felipe-Lucia et al. 2015b). Note that the supply of fibre production reflects existing management practices, i.e. fibre is only supplied by poplar plantations.

Cultural services

For aesthetic value, we used pictures uploaded to Panoramio, a web platform for pictures with a special focus on landscape and environment. This platform has been utilized in previous research about social preferences on ecosystem services (Casalegno et al. 2013; Martínez Pastur et al. 2016; Nahuelhual et al. 2017; Oteros-Rozas et al. 2017). We accessed the platform on 27.03.2014 and counted each single picture taken by a different person in each of the main land use types of the River Piedra floodplain for each municipality. This measure has been considered more appropriate than the total number of pictures, which would rather reflect the individual activity of photographers (Casalegno et al. 2013). Pictures focusing on buildings of all sorts (e.g. houses, towers, crosses, churches, hermitages, monasteries, etc.) with no environmental background were excluded because they were not directly related to the use of the ecosystem (Felipe-Lucia et al. 2015b).

For recreation, we counted the number of areas used for social activities (e.g. picnic areas; Posthumus et al. 2010) by land use type and municipality in August 2012. To compare these data across municipalities and land use types, we used a density measure (i.e. total number of picnic areas by land use type and municipality / extent of each land use type at each municipality) (Felipe-Lucia and Comín 2015).

For sport opportunities, we downloaded all tracks of sign-posted and user-designed paths from the regional tourist office website (http://senderos.turismodearagon.com) and Wikiloc (http://www.wikiloc.com), respectively, available as of 12.10.2012, following Trabucchi et al. (2014). Tracks around the study area were unified using GIS tools (Quantum GIS Development Team 2012) and intersected with the land use cover. Then, we calculated the length of paths per land use and municipality.

For environmental education, we counted the number of educational panels with information about the ecosystem by land use type and municipality in August 2012.
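Returning to the caloric-content indicator above: yield is in kg/ha and caloric content in kcal per 100 g, so a factor of 10 converts their product to kcal per hectare. A minimal R sketch with illustrative numbers (the yields and caloric values below are placeholders, not figures from INE 2012):

```r
# Caloric content per hectare = yield (kg/ha) x caloric content (kcal/100 g) x 10,
# since 1 kg = 10 x 100 g. All values below are illustrative only.
crops <- data.frame(
  land_use      = c("irrigated_cereal", "dry_cereal", "fruit_grove"),
  yield_kg_ha   = c(6000, 2500, 15000),  # hypothetical average yields
  kcal_per_100g = c(340, 340, 55)        # hypothetical caloric contents
)
crops$kcal_per_ha <- crops$yield_kg_ha * 10 * crops$kcal_per_100g
crops
```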
To compare these data across municipalities and land use types, we used a density measure (i.e. total number of educational panels by land use type and municipality / extent of each land use type at each municipality) (Felipe-Lucia and Comín 2015).

Data analyses

Data on the extension of each land use type per municipality for the CURSIT scenario were extracted from the Spanish crop and land-use digital map (MMAMRM 2009) using ArcGIS 10 (ESRI 2012). The extension for the alternative scenarios was calculated according to the changes in land use described in the narratives above. Note that we did not assume a magnitude of ecosystem services directly from the land use cover maps. Instead, we used our own field sampling or secondary data to map ecosystem services. Given that we used different scales in the sampling of ecosystem services (e.g. plot per land use type or municipality; Table 1), we upscaled or downscaled ecosystem service measurements to obtain a single value per land use type and/or municipality for each scenario (see supplementary Figure S3). For ecosystem services estimated per unit area of land use type (e.g. food and fibre), we calculated the average supply per hectare for each land use type and multiplied it by the cover of each land use type in each scenario. For ecosystem services estimated at the municipality scale (e.g. cultural services), we divided the supply value by the size of the municipality and multiplied it by the cover of each land use type in that municipality. In the case of water quality, it was not possible to assign a supply value per land use type; therefore, changes in water quality derived from land use changes in each scenario were only estimated at the municipality level, taking as a reference the CURSIT scenario (see Methods for Ecosystem service supply).

In order to facilitate the comparison across scenarios, we normalized ecosystem services to a common scale ranging between 0 and 1 using the formula StV = (x - xmin)/(xmax - xmin), where StV is the normalized variable, x is the target variable, and xmin and xmax are the minimum and maximum values across all plots, respectively. We used radial plots to represent the relative supply of ecosystem services under each scenario. For each scenario, we calculated the relative change in ecosystem service supply from the current situation (i.e. the CURSIT scenario) using the formula C = ((t - b)/b) × 100, where C is the proportional change in percentage, t is the target value (i.e. the estimated value for the alternative scenarios) and b is the baseline value (i.e. the CURSIT scenario). A compact sketch of these computations is given below.

To identify ecosystem service bundles, we performed a redundancy analysis (RDA), i.e. a multivariate multiple linear regression, using the vegan package (Oksanen et al. 2013) in R version 3.6.2 (R Core Team 2019). The supply of each ecosystem service per municipality formed the response variables, while the extents in square metres of each of the seven main land use types plus water per municipality were the explanatory variables. Bundles of ecosystem services, i.e. services that repeatedly appear together, sensu Raudsepp-Hearne et al. (2010), were identified in each scenario using both RDA scalings 1 and 2 and the RDA factor loads (Zoderer et al. 2019). The statistical significance of the RDA models and the variance explained by the RDA axes were tested by 1000 permutations (Borcard et al. 2011) (supplementary Table S1).

Interviews were transcribed and coded in order to identify the role of each stakeholder group in relation to the studied ecosystem services.
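A minimal R sketch of the normalization, relative-change and RDA steps described above. The data frames es (service supply per municipality) and lu (land use extents in square metres) are hypothetical placeholders standing in for the study's data:

```r
library(vegan)

# Normalize each service to [0, 1]: StV = (x - xmin) / (xmax - xmin)
normalize <- function(x) (x - min(x)) / (max(x) - min(x))

# Relative change from the CURSIT baseline: C = ((t - b) / b) * 100
rel_change <- function(t, b) (t - b) / b * 100
rel_change(t = 120, b = 100)  # +20% relative to the baseline

# Hypothetical inputs: one row per municipality
es <- data.frame(food = c(3, 5, 2, 4, 6), carbon = c(10, 7, 12, 9, 8),
                 aesthetic = c(4, 1, 6, 3, 5))
lu <- data.frame(poplar = c(2, 1, 3, 2, 4) * 1e5,
                 dry_cereal = c(5, 8, 2, 6, 3) * 1e5)

es_std <- as.data.frame(lapply(es, normalize))

# RDA: services as responses, land use extents as explanatory variables
mod <- rda(es_std ~ ., data = lu)
anova(mod, permutations = 999)    # permutation test of model significance
scores(mod, display = "species")  # service loadings used to delineate bundles
```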
We identified stakeholders' use of, versus their ability to manage, each ecosystem service by adapting the existing dependence-influence matrix approach (Reed et al. 2009). This information allowed us to identify the stakeholder groups benefiting or losing from each alternative management scenario. We distinguished between strong gain, weak gain, weak loss, and strong loss based on: i) the level of use of ecosystem services of each stakeholder group, which is related to distributive equity; ii) their ability to manage the services they use, which is related to procedural equity; and iii) the increase or decrease in the ecosystem services they use in each scenario. Strong gain was considered when the stakeholder group used and/or managed at least two ecosystem services that improved in that scenario. Strong loss was considered when all or most of the services used by the stakeholder group decreased in that scenario. Weak gains or losses were situations with both improvements and declines in the services used by the stakeholder group, with a slight overall increase (weak gain) or decline (weak loss). A rule-based sketch of this classification is given below.

Ecosystem service supply under alternative land management scenarios

We found large variation in the effects of land management scenarios on ecosystem service supply (Figure 3). Under the Current situation (CURSIT), we observed a low supply of supporting services and of most cultural and regulating services, but intermediate supply levels for provisioning services in comparison to the other scenarios. Conservation & Restoration (CONRES) achieved the largest supply of cultural services and of most supporting and regulating services (excluding soil formation, available phosphorus and climate regulation) but a low supply of provisioning services in relation to other scenarios. This was the scenario with the largest ecological gains (Table 3). Intensive agriculture (INTAGR) contributed the least to the supply of supporting services and of most cultural and regulating services (excluding tree carbon storage), and the most to the supply of food production. This was the scenario with the fewest ecological benefits (Table 3). Conservation & Production (CONPRO) contributed an intermediate supply of most ecosystem services in comparison to the alternative scenarios, maximizing climate regulation and minimizing fibre production. Finally, Rural abandonment (RURABA) contributed the most to soil formation, available phosphorus and fibre production but very little to the remaining ecosystem services.

Bundles of ecosystem services across scenarios

The redundancy analysis showed significant relationships between ecosystem services across land management scenarios (supplementary Table S1). In all scenarios, more than 85% of the total variance was explained by the first three axes, with the first axis contributing more than 40% of the total variance and the second axis adding at least 28% more (supplementary Table S1). In terms of ecosystem services, the first axis separated water quality from the rest of the ecosystem services, and the second axis separated a group of regulating services from cultural and other services (Figure 4). Regarding land management, the first axis was related to increasing poplar groves and irrigated cereal crops from right to left, while the second axis clearly separated dry cereal crops from riparian forests, accompanied by fruit orchards and abandoned croplands in some scenarios.
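Returning to the winners/losers classification described at the start of this section: it is essentially a small decision rule. A hedged R sketch of one possible encoding (this is our reading of the criteria; the thresholds, e.g. treating "all or most decreased" as "none improved", are assumptions rather than the authors' stated rules):

```r
# Classify a stakeholder group under a scenario from the changes (in %) of the
# ecosystem services it uses and/or manages.
classify_group <- function(delta) {
  n_up   <- sum(delta > 0)
  n_down <- sum(delta < 0)
  if (n_up >= 2 && n_down == 0) return("strong gain")  # >= 2 used/managed services improve
  if (n_up == 0 && n_down > 0)  return("strong loss")  # (almost) all used services decline
  if (sum(delta) > 0) "weak gain" else "weak loss"     # mixed, slight overall trend
}

classify_group(c(12, 30, 5))     # "strong gain"
classify_group(c(-20, -15, -8))  # "strong loss"
classify_group(c(10, -4))        # "weak gain"
```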
We found two ecosystem service bundles consistent across the five scenarios (Figure 4 and supplementary Figure S4). The first bundle was composed of food caloric content, nitrogen availability, soil carbon storage and biological pest control. In addition, this bundle included additional services in different scenarios, such as climate regulation and habitat quality in CURSIT; climate regulation and fibre production in CONRES; climate regulation, habitat quality and soil formation in INTAGR; and climate regulation in RURABA. On axis 1, the land use types most associated with this first bundle were poplar groves and, to a lesser extent, irrigated cereal crops, whereas on axis 2 it was positively associated with dry cereal crops and negatively associated with riparian forests (note that in CONRES this bundle was mostly explained by axis 1 alone). The second bundle included all four cultural services and tree carbon storage in all five scenarios; however, it was slightly more scattered in INTAGR, where it also included food economic yield, and in RURABA. On axis 2, this bundle was mostly associated with riparian forest and, to a lesser extent, with abandoned crops and fruit orchards, while negatively associated with dry cereal crops. It was only partially explained by axis 1 in CONRES. Surprisingly, food economic yield was not associated with food caloric content in any of the scenarios. In addition, water quality was always unrelated to other services and partially associated with abandoned croplands and fruit orchards. Other ecosystem services were associated with different bundles in each scenario but did not show a consistent pattern across the five scenarios and, thus, are not further discussed.

Winners and losers of land management scenarios

The land management scenarios had different effects on the total supply of ecosystem services in the study area, with important consequences for the main stakeholders identified (Figure 5). In relation to an optimal situation where all ecosystem services would be supplied at high levels, we observed that under the Current situation (CURSIT) both the primary sector and recreation providers have weak losses, because the services that they depend upon are in an average situation. However, the recreation users and institutions have strong losses in the CURSIT scenario because the possibilities for individual activities are low and there is little supply of regulating services. Comparing CURSIT with the alternative scenarios, we observe that in the CONRES scenario the primary sector has a weak loss, as some of their cultivated land is turned back to riparian forest. All the other groups have strong gains, as the services they use and manage increase. In the INTAGR scenario, the primary sector has strong gains because provisioning services are promoted, and the recreation users have weak gains due to the slight increase in cultural ecosystem services. However, the recreation providers have strong losses because water quality deteriorates, and institutions have weak losses because of the decrease in the regulating services that they manage.

(Figure 3 caption, fragment: the area covered by each scenario is arbitrary, i.e. it depends on the sorting of ecosystem services displayed, and should not be used for comparison; see the main text for a full description of the scenarios and Table 2 for a summary.)

(Figure 4 caption, fragment: see Table 2 for a summary and supplementary Figure S4 for RDA scaling 2; note that CURSIT and INTAGR show axis 2 reversed from CONRES, CONPRO and RURABA.)
In the CONPRO scenario, all stakeholder groups have weak gains because most services increase in relation to current conditions and there are no major decreases in service supply. In the RURABA scenario, both the primary sector and recreation providers have strong losses because of the decrease in both provisioning and cultural services. The recreation users and the institutions have weak losses because some of the regulating services that they use and manage, respectively, slightly increase while others decrease (Table 4). In particular, our results show that while the primary sector (e.g. farmers) would benefit from the most productive scenarios (i.e. the INTAGR and CONPRO scenarios), the rest of the stakeholder groups (i.e. recreation providers, recreation users and institutions) would benefit from the scenarios promoting some level of active ecological restoration (i.e. the CONRES and CONPRO scenarios). Interestingly, we observed that all stakeholders would lose in RURABA. These results indicate that CONPRO can satisfy the demands of the main stakeholder groups of our study area more equally, while CONRES and INTAGR entail clear winners and losers (i.e. farmers versus the other stakeholder groups) (Figure 5).

Discussion

We analysed the effects of alternative management strategies on ecosystem service supply and stakeholder benefits using scenario analysis. In particular, we identified bundles of ecosystem services that are maintained or changed across scenarios, and explored the social consequences of those scenarios for different stakeholder groups in terms of winners and losers. Our results highlight the importance of combining both ecological and social aspects to inform land planning scenarios that promote multifunctional and inclusive landscapes for different sectors of society (Fischer et al. 2017; Martinez-Sastre et al. 2017).

Understanding patterns in ecosystem services bundles

We found two bundles of ecosystem services consistent across the five scenarios, which highlights the stability of such bundles. The first bundle was related to less intensive food supply, as it combines provisioning (i.e. caloric content) and regulating services. The second bundle was related to outdoor activities around riparian forests. Similar bundles have been identified in other studies (i.e. agro-service and experiential service bundles, respectively; Zoderer et al. 2019). The existence of stable bundles supports the importance of management policies that acknowledge the role of multiple ecosystem services working synergistically, rather than focusing on isolated services regardless of their dependencies (Raudsepp-Hearne et al. 2010). In practice, this means that management actions directed at a particular service could also affect other services of the shared bundle, and hence could have indirect repercussions on the stakeholders using those related services. In turn, our results also show that additional services can be gained or lost to the bundle depending on the management practices. For example, habitat quality and climate regulation would be lost from the current food supply bundle in Conservation & Production (CONPRO) due to the farming of abandoned croplands. However, in that scenario we could identify a potential third bundle composed of habitat quality, soil formation, phosphorus availability and fibre production.
Our results also show consistent trade-offs in the supply of ecosystem services between management practices preserving the riparian forest and intensive agriculture. These trade-offs could be reduced by applying soil conservation agriculture measures such as restoring riparian forest alongside the river, preserving hedgerows, reducing the use of fertilizers and pesticides, and avoiding tilling in fallows (Pretty 2008; Tscharntke et al. 2021). These measures are especially critical in land use types covering larger extents, as these contribute more to the service supply at the landscape scale (Felipe-Lucia et al. 2014). Besides, to promote a more balanced set of ecosystem services, genetic diversity, local knowledge and other cultural services could be enhanced by restoring public paths between farms and along the river, recovering local fruit and fish varieties, and fostering active agro-tourism across the valley, as suggested in Conservation & Production (CONPRO). For example, less intensive farming of dry cereal crops could contribute to the conservation and bird-watching of threatened steppe birds such as the Great bustard (Otis tarda) (De Frutos et al. 2015).

Our analyses were based on the most comprehensive dataset available to show the variety of responses of land-use change on a large number of ecosystem services, identifying the potential for synergies and trade-offs among ecosystem service bundles. However, future studies could focus on those services that have been shown to be most important for the different stakeholders and the functioning of the study area, or that are more prone to variation under particular management options. In turn, methods for bundle analysis should be further developed to objectively identify the ecosystem services belonging to the same bundle, regardless of the spread of those services within and across the bundles. For example, the outdoor activities bundle in INTAGR and RURABA was scattered but still formed a bundle based on RDA scaling 2. Further, a deeper analysis of the stakeholders associated with a bundle could help detect potential indirect effects, in terms of gains or losses, derived from the management of one or more services within the bundle (Baró et al. 2017; Quintas-Soriano et al. 2019; Zoderer et al. 2019). As in any study, the selection of indicators might compromise the interpretation and extrapolation of our results to other areas where these indicators are not available. Although other studies have reported similar bundles in distinct study sites (Raudsepp-Hearne et al. 2010; Queiroz et al. 2015), the identified bundles could vary if a different set of ecosystem services had been explored. Another potential limitation of our study is that the slow recovery of natural vegetation in Mediterranean areas, especially of riparian vegetation under increasing droughts, might not reflect the natural recovery following rural abandonment in more humid areas. Therefore, we advise land managers and decision makers to interpret our results with care and to consider the uncertainties associated with the chosen indicators and timescales.

Scenarios for multifunctionality and equity

We found that, despite existing trade-offs, all alternative scenarios analysed increased the supply of ecosystem services, meaning that the Current situation (CURSIT) is under-supplying ecosystem services relative to its potential in the study area.
Interestingly, we found that Rural abandonment (RURABA) only enhanced a few services but decreased the majority, emphasizing the importance of proactive landscape management instead of simply 'abandoning rural areas to their fate' if we are to avoid losses of ecosystem services at the landscape scale (Posthumus et al. 2010; Rouquette et al. 2011). This result highlights how current trends in rural abandonment in many European (e.g. Portugal, Germany, Greece, Romania; ESPON 2017) and North American (Li and Li 2017) countries are an immediate threat to ecosystem service supply (Bruno et al. 2021). Scenario modelling can contribute to avoiding further losses by forecasting the consequences of those changes. In order to inform landscape management, it is thus important that scenario analyses incorporate intermediate solutions together with situations of complete change (Arkema et al. 2015), such as landscape intensification, rural abandonment and active ecological restoration. Additional considerations should include the availability of funding and land to implement those actions (Comín et al. 2018).

Our study also illustrates how identifying the most suitable alternative management scenario depends not only on the preferred ecosystem services to be enhanced, but also on the interest of decision-makers in distributing service benefits equally across stakeholder groups. For instance, in our case study, the Intensive agriculture (INTAGR) scenario would be most adequate to increase provisioning services, but it would only strongly benefit the primary sector. On the other hand, the Conservation & Restoration (CONRES) scenario would maximize most ecosystem services and benefit most stakeholder groups, but at the expense of losses to the primary sector due to a reduction in provisioning services. The latter scenario would make it difficult to maintain the local population, given that most inhabitants of the study area are farmers (Felipe-Lucia et al. 2015a). Therefore, in order to supply and distribute a more balanced set of ecosystem services, an intermediate land use management strategy would be more appropriate, as was also found in other areas in Spain. In our case study, this could be achieved in the Conservation & Production (CONPRO) scenario, which promotes provisioning services while preserving and enhancing cultural, regulating and supporting services. Such a combination is key to nurturing a more equal distribution of ecosystem services across the main stakeholder groups. Therefore, decision-making should not only be informed by which scenarios contribute more to overall ecosystem services (multifunctionality), but also by who benefits from these services (Martinez-Sastre et al. 2017). Incorporating the analysis of power asymmetries among stakeholders is thus critical to balance the dominance of particular interests over common goals (Berbés-Blázquez et al. 2016). Multifunctionality research and metrics are rapidly evolving in sustainability sciences, from considering multiple ecosystem functions to services and stakeholders at the landscape scale (Manning et al. 2018; Hölting et al. 2019), but they still need to go one step further to incorporate issues of equity in the distribution of ecosystem services. Previous research has shown that different aspects of equity (i.e. procedural, distributive and recognition) are important to ensure access to ecosystem services (Vallet et al. 2019; Zafra-Calvo et al.
2019), and that the relations between stakeholders at multiple spatial scales shape access to ecosystem services (Martin-Lopez et al. 2019). Our results support decision-making that takes into account the needs of different stakeholders by clearly indicating the winners and losers of alternative management scenarios. In this way, decision-makers are informed of both the ecological and social consequences of policies and are provided with alternatives to balance inequalities derived from land management strategies. In particular, our scenario Conservation & Production (CONPRO) contributes weak gains to all four main stakeholder groups without placing any of them in a vulnerable situation as losers of the decision-making process. Therefore, this scenario could be used as a strategy to promote equity in the access to ecosystem services among the main stakeholder groups of the studied area. On the contrary, our results show that other scenarios would favour some stakeholder groups while disfavouring others, hence causing inequalities in the access to ecosystem services. Scenario analysis is an excellent tool to identify long-term effects of landscape planning, but it needs to incorporate stakeholder analyses to understand the different facets of landscape management if we are to design scenarios promoting truly sustainable landscapes, i.e. landscapes that are inclusive of different sectors of society and that offer equal opportunities of access and benefit to all of them. Our approach can thus guide further research aiming at plural valuation (Jacobs et al. 2020) for sustainability by combining the assessment of multifunctionality and equity associated with alternative management scenarios.

Management policies for rural areas

High-level agricultural policies, such as the Common Agricultural Policy (CAP) in Europe, should be able to support multifunctional landscapes. However, despite the promotion of new greening measures, the CAP is not having the expected positive benefits (Pe'er et al. 2019) and seems to still be driving both agricultural intensification and rural depopulation in many European countries (Martinez-Sastre et al. 2017). In our case study, multifunctional landscapes would be achieved through the scenario Conservation & Production (CONPRO), but its implementation would only be feasible if the suggested reforms of the current CAP are followed, such as supporting public goods, biodiversity conservation and active restoration, together with participatory and integrative landscape-scale planning (Pe'er et al. 2020). In addition, future research should continue to investigate the limits of ecosystems' ability to supply services within a socially just space (Raworth 2017) if we are to inform environmental management policies and design realistic scenarios for rural areas. Because of the multiplicity of policies applicable in rural areas, it can be complicated to simultaneously comply with all of them at the local scale (Baur 2020). Adaptive management systems, tailored to particular ecological and societal conditions and integrated within multilayered governance systems, should be promoted to ensure coherence in the implementation of policies at different spatial scales (Nagendra and Ostrom 2012; Hölting et al. 2019; Winkler et al. 2021). The development of flexible institutions open to public participation is essential to foster the learning and adaptation needed in the face of new situations.
For example, in our scenario Conservation & Production (CONPRO), polycentric governance could cope with conflicts derived from new management practices in rural areas, such as the combination of traditional agricultural practices with increasing agro-tourism (Castro et al. 2011; Nagendra and Ostrom 2012).

Conclusion

Scenario planning based on ecosystem services is a useful tool to forecast the effects of alternative landscape management on ecological and social variables. Studies need to consider a varied range of ecosystem services, at least in the initial phase, in order to be comprehensive, identify synergies and trade-offs, and detect the existence of ecosystem service bundles, as we do here. We found evidence of two consistent bundles of ecosystem services across different scenarios (related to less intensive food supply and outdoor activities), and identified the ecosystem services that are lost or added to the bundle depending on the scenario's management regime. Our results highlight the importance of considering ecosystem services in land management to avoid the loss of potential services, and of including stakeholder analyses to identify the winners and losers of the alternative management options. The combination of both types of information (i.e. social and ecological) is crucial to achieve truly sustainable landscapes, which maximize the number of services (multifunctionality) and of stakeholder groups benefiting from them (equity). In our case study, and probably in many other similar contexts, an intermediate management scenario preserving both the conservation of natural resources and the local productive uses of the ecosystem might be the best compromise in the long term. Our approach can be used as a method to assess the sustainability of future scenarios in rural areas, based on the analysis of procedural and distributive equity of ecosystem services, taking into account the synergies and trade-offs derived from the alternative management scenarios.

Disclosure statement

No potential conflict of interest was reported by the author(s).
A Comparative Characterization of the Microstructures and Tensile Properties of As-Cast and Thixoforged in situ AM60B-10 vol% Mg2Sip Composite and Thixoforged AM60B

The microstructure and tensile properties of the thixoforged in situ Mg2Sip/AM60B composite were characterized in comparison with the as-cast composite and thixoforged AM60B. The results indicate that the morphology of the α-Mg phases, the distribution and amount of the β phases, and the distribution and morphology of the Mg2Si particles in the thixoforged composite are completely different from those in the as-cast composite. The Mg2Si particles block heat transfer and prevent the α-Mg particles from rotation or migration during reheating. Both the thixoforged composite and the thixoforged AM60B alloy exhibit virtually no porosity in the microstructure. The thixoforged composite has the highest comprehensive tensile properties, with an ultimate tensile strength (UTS) of 209 MPa and an elongation of 10.2%. The strengthening mechanism of the Mg2Si particles is the additive or synergetic effect of combining the load transfer mechanism, the Orowan looping mechanism and the dislocation strengthening mechanism. Among them, the load transfer mechanism is the main mechanism, and the latter two are minor. Particle splitting and interfacial debonding are the main damage patterns of the composite.

Introduction

Magnesium alloys are the lightest commercially used metals; they offer excellent castability, machinability, low density, high specific strength and stiffness, and electromagnetic shielding characteristics, and are thus attractive for applications in the transportation industry, electronic products, portable tools, sporting goods and aerospace vehicles [1-3]. Unfortunately, with the rapid expansion of magnesium applications, magnesium alloys face the challenge of meeting the requirements of strength, ductility, fatigue and creep properties at high temperature. Metal-matrix composites possess many advantages over monolithic materials, such as high-temperature mechanical strength, good wear resistance and dimensional stability, and they have been widely used in the aircraft, space, defense and automotive industries [4,5]. Thus, the fabrication of magnesium-based composites is a maneuverable and reliable way to overcome these shortcomings while taking full advantage of magnesium alloys [6]. There are several methods to fabricate magnesium-based composites, such as self-propagating high-temperature synthesis (SHS), directed reactive synthesis (DRS), mechanical alloying (MA), reaction spontaneous infiltration (RSI), blend press sintering, disintegrated melt deposition and in situ synthesis [7-15]. Among these, the in situ synthesis process produces the desired reinforcements via reaction synthesis in the molten alloy by adding a grain refiner during traditional casting; it is a promising technique with good maneuverability and reliability for industrial manufacturing because it does not require special treatment procedures or equipment. In the authors' previous investigations, a uniform distribution and dispersion of fine-grained in situ Mg2Sip/AM60B composite was achieved via traditional gravity casting by the addition of 0.5 wt% Sr and 0.2 wt% SiCp [16]. Modified Mg2Si particles with a grain size of 20~40 μm were uniformly distributed in the matrix.
As a novel metal-forming process, thixoforging combines the advantages of both casting and forging technologies and significantly decreases or even eliminates porosity [2,17]. Consequently, superior tensile properties of the thixoforged components, resulting from the pore-free fine microstructure, can be achieved. Moreover, the amount and size of the β phase (Mg17Al12), which is harmful to the tensile properties of the alloy, can also be reduced. Therefore, it can be supposed that fabrication of a thixoforged in situ Mg2Sip/AM60B composite is a promising way to further expand the applications of the AM60B alloy. However, limited information on thixoforged in situ Mg2Sip/AM60B composite is available in the open literature.

This article presents the progress of ongoing research work on thixoforged in situ Mg2Sip/AM60B composite. The microstructure, tensile properties and fracture behavior of the in situ Mg2Sip/AM60B composite are studied. The results are compared with those of identical as-cast composites and the thixoforged AM60B alloy in order to elucidate the strengthening mechanisms of the thixoforging technique and the Mg2Si particles.

As-Cast Preparation

The in situ Mg2Sip/AM60B composites used in this work were prepared by the traditional gravity casting route using commercial AM60B magnesium alloy, pure Mg (99.9 wt%) and Al-30Si (all provided by the Changfeng factory in Lanzhou, China). Homemade Mg-30Sr master alloys and Mg-25SiCp press cake (mixture powders) were used as a modifier and grain refiner for the Mg2Si phase and the α-Mg phase, respectively. The AM60B alloy was also prepared by this method, using commercial AM60B magnesium alloy with the addition of 0.2 wt% SiCp. The chemical compositions of those materials are listed in Table 1.

A quantity of AM60B alloy, pure Mg and Al-30Si master alloy was melted in an electric resistance furnace at 790 °C and then modified with 0.5% Sr (using the Mg-30Sr master alloy). The melt was then isothermally held for 20 min, and 0.2% SiCp (using the pressed cake of Mg-25SiCp mixture powders) was introduced. Finally, the resulting melt was degassed using C2Cl6 and poured into a steel mold with a cavity of ϕ 50 mm × 500 mm after it had been held for 10 min. Thus, the as-cast composite with 10 vol% Mg2Sip was obtained (as-cast in situ AM60-10 vol% Mg2Sip composite). The melting process of the AM60B alloy was similar to that of the composite: it was melted at 790 °C and refined with 0.2% SiCp. Because the Mg2Si phase is absent, Mg-30Sr did not need to be added to this melt. Then, after the melt was degassed by C2Cl6, it was poured into the same mold. A covering agent, RJ-2 (Hongguang Company, Shanghai, China), designed for magnesium alloys was used to protect the melt from oxidation during both melting processes.

Thixoforging Process

For the thixoforging, ingots with dimensions of ϕ 42 mm × 30 mm were cut from the as-cast rods and then reheated in a resistance furnace under argon gas protection at a semisolid temperature of 600 °C for 60 min. The obtained semisolid feedstocks were quickly transferred into a die with a cavity of ϕ 50 mm × 20 mm and then thixoforged using a hydraulic press. The preheating temperature of the die was 300 °C; the applied punch velocity and pressure were 60 mm/s and 192 MPa, respectively. The holding time was 20 s. By repeating the above experimental procedures, the thixoforged composite (containing 10 vol% Mg2Sip) and the thixoforged AM60B alloy were obtained.
Microstructural Analysis

The metallographic specimens were cut from the center region of each product, and the cross-sections were polished by standard metallographic techniques. Subsequently, they were chemically etched using a 4% nitric acid ethanol solution and observed on an optical microscope (OM, Nikon Instruments, Shanghai, China) and a scanning electron microscope (SEM, NEC Electronics Corporation, Tokyo, Japan). The compositions of the primary α-Mg phase in the microstructures were examined by energy dispersive spectroscopy (EDS, NEC Electronics Corporation, Tokyo, Japan) using spot scan mode in the SEM. The average of at least five primary α-Mg phases was taken as the composition of each specimen. Porosity was evaluated by measuring optical micrographs of the un-etched metallographic specimens. The related images were analyzed with Image-Pro Plus 5.0 software (Media Cybernetics Company, Silver Spring, MD, USA); the area percentage of porosity was quantitatively examined, and the results were based on the average of three images.

Tensile Testing

The mechanical properties of the materials were evaluated by tensile testing, which was performed at ambient temperature on a universal material testing machine with a loading velocity of 1 mm/s. Samples for tensile testing with a cross-section of 1.2 mm × 2.5 mm and a gauge length of 10 mm were machined by a Computer Numerical Control (CNC) wire-cut machine (Taizhou Dengfeng CNC Machine Company, Taizhou, China) from the center of each product. The tensile properties of each product, including ultimate tensile strength (UTS) and elongation to failure (Ef), were obtained based on the average of at least five tests. Some typical fracture surfaces and side views of the fracture surfaces were observed on the SEM and OM, respectively, to ascertain the nature of the fracture mechanisms.

Microstructural Analysis

Figure 1 presents the microstructures of the as-cast composite, the thixoforged composite and the thixoforged AM60B revealed by OM and SEM. The microstructure of the as-cast composite mainly consists of primary α-Mg dendrites, Mg2Si particles and eutectic phases (Figure 1a). The size of the primary α-Mg dendrites is around 70~90 μm, which is relatively large compared with the values reported in the literature [18,19]. This is primarily due to the somewhat slow solidification rate in the current work: the diameter of the mold in this work is 50 mm, whereas it is only 16 mm in the literature. The primary Mg2Si particles, with a size of 15~30 μm, were located at the primary α-Mg dendrite boundaries. The SEM result (Figure 1b) shows that the β-Mg17Al12 eutectic phase (bright contrast) belongs to divorced eutectics and tends to form a network surrounding the α-Mg phase (dark contrast). According to the Mg-Al binary phase diagram [20], AM60B is a hypoeutectic alloy, since its Al content is far from the eutectic point. Under this circumstance, the amount of residual liquid should be very low when the eutectic reaction occurs, existing in thin layers between the primary α-Mg dendrites and dendrite arms. Then, the eutectic α phase preferentially grows directly on the primary α-Mg phase without renucleation, and only the eutectic β and eutectic Mg2Si phases are left in the interdendritic regions. The previously formed primary Mg2Si particles are pushed to the interdendritic regions by the growing interface.
As shown in Figure 1c, the microstructure of the thixoforged composite is composed of primary α-Mg particles, secondarily solidified structures and Mg2Si particles. The morphology of the primary α-Mg particles and the distribution of both the eutectic β phases and the Mg2Si particles are completely different from those present in the as-cast composite. The primary α-Mg particles coarsen and connect to each other, their size being approximately 90~120 μm, which is significantly larger than that of the primary α-Mg dendrites in the as-cast microstructure. The Mg2Si particles coarsen as well, their size being about 30~40 μm, and their sharp edges and corners become blunt. However, the Mg2Si particles in the thixoforged coupon not only surround the primary α-Mg particles; some of them are also located inside the primary α-Mg particles. The size and amount of the β phase clearly decrease in the thixoforged specimen, where the β phase is located at the boundaries as well as inside the primary α-Mg particles (shown in Figure 1d). Figure 1e presents the microstructure of the thixoforged AM60B alloy. It indicates that the primary α-Mg particles slightly coarsen and their outlines become indistinct (compare Figure 1f with Figure 1d). The secondarily solidified structures almost disappear and can only be found at some triple points. As shown in Figure 1f, the β phase size and amount of both the thixoforged composite and the thixoforged AM60B alloy are at a comparable level.

The morphological change of the α-Mg phase occurs during the reheating process. The thixoforged composites are subjected to a partial remelting treatment. During this process, the primary α-Mg dendrites transform into spheroidal primary α-Mg particles uniformly suspended in the liquid phase. During the subsequent thixoforging, the liquid solidifies to form secondarily solidified structures. Coarsening of the primary α-Mg grains in the thixoforged specimen should be attributed to the following two aspects. One is Ostwald ripening and the coalescence of nearby primary α-Mg grains during reheating, driven by the minimization of the interfacial energy [21]. Grain growth by coalescence through grain boundary migration is dominant for short times after the liquid is formed, and Ostwald ripening is dominant for longer times [22]. This is a common phenomenon in thixoforged materials [23-25]. The Mg2Si particles and the liquid might be engulfed by the merged primary α-Mg grains, so Mg2Si particles or the eutectic β phase would be distributed inside the primary α-Mg grains in the thixoforged specimens. The other is the solidification behavior of the thixoforged materials. Table 2 gives the compositions of the α-Mg phase under these three processing conditions. It reveals that the Al concentration in the two thixoforged materials is significantly higher than that in the as-cast coupon. This is because the eutectic phase dissolves towards the primary α-Mg grains during reheating, which increases the Al concentration in the primary α-Mg grains and decreases it in the liquid. In this case, the formed secondarily primary α-Mg phase (to differentiate it from the primary α-Mg grains, the primary α-Mg phase solidified from the liquid is named the secondarily primary α-Mg phase) and the eutectic α-Mg phase should increase, accompanied by a decrease of the eutectic β phase (the Al element is a necessary constituent for forming the eutectic β phase). The secondarily primary α-Mg phase should preferentially grow directly on the surfaces of the primary α-Mg grains, and then the eutectic α-Mg phase also preferentially attaches on the
surfaces of the secondarily primary α-Mg phase, which leads to the primary α-Mg grains coarsening and connecting with each other.

Although the reheating temperature of 600 °C is lower than the eutectic point of Mg-Mg2Si (637.6 °C) [20], the primary Mg2Si particles and eutectic Mg2Si phases should partially melt due to the penetration of the liquid and the diffusion of Mg and Si atoms between Mg2Si and the liquid, especially at the sharp edges and corners. During the thixoforging, the melted Mg2Si (including primary and eutectic Mg2Si phases) grows as halos surrounding the nearly spherical Mg2Si particles. Therefore, the Mg2Si particles are somewhat coarser and more spherical in the thixoforged specimen than in the as-cast specimen.

With regard to the Mg2Si particles, two primary factors affect the semisolid microstructural evolution. Firstly, the Mg2Si particles, acting as a ceramic phase with low thermal conductivity uniformly distributed in the matrix, would block heat transfer from the edge to the center of the semisolid ingot [26]. This delays the heating, so that the phase transformation rate is reduced. It is known that the microstructure evolution closely depends on the phase transformation; therefore, coarsening from Ostwald ripening is suppressed. This would also simultaneously affect the composition of the primary α-Mg grains. On the other hand, the pinning effect of the Mg2Si particles at the primary α-Mg boundaries prevents the boundaries from rotation or migration [27]. Thus, the coalescence of contacting primary α-Mg grains through mergence would be suppressed. As a result, the size and Al solubility of the primary α-Mg grains in the thixoforged composite are slightly lower than those in the thixoforged AM60B alloy. Based on these standpoints, it is not difficult to understand the resultant microstructures under those different processing technologies.

Porosity Evaluation

Figure 2 reveals the porosity distribution in the polished specimens. Representative pores can be easily spotted in the as-cast composite, as indicated in Figure 2a. However, Figure 2b,c evidently shows that the thixoforged coupons are virtually free of gas and shrinkage porosities. In comparison with that of the as-cast composite (4.00%), the porosity percentage of the thixoforged composite is 0.15%, and that of the thixoforged AM60B is 0.12%.

The porosity elimination in the thixoforged ingots should result from the following reasons. The first is the high applied pressure during solidification and the low filling velocity during mold filling. The high applied pressure reduces the shrinkage porosity by squeezing the liquid metal into the last region of the casting to solidify; thus, the feeding ability against solidification shrinkage is enhanced. The purpose of the low filling velocity is to effectively avoid air entrapment. The second is the inherent characteristic of semisolid forming [2,5,17]. The spherical morphology of the primary α-Mg grains is more favorable to liquid penetration for feeding [28]. The proper liquid fraction of the thixoforging, which is lower than that of traditional casting, should reduce the probability of solidification shrinkage and effectively avoid entrapped gas as well.
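The porosity percentages above are area fractions measured on un-etched micrographs. A minimal sketch of the underlying computation in R, assuming the micrographs have already been binarized into logical matrices (TRUE = pore pixel); the synthetic images below merely stand in for the Image-Pro Plus measurements:

```r
# Porosity (%) = pore area / total area x 100, averaged over three replicate images.
set.seed(1)
synthetic_image <- function(p) matrix(runif(1e4) < p, nrow = 100)  # toy binary micrograph

images <- list(synthetic_image(0.040), synthetic_image(0.042), synthetic_image(0.038))
porosity_pct <- sapply(images, function(img) mean(img) * 100)  # area fraction per image

mean(porosity_pct)  # ~4%, on the order of the as-cast composite reported above
```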
Tensile Properties

Figure 3 gives the tensile properties of the three materials. It can be clearly seen from Figure 3 that the UTS of the thixoforged composite is 209 MPa, which is significantly higher than that of the as-cast composite (108 MPa) and the thixoforged AM60B alloy (146 MPa). However, the elongation of the thixoforged AM60B is 13.3%, the maximum value among the three materials. The thixoforged composite has the highest comprehensive tensile properties, with a UTS of 209 MPa and an elongation of 10.2%.

Figure 4 shows typical fractographs of the three materials. The fracture surface of the as-cast composite is characterized by porosity features (Figure 4a), which are somewhat brittle in nature. As mentioned above, porosity can be clearly observed in the micrograph of the as-cast composite; it should be generated in the last-solidified zones, i.e., the eutectic structures between the primary α-Mg dendrites. The porosities serve as the initiation points of cracks in the as-cast specimens. The cracks then grow and propagate along the eutectic structures during tensile testing, as depicted in Figure 5a. The failure is mainly attributed to intergranular fracture and is partially caused by the segregation of the brittle eutectic β phase at dendrite boundaries. As described in the previous section, the thixoforged composite is virtually free of gas and shrinkage porosities. Correspondingly, the porosity characteristics on the fracture surface of the thixoforged composite disappear and are replaced by small dimples (Figure 4b). Plenty of fractured Mg2Si particles can be observed on the fracture surface. The crack propagation path changes from along the eutectic structures to across the primary α-Mg grains (Figure 5b, marked by arrows); the failure shifts to a transgranular fracture mode. The resultant fractographs and side views of the fractured surfaces of both the as-cast and thixoforged composites are in good agreement with the tensile property data shown in Figure 3. It is well established that the tensile properties of materials are determined by their microstructures. Therefore, it is concluded that the elimination of porosity, the decrease of the eutectic β phase and the enhancement of solution strengthening are responsible for the superior tensile properties of the thixoforged composite.
Figure 4c illustrates the fracture surface of the thixoforged AM60B alloy, which is characterized by small dimples and flat facets. The dimples result from localized microvoid coalescence due to dislocation motion or grain boundary sliding. The microvoids grow and connect, eventually leading to the creation of cracks. The flat facets are caused by cracks moving through the primary α-Mg grains. Cracks propagate between the primary α-Mg grains (marked by A) and occasionally across the primary α-Mg grains (marked by B) in some local zones (Figure 5c), which is consistent with the fracture surface. The fracture of the thixoforged AM60B belongs to a mixture of transgranular and intergranular modes. The inferior UTS of the thixoforged AM60B should be ascribed to the absence of a reinforcing phase to strengthen the α-Mg phase. A large amount of eutectic β phase is harmful to the UTS; however, reducing the amount of the β phase below a given value may also decrease the UTS, because the strengthening role of the β phase is diminished. The same reason accounts for the superior elongation of this alloy. Moreover, the adopted processing parameters (reheating time and temperature, mold preheating temperature, etc.) in this work are optimal for the thixoforged composite; they may not be optimal for the thixoforged AM60B alloy. It can be reasonably expected that the tensile properties of the thixoforged AM60B alloy will be further improved by adjusting the processing parameters. This will be discussed in further work.

Strengthening Mechanisms of the Mg2Si Grains

For the purpose of verifying the strengthening mechanism of the Mg2Si particles in the α-Mg matrix, some typical fracture surfaces were carefully observed at high magnification (Figure 6). For the composite in the as-cast condition, the Mg2Si particles can only be found in some local zones on the fracture surface. The Mg2Si particles are surrounded by the deformed α-Mg matrix and keep their original morphology (Figure 6a). During tensile testing, the pinning effect of the Mg2Si particles at the grain boundaries keeps the boundaries from sliding. The Mg2Sip/matrix interfaces are incoherent, due to the differences in crystal structures and lattice constants [29]. Therefore, local stress concentrations should preferentially be generated at the sharp edges and corners of the Mg2Si particles. As a result, the Mg2Sip/matrix interfaces are easily debonded under this local stress (marked by arrows in Figure 6a). Subsequently, the debonding areas extend and connect with the cracks, which initiate from the eutectic structure or porosity, eventually leading to the final fracture. Owing to the non-compact microstructure of the as-cast composite, the reinforcing effect of the Mg2Si particles on the matrix has not fully contributed to improving the tensile properties. Two kinds of failure behavior related to the Mg2Si particles can be found on the thixoforged composite fracture surface. One is the interfacial debonding of the Mg2Sip/matrix (Figure 6b). As mentioned above, the sharp edges and corners become blunt; therefore, the local stresses are uniformly distributed along the Mg2Sip/matrix interfaces and increase as the tensile stress increases. Microvoids should be generated in the surrounding matrix under this local stress; then the coalescence of the localized microvoids results in the debonding of the interface (Figure 6c). The other is the fragmentation of Mg2Si particles. There are two
ways for the Mg2Si particles to fracture: either parallel to the fracture surface of the whole specimen (Figure 6d) or broken into several parts (Figure 6e). The composite mainly contains two phases with very different mechanical behaviors: the α-Mg phase and the Mg2Si particles. While the soft magnesium alloy deforms plastically during tensile testing, the Mg2Si particles are rigid and deform only elastically. The reinforcing Mg2Si particles prevent the straining of the surrounding matrix. Thus, the composite is plastically non-homogeneous, because a plastic deformation gradient is imposed on the Mg2Sip/matrix interfaces. Therefore, a high stress concentration is generated at the interface and rises with increasing tensile strain. A previous investigation estimated that the stress concentration was two to four times higher than that in the α-Mg matrix [30]. When the stress concentration exceeds a critical value, it should even result in the fragmentation of the Mg2Si particles (Figure 6d,e). Figure 6f reveals that the Mg2Si particles split into several parts while there are no visible cracks in the surrounding matrix. Interfacial debonding and particle fragmentation are indications of the absorption of energy and the relaxation of local stress concentrations. Precisely because local stress concentrations form at the Mg2Si/matrix interface and are subsequently relaxed, the nucleation and growth of cracks in the surrounding matrix are delayed. Thus, the strengthening of the matrix by the Mg2Si particles should primarily be the result of the load transfer mechanism. Particle splitting and interfacial debonding are the main damage patterns of the composite.

The contribution of the Mg2Si particles to improving the mechanical properties should also be attributed to other mechanisms. One is the Orowan looping mechanism, which describes the interaction between dislocations and fine particles: the resistance of the reinforcing particles to the passage of dislocations arises from a balance between the force acting on the dislocation and the force coming from the line tension acting on both sides of the reinforcing particle [31]. However, this mechanism is only effective when the reinforcing particles are located within the grains. In the composite employed in this work, the Mg2Si particles are mainly located at the boundaries of the primary α-Mg grains and only occasionally inside them. Therefore, the strengthening effect due to the Orowan mechanism will be minor. The other is the dislocation strengthening mechanism, which results from the coefficient of thermal expansion (CTE) mismatch between the Mg2Si particles and the matrix: dislocations are created due to the relaxation of the thermal expansion mismatch between the reinforcing particles and the matrix, which may increase the dislocation density [32]. This effect can impede dislocation movement, also playing a very important role in strengthening the matrix.

Although the contribution of each strengthening mechanism to improving the mechanical properties has not been calculated separately, it is believed that an additive or synergetic effect probably occurs by combining several mechanisms. Among them, the load transfer mechanism should primarily take part in strengthening the matrix, while the Orowan looping mechanism and the dislocation strengthening mechanism should be minor.
Conclusions

(1) In comparison with the as-cast composite, the morphology of the α-Mg phases, the distribution and amount of the β phases, and the distribution and morphology of the Mg2Si particles in the thixoforged composite are completely different.

(2) The α-Mg dendrites evolve into spheroidal α-Mg grains uniformly suspended in the liquid phase during reheating. The liquid solidifies to form a secondarily solidified structure. The dissolution of the eutectic structure towards the α-Mg grains results in a decrease of the β phases and an increase of the Al concentration in the primary α-Mg grains. The β phases and Mg2Si particles are entrapped within the merged α-Mg grains. The coarsening of the α-Mg grains results from coalescence, Ostwald ripening and the subsequent solidification behavior.

(3) The Mg2Si particles block heat transfer, thereby delaying Ostwald ripening. The pinning effect of the Mg2Si particles prevents the α-Mg grains from rotation or migration, reducing the probability of α-Mg grain mergence. The resulting α-Mg grains in the thixoforged composite are slightly finer than those in the thixoforged AM60B alloy.

(4) The porosity elimination in the thixoforged component is attributed to the low filling velocity during mold filling, the high applied pressure during solidification, the enhanced feeding ability of spherical primary α-Mg grains and the low liquid fraction of the semisolid slurry.

(5) The UTS of the thixoforged composite is 209 MPa, which is significantly higher than that of the as-cast composite (108 MPa) and the thixoforged AM60B alloy (146 MPa). The thixoforged AM60B has the maximum elongation (13.3%). The thixoforged composite has the highest comprehensive tensile properties, with a UTS of 209 MPa and an elongation of 10.2%.

(6) The strengthening mechanism of the Mg2Si particles is the additive or synergetic effect combining the load transfer mechanism, the Orowan looping mechanism and the dislocation strengthening mechanism. Among them, the load transfer mechanism is the main mechanism, and the latter two are minor.

Figure 3. Tensile properties of the materials at room temperature. UTS, ultimate tensile strength.

Table 1. Chemical composition (in wt%) of the materials studied.

Table 2. Compositions of primary α-Mg dendrites or grains of these materials.
Geochemistry Characteristics and Paleoenvironmental Significance of Trace Elements in Coal and Coal Gangue in the Yangcheng Mining Area, Qinshui Basin

The geochemical characteristics of trace elements in Carboniferous–Permian coal and gangue in the Yangcheng mining area in the Qinshui Basin, a large Carboniferous–Permian coalfield in China, were studied by inductively coupled plasma mass spectrometry (ICP–MS), and their geological significance was discussed. The results show that the content of trace elements in late Paleozoic coal in the Yangcheng mining area is depleted, except for slight enrichment of Li. Except for Li, Co, and Mo, the content of trace elements in the gangue was higher than that in the coal. The content of rare earth elements in gangue (324.28 μg/g) is much higher than that in coal (66.22 μg/g). The rare earth element (REY) content in the coal of the Shanxi Formation (93.88 μg/g) is slightly higher than that of the Taiyuan Formation (66.19 μg/g). The mean values of δEu and δCe are 0.71 and 0.94, respectively. Except for the YIC3-1 curve of the Shanxi Formation, which is obviously convex and shows a positive Eu anomaly, the REY distribution patterns of the remaining samples are similar, showing the characteristics of light rare earth element (LREY) enrichment and heavy rare earth element (HREY) depletion. The Carboniferous–Permian coal-forming environment in the Yangcheng mining area was an anoxic-reducing, warm, humid, and brackish water sedimentary environment. The paleosalinity and paleotemperature of the Shanxi coal formation are higher than those of the Taiyuan Formation, which was more inclined toward a reducing environment. The provenance of the Carboniferous–Permian coal in the Yangcheng mining area is mainly acidic sedimentary rocks of the post-Archaean upper crust, mixed with a small amount of granite, alkaline basalt, and oceanic tholeiite. The tectonic setting of the provenance is mainly an active continental margin related to a continental island arc, mixed with an oceanic island arc and a passive continental margin setting.

INTRODUCTION

The Qinshui Coalfield is one of the six large Carboniferous–Permian coalfields in Shanxi Province and is located in southeastern Shanxi Province. The study of elemental geochemistry in coal is of great significance for the clean utilization of coal resources, the interpretation of the coal accumulation environment, and the comprehensive development of beneficially associated minerals.1−3 The migration and enrichment of trace elements often accompany the coal formation process, and the regular variation of trace elements is important for indicating the sedimentary environment and sediment source.4 At present, most researchers use the contents of V, Cr, Sr, Ba, Cu, Zn, Ni, Co, U, Th, B, and rare earth elements in coal and related geochemical discriminant indices to discuss the coal-forming environment.5,6 Some trace elements in coal are very stable, especially inactive elements such as La, Th, Ti, Zr, Sc, Co, and Ni, which are nontransferable and rarely affected by other geological processes during weathering, transport, and deposition. Their content variations are intrinsically related to the nature of the parent rock and the tectonic background of the provenance area. Many researchers in China and abroad have used trace element analysis productively to investigate the structural background and parent rock properties of source areas.
7,44,69,73 Previous work on the formation environment of the coal seams in the Yangcheng mining area has basically approached it from the perspective of sedimentation, sequence, and petrologic features.8−10 In this paper, the coal-forming environment, provenance characteristics, and tectonic setting of the source area of the Carboniferous−Permian coal seams and gangues in the Yangcheng mining area on the southern edge of the Qinshui coalfield are discussed from the perspective of coal geochemistry. This research provides basic reference materials for the development and utilization of coal resources in the Yangcheng mining area and a basis for understanding the enrichment mechanisms of trace elements in the Qinshui coalfield.

GEOLOGICAL BACKGROUND

The Qinshui Coalfield is one of the most important Carboniferous−Permian coal-producing bases in China.11 It is located in central eastern and southeastern Shanxi Province between Taiyue Mountain and Taihang Mountain in the east, strikes NNE−SSW, and is a large synclinal tectonic basin (Figure 1). The Yangcheng mining area is located on the southern edge of the Qinshui Coalfield. The main coal-bearing strata contain offshore sea−land transitional facies developed on Ordovician paleoweathered crust and mainly include the upper Carboniferous Taiyuan Formation and lower Permian Shanxi Formation. The coal-bearing strata are approximately 125 m thick and include coal seam layer No. 11; stable and minable seam layers No. 3 and No. 15; and local coal seam layers No. 4, No. 5, No. 6, No. 7, and No. 9. The total thickness of the coal seams is 12 m, the coal coefficient is 9.3%, and they have a flat stratum, simple structure, and shallow burial. The strata generally strike nearly east−west and dip nearly to the north, with a dip angle of 5−12°. The upper Carboniferous Taiyuan Formation is a coastal carbonate shelf sedimentary system.12 It is typically approximately 85 m thick and comprises coal, gray-black mudstone, siltstone, dark white sandstone, and limestone. The lower Permian Shanxi Formation is generally 40 m thick and contains transitional facies consisting of grayish-white medium- to fine-grained sandstone, grayish-black siltstone, and mudstone with coal seams. The middle and lower coal seams are stable and recoverable and are the main coal-bearing strata in Yangcheng13 (Figure 1).

The trace elements and rare earth elements of the samples were analyzed by inductively coupled plasma mass spectrometry (ICP−MS). The sample predigestion process was as follows: 50 mg of the sample below 200 mesh was weighed and placed into a poly(tetrafluoroethylene) dissolving bottle, and 6 mL of HNO3, 0.5 mL of HClO4, and 2 mL of HF were added successively.

Researchers commonly refer to yttrium (Y) together with the lanthanides as rare earth elements, REE+Y or REY for short. According to the calculated geochemical parameters of the rare earth elements (Table 2), the content of rare earth elements in the coal samples ranges from 19.2 to 179.22 μg/g (average 66.22 μg/g). The overall content is relatively low, significantly lower than the averages of 168.37 μg/g in the upper crust and 135.89 μg/g in Chinese coal.19 It is slightly lower than the global average of 68.47 μg/g and slightly higher than the US average of 62.19 μg/g.20
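The REY parameters used in the following sections (chondrite-normalized ratios, δEu, δCe, and the LREY/MREY/HREY subgroup sums) all derive from one normalization step. A minimal sketch of that computation follows; the chondrite reference values and the sample concentrations are placeholders, and δEu and δCe are computed with one common geometric-mean convention, which may differ in detail from the formulas used in this study.

```python
import math

# Chondrite reference values (μg/g); illustrative placeholders only --
# substitute the reference set actually used before interpreting results.
CHONDRITE = {"La": 0.310, "Ce": 0.808, "Pr": 0.122, "Nd": 0.600,
             "Sm": 0.195, "Eu": 0.0735, "Gd": 0.259, "Tb": 0.0474,
             "Dy": 0.322, "Y": 2.1, "Ho": 0.0718, "Er": 0.210,
             "Tm": 0.0324, "Yb": 0.209, "Lu": 0.0322}

# Three-fold grouping used in the next section (LREY / MREY / HREY)
LREY = ["La", "Ce", "Pr", "Nd", "Sm"]
MREY = ["Eu", "Gd", "Tb", "Dy", "Y"]
HREY = ["Ho", "Er", "Tm", "Yb", "Lu"]

def rey_parameters(conc):
    """conc: dict of measured REY concentrations (μg/g) from ICP-MS."""
    n = {el: conc[el] / CHONDRITE[el] for el in conc}   # normalization
    out = {
        "sum_REY": sum(conc.values()),
        "L": sum(conc[e] for e in LREY),
        "M": sum(conc[e] for e in MREY),
        "H": sum(conc[e] for e in HREY),
        # one common convention for the Eu and Ce anomalies:
        "dEu": n["Eu"] / math.sqrt(n["Sm"] * n["Gd"]),
        "dCe": n["Ce"] / math.sqrt(n["La"] * n["Pr"]),
        "(La/Yb)N": n["La"] / n["Yb"],
        "(La/Sm)N": n["La"] / n["Sm"],
        "(Gd/Yb)N": n["Gd"] / n["Yb"],
    }
    out["L/H"] = out["L"] / out["H"]
    return out

# Example: a hypothetical coal sample (μg/g), not a sample from this study
sample = {"La": 10.2, "Ce": 21.5, "Pr": 2.4, "Nd": 9.1, "Sm": 1.9,
          "Eu": 0.38, "Gd": 1.8, "Tb": 0.29, "Dy": 1.7, "Y": 8.9,
          "Ho": 0.33, "Er": 0.95, "Tm": 0.14, "Yb": 0.92, "Lu": 0.14}
print(rey_parameters(sample))
```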
The average content of rare earth elements in the gangue in the Yangcheng mining area is 324.28 μg/g, significantly higher than that in the coal samples (approximately 4.9 times higher) and higher than the global clay REY content (226.42 μg/g).20 The ash content of the gangue is high, and terrigenous detrital material is abundant. Zhao et al.21,22 and Huang et al.28 studied Carboniferous to Permian coal in northern China and found that the mass fraction of rare earth elements increased from marine to terrestrial environments. The sedimentary environment of the Shanxi Formation in the southern Qinshui Basin contained delta deposition, and that of the Taiyuan Formation was mainly a barrier island and lagoon-tidal flat environment.12

Following the classifications of previous workers,23 Shen et al.,24 and Hou et al.,25 rare earth elements are usually divided into three types: light rare earth (LREY), medium rare earth (MREY), and heavy rare earth (HREY).71 The range of LREY in coal is 10.18−168.01 μg/g (average 48.97 μg/g); MREY ranges from 2.80 to 40.09 μg/g (average 15.08 μg/g); and HREY ranges from 0.56 to 8.52 μg/g (average 3.13 μg/g) (Table 3). The range of LREY in gangue is 237.47−275 μg/g (average 261.22 μg/g); MREY ranges from 49.85 to 54.76 μg/g (average 52.85 μg/g); and HREY ranges from 9.76 to 10.47 μg/g (average 10.21 μg/g) (Table 3). The L/M, M/H, and L/H ratios in coal (Table 3) show the characteristics of LREY enrichment and HREY depletion, basically consistent with previous studies on Paleozoic rare earth elements in North China.26 However, these values are far lower than the arithmetic average L/H values of Chinese coal and of upper Paleozoic coal in North China (41.16 and 43.24, respectively), which may be closely related to terrigenous debris input when the coal seams formed.27 As HREY are more likely than LREY to dissolve and migrate under the action of seawater, and the coal-bearing strata in the study area represent a set of offshore coal-bearing transitional facies deposits developed on an ancient weathering crust of Ordovician rocks, HREY are relatively depleted in the coal. Similarly, the mean values of the L/M, M/H, and L/H ratios in gangue are 4.94, 5.18, and 25.57, respectively (Table 3), similar to those in coal; the difference is that LREY in gangue are more enriched than in the upper Paleozoic coal of the Yangcheng coal mine.

Distribution Characteristics of Rare Earth Elements.

LaN, LuN, SmN, and GdN are obtained after chondrite normalization, and their ratios reflect three enrichment types of rare earth elements in coal seams: the light rare earth enrichment type (LaN/LuN > 1, L-type REY), the medium rare earth enrichment type (LaN/SmN < 1 and GdN/LuN > 1, M-type REY), and the heavy rare earth enrichment type (LaN/LuN < 1, H-type REY).32,33 In addition, mixed enrichment types of light-medium rare earth elements (L-MREY) and medium-heavy rare earth elements (M-HREY) also occur.33 In the study area, except for the YIC3-1 and YUC3-1 coal seams, which represent M-HREY, and the YUC3-2 coal seam, which represents M-type REY, the other coal seams and the gangue are L-type REY (Table 3). The chondrite-normalized data are used to draw the REY distribution pattern map (Figure 4). As seen from Figure 4a, except for the YIC3-1 curve of the Shanxi Formation, which is clearly convex and shows a positive Eu anomaly, the distribution patterns of REY in the other samples are approximately the same, showing a wide and gentle "v-shaped" curve that is high on the left side and low on the right side.
The curves cross one another considerably, and some samples show an obvious upturn or downturn in the HREY segment. These changes were caused by the interaction of sea and land at that time: with the advance and retreat of seawater, the REY contents in the sediments changed markedly. The curve distribution patterns of the Taiyuan Formation and Shanxi Formation are similar (Figure 4b), but the crossing of curves in the LREY segment is less pronounced for the Taiyuan Formation, which may be related to its sedimentary environment at the time, such as a low depositional rate and weak hydrodynamic force. The distribution patterns of REY in gangue and coal are similar (Figure 4c), indicating that the sedimentary environment and provenance of REY in gangue and coal were basically the same. (La/Yb)N ranges from 0.52 to 58.88; except for the YIC3-1 and YUC3-1 curves ((La/Yb)N < 1) (Table 3), the curves are inclined to the right, which also indicates that LREY are enriched more than HREY. The (La/Sm)N values range from 0.24 to 9.97 (average 3.54) (Table 3), reflecting a high degree of internal fractionation within the LREY; the (Gd/Yb)N values range from 0.50 to 3.15 (average 1.36), indicating a low degree of internal fractionation within the HREY (Table 3).

The δEu values in the study area range from 0.43 to 1.20 (average 0.71) (Table 3); the negative Eu anomaly indicates that the rare earth elements in the coal are closely related to terrigenous clastic rocks.3,70 δCe in the study area ranges from 0.85 to 1.15 (average 0.94) (Table 3). The weak negative Ce anomaly indicates that the influence of seawater did not cause significant Ce loss in the coal. The Carboniferous−Permian system in northern China was basically deposited in a stable cratonic basin with a stable provenance supply, flat topography, and shallow water; therefore, the Ce content of most samples shows positive or slightly negative anomalies.29

4.3. Correlation Analysis between REY and Some Trace Elements.

The vertical distribution characteristics of some trace elements and REY were analyzed by systematic cluster analysis, and the trace elements were divided into three groups at a clustering distance of 25 (Figure 5). The vertical variation of different elements indicates differences in their provenance or sedimentary environment. The groups are as follows: Class I: ∑REY, Be, Sc, V, Cr, Cu, Ga, Rb, In, Sb, Cs, Ba, W, Ti, Bi, Th, U, Nb, Ta, Zr, and Hf; Class II: Li, Zn, Sr, Mo, Cd, and Pb; and Class III: Co and Ni (Figure 5). The cluster analysis tree (Figure 5) shows that Class I has a good correlation with REY; most of its members are lithophile elements, mostly related to rock-forming minerals and clay minerals, which is consistent with the results of Zhao et al.21 To a certain extent, this indicates that the rare earth elements in the coal seams mainly came from terrigenous sediments.

5.1.1. Paleosalinity.

Paleosalinity refers to the salinity in the sediments of a certain geological era and is widely analyzed using trace element contents and trace element ratios.35 The Sr/Ba ratio is a very effective proxy for paleosalinity: Sr/Ba > 1 indicates marine brackish water deposition, Sr/Ba < 0.6 indicates terrestrial freshwater deposition, and Sr/Ba between 0.6 and 1 indicates semibrackish transitional facies deposition.31
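A sketch of the Sr/Ba classification just stated, with the thresholds taken directly from the text; the function and its inputs are illustrative only.

```python
def salinity_from_sr_ba(sr, ba):
    """Classify paleosalinity facies from Sr/Ba (thresholds as cited above)."""
    ratio = sr / ba
    if ratio > 1.0:
        facies = "marine brackish water deposition"
    elif ratio >= 0.6:
        facies = "semibrackish transitional facies deposition"
    else:
        facies = "terrestrial freshwater deposition"
    return ratio, facies

# e.g., the study-area average Sr/Ba of 2.56 falls in the marine field:
print(salinity_from_sr_ba(2.56, 1.0))
```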
For the samples in the study area, Sr/Ba is between 0.28 and 8.66 (average 2.56), indicating mainly semibrackish transitional facies deposition. Combined with Figure 6, most points plot in the marine saltwater field, showing that the vast majority of the coal seams formed under the influence of the sea in a peat swamp environment of relatively high salinity, while a few coal layers formed on the delta plain were only weakly affected by seawater. This is consistent with the conclusion drawn by Shao et al.36 that the Taiyuan Formation in the Qinshui Basin formed in an offshore shelf and barrier-lagoon sedimentary environment and that the Shanxi Formation was deposited on a delta plain. The average Sr/Ba values of the Shanxi Formation and Taiyuan Formation are 2.98 and 1.08, respectively, indicating that the paleosalinity of the Shanxi Formation was higher than that of the Taiyuan Formation. This result differs from previous summaries, in which the paleosalinity of the seawater-influenced Taiyuan Formation coal is higher than that of the Shanxi Formation coal from continental deposits.12,16−18 The reason may be that the overlying water of the peat bog in the Shanxi Formation became shallower and its salinity increased as seawater gradually retreated and evaporation increased during coal-rock formation. To some extent, this indicates that the temperature during deposition of the Shanxi Formation was relatively high.

Studies have shown that Th is relatively stable in the low-temperature surface environment, does not migrate easily, and is readily enriched in clay minerals, while U is easily oxidized and leached and is not easily preserved.37 Th/U can therefore be used to distinguish the nature of the water medium:37 Th/U > 7 indicates terrestrial freshwater deposition; 2 < Th/U < 7 indicates semibrackish transitional facies deposition; and Th/U < 2 indicates marine brackish water deposition.37 The Th/U ratios of the coal rocks range from 1.05 to 6.0 (average 3.02). Combined with Figure 6, these results indicate a brackish to transitional water environment. Th/U fluctuates within a certain range, which may have been caused by small-amplitude, periodic oscillations of sea level together with a dry climate and strong evaporation, leading to rises and falls in paleosalinity. The average Th/U values of the Shanxi Formation and Taiyuan Formation are 3.01 and 3.47, respectively, also indicating that the paleosalinity of the Shanxi Formation was higher than that of the Taiyuan Formation.

5.1.2. Paleoredox Conditions.

Rare earth elements are often used as indicators of the sedimentary environment and accumulation processes because of their stable chemical properties and insensitivity to interference from various geological processes.39,74 δCe is sensitive and can effectively reflect the depth and redox state of the sedimentary water.30 δCe in the study area ranges from 0.85 to 1.15 (average 0.94) (Table 3). The weak negative Ce anomaly indicates that the depositional environment of the coal seams lay between reducing and weakly oxidizing. δCe/δEu is used to reflect the redox nature of the sedimentary environment.23 The δCe/δEu ratios of coal seams in the study area range from 0.90 to 1.99 (average 1.39); those of the Shanxi Formation vary from 0.90 to 1.81 (average 1.31), and those of the Taiyuan Formation range from 1.12 to 1.99 (average 1.66). The correlation between δCe/δEu and ∑REY is shown in Figure 7.
Most data points for the coal samples are greater than 1, indicating that the overall environment during coal deposition was reducing; a few samples from the Shanxi Formation are less than 1, indicating that their sedimentary environment was probably oxidizing and that the water column was shallow.38,39 Framboidal pyrite was observed under the scanning electron microscope in the No. 15 coal seam of the Taiyuan Formation, indicating that the coal-forming environment of the Taiyuan Formation was mainly reducing.40,41

Elderfield et al.42 and Li43 proposed the Ce anomaly value Ceanom = lg[3CeN/(2LaN + NdN)] and noted that Ceanom > −0.1 represents the enrichment of Ce, reflecting an anoxic-reducing water column, whereas Ceanom < −0.1 represents the loss of Ce, reflecting an oxidized water column. The average Ceanom value of the samples in the study area is −0.05. Combined with Figure 8, this reflects a stable reducing water environment conducive to the enrichment and preservation of organic matter. The Ceanom of the Shanxi Formation ranges from −0.12 to 0.11 (average −0.04), and that of the Taiyuan Formation from −0.12 to 0.02 (average −0.07). This shows that the Taiyuan coal-forming environment was more reducing than that of the Shanxi Formation (Figure 8).

Ce/La < 1.5 indicates an oxygen-rich environment; 1.5 < Ce/La < 2 indicates an oxygen-poor environment; and Ce/La > 2 indicates an anaerobic environment.44 The Ce/La ratios in the study area range from 1.46 to 4.25 (average 2.08). Most coal sample points in Figure 8 are greater than 1.5, indicating that the environment during coal deposition was, overall, also reducing. Only a few samples from the Shanxi Formation have values less than 1.5, indicating that their sedimentary environment was oxidizing and that the overlying water was relatively shallow. Figure 9 shows that the vertical variation trends of V/(V + Ni) and V/Cr are not completely consistent with the conclusions from Ni/Co in some layers. Owing to the influence of multiple factors, redox indices admit multiple interpretations, and trace element indices do not correspond perfectly to environmental changes; they should be combined with other geological evidence.45−47 If sedimentary rocks are heavily influenced by terrigenous detritus, their trace elements are not suitable for environmental analysis.48 Therefore, trace elements alone are not sufficient to infer the depositional paleoenvironment of the upper Paleozoic coal and gangue in the Yangcheng mining area. A weak negative correlation was found between δCeN and (La/Sm)N (R2 = 0.0987), indicating that the Ce anomaly retains information from the original sediments and that inferring the paleosedimentary environment from rare earth elements is reasonably reliable.49

5.1.3. Paleoclimate.

Climate plays a crucial role in coal formation, and the contents and ratios of elements in coal reflect, to a certain extent, the climatic conditions during coal formation.50 The ratio of the aridity-loving element Sr to the humidity-loving element Cu can be used as a parameter for studying paleoclimate change. Generally, an Sr/Cu ratio between 1.3 and 5 indicates a warm and humid climate, and a ratio greater than 5 indicates a dry and hot climate.51,52
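The threshold indices introduced in this and the preceding subsection (Ceanom, Ce/La, Sr/Cu) are easily bundled into small helpers. A minimal sketch, with cutoffs as stated in the text; the normalized values LaN, CeN, and NdN are assumed to come from a chondrite-normalization step like the one sketched earlier.

```python
import math

def ce_anom(la_n, ce_n, nd_n):
    """Ceanom = lg[3CeN / (2LaN + NdN)]; > -0.1 suggests an
    anoxic-reducing water column, < -0.1 an oxidized one."""
    return math.log10(3.0 * ce_n / (2.0 * la_n + nd_n))

def redox_from_ce_la(ce, la):
    r = ce / la
    if r < 1.5:
        return r, "oxygen-rich"
    if r < 2.0:
        return r, "oxygen-poor"
    return r, "anaerobic"

def climate_from_sr_cu(sr, cu):
    r = sr / cu
    if 1.3 <= r <= 5.0:
        return r, "warm and humid"
    if r > 5.0:
        return r, "dry and hot"
    return r, "outside the cited calibration range"

# Placeholder values for illustration only:
print(ce_anom(la_n=32.9, ce_n=26.6, nd_n=15.2))
print(redox_from_ce_la(ce=21.5, la=10.2))
print(climate_from_sr_cu(sr=280.0, cu=10.0))
```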
The average value of Sr/Cu in the study area is 28.31, with a maximum of 227.74. Combined with Figure 10, the Sr/Cu values are generally greater than 5, reflecting a mainly dry and hot sedimentary environment in the study area. Setting aside several anomalously high values (ZLS3-1, ZLS3-2, YUC3-1, and TC3-3) in the Shanxi Formation, the average Sr/Cu of the Shanxi Formation is 16.34 and that of the Taiyuan Formation is 11.17, indicating that the paleotemperature during deposition of the Shanxi Formation was higher than during deposition of the Taiyuan Formation.

Paleoclimate information is also well recorded by Rb/Sr ratios. In general, a high Rb/Sr ratio indicates a relatively humid palaeoclimate, while a low Rb/Sr ratio usually reflects a climatic environment with a low weathering rate.53,54 The Rb/Sr ratios range from 0.0006 to 0.55 (average 0.09). Excluding the abnormally high values in YJG3J-1, BG3-1, and BG3-3, the mean value of Rb/Sr in the Shanxi Formation is 0.01, while that in the Taiyuan Formation is 0.03. This also indicates that the paleotemperature of the Shanxi Formation was higher than that of the Taiyuan Formation during deposition.

5.1.4.1. Influence of Paleoclimate on Palaeosalinity.

Based on the analysis of the Rb/Sr, Sr/Cu, and Sr/Ba ratios and their vertical variation trends, the transition to an arid and hot palaeoclimate was accompanied by a gradual increase in palaeosalinity in the water body, whereas the palaeosalinity indicated a freshwater environment when the palaeoclimate was wet and warm.54 A positive correlation (r = 0.39) between Sr/Cu and Sr/Ba in the Shanxi Formation and a linear negative correlation (r = −0.56) between Rb/Sr and Sr/Ba were found (Figure 11, Table 4), indicating that palaeoclimate and palaeosalinity were correlated during coal deposition in the Shanxi Formation; that is, palaeoclimate change was one of several factors affecting water salinity during this period. A positive correlation between Sr/Cu and Sr/Ba in the Taiyuan Formation (r = 0.47) and a good linear negative correlation between Rb/Sr and Sr/Ba (r = −0.67) (Table 4) were found, indicating that the climatic conditions during deposition of the Taiyuan Formation were very likely the factor controlling the palaeosalinity change, with higher salinity at higher temperatures. The palaeosalinity of the Taiyuan Formation was more affected by climate than that of the Shanxi Formation, mainly because frequent transgressions and regressions occurred during deposition of the Taiyuan Formation. During regression, a semisaline closed peat swamp developed; when the peat swamp is relatively closed, its change in salinity depends mainly on climatic factors.76

5.1.4.2. Influence of Paleoclimate on Redox.

A weak positive linear correlation between U/Th and Sr/Cu in the Shanxi Formation (r = 0.37) and a good positive linear correlation between U/Th and Sr/Cu in the Taiyuan Formation (r = 0.83) were found (Figure 12, Table 4), indicating that the climatic conditions of the Taiyuan Formation may have been one of several factors affecting redox conditions. A negative linear correlation between Rb/Sr and U/Th in the Shanxi Formation (r = −0.45) and a good negative linear correlation between Rb/Sr and U/Th in the Taiyuan Formation (r = −0.55) were also found (Table 4), indicating that temperature had a greater influence on the paleoredox environment during deposition of the Taiyuan Formation than during deposition of the Shanxi Formation.
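The pairwise correlations reported in this subsection (e.g., r = 0.83 between U/Th and Sr/Cu in the Taiyuan Formation) are ordinary Pearson coefficients computed over per-sample ratios. A minimal sketch; the arrays are placeholders, not the measured data.

```python
import numpy as np

# Placeholder per-sample ratio arrays; substitute the measured values.
u_th = np.array([0.30, 0.25, 0.41, 0.18, 0.36])
sr_cu = np.array([8.2, 6.1, 12.4, 4.9, 10.8])
rb_sr = np.array([0.04, 0.06, 0.02, 0.09, 0.03])

def pearson_r(x, y):
    # np.corrcoef returns the 2x2 correlation matrix; take the off-diagonal
    return float(np.corrcoef(x, y)[0, 1])

print("r(U/Th, Sr/Cu) =", round(pearson_r(u_th, sr_cu), 2))
print("r(Rb/Sr, U/Th) =", round(pearson_r(rb_sr, u_th), 2))
```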
On the one hand, in the open delta sedimentary environment, climatic conditions had limited influence on water salinity and the paleoredox environment of the Shanxi Formation. On the other hand, the different effects of plants on element enrichment led to different degrees of element enrichment in the water bodies and sediments,77 which may also explain the poor correlation between the element characteristic values of the Shanxi and Taiyuan Formations. In the late Paleozoic, when northern China as a whole had a humid tropical to subtropical climate, ferns and lycophytes flourished, mostly growing in warm and humid swamp environments; they were among the main coal-accumulating plants on the North China Plate.78 The response of paleoredox conditions to single factors such as climate and salinity is not obvious. The extremely low correlation coefficients and poor linear correlations between U/Th and Rb/Sr, V/Ni and Rb/Sr, and V/Ni and Sr/Cu (Table 4) indicate that the redox conditions were also controlled by other factors. However, further analysis of the data shows that when both a humid climate and low salinity conditions are met, the correlation between the redox conditions and these factors is significantly enhanced, indicating that the redox conditions in the sedimentary system during this period may have been affected jointly by climate and salinity.

5.2. Depositional Rate.

The degree of differentiation of REY (expressed as (La/Yb)N) reflects the depositional rate and can be used as a proxy for it.26 (La/Yb)N ranges from 0.52 to 58.88 (average 10.12); the Taiyuan Formation (La/Yb)N values range from 1.08 to 20.15 (average 5.83) (Table 3). The (La/Yb)N values decrease from the Shanxi Formation to the Taiyuan Formation, indicating that the depositional rate of the Shanxi Formation in the study area was higher than that of the Taiyuan Formation. Zr/Rb ratios can be used for hydrodynamic strength analysis: they are low in a quiet environment with low water energy, increase with increasing water energy, and are high in a relatively agitated environment55 (study-area average 19.39). The deposition rate of the Shanxi Formation was fast and the hydrodynamic force relatively strong, while that of the Taiyuan Formation was slow and the hydrodynamic force relatively weak. The paleoenvironment during deposition of the Shanxi Formation had relatively strong current energy, high temperature, high paleosalinity, and a high sedimentation rate, reflecting shallow-water sedimentation related to the decline in relative sea level. These results are consistent with the Carboniferous−Permian regressive sedimentary sequence of the North China Platform.65,66

5.3. Provenance Analysis.

Sedimentary rocks inherit the REY characteristics of their parent rocks well, and the distribution patterns and ratios of REY in sedimentary rocks provide a good proxy for judging the material source of the rocks and the properties of the parent rocks.59,60,62,72,73 Except for YIC3-1, the samples in the study area show basically the same distribution pattern as the upper crust after chondrite normalization (Figure 3), indicating that the provenance of the late Paleozoic coal in the Yangcheng mining area was the upper crust. The ratios of trace elements (such as Th, Zr, Hf, Sc, V, Cr, Co, and Ni) and rare earth elements are good indicators of provenance.44
Combined with the La/Yb−∑REY diagram (Figure 13), the sample points plot mainly in the overlapping field of calcareous mudstone and sedimentary rock, with a few points falling in the fields of granite and alkaline basalt (2 samples) and oceanic tholeiite (2 samples). This is consistent with the Eu in the samples showing obvious negative anomalies, no anomalies, and weak positive anomalies, which also reflects the multisource character of the provenance in the study area and agrees with the correlation analysis indicating that the rare earth elements mainly came from terrigenous clastic minerals. These results are consistent with the conclusion of Chang et al.,63 based on the skeleton grain composition and heavy mineral characteristics of the sandstone, that the parent rock assemblages in the provenance area of the lower upper Paleozoic deposits of the Shanxi and Taiyuan Formations comprised sedimentary rocks, metamorphic rocks, and igneous rocks. The sample points of the Shanxi Formation are scattered across many fields, revealing a more turbulent sedimentary environment and mixed sources, which is consistent with the change in sedimentation rate.

Eu anomalies can be used as an important indicator to distinguish provenance types and indicate sediment provenance.72 Generally, granites have negative Eu anomalies (δEu < 0.90), and basalts have no Eu anomalies (0.9 < δEu < 1.03).61,73 The δEu values in the Yangcheng mining area range from 0.43 to 1.20 (average 0.71) (Table 3); the negative Eu anomaly indicates that the terrigenous rocks feeding the Yangcheng coal mine mainly comprise felsic granites rather than mafic rocks. The characteristics of the parent rocks and of high-temperature hydrothermal fluids in the sedimentary source area are the main geological factors controlling Eu anomalies.3,41 The parent rock of YIC3-1 is oceanic tholeiite; since basalts generally have no Eu anomalies (0.9 < δEu < 1.03), the positive Eu anomaly of YIC3-1 (δEu = 1.20) may have been caused by hydrothermal activity. The correlation between δEu and Ba is extremely poor (R2 = 0.178), indicating that the positive Eu anomaly in the study area is not an artifact of barium interference in the measurements and may instead be related to deep hydrothermal activity.26,61 Liu et al.41 found that a large number of pyrite veins developed in the No. 15 coal in the southern Qinshui Basin, and vein pyrite in coal is generally a secondary mineral of hydrothermal origin.33

The provenance of the Taiyuan Formation is relatively stable, while that of the Shanxi Formation became richer over time, as the following analysis shows. The Eu/Eu* and GdN/YbN ratios are sensitive parameters for determining the properties and geological age of source rocks.79 In the Eu/Eu*−GdN/YbN diagram (Figure 14a), the GdN/YbN ratio of most samples is 0.5−2, and the Eu/Eu* value of most samples is less than 0.85, indicating that the parent rocks are mainly granitoids formed in the Late Archean mixed with some Archaean sediments. The average GdN/YbN value of the Taiyuan Formation in the study area is 0.96 and that of the Shanxi Formation is 1.47 (Figure 14a). From the Taiyuan Formation to the Shanxi Formation, the GdN/YbN value increases continuously, indicating that the age of the parent rocks increases gradually and that the uplift and denudation of the provenance area accelerated.
Therefore, the sedimentation rate of the Shanxi Formation was faster than that of the Taiyuan Formation, which is consistent with the regional tectonic evolution: the uplift of the provenance area in the northern part of the North China Plate occurred earlier than that in the southern part.67,68 In the La/Sc−Co/Th diagram (Figure 14b), most of the samples of the Taiyuan Formation plot around felsic magmatic rocks, while the samples of the Shanxi Formation are relatively scattered, mainly derived from a mixture of felsic and mafic magmatic rocks, with small amounts of andesite and basalt. Based on the analysis of trace elements, rare earth elements, and regional geological data, the parent rocks of the Taiyuan Formation were mainly derived from the acidic sedimentary rocks of the post-Archaean upper crust. In addition to the intermediate-acidic sedimentary rocks of the post-Archaean upper crust, the parent rocks of the Shanxi Formation are of richer origin, mixed with small amounts of granite, alkaline basalt, and oceanic tholeiite.

5.4. Tectonic Background of the Source Area.

Bhatia et al.64 summarized trace element and rare earth element characteristics for judging the tectonic environment of sedimentary basins based on data from eastern Australia (Table 5). Most sample parameters in this area fall within the continental island arc field, but the La/Yb and LaN/YbN ratios of the Shanxi Formation coal and coal gangue fall within the passive continental margin field; the Eu/Eu* ratios of the Shanxi Formation coal and the La/Yb, LaN/YbN, and Eu/Eu* ratios of the Taiyuan Formation coal are close to an active continental margin orogenic background; and La, LREY/HREY, and Ce in the Shanxi Formation and Taiyuan Formation coal are close to an oceanic island arc background. Based on the above comparison, the tectonic setting of the late Paleozoic provenance in the Yangcheng mining area is a composite of continental island arc, oceanic island arc, active continental margin, and passive continental margin settings.

Bhatia et al.69 proposed using Th−Sc−Zr/10, La−Th−Sc, and Th−Co−Zr/10 diagrams to identify tectonic environments. As seen from Figure 15, the sample points are relatively scattered, plotting mainly in the continental island arc field, followed by the active continental margin, passive continental margin, and oceanic island arc fields (Figure 15a−c). Rare earth elements have different distribution patterns under different tectonic backgrounds.82 The REY distribution patterns of the coal gangue in the study area are similar to those of a continental island arc, those of the Shanxi Formation are similar to those of an active continental margin, and those of the Taiyuan Formation are similar to those of a passive continental margin82 (Figure 15d). Generally, the provenance has the tectonic background of an active continental margin of a trench−arc−basin system and a passive continental margin of a collisional orogenic belt. This is the result of the collision between the North China Plate and the Siberian Plate in the region.83

CONCLUSIONS

The content of trace elements in the late Paleozoic coal of the Yangcheng mining area is depleted, except for a slight enrichment of Li. The content of trace elements in the coal of the Taiyuan Formation is slightly higher than that in the coal of the Shanxi Formation.
Except for Li, Co, and Mo, the content of trace elements in the coal gangue is obviously higher than that in the late Paleozoic coal of the Yangcheng mining area. The content of rare earth elements in the gangue (324.28 μg/g) is much higher than that in the coal (66.22 μg/g). The REY content in the coal of the Shanxi Formation (93.88 μg/g) is slightly higher than that of the Taiyuan Formation (66.19 μg/g). Except for the YIC3-1 and YUC3-1 coal seams, which are M-HREY, and the YUC3-2 coal seam, which is M-type REY, the other coal seams and the gangue are L-type REY. The mean values of δEu and δCe are 0.71 and 0.94, respectively. Except for the YIC3-1 curve of the Shanxi Formation, which is obviously convex and shows a positive Eu anomaly, the REY distribution patterns of the remaining samples are similar, showing LREY enrichment and HREY depletion.

The Carboniferous−Permian coal-forming environment in the Yangcheng mining area was an anoxic-reducing, warm, humid, brackish water sedimentary environment. The paleosalinity and paleotemperature of the Shanxi coal formation are higher than those of the Taiyuan Formation, which was more inclined toward a reducing environment. The paleosalinity of the Taiyuan Formation was more affected by paleotemperature than that of the Shanxi Formation, and the response of redox conditions to single factors such as climate and salinity is not obvious.

The provenance of the Carboniferous−Permian coal in the Yangcheng mining area is mainly acidic sedimentary rocks of the post-Archaean upper crust, mixed with a small amount of granite, alkaline basalt, and oceanic tholeiite. The provenance of the Taiyuan Formation is mainly acidic sedimentary rocks of the post-Archaean upper crust, while the source material of the Shanxi Formation is richer: in addition to intermediate-acidic sedimentary rocks of the post-Archaean upper crust, it is mixed with small amounts of granite, alkaline basalt, and oceanic tholeiite. The tectonic setting of the provenance is mainly an active continental margin related to a continental island arc, mixed with an oceanic island arc and a passive continental margin setting.
Medial catalexis in Sir Thomas Wyatt's iambic pentameter

There is a reasonable scholarly consensus that the long ("heroic") line of Sir Thomas Wyatt is an iambic pentameter. However, a significant number of his long lines are apparently syllabically hypometrical, calling into question this interpretation. The doubt is further compounded by Wyatt's nontrivial use of phrase-medial inversions. I argue that it is nonetheless possible to infer an iambic pentameter intention behind Wyatt's syllabically hypometrical lines, which can be 'repaired' by medial catalexis. Syllabically canonical lines are known to favour major prosodic breaks (Intonational Phrase boundaries) between the second and third foot and, to some extent, between the third and fourth. On the assumption that medial catalexis exploits the natural pauses that occur at the boundaries between Intonational Phrases, what emerges is a significant preference for catalexis to target the weak position of the third verse foot (half-line boundary), followed by the fourth (immediately following the verse-foot adjunct of the second half-line). The finding opens up further possibilities for understanding Wyatt's other licences, and for linguistically informed literary criticism of his verse. The final part of the paper offers some speculations as to the nature of medial catalexis and how it can be approached within a linguistically informed framework compatible with generative metrics.

Effectively expressive lines

Since the publication in 1557 of Tottel's Miscellany (Rollins 1966, Holton and McFaul 2011), the first printed anthology of English poetry, the problem of identifying the meter of Sir Thomas Wyatt's long ("heroic") line has vexed editors, critics and scholars alike. There is a substantial critical tradition, beginning with the work of A. K. Foxwell and Frederick Padelford, supporting an iambic pentameter interpretation (Foxwell 1911, Padelford 1923, Evans 1954, Daalder 1977, Kiparsky 1977, Noguchi 1983, Wright 1985, Rebholz 1997, Groves 2005). The proportion of anomalous lines given such an interpretation is nonetheless significant, raising questions regarding its adequacy. In this paper I look in particular at syllabically hypometrical lines, that is, lines with nine syllables or fewer (excluding extrametrical syllables and resolved disyllables), which on the face of things do not contain sufficient material for a replete iambic pentameter line. I argue that repleteness is achieved in such cases either through the use of intonation, as outlined below, or through medial catalexis at natural prosodic breaks, whose distribution supports an iambic pentameter interpretation. Since intonation and catalectic pauses are in some sense an aspect of 'performance', the lesson of Wyatt's verse is that the linguistically informed study of metrics cannot adhere to the rigorous separation of competence and performance of early generative metrics, as set out by Halle and Keyser (1966). If the text is the only admissible evidence of the scansion, then Wyatt's approach to the iambic pentameter line becomes, as Thompson (1989 [1961]:2) observes, "hard to define", showing an "apparent disregard of the iambic metrical pattern" (p. 15). He adds that "no explanation of his practice has ever been generally accepted".
In the literature on Wyatt one can find judgments to the effect that certain lines simply cannot be scanned as iambic pentameter, such as Schwartz (1963:159), discussing the last four lines of Sonnet XI/Egerton MS VII ('Who so List to Hounte'). Writing about Wyatt and his older contemporary John Skelton, Swallow (1950:5f.) wonders aloud "[w]hy, when both Skelton and Wyatt knew the iambic pattern […] did they allow so many variations from the pattern, variations which even, on occasion, destroy the pattern?". The assumption that Wyatt engaged in such destruction is one that is widely held, and can be traced back to the work of the literary critic George Saintsbury, who held Wyatt's metrics in rather low esteem (Saintsbury 1906; 1908; 1910; 1912). Chambers (1965 [1933]) paraphrases Saintsbury's view of Wyatt as one "fumbling his way to a comprehension of the pentameter […] perverted by oblivion of Chaucerian inflections". This assessment is part of a more general perception of the poetry of the fifteenth century as suffering a loss of rhythmical quality.

Despite these doubts, more recent criticism suggests ways of scanning Wyatt's long lines that are consistent with a thoroughgoing iambic pentameter interpretation. One line that has attracted scholarly attention is l. 15 of Ballade LXXX/Egerton MS. XXXVII in (1a). In this line, the subject of the poem is recalling, in disbelief, an amorous encounter. Tottel's editors, however, amend to the prosodically regular (1b).

(1) a. It was no dreme: I lay brode waking
    b. It was no dreame: for I lay broade waking

Commenting on Tottel's imposed pattern in (1b), Thompson (p. 16) writes: "we do not know how far any reader would have let this [metrical-P.B.] pattern influence his voice, but even if the influence was only slight, the words would sound like nothing anyone ever spontaneously said". Contrasting the two versions, he suggests Wyatt gave up the effect of the pattern "for the effect of the phrases", while Tottel's editor sacrificed this effect for the pattern. Whether "the effect of the phrases" really did require giving up the pattern is a good question. Thompson suggests later, however, in a comparison with the Earl of Surrey, that Wyatt was interested in "maintaining the intonation patterns of language" (p. 69). In fact, I think we can say that certain intonation patterns may have been recruited to sustain the iambic pentameter pattern where the number of syllables fell short. This metrical use of intonation is thus partly what underlies Wyatt's "expressively effective" lines.

(2) [Attridge's rendering of (1a): (a) the line, (b) its beat/offbeat scansion, with beats on no, lay, and brode and 'implied' offbeats; the original notation is not reproduced here]

With reference to 'implied' offbeats, Attridge writes (p. 98) that "[o]ffbeats can also be implied in the rhythm but not realised in the language". In (2b) there are three such 'implied' offbeats according to his scansion: the words no, lay, and broad each realize a beat and are also assigned an 'implied' offbeat. Although Attridge talks about 'implied' offbeats, the real intuition here seems to be that, in these cases, a stressed monosyllable may span a beat and an offbeat, or, in generative terms, a strong and a weak position. The same intuition is matched in other work on Wyatt's metrics, including Foxwell (1911), Padelford (1923), and Wright (1985:148). The question is what licenses such mappings phonologically, and whether there are metrical constraints on their distribution. Although not the focus of the present article, it is worth briefly setting out my thoughts on this, since it brings us back to Thompson's intuition that intonation plays a role in the crafting of the lines.
Stretching a monosyllable over two metrical positions may be effected by a 'scooped', rising-falling or falling-rising contour. In his description of the present-day English intonation system, Gussenhoven (2004; 2016) posits two tritonal accents, L*HL and H*LH, respectively associated with meanings of 'significant addition' (Gussenhoven 2004:307) and 'listener engagement' (cf. 'uptalk'; Tyler and Burdin 2016, Warren 2016). Since such contours also exist in other West Germanic languages such as Dutch (Gussenhoven 2005), it is not unreasonable to reconstruct them for Early Modern English as well. Allowing a tritonal accent to license the mapping of a monosyllable to a SW sequence allows us to interpret (1a) as a full iambic pentameter as in (3). The rise-fall contours over lay and brode, communicating 'unexpectedness', compensate for the two 'missing' syllables.

(3) [scansion of (1a) as a full iambic pentameter, with lay and brode each spanning a strong-weak sequence under a rise-fall contour; not reproduced]

The intuition, then, echoing Thompson, is that intonation may be recruited for metrical purposes. I leave the task of formalizing this idea to a future occasion, however.

Several commentators have sought to understand Wyatt's prosodic practice in the light of the circumstances of his life. In Appendix A I provide an outline of Wyatt's life based on the recent biography by Brigden (2012). For now, it is enough to note that, as a prominent member of Henry VIII's court, Wyatt led a remarkably precarious life. Imprisoned in the Tower of London no less than twice on suspicion of betraying the king, it is a testament to Wyatt's gifts that he found his way, on both occasions, back into Henry's favour, and went on to die a natural death rather than suffer execution. Brigden describes a man with all the gifts necessary to succeed at court, and although he achieved the highest renown as a courtier, Wyatt experienced life as a game of dissimulation, on whose success life, his own and others', depended. The subject of Wyatt's poems is thus someone who constantly has to monitor their thoughts in case unguarded speech betray them. Brigden perceives this inward deliberating consciousness as finding expression not only in the themes of Wyatt's poetry, but in his prosody as well. Her judgment stands in stark contrast to Saintsbury's.

Wyatt, who knew the poetic theory of the Italian Renaissance, certainly knew the rhythms of a decasyllabic line, Italian prosodic principles, and the rules for placing caesura in verse in Romance languages. Yet he chose in his own complex rhythms to imitate the cadences of voice and feeling rather than achieve prosodic regularity. […] No easy flow or 'riding rhyme' fitted his subject's unease or his purpose to disconcert or unsettle. (Brigden 2012:13)

Brigden's intuitions are echoed by Thompson, who again contrasts Wyatt's practice with the emendations of Tottel's editors.

For Wyatt, the metrical pattern of the ten-syllable iambic line had one use. It threw into relief the language of a man speaking, with the abrupt shifts from outburst to meditation that allowed him to include in poetry everything from godly things to the swine that chaw the turds molded on the ground. Ten syllables more or less, five relatively strong stresses more or less: it was a standard maintained steadily enough to declare itself. In doing that it accomplished what the metrical system of Wyatt's immediate inheritance could not do, it emphasized the quality of living speech that brought with it all the qualities of the man. This is not exactly what the editors were looking for, nor what they were concerned to preserve.
(Thompson 1989 [1961])

Although the term caesura is often used for any major prosodic break in a line, I shall follow classical usage (Maas 1962: §46) in reserving the term specifically for major breaks within verse feet. For major breaks across feet, I will use the term diaeresis. Any departure from this usage, for example, in quoting other authors, will be highlighted.

Foxwell (1911) argues that Wyatt's "departures from the strict iambic pentameter line are in accordance with a body of recognized prosodic variants" (Padelford 1923:129). These are enumerated by Padelford (1923:139-140) as shown below. For ease of reference I provide the current term on the right where appropriate. Note that Padelford uses caesura in the broad sense to refer to any major break, not just breaks within feet.

1. Initial trochee (line-initial inversion)
2. Initial monosyllabic foot (acephaly)
3. Trochee after caesura (phrase-initial inversion)
4. Monosyllabic foot after caesura, preceded by regular foot (medial catalexis, see below)
5. Caesura in the middle of a foot (perhaps almost too universal to be recorded)
6. Epic caesura: additional weak syllable before caesura, followed by normal foot after caesura
7. Monosyllabic foot elsewhere than at the beginning of a verse or after the caesura
8. Anapaestic foot
   (a) First foot
   (b) Other than first foot
9. Final es (and perhaps final e) pronounced
10. Alexandrine verse (hexameter)
11. Hendecasyllabic verse: additional weak syllable at end of verse (extrametricality)
12. Slurred syllables ('elision'), of which the most frequent are:
    (a) r, l, m or n (usually unaccented), followed by weak syllable
    (b) Suffixes, such as eth, en, on, er or ing
    (c) Vowels in juxtaposition
    (d) Unimportant monosyllables
13. Long vowels or diphthongs treated as disyllabic (intonation, see above)
14. Vowel sound inserted between consonants
15. Four stressed line (tetrameter)

In Saintsbury's spirit, Southall (1964:118) takes exception to the "numerous […] departures from the strict iambic pentameter line" that Foxwell and Padelford assume Wyatt practised. If the iambic pentameter is the pattern, Southall goes on, Wyatt's lines suffer "death by a thousand qualifications". He concludes (p. 119) that "[t]he Foxwell-Padelford findings prove either that Wyatt wrote very 'bad' iambic pentameter verse or that he did not write iambic pentameter verse at all". Indeed, the Foxwell-Padelford thesis has fuelled alternative proposals that Wyatt was actually using an Anglo-Saxon strong stress meter (Schwartz 1963) or a flexible line of between four and six stresses (Lewis 1938, Harding 1946, Swallow 1950).

I argue that the Foxwell-Padelford thesis is substantially correct, but can be simplified in the light of research in generative metrics and phonology. For three of Padelford's licences, [6], [7], and [14], I find little or no empirical support. Padelford's [7] would presumably be medial catalexis within a Phonological Phrase or Clitic Group (or adjoined Prosodic Word). There are perhaps two or three cases where catalexis within a Phonological Phrase gives a decent reading, for example, between bryght and sonne in Pen.Ps. 309 (Appendix C, (5)), but not at prosodic junctures stronger than this. (For further discussion of this kind of case, see Section 3.)
Evidence for the pronunciation of final es and e [9] is weak. [13] corresponds to my interpretation of Groves' and Attridge's proposals that a stressed monosyllable may span a beat-offbeat sequence, which I take to have an intonational interpretation, and which we won't pursue here. Here, we'll concentrate on [4], medial catalexis. Curiously, the list makes no mention of phrase-medial inversion ("trochee elsewhere than at the beginning of a verse or after the caesura"), surely one of the most salient of Wyatt's licences. This is a topic I will have to return to in a later paper, however.

I have made use of two editions of Wyatt's poetry from the second half of the twentieth century, Muir and Thomson (1969), which retains Wyatt's orthography, and Rebholz (1997), which uses modern spelling.1 Muir and Thomson's edition categorizes the poems by manuscript and numbers them consecutively. Rebholz's distinguishes between poems that can reliably be attributed to Wyatt and other poems ascribed to him. Within each of these categories, poems are further categorized by verse form and consecutively numbered from 1-154 (270 including poems attributed to Wyatt after the 16th century). I have restricted myself to the 154 poems which on external evidence can be ascribed to Wyatt, largely the poems contained in the Egerton manuscript. For this reason, I follow Rebholz's organization of the material, but Muir & Thomson's edition of the texts, which retains the spelling and punctuation of the original manuscripts. In scansions, verse feet are shown enclosed in brackets.

The material analysed includes seven rondeaux (Rond.; 106 lines), 29 sonnets (Son.; 420 lines), 30 epigrams (Ep.; 243 lines), one canzone (Can.) of 147 lines, eight ballades (Ball.; 193 lines), the three Epistolary Satires (Sat.; 306 lines), and the Paraphrase of the Penitential Psalms (Pen.Ps.; 775 lines). The total number of lines analysed thus comes to 2190. The structure of these different verse forms is described in Section 2.5. The poems analysed are listed in Appendix B. All the lines which, under an iambic pentameter interpretation, implicate medial catalexis are given with scansions in Appendix C. Complete scansions of selected poems, including a scansion of Wyatt's Sonnet X by George Saintsbury, are provided in Appendix D.

The remainder of the paper is organized as follows. In Section 2, we review the structure of the iambic pentameter, and go through some of the principal variations in Wyatt's verse. Section 3 examines the distribution of medial catalexis, and argues that it falls out naturally from iambic pentameter assumptions. Finally, Section 4 looks at some of the broader implications for the linguistically informed study of Wyatt's art.

Language and meter

In this section we review the structure of the iambic pentameter (Section 2.1), review the factors at work in prosodic phrasing (Section 2.2), go over some of the most important types of metrical variation that Wyatt employed (Section 2.3), look at the small number of lines that defeat an iambic pentameter interpretation (Section 2.4), and review stanza structure and discuss how Wyatt's use of rhyme can be used to bootstrap the scansion of ambiguous lines (Section 2.5).

The iambic pentameter

The earliest work in generative metrics represented a meter as a linear sequence of metrical positions that alternate between strong (S) and weak (W) (Halle and Keyser 1966; 1971, Halle 1970).
With the shift to nonlinear representations in the 1970s and 80s, however, the idea of metrical constituency was adopted, beginning with Kiparsky (1977) and Hayes (1983). Since Hayes (1988), it has also been standard to assume that metrical representations are built from a hierarchy of metrical categories in a way that parallels the prosodic hierarchy (e.g., Nespor and Vogel 1986, Selkirk 1986). Metrical positions are grouped into verse feet, (S W) in the case of trochaic verse feet, (W S) in the case of iambic. Above the level of the foot, and below the level of the verse line, we can also recognize a half-line category (sometimes known as the 'dipody' or 'colon'). An iambic tetrameter line is a balanced structure, consisting of two half-lines of two feet each, as shown in (4).

(4) [Line [Half-line (W S)(W S)] [Half-line (W S)(W S)]]

A basic theoretical assumption of much metrical theory is that all metrical structure is binary. Prince (1989:55) dubs this principle MAXIMAL ARTICULATION, given in (5).

(5) MAXIMAL ARTICULATION
    All metrical structure is binary.

Now let us consider the iambic pentameter in the light of (5). There are four possible structures: the additional foot may adjoin either to the first or second half-line, and it may adjoin either to the left or the right of whichever half-line is chosen. The choice between these is, at least initially, empirical. According to Kiparsky (1977:230), the iambic pentameter has the structure shown in (6).2 The additional foot is accommodated by left-adjoining it to the second half-line.

(6) [Line [Half-line (W1 S2)(W3 S4)] [Half-lineMAX (W5 S6) [Half-lineMIN (W7 S8)(W9 S10)]]]

In line with the terminology of Ito and Mester (2009), adjunction gives rise to a minimal and a maximal half-line. The minimal half-line comprises positions 7-10 in (6); the maximal half-line positions 5-10. The major metrical break between the fourth and fifth metrical positions tallies with Renaissance verse criticism. Both Gascoigne (1868 [1575]:38) and Puttenham (1869 [1589]:86) note that the normative placement of the 'caesura' (i.e., what I term the 'major break') is after the fourth syllable.3 Kiparsky also bases his conclusion on the possible positions for major breaks described by Dillon (1977), who in turn attributes his observations to Home (Lord Kames) (2005 [1785]), in arguing that each line has a "capital pause" (Kames' term) "which is expected to fall after the fourth, fifth, sixth, or seventh syllable" (Dillon 1977:17), that is, between positions 4-5, 5-6, 6-7, and 7-8.

Although considerations of space prevent going into detail here, there may be metrical reasons to prefer Kiparsky's proposed structure for the iambic pentameter over the three alternatives. As Hayes (1995) argues, the iamb is not simply a right-prominent mirror image of the trochee. The iamb is also quantitatively inherently uneven. Kiparsky's proposed structure can be understood as projecting this unevenness at the level of the verse line and (maximal) half-line, making the pentameter iambic at every level of metrical structure. In the same way that the iambic pentameter in the abstract may be seen as an invariant template compatible with limited variation in linguistic prominence assignment, I will assume that the higher-level metrical structure of the iambic pentameter is invariant, and is compatible with similarly limited variation in prosodic structure. The importance of this assumption will become clear below.

Recent research by Groves (2019) also addresses the placement of major breaks in English heroic verse.
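Before turning to Groves' data, the template in (6) can be made concrete with a small data structure: the third foot left-adjoins to the second half-line, so positions 5-6 fall inside the maximal but outside the minimal half-line. The encoding below is my own illustration, not a notation from the metrical literature.

```python
# Each foot is an iamb: a (W, S) pair of metrical positions.
IAMB = ("W", "S")

# Kiparsky-style iambic pentameter: feet 1-2 form the first half-line;
# foot 3 is left-adjoined to the second (maximal) half-line, whose
# minimal core is feet 4-5.
pentameter = {
    "half_line_1": [IAMB, IAMB],                 # positions 1-4
    "half_line_2_max": {
        "adjoined_foot": IAMB,                   # positions 5-6
        "half_line_2_min": [IAMB, IAMB],         # positions 7-10
    },
}

def positions(tmpl):
    """Flatten the template into numbered metrical positions."""
    feet = (tmpl["half_line_1"]
            + [tmpl["half_line_2_max"]["adjoined_foot"]]
            + tmpl["half_line_2_max"]["half_line_2_min"])
    out, n = [], 0
    for foot in feet:
        for label in foot:
            n += 1
            out.append((n, label))
    return out

print(positions(pentameter))
# -> [(1, 'W'), (2, 'S'), ..., (10, 'S')]; the strongest juncture is
#    4-5 (the half-line boundary), the next strongest 6-7 (the edge
#    of the minimal half-line).
```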
Groves, who also uses caesura to refer to major breaks whether they fall within or across verse feet, distinguishes between neoclassical and non-neoclassical 'caesural' styles, which he illustrates with four poets from each style. Groves' own Figure 1 (p. 275) is reproduced below in Figure 1. 4 With the exception of the 7-8 position, which is only marginally more likely than 3-4 to be associated with a major break, Groves' findings are consistent with Dillon's and Kiparsky's claims. In both the neoclassical and non-neoclassical material that Groves examined, the major break occurred most frequently at 4-5. In the non-neoclassical material the second most likely position for a major break is 6-7. Breaks in the even-odd positions 4-5 and 6-7 occur between feet, and thus represent diaeresis (see Section 1). That is, as prosodic breaks, they align respectively with the strongest and second strongest metrical breaks in (6). Groves' findings may thus be taken to support Kiparsky's proposed structure for the iambic pentameter.

This brings us to the major breaks observed in the odd-even position 5-6, which as described in Section 1 corresponds to the definition of caesura in the narrow sense of a major break within a foot. The difference between diaeresis and (foot-medial) caesura in Alexander Pope's The Rape of the Lock is illustrated in (7). In both the neoclassical and non-neoclassical material analysed by Groves, the foot-medial caesura in position 5-6 is frequent. In the neoclassical material, it is indeed second in frequency to a major break in position 4-5, while in the non-neoclassical material, it has third place, slightly behind the major break in position 6-7. Because foot-medial caesura entails a misalignment between metrical and prosodic structure, its function may be assumed to be distinct from that of diaeresis. Although this is beyond the scope of the present paper, it is at least possible to give an indication of what kind of avenue future research on this question might pursue. Discussing the Indo-European 'long' line, which was between 10 and 12 syllables, Gasparov (1996:57) notes that "the [foot-medial-P.B.] caesura was obligatory, since without it the ear could not encompass the long line". Placing the major break between feet could risk that it be confused with the end of the line, and "the line [would-P.B.] fall apart into two identical hemistichs" (p. 71). A strategy for avoiding this was that "the caesura should not fall between feet, but intersect a foot in such a way that its beginning should belong to the first hemistich, and the second hemistich should begin with the non-initial part of the foot".

Now let us turn to the distribution of major breaks in Wyatt, which is illustrated in Figure 2. Some early work on this was carried out by Evans (1954), who examined the number of 'strong' pauses (marked by punctuation) after each position for the First Psalm (Psalm 6, Domine ne in furore). Evans' tabular data are shown plotted in Figure 2a. They show that Wyatt's preference is for pauses to occur immediately following the fourth position, that is, between the second and the third foot. This is straightforwardly what we would expect given the structure in (6). I also examined the major breaks at the juncture between verse feet and found a pattern consistent with Evans' findings for each type of verse. The results are shown in Figure 2b.
For this analysis, I included only the syllabically replete lines, removing any syllabically hypometrical lines, as well as tetrameters and hexameters. 5 Based on the distribution of punctuation in Muir and Thomson's (1969) edition, the major break in the line is between the second and third verse foot, that is, between the first and second half-line. On the same evidence, the junctures between the first and second verse feet, and between the fourth and fifth, are strong, which is to say that these positions repel prosodic breaks, consistent with membership in the same (minimal) half-line. The strength of the juncture between the third and fourth foot varies somewhat between verse forms, but seems intermediate in strength between the boundary across half-lines and the boundary within half-lines. This is again consistent with the claim that the third foot is adjoined to the second half-line. Future work may help nuance the picture by looking at the distribution of clausal and phrasal boundaries rather than punctuation.

It is nonetheless perhaps surprising that diaeresis between the first and second foot is as common as it is in syllabically replete lines. As we will see in Section 3, the distribution of medial catalexis is closer to what we would expect given (6). The reason for the apparent relative strength of the break between the first and second foot may have to do with tendencies in information structure rather than anything metrical. For example, given the general correlation between verse lines and clauses, we might expect supplementary material such as vocatives and interjections to be more frequent in the first foot, as illustrated in (8). This might have the effect of inflating the frequency of diaeresis in this position. For comparison, Table 1a illustrates Wyatt's use of foot-medial caesura. Because he used foot-medial caesura very rarely, only the raw numbers are given. As can be seen, there is an apparent preference for caesura to target the third foot, consistent with Groves' findings. Table 1b also provides the raw numbers for Figure 2b.

Prosodic phrasing

The major breaks shown in Figure 2b are based on punctuation, which is taken to be indicative of the presence of an Intonational Phrase boundary. Although it is reasonable to assume that all punctuation marks evidence an Intonational Phrase boundary, not all Intonational Phrase boundaries will be evidenced by punctuation. Since we are assuming that medial catalexis will coincide with an Intonational Phrase boundary, let us examine the most important factors in the formation of Intonational Phrases. After more than thirty years of research on Prosodic Phonology (Nespor and Vogel 1986), there is now a reasonable consensus regarding what categories the prosodic hierarchy contains, and how they relate to syntactic structure (Selkirk 1996; 2000; 2011). Above the level of the phonological foot, we must in addition to the Intonational Phrase (ι) recognize the Prosodic Word (ω) and the Phonological Phrase (φ). The correspondence between these higher-level prosodic units and syntactic constituents is, according to Selkirk (2011), regulated by constraints that require particular syntactic and prosodic constituents to MATCH. Other things being equal, MATCH requires that a clause is matched by an Intonational Phrase, a syntactic phrase by a Phonological Phrase, and a syntactic word by a Prosodic Word. Each of these phonological domains also has certain phonological functions.
For example, the Prosodic Word is the domain for the assignment of stress and the construction of iterating phonological feet; the Phonological Phrase is the domain for postlexical rhythm and the distribution of pitch accents (Gussenhoven 2004:278f.); and the Intonational Phrase is delimited by initial and final boundary tones marking meanings such as 'continuation' and 'finality' (Gussenhoven 2004:123, 296-320). In the unmarked case, a clause will map onto an Intonation Phrase, as in Selkirk's (1978) example in (9a), quoted in Gussenhoven (2004:287). Although the Intonational Phrase is sponsored by the clause, it may also commonly realize subclausal phrases. Information structure and phonological size or length are factors that increase the likelihood that a subclausal phrase is prosodified as an Intonational Phrase. For example, topicalized or parenthetical material will tend strongly to be prosodified as an Intonation Phrase, as shown in (9b). In this example, the topicalized PP In Pakistan and the parenthetical relative clause which is a weekday are both prosodified within their own Intonational Phrases, as is the intervening material (Tuesday). The subject of a sentence is also commonly prosodified as a separate Intonational Phrase. Thus, (9a) may be reprosodified as (9c). If the subject is a longer phrase, it is also more likely to be prosodified as a separate Intonational Phrase, as illustrated in (9d). Subclausal Intonational Phrase breaks within the VP are also possible. When such breaks occur, they show sensitivity to syntactic structure, in particular whether a phrasal modifier is attached 'high' or 'low'. Consider the ambiguous example in (10a), from Gussenhoven (2004:288). The ambiguity consists in a high and a low attachment reading of the PP, (i) as modifier of the NP 'every guest', as in (10b), and (ii) as modifier of the VP 'welcome every guest', as in (10c). Three possible prosodifications of (10a) are shown in (11). Parsing the entire clause as a single Intonational Phrase, as in (11a), results in a phrasing that is compatible with either the high or the low reading. The phrasing in (11b), however, implies the structure in (10b), while (11c) implies (10c). In sum, although Intonational Phrases tend to match clauses, Intonational Phrase breaks may also occur within clauses, separating out the topic or parenthetical material, heavy constituents, the subject, or even VP-internal phrases.

Metrical variation

Perhaps one of the most immediately apparent points of variation in Wyatt's verse is in the metrical treatment of verbal inflection, which is examined by Evans (1954). The metrical patterning indicates an alternation between a vowel and zero. Examples are shown in (12), where the unpronounced vowel is shown with an underring <V̥>. Where Wyatt writes an inflectional vowel between the exponents of a strong and a weak position, it is generally accepted that the vowel is not pronounced. This was no doubt variation in the spoken language that Wyatt exploited metrically. In at least one instance, there is also deletion of an inflectional vowel between the exponents of a weak and strong position, as in (13).

(13) ( 'Peace', quod ) ( the towne· ) ( ·mowse, 'why ) ( speke̥st thou ) ( so lowde?' ) Sat. CL.43

The third person singular present form of the verb deserves additional comment. Wyatt writes {-(e)th} for the suffix, which is the Southern pattern. The early modern period was a time of rapid evolution in the marking of tense, person, and number. As Lass (1999:162ff.)
explains, Southern {-th} was replaced by the East Midland {-s} during this time. According to Lass, the {-s} marker is first attested in London in the fourteenth century and, after a period of slow growth, {-s} spread rapidly in the sixteenth and seventeenth centuries, becoming established as the spoken norm by about 1580. Thanks to the inertia of spelling, though, "-th seems to have been written long after it stopped being said". Shakespeare, for example, exploits both variants for metrical effect, as shown in (14). While hateth occupies two metrical positions (S W), hates occupies a single S position.

Another process, sometimes also called 'elision', is smoothing between adjacent vowels, which maps two syllable nuclei onto a single metrical position. Examples from weak position are shown in (16). Resolution (Dresher and Lahiri 1991, Kager 1993, Hayes 1995) exploits the equivalence of a heavy syllable and a sequence of two light syllables, or a sequence of a light syllable followed by a heavy syllable. See Hanson (1993) and Hanson and Kiparsky (1996).

Initially in the line, inversion occurs freely, as expected. For further discussion of inversion in iambic verse, see Jespersen (1933), Newton (1975), and several of the articles cited here by Halle and Keyser, and Hayes. The examples in (21) will suffice to illustrate its use in Wyatt's iambic pentameter. Further examples are given in (22), all with inversion in the third foot, with one apparent exception in (22o), with apparent inversion in the fourth. 6 The location of the inversion, at the break between half-lines, is of course entirely as expected, given the structure of the iambic pentameter shown in (6). A licence that raises particular problems for the iambic pentameter interpretation is Wyatt's rather more frequent use of inversion in phrase-medial position. This apparently disruptive metrical practice is beyond the scope of the present article, but I will return to it in future research. The examples in (23) illustrate this practice.

Hypo- and hypermetrical lines

Several critics, such as Schwartz (1963:159), report struggling to see an iambic pentameter underlying Wyatt's syllabically hypometrical lines. Assuming that monosyllables under defined intonational conditions may span an SW sequence, as suggested in Section 1, and that medial catalexis (to be discussed in Section 3) is used, I find that, of 2190 lines, there are very few that admit of no iambic pentameter scansion whatever. I found nineteen hypermetrical lines, all hexameters. Some unambiguous examples are shown in (24). In what follows I will ignore this small residue of hexameter and tetrameter lines.

Stanza structure and rhyming schemes

The corpus includes poems written in a variety of verse forms, including rondeaux, sonnets, epigrams, canzoni, and ballades. The Satires and the Psalms are written in terza rima. I include a brief overview of these forms here, building to a certain extent on work on stanza structure by Aroui (2009). As will become clear, rhyme can play a useful role in choosing between competing scansions of a line. The simplest structure is displayed by the epigram, which is typically an octet, consisting of two quatrains, as in (26). The first three couplets consist of an ababab rhyming pattern, with the final couplet closing on cc. The ballades are also relatively simple, consisting of three septets, adding up to 21 lines. In (27) I supply a structure for the septet, which also forms the basis of Canzone LXXVIII. The rhyme pattern of each septet is independent of the others.
There is no evidence of rhymal echo between septets, so there are no empirical grounds to assume any of the septets pair to form a larger intermediate unit under the ballade. Within each septet, however, we can posit a quatrain and a tercet. The quatrain contains two couplets iterating the rhyme pattern ab. The tercet contains a couplet bc, echoing the quatrain, and an appended line c. The most complex verse form in the corpus is the rondeau, with its characteristic refrain, as shown in (29). The rondeau can be divided into two main constituents, each of which builds on a quatrain augmented by a single line that rhymes with the initial couplet, giving an aabba quintain. The first quintain is further augmented by a tercet with the rhyme structure aab to give an octet. This octet is host to the first instance of the refrain. The second quintain is not augmented, and hosts the second repetition of the refrain directly. Traditionally, the refrain consisted of the repetition of the first quintain. Later, the refrain was an appended half-line, which is thus unrhymed. The Epistolary Satires and the Penitential Psalms (excluding the prologues) have an interlocking three-line rhyming scheme (terza rima): aba bcb cdc, …. The prologues consist of octets with the pattern abababcc, as in (26).

Knowing the rhyme scheme is useful for identifying rhyming terms. Compared with metrics, rhyme is little theorized in generative theory, but see Kern (2015) for a recent approach. Wyatt's use of rhyme has some unexpected features. In the unmarked case, the rhyme includes the last nucleus along with any extrametrical material. For example, in the Penitential Psalms 26-30, part of the first prologue, the rhyming terms are myndyth, fyndyth, and vndermyndyth. Consider (30).

(32) STRUCTURAL IDENTITY OF RHYME (SIR)
Rhyming terms occupy structurally identical positions, i. e., S rhymes with S, W with W, and extrametrical syllable with extrametrical syllable.

The assumption of SIR is useful in determining the scansion of the remainder of the line. To see how, consider Pen.Ps. 328, which admits of two scansions. The suffix -ion may be realized as [-ɪәn], corresponding either to a weak position, or simply extrametrical. 7 By applying syncope in Mesuring and elision between the vowels in by our, it is possible to scan -ion as forming the final foot of the line, as in (33a). An alternative scansion in (33b) renders the suffix monosyllabic and, in this case, extrametrical. The metrical ambiguity is resolved by l. 330, which is straightforwardly iambic pentameter, and appeal to SIR, which points to the reduced form of the suffix in l. 328, i. e., the scansion in (33b).

(34) ( Nor me ) ( correct ) ( in wrath· ) ( ·full cast· ) ( ·igat· ) <·ion.> Pen.Ps. 330

Medial catalexis

Catalexis is generally associated with the catalectic form of the trochaic tetrameter, where the final W position in the line is unoccupied by linguistic material. Consider the first quatrains in the following poems by Ben Jonson (Parfitt 1996 [1975]) in (35) and (36), illustrating the difference between the two forms of the trochaic tetrameter. The lines in (35) are acatalectic: each tetrameter is replete. Those in (36), however, show catalexis of the final W position. In iambic verse, catalexis of a metrical position is generally equivalent to acephaly, nonoccupancy of a line-initial W position. Catalexis may target higher-level metrical constituents as well. The ballad form, for example, alternates lines of four and three verse feet.
It is common to assume that the trimetric lines are only apparently so, and have an empty verse foot in final position (Adams 1997, Hayes and MacEachern 1998, Kiparsky 2006). Apart from the Poulter's measure, which I have relineated in ballad stanza form, catalexis of a verse foot is not found medially, or initially. This is no doubt because it would destroy the perceptual integrity of the line. Medial catalexis of a metrical position is also potentially disruptive, but it is attested. Groves (2001) illustrates the use of medial catalexis in the verse of Philip Larkin, and Groves (2007; 2011) investigates the same technique in Shakespeare. Groves (2005) provides interesting examples of scansions of Wyatt invoking the same. Building on Groves' work, Vaux and Myler (2012) propose that catalexis can be likened to a musical rest, thus pursuing an analogy between meter and music. Even before Groves' investigations, the idea that Wyatt availed himself of 'pauses' within the line is not new. Padelford (1923) seems to assume the equivalent when he suggests that "the word constituting a monosyllabic foot usually calls for a marked stress, and that when it occurs after the caesura the pause is pronounced and impressive, occupying the full time of a light syllable" (p. 141). For critics of Saintsbury's persuasion that good verse entails 'flow' (an uninterrupted chain of nonempty syllables), the apparent necessity to insert meditative pauses in order to maintain the iambic pentameter pattern was an affront to good poetic taste.

The novel contribution of this paper is that inferred medial catalexis has a distribution that strengthens the interpretation that Wyatt's syllabically hypometrical lines conform to an iambic pentameter template. Appendix C lists all the instances of inferred medial catalexis, noting any plausible alternative scansions. Medial catalexis is something we would expect to be subject to both metrical and prosodic constraints. First, we would expect it to target the major metrical breaks defined by the structure in (6). Second, we would expect medial catalexis to coincide with a major prosodic break, preferably an Intonational Phrase boundary. As explained in Section 2.2, there is a tendency for Intonational Phrases to realize clauses, although other factors may favour prosodifying subclausal phrases as Intonational Phrases as well. In all examples of medial catalexis in a strong position, however, the pause does indeed coincide with a clause boundary, with one exception (Ep. XLIII.6; see Appendix C, (1)), where it is VP-internal. The grammar of phrasing can be invoked to resolve competing scansions with medial catalexis.

Let us look at medial catalexis in odd-numbered (W) positions. Consider the hypometrical line with nine syllables in (40). Now let us consider how medial catalexis in these positions would interact with the prosodic analysis in (41). This is shown in (45).

(45) a. { [ ( Tho ) ( tyme ) ] [σ] [ ( doth passe ) ] }, { [ ( yet ) ] [ ( shall not ) ( my love ) ] }
b. { [ ( Tho ) ( tyme ) ] [ ( doth passe ) ] }, { [ ( yet ) ] [ ( shall [σ] not ) ( my love ) ] }
c. { [ ( Tho ) ( tyme ) ] [ ( doth passe ) ] }, { [ ( yet ) ] [ ( shall not ) ( my [σ] love ) ] }

The first alternative (45a) introduces the catalectic syllable at the boundary between two Phonological Phrases, clearly more disruptive than at an Intonational Phrase boundary, as in (42). The second and third alternatives in (45b) and (45c) show the auxiliary shall and the possessive determiner my prosodified as proclitics to the following Prosodic Word, in line with the proposal of Ito and Mester (2009).
Medial catalexis between a clitic and its host would be more disruptive than between Prosodic Words. 8 Given the grammar of prosodic phrasing, we would not ordinarily expect medial catalexis to separate Phonological Phrases within an Intonational Phrase, or Prosodic Words within a Phonological Phrase. It would be even less expected that it separate a clitic from its host, or separate phonological feet of the same Prosodic Word. It is nevertheless possible that Wyatt used medial catalexis with disruptive intent. A catalectic pause between Prosodic Words, or between a clitic and its host, might mimic a self-interruption, or a word search episode. This could open up potentially interesting readings that invoke Wyatt's 'uneasy subject' (Brigden 2012:13). Indeed, Groves (2007) suggests that something of this kind may have been exploited by Shakespeare, and even proposes to designate "a catalexis where there is no potential intonational break, as between an adjective and its noun" with its own term, drag (p. 135). 9 There seems to be a very small number of cases in Wyatt's verse where invoking drag-type medial catalexis gives a superior scansion, however. Two possible examples are given in (46). A complicating factor in determining the placement of an inferred catalectic syllable, as explained in Section 2.2, is that a subclausal phrase may also be mapped onto an Intonational Phrase, as in (47).

These results are summarized in the plot in Figure 3, which also includes acephalous cases (catalexis in the W position of the first foot). For a breakdown of the numbers according to verse form and line type, see Table 6 in Appendix C. As might also be expected given the structure of the iambic pentameter in (6), there are some cases where inferred medial catalexis can alternate between the weak position of the third foot and the fourth. In Appendix C, I have identified thirteen cases with medial catalexis in the third foot for which there is a plausible alternative scansion with medial catalexis in the fourth. Four of these are shown in (53). Comparing the scansions with medial catalexis in the third and fourth foot, the latter imply Intonational Phrase boundaries at more deeply embedded levels of syntactic structure. In (53a-i) the break occurs between the subject NP and the VP, while in (53b-i), (53c-i), and (53d-i), it coincides with a clausal boundary. In (53a-ii), (53b-ii), (53c-ii), and (53d-ii), on the other hand, the break is VP-internal. Shifting the location of the medial catalexis rightward in this way may trigger the use of additional licences. In (53c-ii) and (53d-ii), for example, it is necessary to invoke phrase-initial inversion of stalking and roring. There are also five cases of medial catalexis in the fourth foot for which there is a plausible alternative scansion with medial catalexis in the third. Two examples are shown in (54). Again, medial catalexis in the fourth foot correlates with breaking at a more deeply embedded level. Shifting medial catalexis to the third foot aligns it with a clausal boundary. The variability shown in (53) and (54) does not alter the basic picture that major breaks preferentially occur in the third and fourth verse feet, with the third taking precedence over the fourth.
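The ranking logic behind (42) and (45) can be made explicit in a short computational sketch. This is an illustration only, not part of the metrical analysis itself: the boundary labels, their relative strengths, and the juncture annotation of the line in (40) are assumed for exposition.

# Rank candidate sites for an inferred catalectic syllable by the strength
# of the prosodic boundary at each site. Stronger boundaries are less
# disruptive hosts for a catalectic pause, per the discussion of (45).

# Assumed boundary labels, strongest to weakest (illustrative only):
BOUNDARY_RANK = {
    "iota": 3,    # Intonational Phrase boundary
    "phi": 2,     # Phonological Phrase boundary
    "omega": 1,   # Prosodic Word boundary
    "clitic": 0,  # juncture between a clitic and its host
}

def rank_sites(junctures):
    """junctures: list of (position, boundary_type) pairs for one line.
    Returns the candidate sites sorted from least to most disruptive."""
    return sorted(junctures, key=lambda j: BOUNDARY_RANK[j[1]], reverse=True)

# A hand-annotated sketch of some junctures in the nine-syllable line of (40):
line_40 = [
    (2, "phi"),     # between 'tyme' and 'doth passe', as in (45a)
    (4, "iota"),    # at the comma, as in (42)
    (6, "clitic"),  # between proclitic 'shall' and 'not', as in (45b)
    (8, "clitic"),  # between proclitic 'my' and 'love', as in (45c)
]

for pos, btype in rank_sites(line_40):
    print(f"position {pos}: {btype}")
# The Intonational Phrase boundary at the comma outranks all alternatives,
# matching the preferred placement of the catalectic syllable.

Under these assumptions, the sketch simply recapitulates the prose argument: the scansion in (42) wins because its pause sits at the strongest available prosodic boundary.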
What we have seen in this section is that, if we make the inference that the hypometrical lines of otherwise iambic pentameter poems have medial catalexis, placing the catalectic syllable at an Intonational Phrase boundary results in a distribution that overwhelmingly tracks the structure of the iambic pentameter given in (6). Medial catalexis in W position is rarest in the second and fifth verse feet (within minimal half-lines). Apart from acephalous lines, it is most common in the third verse foot (between half-lines), and reasonably frequent in the fourth foot (between an adjoined verse foot and a minimal half-line).

Some implications

In this last section I would like to discuss some potential implications of these findings for both literary interpretation and the linguistically informed study of metrics. Students of literature may find something to object to in the very attempt to reconstruct a particular metrical intention for Wyatt's anomalous lines, perhaps invoking the 'intentional fallacy' of Wimsatt and Beardsley (1946). Surely, the variety of critical responses to Wyatt's verse cited in this article shows that a number of metrical interpretations are possible? Certainly, many readers of Wyatt profess themselves enchanted by the apparent irregularity of his lines. Alice Oswald, for example, prefaces her reading of 'Whoso List to Hunt' (Sonnet XI/Egerton MS. VII) 10 by saying "Wyatt was writing at a time when the pentameter was still being regularised and, for that reason there is a beautiful counterpoint in his verse, almost as if the prose rhythm and the verse rhythm were working against each other, and I think that's why I love his sonnets so much." Despite her positive assessment of Wyatt, Oswald's understanding of what he is doing clearly derives from Saintsbury's view that Wyatt and his contemporaries were groping towards realizing the iambic pentameter form in their verse, often unsuccessfully!

Writing about how the modern reader can approach Wyatt's 'They Flee from Me' (Ballade LXXX/Egerton MS. XXXVII), Attridge (1982:345) notes that "[t]he experience of a multitude of readers testifies to the poem's continuing vitality; it is unlikly [sic], therefore, that its rhythms are unsuccessful, whether or not they are what Wyatt or his audience heard". Indeed, he continues (p. 346), "[i]t would be a waste of time to look for a metrical structure common to all those lines, because the reader's experience is that they are metrically different - and that it remains a satisfying poem". He nevertheless goes on to say that "[w]hether further research in metrical and phonological history will throw light on Wyatt's intentions is a separate question". On the basis of the evidence bequeathed us in the manuscripts, we can agree that more than one metrical interpretation is possible. However, an iambic pentameter context for an anomalous line (on the page) can be taken to strongly implicate that the anomalous line, too, is iambic pentameter. In this paper, I have asked what follows from this inference for syllabically hypometrical lines. Clearly, it must imply that there are cases where syllables span both a strong and a weak metrical position, or metrical positions that lack phonological content (catalexis). Section 3 showed that, when we plot the distribution of inferred catalexis, it tracks the metrical breaks in iambic pentameter, strengthening the implicature that syllabically hypometrical lines are iambic pentameters.
Only three dozen lines in the corpus completely resist iambic pentameter scansion. Contra Attridge, looking for a common metrical structure is not such a waste of time after all.

Catalexis also raises some interesting questions about the nature of meter and the material it organizes. In addition to the distinction between meter and material, there is in most metrical theories a further distinction to be made between the poem and its performance. Wimsatt and Beardsley (1959:587) write: "A performance is an event, but the poem itself, if there is any poem, must be some kind of enduring object." Jakobson (1960:366) quotes this statement with approval, but in contrast to Wimsatt and Beardsley, who privilege the written text, Jakobson sees the linguistic system as primary, constraining composer, performer and reader alike. We thus have two different conceptions of the material organized by the meter, the 'text': one grounded in the written language, the other in the spoken. The question now is whether even the linguistic vision of the text is quite inclusive enough, and how exactly to draw Wimsatt and Beardsley's line between the performance and the enduring object that is the poem itself.

With his characteristic fondness for binary oppositions, Jakobson (1960:364-366) distinguished, on the one hand, between verse and delivery, and, on the other, between design and instance. These two binary contrasts yield a four-way classification between 'verse design', 'verse instance', 'delivery design', and 'delivery instance'. Verse design, at least in metrical verse, minimally includes the meter that underlies a particular verse instance (particular line) that embodies the meter by association to prosodic units such as syllables and actual words. It would also include mappings between meter and specific text that constrain the verse as a whole, e. g., by being applicable to all lines. Final catalexis in a trochaic tetrameter verse such as Ben Jonson's in (36) would be considered an aspect of verse design. The verse instance would include metrical variations that arise in the mapping between meter and text, such as inversion and extrametrical syllables, and also Wyatt's (inferred) use of medial catalexis. Delivery instance refers to a particular recitation, which entails a particular act performed by a reciter at a particular time and place, but Jakobson also makes the case for the concept of delivery design. For example, the reciter may follow a particular tradition in reciting a verse, by adopting a more or less prose-like, more or less chanted, or more or less pronounced scanning style. The strong signalling of beats that is characteristic of teaching poetry at school has a different delivery design than, say, a style that hews more closely to natural spoken prosody. Jakobson (1960:365) adds, however, that verse design also includes things like the prosodic structure of the lines, or the cadence, understood as a recurring intonational pattern associated with a line. Despite this inclusive stance, it is quite striking how little generative linguistic approaches to verse have strayed beyond the metrical. The reasons would seem to be historical, and may have to do with Chomsky and Halle's early relegation of prosody to the domain of performance (Chomsky and Halle 1968:372) and Halle and Keyser's explicit exclusion of performance from the purview of generative metrics (Halle and Keyser 1966; 1971).
In the wake of progress in prosodic phonology and the syntax-phonology interface (Nespor and Vogel 1986, Selkirk 1986), few would subscribe to this view today. Generative approaches have nonetheless been slow to investigate nonmetrical verse and any constitutive role for prosody. Of course, if the claim that Wyatt recruits intonation and medial catalexis to fill out the iambic pentameter pattern is correct, this is a matter of the verse instance, not the verse design, but the point still carries over. It entails a more inclusive conception of the "enduring object" of the poem than that of New Criticism, certainly, but Jakobson's linguistically based view also encounters a difficulty.

The difficulty in question is how to interpret medial catalexis in the context of Halle and Keyser's insistence that metrically organized material satisfy the linguistic givens. In this connection it is worth going back to the early debates between Halle and Keyser, Wimsatt, and Magnuson and Ryder in College English, where Halle and Keyser's seminal paper on Chaucer's metrics was published (Halle and Keyser 1966). In what follows, I'll focus on Magnuson and Ryder's critique (1970), and Halle and Keyser's (1971) revision of their theory in response. In a memorable exchange that bears directly on our question, Magnuson and Ryder accuse Halle and Keyser of failing to apply their own distinction between competence and performance. Consider the line in (55) from Chaucer's General Prologue, which Halle and Keyser take to exemplify contrastive stress. This requires analysing the line as headless, with an unfilled initial position. (Note: The earliest papers labeled metrical positions as O(dd) and E(ven), rather than W and S.) In (55), the stressed syllables of both gold and iren align to an E (=S) position. Magnuson and Ryder dramatically characterize this appeal to contrastive stress as courting "the death of metrics", a move that "kills all possibility of theoretical rigor". They opt instead for the default line in (56), where there is no anacrusis and the stressed syllable of gold aligns to an O (=W) position, and invoke "the poetic effect which a writer can create by setting up a tension between his abstract matrix and his sense". Magnuson and Ryder's scansion is, of course, unimpeachable, but so is Halle and Keyser's. In responding to this criticism, Halle and Keyser (1971:174) argue on the grounds that emphatic stress is a linguistic given, provided that there are good contextual reasons to assume it. In this case, it is strongly implicated by the text, and there is thus no reason to deny it metrical significance.

In suggesting that Halle and Keyser have been hoist by their own petard, Magnuson and Ryder conflate linguistic givens with those of the written text. Halle and Keyser's assumption of contrastive stress is nonetheless an inference from the written text; it cannot simply be read off. Whatever the "enduring object" of the poem is for Halle and Keyser, it is more than the linguistic content of the written text. If contrastive stress is admissible as an element of this linguistic text, then by the same token, so is intonation. It has to be conceded that a tritonal pitch accent may not be as strongly implicated by the written text as contrastive stress in (55). Since intonation is richer in pragmatic meaning, inferring its presence in the linguistic text requires a higher level of critical engagement.
If it were any simpler, Wyatt's syllabically hypometrical lines would not have occasioned as much metrical controversy as they have. But the point remains that certain elements of the linguistic text must be inferred, by Halle and Keyser's own admission. It is a fundamental assumption of the generative metrics program that the material satisfy the linguistic givens. It is here that catalexis presents us with more of a conceptual challenge, since it raises the question in what sense the absence of something can be a linguistic given. One interpretation of catalexis is that a metrical position lacks a prosodic exponent completely. Another interpretation is that a metrical position maps onto a prosodic unit with no segmental content. Up till now I have been assuming the latter, and that catalexis simply involves an empty syllable. Is such an empty syllable a possible linguistic given? Certainly, empty prosodic nodes are assumed to occur in inputs to the phonological grammar. For example, Saba Kirchner (2013) argues that bare syllables can be affixes, and Trommer and Zimmermann (2014) argue the same point for moras. However, if they are to be posited by a learner or user of the language, segmentally deficient prosodic nodes must be evidenced by their effects. In morphological systems, they generally acquire segmental content by associating to a featurally replete root node. It is imaginable that a prosodically deficient node may be realized by a pause of some duration. Pauses do not seem to have a role in distinguishing minimal pairs in natural languages, however, which suggests that empty syllables are not linguistic givens, at least not in output representations. 11

What of the possibility that the metrical position simply has no corresponding linguistic material at all? From a hearer-oriented perspective, pauses are crucial evidence for inferring the metrical structure, and from the speaker's, too, it is probably more accurate to see in the pause the effect of a momentary arrest of speech, rather than an absence of speech. As Levinson (2000) argues, if the hearer is to make use of what the speaker provides as evidence of their intention, and the speaker is to make use of evidence that the hearer can use in inference, they must share the same strategies. If pauses are evidence for the hearer, they should also be explicitly part of the speaker's communicative intention. If intentional, an 'arrest of speech' must have some kind of symbolic representation.

This brings us to our third possibility, that the material organized by the meter is linguistic-gestural rather than just narrowly linguistic. Just as the linguistic text may prespecify certain intonational features of a performance, a multimodal, linguistic-gestural representation of the text may in addition prespecify certain gestural ones. The analysis of gesture initiated by McNeill (1985; 2012), Kendon (2004), and Calbris (2011) has shown that a pause in speech may be synchronized with non-spoken gestures of various kinds (Quaeghebeur et al. 2014, Bavelas and Chovil 2018). An example is the quotative construction, exemplified by 'he went (-)', where the pause coincides with, say, a facial gesture that enacts what he did, presumably receiving a thematic role from the two-place quotative predicate go. Much of Wyatt's medial catalexis may be interpreted as enacting deliberation, which has some well known more or less subtle gestural manifestations.
An example of an obvious gesture is the 'thinking face' described by Goodwin and Goodwin (1986) to assert that one is thinking hard, implicating for example that one is searching for an expression (Clark 2006:380). Such gestures are not mere indices of increased cognitive load, but actually serve a communicative purpose. Equivalent gestures, according to Bavelas and Chovil (2018), might include turning the head, looking away with a thoughtful expression, making a grasping motion with a hand, all signalling engagement and an intention to resume speaking. Kita et al. (1998) and Bressem and Ladewig (2011) also point out that a gestural hold may be independently meaningful. The existence of spoken gesture (e. g., Okrent 2002) invites the speculation that medial catalexis may involve exploiting for metrical purposes the spoken gesture counterpart of an independent hold. Groves (2007) makes compatible observations with regard to Shakespeare, citing his use of catalexis to "underscore attention-getting imperatives and vocatives" (p. 134).

Wyatt's texts, on this view, were more than the written texts evidenced in the Egerton manuscript (and a few others). They were multimodal representations, including intonational and gestural elements, that were transmitted through a living performance tradition. When this performance tradition was lost in the generations following Wyatt's, what was left was the incomplete evidence of the manuscripts, with problems of interpretation that were compounded by new expectations regarding the relation between template and written text. As Wright (1985:149) writes, "[t]he knack of hearing Wyatt's rhythms vanished soon after his death". It would be interesting indeed to see how these expectations line up with the invention and spread of the printing press, which began with William Caxton in 1476, a mere quarter of a century before Wyatt's birth. This would have given considerable power to editors, like Tottel's, who had to interpret the written evidence to come up with versions that could be distributed widely in print. Even if we assume our earliest editors were conversant with the performance tradition, there was no accepted way of representing intonation or gesture in print, and so we see a shift to verse with 'flow'. Even so, writing as late as 1589, the critic George Puttenham had a very favourable assessment of Wyatt's verse.

Sir Thomas Wyat th'elder and Henry Earle of Surrey […] greatly polliſhed our rude and homely maner of vulgar Poeſie […] and for that cauſe may iuſtly be ſayd the firſt reformers of our Engliſh meetre and ſtile. (Puttenham 1869 [1589]:139)

Puttenham may have been writing within living memory of a performance tradition that preserved the integrity of Wyatt's iambic pentameters and the intelligibility of his prosody. The difference between his judgment and Saintsbury's is in any case striking.

( And call ) ( crafft coun· ) ( ·sell, [σ] ) ( for prof· ) ( ·fet styll ) ( to paint. ) Sat. CXLIX.33
g. ( She chere̥d ) ( her with ) ( 'how sys· ) ( ·ter, [σ]

Table 6 summarizes, for each verse form, the number of inferred catalectic syllables in the W position of each foot in lines with and without extrametricality. The totals for each verse form (regardless of extrametricality) are shown in italics. The totals are plotted in Figure 3 in Section 3.

Pen.Ps. 620: consonants, e. g., tryffel, between the 14th and 18th centuries ("trifle, n." OED Online, Oxford University Press, December 2020, www.oed.com/view/Entry/205961.
Accessed 1 February 2021.). This is the reason for invoking resolution here. In the proposed scansion, the catalectic syllable intervenes between a verb and an object pronoun that would ordinarily be prosodified as enclitic. This is the only proposed scansion that features a 'meditative' pause, that is, at a prosodic break most likely weaker than an Intonational Phrase boundary. An alternative scansion of Son. XXXI.7 with acephaly is nonetheless available: (
2022-02-04T16:25:02.489Z
2021-12-30T00:00:00.000
{ "year": 2021, "sha1": "9a58fbae2c342ca9bac48e840f6f19fb168f77b9", "oa_license": "CCBYNC", "oa_url": "https://septentrio.uit.no/index.php/nordlyd/article/download/6260/6393", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7f8b4ce123b64d305958fa089db9ad0cb673ff82", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [] }
59062404
pes2o/s2orc
v3-fos-license
A comparative study of the effect of yeast single cell protein on growth, feed utilization and condition factor of the African catfish Clarias gariepinus (Burchell) and tilapia, Oreochromis niloticus (Linnaeus) fingerlings

An investigation was carried out to compare the effect of feeding yeast single cell protein (SCP) on the growth and feed utilization parameters of Clarias gariepinus and Oreochromis niloticus cultured separately. The parameters investigated include the percentage weight gain (PWG), feed conversion ratio (FCR), specific growth rate (SGR) and condition factor (CF). The findings indicate that for C. gariepinus fingerlings optimum growth and nutrient utilization was obtained with 30% SCP substitution of fishmeal, while for O. niloticus fingerlings the 50% substitution level gave optimum growth and feed utilization. Fishmeal could therefore be replaced with yeast SCP at these levels to cut down on the cost of aquafeeds for sustainable aquaculture development of these species.

INTRODUCTION

One of the constraints to the successful practice of fish culture in many developing countries like Nigeria is the lack of nutritionally adequate and low-cost feeds. Hence fish nutritionists are searching for alternative sources of feed that are cheap, reliable and accessible. The search for alternative feed resources in aquafeeds has gained increasing significance as traditional ingredients are becoming costly or less available. Fish meal is regarded as the best natural feed ingredient for aquafeeds, due to its high protein content and balanced amino acid profile. However, global fish meal production from wild sources continues to decline. Therefore, suitable alternative sources of protein have to be sought for sustainable aquaculture development.

In order for aquaculture to register further growth and meet its potential of bridging the gap between fish supply from capture fisheries and the demand for fish, the direction of aquaculture development in Nigeria will have to be focused on increasing production efficiencies and intensities so as to produce more fish using less land, water and financial resources, and on the development of cheap, high quality sources of protein for feeding fish (Ibigo and Olowosegun, 2005).

Evaluation of yeast single cell protein (SCP)

The term single cell protein refers to the dead dry cells of micro-organisms such as yeast, bacteria, fungi and algae. Single cell proteins have reasonably well balanced amino acid profiles, are an excellent source of some vitamins and minerals, and also possess usable lipids and carbohydrates (Israelidis, 2003). In Nigeria, the history of SCP is relatively new, and its inclusion in fish feed is a novel idea.
According to Tacon and Jackson (1985), SCP is being developed as a veritable alternative to fish meal as a protein source in fish feed. The cultivation of yeast as single cell protein for utilization in aquaculture is becoming more and more popular in the developing countries. This trend is so because SCP provides a suitable alternative to fish meal, which is becoming too expensive for the fish nutritionist to utilize in feed formulation. The pivotal role of SCP in feeding fish cannot be over-emphasized, as demonstrated by several studies (NRC, 1983; Steffens, 1989; Jackson et al., 1996; Heindl, 2002; Debaath, 2003). It has been evaluated as a viable source of protein in fish feed; with advanced processing techniques, its nutritive value has been enhanced to such an extent that single cell proteins are now almost considered conventional ingredients in aquafeeds (Eyo, 2003). The most commonly used and widely available SCP for fish feeding is torula yeast (Candida utilis). This is cultured on substrates comprising a variety of industrial wastes, including molasses, dried citrus pulp, or sulphite liquor from the wood pulp and paper industries (FAO, 1983).

Feed-grade yeasts have been shown to be excellent substitutes for fish meal at low levels in diets for fish (Jauncey and Ross, 1982). The amount of substitution depends upon the type of yeast and the manner in which it is produced. In general, yeasts are relatively low in methionine. Proper supplementation with synthetic sources of the amino acid could permit yeast to be the only protein source in the diet (Luguet, 1981). Yeasts from molasses and other industrial wastes are less costly to produce than other SCP. Yeasts as a class of feedstuff are much cheaper to use than fish meal and other protein supplements of vegetable origin in fish feed (Marty, 1980).

Culture potentials of Oreochromis niloticus

Tilapia (O. niloticus) is the most common species of tilapia in Nigeria. This species has been widely used in aquaculture because tilapias are hardy, available all year round and are easily cultured in virtually any type of enclosure, in monoculture systems and in polyculture with other species like catfishes, tarpon and snappers (Anyani and Awa, 1988; Oladosu et al., 1990). They are cheap and tasty, increasing the availability of protein and the quality of nutrition for poor fish farmers and consumers (Changadeya et al., 2003).

Culture potentials of Clarias gariepinus

This is the most cultured fish in Nigeria at present. Its important role in tropical aquaculture, according to De Graaf and Janson (1996), derives from its hardiness to adverse environmental conditions, capacity to undergo aquatic and aerial respiration, and resistance to parasites and disease. In addition, it exhibits a reasonable growth rate in captivity and commands a high consumer preference in the market (Haylong, 1996). Additional attributes of C. gariepinus of relevance to culture include high fecundity, potential for all-year-round induction of final oocyte maturation, favourable nutritional efficiency and tolerance of high density culture (Viveen et al., 1985). Since tilapia (O. niloticus) and the catfish (C. gariepinus) are the most popular cultivable fish in Nigeria, the use of SCP in their diets will go a long way in enhancing growth and maximizing profit in aquaculture ventures. In times past, the tilapias were more widely promoted for farming, but now C.
gariepinus, the African catfish, has overtaken tilapia as the major culture species in Nigeria (FAO, 1999, 2003). The present investigation is thus an attempt at substituting fishmeal in the diets of C. gariepinus and O. niloticus with single cell protein (SCP) at varying levels, to evaluate its effect on growth and nutrient utilization as well as to determine optimum levels of substitution.

Experimental site and materials

The work was carried out in the Biology Department of the Rivers State University of Education, Rumuolumeni, Port Harcourt. The fingerlings, about 300 in number and of equivalent weight, were obtained from the African Regional Aquaculture Centre (ARAC), Aluu, near Port Harcourt. The experimental tanks consisted of 24 concrete tanks in the Department of Biology of the University. The fish were left to acclimate in these tanks for four days, during which they were not fed. This was to enable them to empty their guts in preparation for the feeding trials. There were no mortalities during the period. After the period of acclimation the fish were reweighed and redistributed into the tanks. Each of the tanks contained 25 fingerlings. The experimental design consisted of six treatments and four replicates, giving a total of 24 experimental units. The tanks had dimensions 1.0 × 0.8 × 0.7 m. Before the fish were stocked, the tanks were cleaned, dried and left for two weeks. This was to ensure the elimination of any micro-organisms, competing animals and parasites. The tanks were then filled to three-quarters with fresh tap water. At the end of the acclimation the fish were fed with the control diet for 2 days before the feeding trial started.

Experimental diets

The experimental diets consisted of six trial diets, namely S A, S B, S C, S D, S E and S F. S A was the control diet, in which the entire 30% protein came from fish meal. S B had 30% protein, in which 10% of the fish meal was substituted with yeast SCP. Diet S C had 20% of the fish meal substituted with yeast SCP. Diet S D had 30% SCP substitution of fishmeal. Diet S E had 40% replacement of the fishmeal and diet S F had 50% replacement of fishmeal. The diets were all isonitrogenous, and were made into very small pellets. The composition of the experimental diets is presented in Table 1.

Feeding of experimental fish

The fish were fed with the prepared diets at the rate of 3% of their body weight twice daily at 0900 and 1500 h. The fish were fed six days in the week; on the seventh day, they were left to browse on left-over feed in the tank. The water in the tanks was renewed once a week, during which period the fish were also weighed and a new feeding regime determined for the next one week. The feeding trial lasted for 8 weeks.

Determination of growth and feed utilization parameters

The growth rate of the fish fed with the experimental meals was expressed as changes in the average fresh body weight of the fish during the experimental period. The percentage weight gain (PWG), feed conversion ratio (FCR), specific growth rate (SGR) and condition factor (CF) for the fish were computed using the formulae below. The percentage weight gain (PWG) was determined from the relationship between weight gain and mean fish weight (Reay, 1979):

PWG = (weight gain / initial mean weight) × 100

Feed conversion ratio

The FCR was expressed as the proportion of dry feed fed per unit live weight gain of fish (Reay, 1979):

FCR = dry feed fed (g) / live weight gain (g)
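For concreteness, the following short Python sketch shows how these indices (together with the SGR and CF defined in the next two subsections) can be computed. It is an illustration only, not part of the original study: all input figures are hypothetical and the function names are illustrative.

import math

# Illustrative computation of the growth and feed utilization indices.
# All input values below are hypothetical, for demonstration only.

def pwg(w_initial, w_final):
    # Percentage weight gain: weight gain relative to initial mean weight.
    return (w_final - w_initial) / w_initial * 100

def fcr(dry_feed_fed, weight_gain):
    # Feed conversion ratio: dry feed fed per unit live weight gain.
    return dry_feed_fed / weight_gain

def sgr(w1, w2, t1, t2):
    # Specific growth rate (% per day), using natural logarithms.
    return (math.log(w2) - math.log(w1)) / (t2 - t1) * 100

def condition_factor(weight_g, length_cm):
    # Fulton's condition factor.
    return 100 * weight_g / length_cm ** 3

# Hypothetical figures for one tank over the 8-week (56-day) trial:
print(pwg(2.5, 7.8))               # percentage weight gain
print(fcr(9.4, 7.8 - 2.5))         # feed conversion ratio
print(sgr(2.5, 7.8, 0, 56))        # specific growth rate
print(condition_factor(7.8, 8.6))  # condition factor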
SGR was calculated according to the method of Brown (1957) as:

SGR = [(loge W2 − loge W1) / (T2 − T1)] × 100

where W2 = weight of fish at time T2 (days), W1 = weight of fish at time T1 (days), and loge = natural log to base e. The condition factor was expressed as Fulton's condition factor (Nikolsky, 1963):

CF = 100W / L³

where W is the weight of the fish in grams and L its length in cm.

Statistical analysis

The growth, feed utilization and condition factor data were subjected to an analysis of variance test (ANOVA) based on Wahua (1999). Duncan's multiple range test was used to determine mean differences at p < 0.05 (Table 1).

RESULTS AND DISCUSSION

The results of the PWG, FCR, SGR and CF for O. niloticus and C. gariepinus are presented in the form of bar graphs (Figures 1 to 4). The investigation revealed that diet S F (50% SCP substitution of fishmeal) gave the highest PWG for O. niloticus, while for C. gariepinus diet S D (30% SCP substitution) gave the highest values (Figure 1). The values of the control diet (S A) are quite close to the above values. For the feed conversion ratio, diet S F also gave the best result for O. niloticus, while diet S B gave the lowest values. In C. gariepinus, diet S D (30% substitution of fishmeal with SCP) gave the best result, which is comparable to the control diet S A (Figure 2), while the lowest values were obtained for diet S B (10% substitution) both for O. niloticus and C. gariepinus. The values of the SGR for O. niloticus were also highest for diet S F, while the lowest values were obtained for diet S D (Figure 3). These values are slightly higher than the control. For C. gariepinus, diet S D also gave the highest values for SGR, while diet S F had the lowest values. The values are also slightly higher than that of the control diet. The results of the CF indicate that diet S F had the best values for O. niloticus, while the lowest values were obtained for diets S B and S D (Figure 4). The CF values for C. gariepinus showed that diet S D gave the best result, while the lowest values were obtained for diets S E and S F. The best values obtained for C. gariepinus with diet S D are comparable to those of the control diet (S A).

From the foregoing, it is evident that diet S F gave the best results for O. niloticus as concerns PWG, FCR, SGR and CF, while for C. gariepinus diet S D gave the best result for these parameters. This is an indication that O. niloticus can utilize higher levels of yeast SCP than C. gariepinus. This agrees with the work of Viola and Zohar (1984), in which 50% of fishmeal protein in diets for hybrid tilapia (O. niloticus x O. aureus) was successfully replaced by bacterial SCP, 'pruteen'. It is also in consonance with the works of Davies and Wareham (1988) and Lara-Flores et al. (2003). The use of yeast SCP may revolutionize the future of the aquaculture industry world-wide; Vielma et al. (1998) reported that it increases the bio-availability of phosphorus and other minerals to fish. Also, Debaath (2003) confirmed that apparent net protein utilization and digestibility in Atlantic salmon were significantly improved by yeast SCP supplementation. Jackson and Robinson (1996) also observed a higher increase in weight of Ictalurus punctatus fed yeast SCP compared to the control group. Furthermore, Steffens (1989) reported that the inclusion of yeast SCP in the diet of rainbow trout had a positive effect on its specific growth rate, percentage weight gain and feed conversion ratio. These findings are in agreement with the present study. Sludge single cell proteins (BSCP) have been evaluated in high quality feeds for C. gariepinus fingerlings at levels of 4 to 22%; BSCP incorporation was found to have a positive effect on growth performance up to levels of 22%, while this study indicates that the 30% SCP substitution of fishmeal (diet S D) gave the best value of percentage weight gain in C. gariepinus. The result of this work compares favourably with the work of Ozorio et al. (2005). This work is also comparable to the work cited above, since diet S F (50% SCP substitution) gave the best PWG, FCR, SGR and CF for O. niloticus. The findings in this work also agree with those of Beck et al. (1979), Olvera-Novoa et al. (2002) and Ozorio et al. (2005), who fed yeast-based diets to trout, tilapia and pacu (P. mesopotamicus) respectively and obtained optimum values at about 30 percent yeast inclusion level.

It could be deduced from the findings in this work that, in the attempt to reduce feeding cost in aquaculture by using yeast SCP to replace fishmeal in pelleted fish diets, positive growth was recorded up to an inclusion level of 30% for C. gariepinus fingerlings, while for O. niloticus fingerlings yeast SCP gave optimum growth and feed utilization at a 50% inclusion level.

Figure 1. Bar graph showing percentage weight gain in O. niloticus and C. gariepinus fed the experimental diets.
Figure 2. Bar graph showing feed conversion ratio in O. niloticus and C. gariepinus fed the experimental diets.
Figure 3. Bar graph showing specific growth rate in O. niloticus and C. gariepinus fed the experimental diets.
Figure 4. Bar graph showing condition factor in O. niloticus and C. gariepinus fed the experimental diets.
Table 1. Composition of the experimental diets (g/100 g dry wt.). *Vitamin-mineral premix (Optimix premix, Animal Care Products, Nigeria) was added to all of the diets.
Expansion of Phragmites australis in a Mississippi Estuary Determined from Aerial Image Data

Margaret C. B. Waldron, Gregory A. Carter, and Carlton P.

Phragmites australis (Cav.) Trin. ex Steud. is a highly competitive native species in Northern Gulf of Mexico coastal marshes [1,2,3]. It is associated with decreased vegetative diversity, but provides habitat for a variety of birds, insects and spiders [1,4,5,6]. In Mississippi's estuaries, it has previously been documented in disturbed areas, along natural levees, and at higher elevations in more saline areas [1,7]. The species can exist at a wide range of elevations under various salinity and hydrological conditions [8,9]. Recent work suggests it could protect against coastal erosion by facilitating sediment deposition better than other marsh species such as Spartina alterniflora Loisel [10]. Our study aims to describe how P. australis has responded to recent changes in sea level compared with co-located marsh vegetation in the Pascagoula River Estuary, Mississippi, USA (Fig. 1). We created land cover classification images to assess areal change in P. australis extent as compared with other land cover types. Results show an increase in P. australis extent of 157 ha (approximately 1,960%) between 1996 (8 ha) and 2018 (165 ha), with greater expansion in the inland part of the study area. These land cover classifications will also be used to quantify shoreline movement for P. australis-dominated shorelines as compared with other marsh habitats, providing insights into past and future responses of P. australis to changes in sea level compared with neighboring marsh vegetation.

Research Questions
1. How has P. australis extent changed over time (1996-2018)?
2. How does this change in extent compare to the change in extent of other co-located marsh species?
3. Is P. australis extent changing more rapidly in areas which are more protected and experience lower salinity levels?

Fig. 1. The Southwest Pascagoula River Estuary. Two sites were examined within the study area: a) the Mary Walker Bayou area, in green (inland, protected from wave action, less saline), and b) the Marsh Islands at the mouth of the Pascagoula River, in yellow (exposed to greater wave action and higher salinities).

Change detection analysis
Image Data: Color-infrared datasets acquired in December 2018, October 2014, August 2010 and February 1996 were obtained from the National Agricultural Imagery Program and the National Aerial Photography Program. All analysis was completed at 1 m GSD using the near-infrared, green and blue image bands.
Training Data: Training pixels were selected based on 2002 maps [1] and present-day field photography. Ten total land cover classes were defined to obtain a normal distribution of brightness values in each class.
Filtering: Each class image was filtered to remove pixel groups of less than three square meters to reduce noise.
Class combinations: The ten classes were reduced to four: water, P. australis, all other marsh, and woodland.
QAQC: Data gaps were filled using a majority filter and final corrections were made.

Annual Rates of Change in Area: Key Findings
- P. australis extent increased in each year for a total of 15.8 ha, while the area of all other marsh decreased for each image date (Fig. 2, 3, 4).
- Both sites exhibited an increasing trend in P. australis extent, with a 13.8 ha increase at the inland, protected, lower-salinity site and a smaller 2.0 ha increase at the river mouth (Fig. 1, 5).
- P. australis colonized other marsh areas at a rate of 12.4 ha per year overall and open water at a rate of 3.4 ha overall, with lower but still positive rates of colonization at the exposed, higher-salinity site (Fig. 6).

Discussion and Conclusions
Overall, P. australis extent increased in each year, with an increasing trend in extent and in the rate of spread at both sites (Fig. 1, 2, 3, 4, 5). P. australis colonized areas previously occupied by both marsh vegetation and open water. When P. australis area is combined with other marsh areas, total marsh extent increased between 1996 and 2018 in the Mary Walker Bayou area, despite the overall trend of marsh loss exhibited across the estuary (Fig. 3, 4, 6) [11]. This has both positive and negative implications for the future of Mississippi's coastal marshes in terms of biodiversity, species composition, and ecosystem function. In terms of marsh shoreline movement over time, P. australis appears to be mitigating marsh loss due to relative sea level rise. Removal of the species would have detrimental effects, including reduced coastal protection from wave action and storm surge flooding, loss of wildlife habitat, lowered carbon sequestration, and reduced ability to filter pollutants from upland runoff. As we continue the project, we are working to compute shoreline movement over time, expand the studied area to other parts of the Northern Gulf of Mexico coast to evaluate broader trends, and quantify how certain environmental factors (e.g., wind fetch, salinity) are related to the rate of spread of P. australis.
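The filtering and gap-filling steps in the change detection workflow above can be expressed compactly with standard image-processing tools. The sketch below is a generic illustration using numpy and scipy, not the authors' actual processing code; at 1 m GSD, the three-square-meter threshold corresponds to three pixels.

```python
import numpy as np
from scipy import ndimage

def remove_small_groups(class_img, min_pixels=3):
    """Zero out connected pixel groups smaller than min_pixels (0 = background)."""
    out = class_img.copy()
    for cls in np.unique(class_img):
        if cls == 0:
            continue
        mask = class_img == cls
        labeled, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labeled, range(1, n + 1))
        for lab, size in enumerate(sizes, start=1):
            if size < min_pixels:
                out[labeled == lab] = 0
    return out

def majority_filter(class_img, size=3):
    """Fill gaps by assigning each pixel the most common class in its neighborhood."""
    def mode(values):
        vals, counts = np.unique(values, return_counts=True)
        return vals[np.argmax(counts)]
    return ndimage.generic_filter(class_img, mode, size=size)

# Example: cls_map = majority_filter(remove_small_groups(cls_map))
```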
High mean arterial pressure target to improve sepsis-associated acute kidney injury in patients with prior hypertension: a feasibility study

Background
The optimal mean arterial pressure (MAP) in cases of septic shock is still a matter of debate in patients with prior hypertension. An MAP between 75 and 85 mmHg can improve glomerular filtration rate (GFR), but its effect on tubular function is unknown. We assessed the effects of a high MAP level on glomerular and tubular renal function in two intensive care units of a teaching hospital. Inclusion criteria were a history of chronic hypertension and AKI developing in the first 24 h of septic shock. Data were collected during two 6-h periods of MAP regimen administered consecutively after haemodynamic stabilisation, in an order depending on the patient's admission unit: a high-target period (80-85 mmHg) and a low-target period (65-70 mmHg). The primary endpoint was the creatinine clearance (CrCl) calculated from urine and serum samples at the end of each MAP period by the UV/P formula.

Results
26 patients were included. Higher urine output (+0.2 (95% CI 0, 0.4) mL/kg/h; P = 0.04), higher urine sodium (+6 (95% CI 0.2, 13) mmol/L; P = 0.04) and lower serum creatinine (-10 (95% CI -17, -3) µmol/L; P = 0.03) were observed during the high-MAP period as compared to the low-MAP period, resulting in a higher CrCl (+25 (95% CI 11, 39) mL/min; P = 0.002). The urine creatinine, urine-to-plasma creatinine ratio, urine osmolality, and fractional excretions of sodium and urea showed no significant variation. The KDIGO stage at inclusion interacted only with the serum creatinine variation, and a low level of sodium excretion at inclusion did not interact with these results.

Conclusions
In the early stage of sepsis-associated AKI, a high MAP target in patients with a history of hypertension was associated with a higher CrCl, but did not affect the kidneys' ability to concentrate urine, which may reflect no effect on tubular function.

Supplementary Information
The online version contains supplementary material available at 10.1186/s13613-021-00925-2.

Background
Acute kidney injury (AKI) is a common clinical problem affecting approximately 50% of intensive care patients [1]. Sepsis is its main cause in this setting [2]. Traditional hemodynamic management of sepsis-associated AKI focuses on the prevention of hypoperfusion by optimizing blood pressure to maintain renal perfusion pressure, and thus glomerular filtration rate (GFR), primarily through fluid resuscitation and administration of vasopressor drugs. However, optimizing blood pressure to limit kidney damage is a daily challenge for intensivists, especially since the optimal mean arterial pressure (MAP) target remains a subject of debate [3-7]. In patients with septic shock, the Surviving Sepsis Campaign guidelines recommend an initial target MAP of 65 mmHg. It is also highlighted that, once a better understanding of any patient's condition is obtained, the MAP target should be individualized to the pertaining circumstances, as it may be too low for certain patients [3]. In particular, the threshold for renal autoregulation may be higher in patients with atherosclerosis and/or previous hypertension than in young patients without cardiovascular comorbidity.
European expert recommendations suggest a higher MAP target in septic shock patients with a history of hypertension and in patients who show clinical improvement with higher blood pressure [4]. On the other hand, there may be a risk of excessive vasoconstriction at a higher MAP target requiring higher norepinephrine infusion rates, particularly in cases of sepsis-associated AKI, where the pathophysiological mechanisms are complex [5]. Sepsis-associated AKI is characterized at an early stage by increased renal blood flow and decreased renal vascular conductance, resulting in redistribution of intrarenal blood flow and reduced medullary perfusion and oxygenation [1,6]. Restoration of blood pressure by norepinephrine infusion improves renal perfusion pressure, but may further reduce renal medullary perfusion at high concentrations [6]. A higher MAP target could then improve blood pressure on the glomeruli, located in the renal cortex, without benefiting renal tubular function, which depends on medullary perfusion. The objective of this study was to analyse the effects of a high MAP target on renal glomerular and tubular function in critically ill septic patients with a history of chronic hypertension.

Patients and setting
During a 12-month study period (August 2016-July 2017), we included patients with a history of chronic hypertension who developed AKI at any KDIGO stage in the first 24 h of septic shock in two intensive care units (ICU) of Bordeaux University Hospital (one medico-surgical ICU including pulmonary or abdominal surgery, and one medical ICU). Exclusion criteria were pregnancy, age ≤ 18 years, obstructive renal disease, AKI from an obstructive or suspected cause other than sepsis (e.g., toxic), severe chronic kidney disease defined as a known eGFR < 30 mL/min/1.73 m2, renal replacement therapy (RRT) or anuria at the time of inclusion, and a presumed life expectancy < 24 h. Approval for this study was obtained from our institutional review board.

Definitions
Patients were defined as having chronic hypertension when they were known to be hypertensive in their past medical history, with at least one antihypertensive treatment in their usual medication regimen. Septic shock was defined according to the sepsis-3 definition [7]. AKI was defined according to the Kidney Disease: Improving Global Outcomes (KDIGO) classification on the criteria of urine output (UO) and serum creatinine (sCr) [8]. The baseline sCr was determined by calling the referring doctor or by analysing the patient's medical records from the prior 3 months.

Procedures
The attending physician treated patients in accordance with the recommendations of the Surviving Sepsis Campaign after admission to the ICU [3]. Initial management included fluid challenges to achieve a minimum of 30 mL/kg crystalloids and to avoid excessive vasoconstriction in hypovolemic patients. Fluid administration was continued if there was haemodynamic improvement based on dynamic criteria (e.g., change in stroke volume). Haemodynamics were continuously monitored for all patients by an arterial line and repeated echocardiography. Norepinephrine was administered to reach an initial MAP target of 65 mmHg. After haemodynamic stabilisation, defined as a 3-h stable or decreased dose of norepinephrine without a need for fluid loading, the MAP target was challenged in accordance with the recommendations [4,9].
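For reference, the serum creatinine arm of the KDIGO classification used above can be written as a small staging function. This sketch simplifies the full definition (it omits the 48-h window attached to the absolute-rise criterion and the urine output arm) and uses hypothetical values.

```python
def kdigo_stage_scr(scr_umol_l, baseline_umol_l, on_rrt=False):
    """Simplified KDIGO AKI stage from the serum creatinine criterion (µmol/L).
    Stage 1: rise >= 26.5 µmol/L or 1.5-1.9x baseline
    Stage 2: 2.0-2.9x baseline
    Stage 3: >= 3.0x baseline, sCr >= 353.6 µmol/L, or RRT started
    Returns 0 when no creatinine criterion is met."""
    ratio = scr_umol_l / baseline_umol_l
    if on_rrt or ratio >= 3.0 or scr_umol_l >= 353.6:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or scr_umol_l - baseline_umol_l >= 26.5:
        return 1
    return 0

print(kdigo_stage_scr(scr_umol_l=180, baseline_umol_l=80))  # ratio 2.25 -> stage 2
```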
Patients were studied during two consecutive 6-h periods of MAP regimen administered in an order that depended on the ICU to which the patient was admitted: group A, with a high-target period (MAP of 80-85 mmHg) followed by a low-target period (MAP of 65-70 mmHg), and group B, with a low MAP target period followed by a high MAP target period (Fig. 1). The order of assignment of the MAP regimens depended on the two participating ICUs, and the treatment was not blinded to the investigators or participants. Norepinephrine was titrated by the attending physician and the ICU nurses to achieve the MAP target according to unit protocols. During the high-MAP target period, a reduction in vasopressor doses to maintain an MAP of 65 to 70 mmHg was recommended if any of the prespecified serious adverse events potentially related to an increased rate of vasopressor infusion occurred. These clinically relevant events were bleeding, rhythm disorders, suspected myocardial infarction, mesenteric ischemia and distal-limb ischemia.

Data collection
All data, including hourly UO, were collected over a period of 6 h for each MAP target. Patient monitoring software (Metavision; iMDSoft, Wakefield, MA, USA) was used to continuously record all variables with a time interval of 1 min. The data were automatically averaged for each point analyzed.

Endpoints
The primary endpoint of this study was the GFR estimated by calculating creatinine clearance (CrCl) with the UV/P formula [9]. CrCl was determined from a urine sample obtained from the collection of the last hour of each MAP period. Urine and blood samples were collected simultaneously at inclusion and at each change in MAP target (Fig. 1). The secondary endpoints were the variations in sCr, urine creatinine (UCr), UO, urine sodium (UNa), serum osmolality, urine osmolality, proteinuria, urine-to-plasma creatinine ratio, fractional excretion of sodium (FeNa) and fractional excretion of urea (FeU) from the low- to the high-MAP period. The occurrence of adverse events was also analysed.

Statistical analyses
We estimated that at least 20 patients with AKI would be needed in this study to have 80% power to detect a 20% difference in CrCl between the two periods at a two-sided alpha level of 0.05. This calculation was based on the assumption that the standard deviation of the difference between the two CrCl values for the same patient would be 30%. Quantitative parameters are reported as mean (standard deviation) or median [interquartile range] as appropriate, and qualitative parameters are expressed as numbers (percentages). Baseline characteristics were compared using the χ2 test or Fisher's exact test as appropriate. Continuous variables were compared using the Mann-Whitney U test. We performed a multivariate repeated-measures analysis of variance (MANOVA) to compare the primary and secondary endpoints between inclusion and the low MAP regimen, and between the low and the high MAP regimen, including the order of assignment of MAP regimens and the KDIGO stage at inclusion as factors. The interactions of pre-inclusion ACE inhibitor treatment, time from initiation of norepinephrine to inclusion, level of sodium excretion at inclusion, and norepinephrine dose required to achieve a high MAP target with the studied variables were also assessed using MANOVA. All tests were two-sided with an alpha level of 0.05.
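The sample-size statement above follows the standard paired-design formula n = ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2. The sketch below is an independent check of that arithmetic, not the authors' calculation.

```python
from scipy.stats import norm

def paired_sample_size(delta, sd, alpha=0.05, power=0.80):
    """n = ((z_{1-alpha/2} + z_power) * sd / delta)^2 for a paired comparison."""
    z_a = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = norm.ppf(power)           # 0.84 for 80% power
    return ((z_a + z_b) * sd / delta) ** 2

# 20% detectable difference, 30% SD of the within-patient difference:
print(paired_sample_size(delta=0.20, sd=0.30))  # ~17.7 -> at least 18; 20 planned
```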
Statistical analyses were performed using SAS version 9.3 (SAS Institute, Cary, NC, USA) and GraphPad Prism version 6.00 (GraphPad Software, La Jolla, CA, USA).

Endpoints
The intra-individual variations of the studied parameters from the low- to the high-MAP target, and their values at inclusion, are presented in Table 3. No parameter differed significantly between its value at inclusion and the low-MAP target period (Additional file 1: Figure S1). The CrCl was higher during the high-MAP period compared to the low-MAP period, with an intra-individual percentage variation of 88 [7-227] % (P = 0.002) (Fig. 4). The high-MAP period was associated with a significant intra-individual increase of UO (11 [-7 to 76] %; P = 0.04) and UNa (11 [-8 to 46] %; P = 0.04) and a decrease of sCr (-5 [-12 to 1] %; P = 0.03) compared to the low-MAP period. There was no significant variation in UCr, urine osmolality, serum osmolality, FeNa or FeU between the low- and the high-MAP period.

Interaction analysis
The order of assignment of MAP regimens did not significantly affect the effects of MAP on the variables analysed (Table 2 and Additional file 1: Table S1). Analysis of the interaction of the KDIGO stage at inclusion showed a significant interaction only on the variation of sCr between the low- and high-MAP target periods (P value for interaction = 0.04) (Additional file 1: Table S2). Treatment with ACE inhibitors prior to inclusion, time from initiation of norepinephrine to inclusion > 18 h, UNa at inclusion < 30 mmol/L, and norepinephrine dose required to achieve a high MAP target > 0.5 µg/kg/min also did not interact with the effects of the high vs. low-MAP period on CrCl or any of the secondary endpoints.

Adverse events
A higher MAP target was not associated with more adverse events.

Discussion
The main finding of this study is that a high-MAP target of 80-85 mmHg, compared with an MAP of 65-70 mmHg, in septic patients with AKI and prior hypertension is associated with increased UO and UNa and decreased sCr, resulting in increased glomerular function as assessed by the UV/P formula. Conversely, a high-MAP regimen does not affect the kidneys' ability to concentrate urine, with no variation in urinary osmolality, urine-to-plasma creatinine ratio, or fractional excretion of sodium or urea, which may reflect no effect on tubular function.

The evaluation of renal function is complex in critically ill patients. Serum creatinine is traditionally used because it is freely filtered by the glomerulus, with a small proportion secreted along the tubule. The recommended formula for estimating GFR is UV/P, because it has the potential advantage that it can be used in the absence of a steady state [10]. However, its main limitations are that many factors influence sCr, in particular the patient's volume of distribution [11], and that the proportion of tubular creatinine secretion remains unpredictable and depends on the relative increase in sCr over the patient's baseline creatinine [10]. The kidney also has many other functions that are difficult to assess in AKI, including tubular transport. The ability of loop diuretics, which act on the renal tubule, to induce natriuresis has, for example, been used to predict the development and severity of AKI and its prognosis [12]. To our knowledge, no studies have estimated the impact of a higher pressure regimen on tubular function in sepsis-associated AKI.
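The renal indices discussed above follow standard formulas: CrCl = (UCr × V) / sCr (the UV/P formula, with V the urine flow rate) and FeX = 100 × (Ux × sCr) / (Sx × UCr) for the fractional excretions. A minimal sketch with hypothetical sample values:

```python
def crcl_uv_p(u_cr, urine_flow_ml_min, s_cr):
    """Creatinine clearance (mL/min) by the UV/P formula; UCr and sCr in the same units."""
    return u_cr * urine_flow_ml_min / s_cr

def fractional_excretion(u_x, s_x, u_cr, s_cr):
    """FeX (%) = 100 * (Ux * sCr) / (Sx * UCr); used for FeNa and FeU."""
    return 100.0 * (u_x * s_cr) / (s_x * u_cr)

# Hypothetical sample: UCr 5000 µmol/L, urine flow 1.0 mL/min, sCr 150 µmol/L
print(crcl_uv_p(5000, 1.0, 150))                 # ~33 mL/min
# UNa 40 mmol/L, serum Na 140 mmol/L
print(fractional_excretion(40, 140, 5000, 150))  # ~0.86 % (FeNa)
```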
The effects of MAP level on AKI have been investigated in numerous studies, showing an improvement in UO with a MAP target between 65 and 75 mmHg [13]. In the SEPSISPAM trial, 778 patients with septic shock were randomly treated with a low (65-70 mmHg) vs. high (80-85 mmHg) MAP target [14]. The authors demonstrated less renal failure, as defined by the doubling of plasma creatinine (38.8% vs. 52.0%, respectively, P = 0.02), in patients with previous hypertension treated with the higher MAP target, and a decreased number of patients requiring RRT. Conversely, the 65 trial, comparing permissive hypotension to usual care in patients 65 years of age or older receiving vasopressors for vasodilatory hypotension, did not demonstrate an increase in the use of RRT in patients with chronic hypertension randomized to the lower MAP target group [15]. Legrand et al. also did not observe an association between most systemic hemodynamic parameters, including MAP and cardiac output, and sepsis-associated AKI [16]. Our results confirm that in patients with sepsis-associated AKI and chronic hypertension, a higher MAP target is associated with a significant increase in UO and a decrease in sCr, resulting in better CrCl. The observed increase in CrCl may not be considered solely the result of the increase in UO (e.g., single doses of diuretics have no significant effect on CrCl [17]), but could rather be the consequence of a residual level of glomerular filtration function when capillary pressure increases during a high-MAP regimen [18,19]. The stage of AKI interacted with variations in sCr in our study, possibly due to higher sCr values in patients with the most severe renal injury. More originally, we showed no variation in the ability to concentrate urine under a high-MAP regimen. Increased renal perfusion could, for instance, have curbed sodium reabsorption, but patients with low sodium excretion at inclusion showed no significant change in their ability to concentrate urine under the high-MAP regimen compared with those with higher sodium excretion at inclusion. Furthermore, proximal tubular creatinine secretion normally accounts for 10-20% of the total creatinine clearance, but increases to 50% in chronic kidney disease (CKD) when GFR falls [20]. The high-MAP regimen in our study did not result in a significant increase in UCr, which may also reflect no effect on tubular secretion of creatinine.

This study has several limitations. First, this study should be considered exploratory and observational, as the crossover was only performed at the individual level and not at the level of the grouping unit, without randomisation. A carry-over effect therefore remains possible in this rapidly evolving disease, without being able to assume that the patients had returned to their initial state before the application of the next MAP regimen. However, changes in creatinine clearance are described as rapid in AKI, with a period of approximately 7 h to reach a 100% increase when the baseline sCr is 88 µmol/L, at a constant rate of creatinine production of 60 mg/h and a complete cessation of CrCl [21]. A washout effect was also possible on the urine sample, since it was collected over the last hour of each period. The time needed to detect a change in renal tubular function is not known for this clinical setting either, but a response in renal tubular function tests is often observed within hours in nephrology studies [22].
Second, patients were included after the initial resuscitation of septic shock, i.e., after the crucial period for the onset and severity of AKI. The inclusions may then have been too delayed for the MAP-targeted norepinephrine regimen to have a significant impact on tubular function. Patient evolution prior to admission to the ICU may also have influenced our results, and the time period over which a higher MAP regimen could affect renal function, as well as any lasting effect, is unknown. The haemodynamic stabilisation period, defined by a stable or decreased dose of norepinephrine for 3 h without a need for fluid loading, could also be debated. This delay was chosen to allow sufficient time to reach an optimised volume status before changing doses of norepinephrine, but without excessively delaying the possible effect of the change in pressure regimen on renal function. Estimation of GFR by calculation of CrCl from a urine sample may be another limiting factor, as it may not be representative of urine production over a longer period of time [9]. However, our analysis at an intra-individual level, combined with MAP regimens administered consecutively in two different orders, may avoid other biases related to the normal evolution of sepsis-associated AKI, given the great diversity of septic patients, including the patient's creatinine generation rate, the volume of distribution of creatinine, dynamic changes over time, and the "renal reserve" of patients.

The pathophysiological mechanisms underlying sepsis-associated AKI are still a matter of debate, but it has been demonstrated that AKI occurs during hyperdynamic sepsis with increased total renal blood flow [5,23]. Several mechanisms have been proposed to play a role, including tissue hypoxia, changes in microcirculation, venous congestion, and mechanisms independent of haemodynamic impairment, such as inflammation and oxidative stress. Beyond the filtration function of the kidney, tubular transport is a determining factor in its oxygen consumption, and evidence is now accumulating that places the tubular system at the center of AKI pathophysiology and recovery in established sepsis [24]. In addition, recent findings suggest that treatment with norepinephrine halved medullary tissue oxygen tension and decreased medullary perfusion, the region in which the renal tubules are located [6]. The increase in GFR during norepinephrine infusion may also increase sodium delivery to tubular elements within the medulla, and thus the utilization of oxygen for sodium reabsorption, which could contribute to the observed medullary hypoxia [25]. Whether the risk of medullary hypoxia associated with high doses of norepinephrine, combined with pressure-induced glomerular injury, which is the predominant pathway for nephron loss in CKD [26], has an impact on renal injury and its long-term prognosis is unknown. However, it is now well established that AKI survivors are at high risk of developing CKD, even if their AKI was not severe and their kidney function has recovered at discharge from the ICU [27]. The impact of a high MAP target on renal tubular function and on the longer-term prognosis of sepsis-associated AKI in patients with prior hypertension should be further investigated in larger randomised trials.
Conclusion
In the early stage of sepsis-associated AKI, a higher MAP target of 80-85 mmHg, as compared with a standard MAP target of 65-70 mmHg, in patients with prior hypertension was associated with significantly greater glomerular function as evaluated by the UV/P formula, but did not affect the kidneys' ability to concentrate urine, which may indicate no effect on tubular function.
DESIGN AND CHARACTERIZATION OF CANDESARTAN CILEXETIL ORAL NANOEMULSION CONTAINING GARLIC OIL

Objective: This study was designed to prepare and characterize an oil-in-water (o/w) nanoemulsion of candesartan cilexetil for oral administration. Preparation of candesartan cilexetil as a nanoemulsion could increase its water solubility and thus enhance its bioavailability.

Methods: The aqueous titration method was used to construct the pseudo-ternary phase diagrams of nanoemulsions (NE) consisting of oil, various weight ratios of surfactant and co-surfactant (S mix), and deionized water. Different characterization techniques were applied to the prepared nanoemulsions to obtain the optimized formula.

Results: Characterization of formula NE-4 (consisting of 0.16% candesartan cilexetil, 10% garlic oil, 35% S mix (3:1) and 54.84% deionized water) revealed the following characteristics: droplet size range (95-139 nm), polydispersity index (0.14), zeta potential value (-41.06 mV) and pH value (6.71), which are suitable for oral administration. Candesartan cilexetil in vitro release from this formula was significantly high (P<0.05), and a scanning probe microscopy (SPM) study confirmed that the optimized formula (NE-4) was in the nano-scale.

Conclusion: Nanoemulsion formula 4 (NE-4) of candesartan cilexetil is the optimized formula, and it could be a promising formulation for improving the water solubility of candesartan cilexetil.

INTRODUCTION

Conventional oral dosage forms are designed to provide a rapid onset via immediate release of the active ingredient after administration. The desired therapeutic action achievable from these conventional drug delivery systems depends on the bioavailability of the drug. The bioavailability of immediate-release products is influenced by two important drug characteristics: water solubility and permeability. The dissolution rate of a conventional dosage form containing a drug with low water solubility is low; hence, bioavailability may suffer from reduced absorption across the gastrointestinal tract [1]. Based on permeability and water solubility, the Biopharmaceutics Classification System (BCS) divides drugs into four classes: class I includes drugs with high permeability and high water solubility, class II drugs with high permeability and low water solubility, class III drugs with low permeability and high water solubility, and class IV drugs with low permeability and low water solubility [2]. Enhancing the water solubility of class II drugs using different techniques, such as self-emulsification, particle size reduction, and nanotechnology approaches, has the potential to improve absorption and thus enhance the oral bioavailability of these drugs [3]. An example of these techniques is the nanoemulsion, a colloidal dispersion system consisting of oil, water, surfactant and co-surfactant [4]. A nanoemulsion is a thermodynamically stable system and is available in three different types: oil in water (o/w), water in oil (w/o), and bi-continuous nanoemulsions, in which microdomains of the two phases (water and oil) are inter-dispersed within the system [4]. Stabilization of all three nanoemulsion types is achieved via a sufficient amount of surfactant and co-surfactant [4].
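The BCS grid described above amounts to a two-axis lookup; a minimal sketch:

```python
def bcs_class(high_solubility: bool, high_permeability: bool) -> str:
    """Map the two BCS axes (water solubility, permeability) to class I-IV."""
    if high_permeability:
        return "I" if high_solubility else "II"
    return "III" if high_solubility else "IV"

# Candesartan cilexetil: low water solubility, high permeability -> class II
print(bcs_class(high_solubility=False, high_permeability=True))  # "II"
```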
Because a nanoemulsion is prepared with little energy input and has a long shelf life, it has higher thermodynamic stability and solubilization capacity than other micellar solutions. Additionally, nanoemulsions can enhance the transport characteristics of a drug, which is crucial for sustained and targeted drug delivery, owing to the large interfacial area associated with this system [5]. Successful applications of nanoemulsions in enhancing water solubility have been reported previously, namely oral nanoemulsions of rosuvastatin, rifampicin and pterostilbene [6-8]. Candesartan cilexetil, a selective angiotensin II receptor subtype inhibitor, belongs to BCS class II, with low water solubility and high permeability, while its oral bioavailability is only 14-40% [9]. Hence, this study aimed to develop, optimize and characterize a candesartan cilexetil oral oil-in-water (o/w) nanoemulsion to improve its solubility and possibly its bioavailability.

Materials
Pure candesartan cilexetil powder was purchased from Hyper Chem Company, China. Tween 20, tween 60 and tween 80 were purchased from Thomas Baker (Chemicals) Pvt Ltd, India. Olive oil was supplied by Pomace olive oil, Oilex, S.A., Spain. Polyethylene glycol 400 and propylene glycol were supplied by M/s Provizer Pharma, India. Ethanol was supplied by Avantor Performance Materials, Norway. Garlic oil and peppermint oil were purchased from Al-Emad Company, Iraq. Soybean oil was obtained from Genuine Chemicals, India. Castor oil and deionized water were supplied by Al-Basheer Company for chemical and laboratory materials, Baghdad, Iraq.

Melting point measurement
The melting point of candesartan cilexetil was recorded by inserting a small amount of the pure powdered drug into one side of a sealed capillary glass tube. Using a digital melting point instrument, the melting temperature was recorded when all of the powdered drug had melted [10].

Study of differential scanning calorimetry
Differential scanning calorimetry (DSC) was performed by placing a sample of the drug (5 mg) in the aluminum pan of a DSC-60 Shimadzu instrument. The analysis used nitrogen as the inflow gas at a heating rate of 10 °C/min over a range of 50-250 °C. The DSC thermogram of candesartan cilexetil was recorded [11].

Study of saturated solubility
The saturated solubility of candesartan cilexetil was determined in various surfactants (tween 20, tween 60 and tween 80), co-surfactants (polyethylene glycol 400 and propylene glycol) and oils (olive oil, garlic oil, peppermint oil, castor oil and soybean oil). An excess amount of powdered drug was added to 2 mL of each surfactant, co-surfactant and oil in tightly closed plain tubes. The tubes were placed in an isothermal shaker water bath at 25 ± 0.5 °C for 48 h. The samples were then centrifuged at 2000 rpm for 10 min, and the supernatant of each sample was filtered using a 0.45 µm membrane filter. After dilution of the filtrate with ethanol, solubility was measured using a UV-visible spectrophotometer at the determined maximum wavelength [12].
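The solubility readout described above (centrifuge, filter, dilute, read absorbance at the maximum wavelength) is typically mapped back to concentration through a Beer-Lambert calibration curve. The sketch below uses made-up calibration standards and a hypothetical dilution factor, not data from this study.

```python
import numpy as np

# Hypothetical calibration standards: concentration (µg/mL) vs. absorbance at λmax
conc = np.array([2.0, 4.0, 8.0, 12.0, 16.0])
absorbance = np.array([0.11, 0.22, 0.45, 0.66, 0.89])

# Linear Beer-Lambert fit: A = slope * C + intercept
slope, intercept = np.polyfit(conc, absorbance, 1)

def concentration_from_abs(a, dilution_factor=1.0):
    """Back-calculate sample concentration, accounting for dilution in ethanol."""
    return (a - intercept) / slope * dilution_factor

print(concentration_from_abs(0.52, dilution_factor=50))  # ~469 µg/mL in the filtrate
```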
Construction of pseudo-ternary phase diagrams
The aqueous titration method was utilized to determine the components of the pseudo-ternary phase diagrams. These components comprise a mixture of surfactant and co-surfactant (S mix), oil and deionized water. Different weight ratios (2:1, 3:1 and 4:1) were used for mixing the surfactant and co-surfactant (S mix). Oil and S mix were blended in different weight ratios until the maximum ratio of oil to S mix was obtained. Fifteen different combinations of S mix and oil were prepared; these combinations were then slowly titrated with deionized water and visually inspected for transparency. The titration was stopped when a clear and transparent oil-in-water (o/w) nanoemulsion was produced [13].

Preparation of the candesartan cilexetil loaded nanoemulsions
Candesartan cilexetil pure powder was dissolved in the oil that had the highest solubility for the drug, after which the quantity of S mix prepared by mixing the surfactant with the co-surfactant was added to the drug-loaded oil. A vortex mixer was used to blend the components of the whole mixture. Deionized water was then titrated drop by drop onto the mixture until a clear (o/w) nanoemulsion was produced [14].

Thermodynamic stability tests of the prepared nanoemulsions
Centrifugation test: Nanoemulsions were centrifuged for 15 min at 2000 rpm and checked for phase separation or cracking [15].
Freezing-thawing test: This test involved exposing the nanoemulsions to 21 °C and to freezing in a refrigerator, for no less than 24 h at each temperature.
Heating-cooling test: This test was made by keeping the nanoemulsions at 40 °C and at 0 °C in a refrigerator, for no less than 48 h at each temperature. In this test, any cracking effect on nanoemulsion stability was recorded.

Droplet size measurement
Droplet size was measured using a particle size analyzer (ABT-9000 nanolaser). The droplet size and the droplet distribution plot were reported [16].

Polydispersity index (PDI) measurement
PDI measurement was made using the particle size analyzer ABT-9000 nanolaser. The PDI indicates whether the distribution of droplets is within the nanoemulsion scale and reflects the uniformity of the droplets, i.e., a higher value indicates lower uniformity [17].

Zeta potential (ZP) measurement
Zeta potential was determined using a zeta sizer instrument (Brookhaven). Zeta potential reflects the stability of colloidal dispersions, as it describes the charge on the droplet surface [18].

Percent of transmittance measurement (%T)
This measurement was performed using a UV-visible spectrophotometer (Emc Lab. UV-61 double beam, Germany). The transmittance of the prepared nanoemulsions was measured at 650 nm using deionized water as a blank [19].

pH measurement
The pH of the prepared nanoemulsions was recorded using a digital pH meter (BP 3001, Trans Instruments, Singapore); the measurement was made in triplicate [20].

Viscosity measurement
Viscosity was measured using an NDJ digital viscometer (spindle no. 1) at 25 °C, without any dilution of the formulations [21].

In vitro release study
The in vitro release of the candesartan cilexetil nanoemulsions was studied using a USP-II dissolution apparatus (Copley dissolution tester DIS 8000, UK) with a dialysis bag. The amount of candesartan cilexetil was 8 mg in each nanoemulsion formula (5 g). Each formula was placed in the dialysis bag, and the dialysis bag was immersed in 900 mL of dissolution medium (phosphate buffer, pH 6.8). The apparatus was set at 37 ± 0.5 °C with a rotation velocity of 50 rpm for 2 h.
One sample (5 mL) was withdrawn every 15 min for 2 h and replenished with 5 mL of fresh medium to maintain sink conditions. All withdrawn samples were filtered using a 0.45 µm membrane filter, then analyzed using a UV-visible spectrophotometer at 255 nm to determine the amount of candesartan cilexetil in the formula [22].

Kinetics and mechanism of drug release
Various kinetic models were applied to the data obtained from the in vitro release study to determine the kinetics and mechanism of drug release. These models were the zero-order, first-order, Higuchi and Korsmeyer models [23].

Scanning probe microscopy (SPM)
An SPM (triple probe microscope) study was performed to show the morphology of the droplets and the droplet distribution within the prepared system. A drop of nanoemulsion was placed on a glass slide where detection was made [24].

Statistical analysis
An analysis of variance (ANOVA) test was used to analyze the data. Variables with a P-value > 0.05 were considered statistically insignificant.

RESULTS AND DISCUSSION

The melting point of the drug
The melting point of candesartan cilexetil was found to be in the range of 171-172 °C. This result is similar to that reported in the literature, which indicates the purity of the powdered drug used in the study [25].

Differential scanning calorimetry (DSC)
Candesartan cilexetil pure powder produced a sharp peak at 172.29 °C [26]. This reading corresponds with the measured melting point of candesartan cilexetil. The DSC thermogram is shown in fig. 1.

Saturated solubility
The preparation of a stable nanoemulsion requires a suitable selection of the components forming the formulas. Using the saturated solubility of candesartan cilexetil in different oils, surfactants and co-surfactants, the main components of the formulation can be selected; hence, the components with the highest solubility for candesartan cilexetil were chosen as the main components in the preparation. Garlic oil had the highest solubility for the drug compared with the other oils used in this study, so it was used as the oil phase in the formulation. Similarly, tween 80 and polyethylene glycol 400 (PEG400) had the highest solubility for candesartan cilexetil; hence, tween 80 was used as the surfactant and PEG400 as the co-surfactant in the formulation [7]. The results of the saturated solubility of candesartan cilexetil in various oils, surfactants and co-surfactants are shown in fig. 2.

Construction of pseudo-ternary phase diagrams
Pseudo-ternary phase diagrams were plotted using the components that had the highest solubility for candesartan cilexetil. Garlic oil was chosen as the oil phase both because of its high solubility for the drug and because of its benefits to patients with cardiovascular diseases, especially hypertension [27], which is one of the most important clinical indications of candesartan cilexetil. Tween 80 as surfactant and polyethylene glycol 400 (PEG400) as co-surfactant, in S mix ratios of 2:1, 3:1 and 4:1, with deionized water as the aqueous phase, were selected for the formulation. The pseudo-ternary phase diagrams with the different S mix ratios are shown in fig. 3-5, where the colored area in each plot was regarded as the nanoemulsion region.

Preparation of candesartan cilexetil loaded nanoemulsions
Candesartan cilexetil loaded nanoemulsions were prepared by dissolving 0.16 g of the drug in the determined quantities of oil and S mix to prepare a formula of 100 g, which means that 8 mg of drug was contained in a 5 g formula. The drug-loaded nanoemulsions are presented in table 1.
Thermodynamic stability tests of the prepared nanoemulsions
All of the prepared drug-loaded nanoemulsions successfully passed the dispersion stability tests, with no phase separation or cracking observed. Six nanoemulsions with different S mix ratios were selected for the characterization study: F1 (NE-1), F2 (NE-2), F6 (NE-3), F7 (NE-4), F11 (NE-5) and F12 (NE-6). The selection was made based on a low percentage of S mix and a high percentage of deionized water [28].

Droplet size measurement
The results of the droplet size measurement of the six drug-loaded nanoemulsions are shown in table 2. The results indicate that all the nanoemulsions were within the nano-size scale. Furthermore, as the S mix ratio increases, the droplet size decreases. This can be attributed to the lipophilic tail of the surfactant (tween 80) in the drug-loaded nanoemulsions being pulled toward the drug, while the drug promotes insertion of the co-surfactant into the cavities between surfactant molecules, causing condensation of the interfacial film, stabilization, and production of droplets of small size [29]. According to the analysis of variance (ANOVA) test, there was a significant effect of the S mix ratio on the droplet size (P-value < 0.05).

Polydispersity index (PDI) measurement
The results of the PDI measurement of the six drug-loaded nanoemulsions are given in table 2. A PDI within the typical range of 0-1 indicates the uniformity of the droplet size distribution within the formulations. In this study, the PDI values of the drug-loaded nanoemulsions were less than one, reflecting the uniformity and distribution of the droplets dispersed in the garlic oil globules within the nanoemulsions [30].

Zeta potential (ZP) measurement
The results of the ZP measurement of the six drug-loaded nanoemulsions are given in table 2. Zeta potential is an important indicator of the stability of colloidal dispersions. A rule of thumb relates zeta potential to nanoemulsion stability: fast droplet aggregation occurs when zeta potential values are between -5 and +5 mV, values of ≤ -20 or ≥ +20 mV indicate short-term stability, and values of ≤ -30 or ≥ +30 mV indicate good system stability; excellent stability within a formulation can be obtained with ZP values in the range of -60 to +60 mV [31]. In this study, NE-1 and NE-2 fell within the short-term stability range, while NE-3, NE-4, NE-5 and NE-6 were in the range of good stability.
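The rule-of-thumb bands quoted above translate into a small classifier acting on the magnitude of ZP. Note that the source leaves the 5-20 mV band unspecified, so the "limited stability" label below is an interpolation, and ±60 mV is treated here as the threshold for excellent stability.

```python
def zeta_stability(zp_mv):
    """Classify colloidal stability from zeta potential (mV) per the rule of thumb."""
    mag = abs(zp_mv)
    if mag <= 5:
        return "fast aggregation"
    if mag < 20:
        return "limited stability"   # band not specified in the source text
    if mag < 30:
        return "short-term stability"
    if mag < 60:
        return "good stability"
    return "excellent stability"

print(zeta_stability(-41.06))  # NE-4 -> "good stability"
```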
Percent of transmittance measurement (%T)
The percent of transmittance of the six drug-loaded nanoemulsions is given in table 3. The values of all nanoemulsions were high and close to 100%, which indicates the clarity and transparency of the nanoemulsions [5]. The highest percent of transmittance was 99.313 ± 0.011%, belonging to formula NE-4; the lowest was 98.151 ± 0.102%, belonging to formula NE-1. According to the ANOVA test, there was no significant difference (P-value > 0.05) in the percent of transmittance values among the six drug-loaded nanoemulsions.

pH measurement
The results of the pH measurement of all six drug-loaded nanoemulsions are given in table 3. The pH values of all nanoemulsions were higher than 5.5, which can be attributed to the high percentage of the aqueous phase and the slightly basic properties of the oil phase (garlic oil). This conveys the suitability of the formulations for oral administration. According to the ANOVA test, there was no significant difference (P-value > 0.05) in pH values among the drug-loaded nanoemulsions.

Viscosity measurement
The viscosity values of all drug-loaded nanoemulsions are shown in table 3. The viscosities ranged from 58.232 mPa·s for NE-2 to 36.175 mPa·s for NE-4, with a significant difference (P-value < 0.05) among the nanoemulsions. These values show that all the nanoemulsions pour easily and are suitable for oral administration.

In vitro release study
The release profiles of the prepared candesartan cilexetil nanoemulsions (NE-1 to NE-6) are illustrated in fig. 6. The drug release profiles in the dissolution medium (phosphate buffer, pH 6.8) indicated drug release in the order: NE-4 > NE-3 > NE-1 > NE-2 > NE-6 > NE-5. The highest release of candesartan cilexetil was observed in NE-4, with garlic oil : S mix : deionized water of 10 : 35 : 54.84, in which S mix was 3:1. In contrast, lower drug release was observed in NE-6, with garlic oil : S mix : deionized water of 10 : 40 : 49.84, in which S mix was 4:1. It was further noticed that as the S mix ratio increases, drug release increases, but only up to a certain limit, as noted between S mix 2:1 and S mix 3:1; release then decreases as S mix increases to 4:1. This can be attributed to the high concentration of surfactant, which forces drug molecules to overcome the retarding effect of the surfactant, even though an increase in surfactant concentration can otherwise raise the diffusion of the drug from the dialysis bag into the dissolution medium [8]. There was a significant effect (P-value < 0.05) of surfactant concentration on drug release.

Kinetics and mechanism of drug release
To determine the kinetics and mechanism of drug release, the release data were fitted to various kinetic models (zero-order, first-order, Higuchi and Korsmeyer models). Higher regression coefficient (R2) values indicate which model best represents the kinetics of drug release from the nanoemulsions. The mechanism of drug release was determined by fitting the release data to the Korsmeyer-Peppas model (Equation 1):

F = Mt / M = Km · t^n   (Equation 1)

where F is the fraction of drug released at time t, Mt is the amount of drug released at time t, M is the total amount of drug in the dosage form, Km is a constant, and n is the diffusion exponent that describes the type of drug release mechanism. According to the value of the diffusion exponent n, the release mechanism is determined as follows: for n of 0.43 or less, the release is Fickian (diffusion, case I); for n larger than 0.43 but less than 0.89, the release is non-Fickian (diffusion and erosion); for n of 0.89, the release is zero-order (erosion, case II); and for n larger than 0.89, the release follows super case II transport [23]. The values of the regression coefficient (R2) and diffusion exponent (n) of the candesartan cilexetil nanoemulsions are given in table 4. In this study, the highest regression coefficient (R2) values were obtained with Higuchi's model; hence, the kinetics of drug release from all nanoemulsions follow Higuchi's model. The values of the diffusion exponent (n) of all nanoemulsions were significantly lower than 0.43 (P-value < 0.05), which indicates that the mechanism of drug release from all nanoemulsions is Fickian diffusion (case I).
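The release and kinetics computations above can be sketched together: cumulative-release percentages are first corrected for the drug removed with each 5 mL withdrawal (900 mL vessel, 8 mg dose), and the corrected fractions are then fitted to the Higuchi and Korsmeyer-Peppas (Equation 1) forms. The concentrations below are hypothetical, and this is a generic illustration rather than the authors' script.

```python
import numpy as np

def cumulative_release(concs_ug_ml, v_vessel=900.0, v_sample=5.0, dose_ug=8000.0):
    """Percent cumulative release, corrected for drug removed in earlier 5 mL samples."""
    released, removed = [], 0.0
    for c in concs_ug_ml:
        released.append(100.0 * (c * v_vessel + removed) / dose_ug)
        removed += c * v_sample
    return np.array(released)

t = np.arange(15, 121, 15)                                  # sampling times (min)
conc = np.array([2.3, 2.9, 3.4, 3.7, 4.0, 4.3, 4.5, 4.7])   # hypothetical µg/mL
f = cumulative_release(conc) / 100.0                        # fraction released

# Higuchi model: F = kH * sqrt(t), fitted by least squares
kH = np.linalg.lstsq(np.sqrt(t)[:, None], f, rcond=None)[0][0]

# Korsmeyer-Peppas (log-log form of Equation 1): log F = log Km + n * log t
n, log_km = np.polyfit(np.log10(t), np.log10(f), 1)
print(f"kH = {kH:.4f}, n = {n:.2f}")   # n ~0.36 here; n <= 0.43 -> Fickian diffusion
```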
Taken together, these characterizations indicate that formula NE-4 was suitable for oral administration. The optimized formula NE-4 also had good stability according to the rule of thumb, given its zeta potential value of -41.06 mV, as shown in fig. 8.

Scanning probe microscopy (SPM) of the optimized formula
The morphology of the optimized formula (NE-4) of the candesartan cilexetil nanoemulsion was determined in this study: the droplets were spherical in shape, their size was similar to that obtained with the particle size analyzer ABT-9000 nanolaser, and no aggregation was present between the droplets. Hence, the optimized formula (NE-4) possesses good stability. The droplet morphology of the optimized formula is shown in fig. 9, and the cumulative distribution chart of droplets within the optimized formula is shown in fig. 10.

CONCLUSION
In summary, the nanoemulsion delivery system can be considered an innovative way of improving the water solubility of lipophilic drugs. In this study, formula NE-4, with an S mix ratio of 3:1, was the optimized formula; it showed high solubility of candesartan cilexetil in garlic oil and a high percent cumulative drug release compared with the other formulas. This formula could be promising for improving the water solubility, and hence the bioavailability, of candesartan cilexetil.